deepxde.gradients

deepxde.gradients.gradients_reverse module

Compute gradients using reverse-mode autodiff.

deepxde.gradients.gradients_reverse.hessian(ys, xs, component=0, i=0, j=0)[source]

Compute the Hessian matrix H: H[i][j] = d^2 y / (dx_i dx_j), where i, j = 0, …, dim_x - 1.

Use this function to compute second-order derivatives instead of tf.gradients() or torch.autograd.grad(), because:

  • It uses lazy evaluation, i.e., it computes H[i][j] only when needed.

  • It caches gradients that have already been computed, avoiding duplicate computation.

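The lazy, memoized behavior described above can be sketched in pure Python (this is an illustrative assumption, not the DeepXDE implementation): each entry is computed only on first request and cached for reuse.

```python
# Minimal sketch of lazy evaluation + caching for derivative entries.
# The `compute` callable stands in for a real backend call such as
# tf.gradients() or torch.autograd.grad().
class LazyDerivatives:
    def __init__(self, compute):
        self._compute = compute
        self._cache = {}
        self.calls = 0  # number of actual derivative computations

    def __getitem__(self, key):
        if key not in self._cache:  # compute only when first needed
            self.calls += 1
            self._cache[key] = self._compute(*key)
        return self._cache[key]

H = LazyDerivatives(lambda i, j: (i, j))  # stand-in for a real derivative
H[0, 1]
H[0, 1]  # repeated request: served from the cache
H[1, 1]
print(H.calls)  # 2 computations for 3 requests
```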
Parameters:
  • ys – Output Tensor of shape (batch_size, dim_y).

  • xs – Input Tensor of shape (batch_size, dim_x).

  • component – ys[:, component] is used as y to compute the Hessian.

  • i (int) – The first index i in H[i][j] (0, …, dim_x - 1).

  • j (int) – The second index j in H[i][j] (0, …, dim_x - 1).

Returns:

H[i][j].

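As a numerical illustration of what hessian(ys, xs, i, j) returns, the mixed second derivative d^2 y / (dx_i dx_j) can be approximated by central finite differences. The function y(x) = x_0^2 * x_1 below is an assumption chosen purely for illustration; this sketch is not DeepXDE code.

```python
import numpy as np

def f(x):
    # Illustrative scalar output: y = x_0^2 * x_1
    return x[0] ** 2 * x[1]

def hessian_fd(f, x, i, j, h=1e-4):
    """Central finite-difference approximation of d^2 f / (dx_i dx_j)."""
    x = np.asarray(x, dtype=float)

    def shift(di, dj):
        z = x.copy()
        z[i] += di
        z[j] += dj
        return f(z)

    return (shift(h, h) - shift(h, -h) - shift(-h, h) + shift(-h, -h)) / (4 * h * h)

x = [2.0, 3.0]
# Analytic Hessian of x_0^2 * x_1 at (2, 3): H[0][0] = 2*x_1 = 6, H[0][1] = 2*x_0 = 4
print(round(hessian_fd(f, x, 0, 0), 3))  # ≈ 6.0
print(round(hessian_fd(f, x, 0, 1), 3))  # ≈ 4.0
```

The same formula covers the diagonal case i == j, where it reduces to the standard second-order central difference with step 2h.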
deepxde.gradients.gradients_reverse.jacobian(ys, xs, i=0, j=None)[source]

Compute Jacobian matrix J: J[i][j] = dy_i / dx_j, where i = 0, …, dim_y - 1 and j = 0, …, dim_x - 1.

Use this function to compute first-order derivatives instead of tf.gradients() or torch.autograd.grad(), because:

  • It uses lazy evaluation, i.e., it computes J[i][j] only when needed.

  • It caches gradients that have already been computed, avoiding duplicate computation.

Parameters:
  • ys – Output Tensor of shape (batch_size, dim_y).

  • xs – Input Tensor of shape (batch_size, dim_x).

  • i (int) – The output index i in J[i][j] (0, …, dim_y - 1).

  • j (int or None) – The input index j in J[i][j] (0, …, dim_x - 1). If None, the full row J[i] is computed.

Returns:

J[i][j] in the Jacobian matrix J. If j is None, returns the gradient of y_i, i.e., the row J[i].
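As a numerical illustration of what jacobian(ys, xs, i, j) returns, J[i][j] = dy_i / dx_j can be approximated by central finite differences, including the j=None case that yields the whole row J[i]. The vector function y(x) = (x_0 * x_1, sin(x_0)) below is an assumption chosen purely for illustration; this sketch is not DeepXDE code.

```python
import numpy as np

def y(x):
    # Illustrative vector output: y_0 = x_0 * x_1, y_1 = sin(x_0)
    return np.array([x[0] * x[1], np.sin(x[0])])

def jacobian_fd(y, x, i, j=None, h=1e-6):
    """Central finite-difference approximation of J[i][j] = dy_i / dx_j.

    If j is None, returns the full gradient of y_i, i.e., the row J[i].
    """
    x = np.asarray(x, dtype=float)

    def col(k):
        e = np.zeros_like(x)
        e[k] = h
        return (y(x + e)[i] - y(x - e)[i]) / (2 * h)

    if j is None:
        return np.array([col(k) for k in range(x.size)])
    return col(j)

x = [2.0, 3.0]
d = jacobian_fd(y, x, 0, 1)  # dy_0/dx_1 = x_0, so ≈ 2.0
g = jacobian_fd(y, x, 1)     # row J[1] = grad y_1, so ≈ [cos(2), 0]
```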