deepxde.gradients

deepxde.gradients.gradients module

Compute gradients using reverse-mode or forward-mode autodiff.
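
Which mode is used is configured globally rather than per call. A minimal sketch, assuming the deepxde.config.set_default_autodiff helper available in recent DeepXDE versions:

    import deepxde as dde

    # Reverse mode is the default. Forward mode can be cheaper when the number
    # of inputs (dim_x) is small relative to the number of outputs.
    dde.config.set_default_autodiff("forward")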

deepxde.gradients.gradients.hessian(ys, xs, component=0, i=0, j=0)[source]

Compute Hessian matrix H as H[i, j] = d^2y / dx_i dx_j, where i,j = 0, …, dim_x - 1.

Use this function to compute second-order derivatives instead of tf.gradients() or torch.autograd.grad(), because

  • It uses lazy evaluation, i.e., it only computes H[i, j] when needed.

  • It caches gradients that have already been computed, avoiding duplicate computation.

Parameters:
  • ys – Output Tensor of shape (batch_size, dim_y).

  • xs – Input Tensor of shape (batch_size, dim_x).

  • component – ys[:, component] is used as y to compute the Hessian.

  • i (int) – i-th row.

  • j (int) – j-th column.

Returns:

H[i, j].
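
For context, hessian is typically called inside a PDE residual via the dde.grad alias. A minimal sketch, assuming a 2D Poisson problem -u_xx - u_yy = 1 (the equation and names are illustrative, not part of this module):

    import deepxde as dde

    def pde(x, y):
        # Second derivatives of y[:, 0] w.r.t. x[:, 0] and x[:, 1].
        dy_xx = dde.grad.hessian(y, x, i=0, j=0)
        dy_yy = dde.grad.hessian(y, x, i=1, j=1)
        # Residual of -u_xx - u_yy = 1.
        return -dy_xx - dy_yy - 1

Because of the caching described above, requesting dy_xx and dy_yy in the same residual does not recompute shared intermediate gradients.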

deepxde.gradients.gradients.jacobian(ys, xs, i=None, j=None)[source]

Compute Jacobian matrix J as J[i, j] = dy_i / dx_j, where i = 0, …, dim_y - 1 and j = 0, …, dim_x - 1.

Use this function to compute first-order derivatives instead of tf.gradients() or torch.autograd.grad(), because

  • It uses lazy evaluation, i.e., it only computes J[i, j] when needed.

  • It caches gradients that have already been computed, avoiding duplicate computation.

Parameters:
  • ys – Output Tensor of shape (batch_size, dim_y).

  • xs – Input Tensor of shape (batch_size, dim_x).

  • i (int or None) – i-th row. If i is None, returns the j-th column J[:, j].

  • j (int or None) – j-th column. If j is None, returns the i-th row J[i, :], i.e., the gradient of y_i. i and j cannot both be None, unless J has only one element, in which case that element is returned.

Returns:

(i, j)th entry J[i, j], i-th row J[i, :], or j-th column J[:, j].
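
For context, jacobian is typically used for first-order derivatives in a PDE residual via the dde.grad alias. A minimal sketch, assuming a 1D advection equation u_t + u_x = 0 with the network inputs ordered as (x, t) (an illustrative assumption, not part of this module):

    import deepxde as dde

    def pde(x, y):
        dy_x = dde.grad.jacobian(y, x, i=0, j=0)  # d y_0 / d x_0
        dy_t = dde.grad.jacobian(y, x, i=0, j=1)  # d y_0 / d x_1
        # Residual of u_t + u_x = 0.
        return dy_t + dy_x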