deepxde.data

deepxde.data.constraint module

class deepxde.data.constraint.Constraint(constraint, train_x, test_x)[source]

Bases: Data

General constraints.

losses(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses, i.e., constraints.

test()[source]

Return a test dataset.

train_next_batch(batch_size=None)[source]

Return a training dataset of size batch_size.

deepxde.data.data module

class deepxde.data.data.Data[source]

Bases: ABC

Data base class.

losses(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses, i.e., constraints.

losses_test(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses for test dataset, i.e., constraints.

losses_train(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses for training dataset, i.e., constraints.

abstract test()[source]

Return a test dataset.

abstract train_next_batch(batch_size=None)[source]

Return a training dataset of size batch_size.

class deepxde.data.data.Tuple(train_x, train_y, test_x, test_y)[source]

Bases: Data

Dataset with each data point as a tuple.

Each data tuple is split into two parts: input tuple (x) and output tuple (y).
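
Example

A minimal construction sketch (the arrays and the sine target are illustrative):

import numpy as np
from deepxde.data.data import Tuple

X_train = np.random.rand(100, 1)
X_test = np.random.rand(20, 1)
data = Tuple(X_train, np.sin(X_train), X_test, np.sin(X_test))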

losses(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses, i.e., constraints.

test()[source]

Return a test dataset.

train_next_batch(batch_size=None)[source]

Return a training dataset of size batch_size.

deepxde.data.dataset module

class deepxde.data.dataset.DataSet(X_train=None, y_train=None, X_test=None, y_test=None, fname_train=None, fname_test=None, col_x=None, col_y=None, standardize=False)[source]

Bases: Data

Data set for function fitting.

Parameters:
  • col_x – List of integers.

  • col_y – List of integers.
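
Example

A minimal usage sketch with in-memory arrays (the data are illustrative; alternatively, pass fname_train/fname_test together with col_x/col_y to load from files):

import numpy as np
import deepxde as dde

X = np.random.rand(100, 2)
y = np.sum(X, axis=1, keepdims=True)
data = dde.data.DataSet(X_train=X, y_train=y, X_test=X, y_test=y, standardize=True)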

losses(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses, i.e., constraints.

test()[source]

Return a test dataset.

train_next_batch(batch_size=None)[source]

Return a training dataset of size batch_size.

transform_inputs(x)[source]

deepxde.data.fpde module

class deepxde.data.fpde.FPDE(geometry, fpde, alpha, bcs, resolution, meshtype='dynamic', num_domain=0, num_boundary=0, train_distribution='Hammersley', anchors=None, solution=None, num_test=None)[source]

Bases: PDE

Fractional PDE solver.

The D-dimensional fractional Laplacian of order alpha/2 (1 < alpha < 2) is defined as: (-Delta)^(alpha/2) u(x) = C(alpha, D) int_{||theta||=1} D_theta^alpha u(x) d theta, where C(alpha, D) = gamma((1-alpha)/2) * gamma((D+alpha)/2) / (2 pi^((D+1)/2)), D_theta^alpha is the Riemann-Liouville directional fractional derivative, and theta is the differentiation direction vector. The solution u(x) is assumed to be identically zero on the boundary and in the exterior of the domain. When D = 1, C(alpha, D) = 1 / (2 cos(alpha * pi / 2)).

This solver does not include the constant C(alpha, D) of the fractional Laplacian; it only discretizes int_{||theta||=1} D_theta^alpha u(x) d theta, approximating D_theta^alpha by the Grunwald-Letnikov formula.

References

G. Pang, L. Lu, & G. E. Karniadakis. fPINNs: Fractional physics-informed neural networks. SIAM Journal on Scientific Computing, 41(4), A2603–A2626, 2019.
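
Example

For orientation, a minimal construction sketch, assuming the TensorFlow backend (the fractional order, the zero Dirichlet BC, the forcing f(x) = 1, and the static resolution are illustrative; see the fPINNs demos for a complete problem, including the proper scaling of int_mat):

import deepxde as dde
from deepxde.backend import tf  # assumes the TensorFlow backend

alpha = 1.5  # fractional order, 1 < alpha < 2
geom = dde.geometry.Interval(-1, 1)

def fpde(x, y, int_mat):
    # int_mat discretizes int_{||theta||=1} D_theta^alpha u(x) d theta, so
    # int_mat @ y approximates the fractional Laplacian at the training points.
    lhs = tf.matmul(int_mat, y)
    return lhs - 1  # illustrative forcing f(x) = 1

bc = dde.icbc.DirichletBC(geom, lambda x: 0, lambda _, on_boundary: on_boundary)
data = dde.data.FPDE(geom, fpde, alpha, bc, [101], meshtype="static")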

get_int_matrix(training)[source]
losses_test(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses for test dataset, i.e., constraints.

losses_train(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses for training dataset, i.e., constraints.

test()[source]

Return a test dataset.

test_points()[source]
train_next_batch(batch_size=None)[source]

Return a training dataset of size batch_size.

class deepxde.data.fpde.Scheme(meshtype, resolution)[source]

Bases: object

Fractional Laplacian discretization.

Discretize the fractional Laplacian using a quadrature rule for the integral with respect to the directions and the Grunwald-Letnikov (GL) formula for the Riemann-Liouville directional fractional derivative.

Parameters:
  • meshtype (string) – “static” or “dynamic”.

  • resolution – A list of integers. The first number is the number of quadrature points in the first direction, …, and the last number is the GL parameter.

References

G. Pang, L. Lu, & G. E. Karniadakis. fPINNs: Fractional physics-informed neural networks. SIAM Journal on Scientific Computing, 41(4), A2603–A2626, 2019.

class deepxde.data.fpde.TimeFPDE(geometryxtime, fpde, alpha, ic_bcs, resolution, meshtype='dynamic', num_domain=0, num_boundary=0, num_initial=0, train_distribution='Hammersley', anchors=None, solution=None, num_test=None)[source]

Bases: FPDE

Time-dependent fractional PDE solver.

The D-dimensional fractional Laplacian of order alpha/2 (1 < alpha < 2) is defined as: (-Delta)^(alpha/2) u(x) = C(alpha, D) int_{||theta||=1} D_theta^alpha u(x) d theta, where C(alpha, D) = gamma((1-alpha)/2) * gamma((D+alpha)/2) / (2 pi^((D+1)/2)), D_theta^alpha is the Riemann-Liouville directional fractional derivative, and theta is the differentiation direction vector. The solution u(x) is assumed to be identically zero on the boundary and in the exterior of the domain. When D = 1, C(alpha, D) = 1 / (2 cos(alpha * pi / 2)).

This solver does not include the constant C(alpha, D) of the fractional Laplacian; it only discretizes int_{||theta||=1} D_theta^alpha u(x) d theta, approximating D_theta^alpha by the Grunwald-Letnikov formula.

References

G. Pang, L. Lu, & G. E. Karniadakis. fPINNs: Fractional physics-informed neural networks. SIAM Journal on Scientific Computing, 41(4), A2603–A2626, 2019.

get_int_matrix(training)[source]
test()[source]

Return a test dataset.

train_next_batch(batch_size=None)[source]

Return a training dataset of size batch_size.

train_points()[source]

deepxde.data.func_constraint module

class deepxde.data.func_constraint.FuncConstraint(geom, constraint, func, num_train, anchors, num_test, dist_train='uniform')[source]

Bases: Data

Function approximation with constraints.

losses(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses, i.e., constraints.

test()[source]

Return a test dataset.

train_next_batch(batch_size=None)[source]

Return a training dataset of size batch_size.

deepxde.data.function module

class deepxde.data.function.Function(geometry, function, num_train, num_test, train_distribution='uniform', online=False)[source]

Bases: Data

Approximate a function via a network.

Parameters:
  • geometry – The domain of the function. Instance of Geometry.

  • function – The function to be approximated. A callable that takes a NumPy array as input and returns a NumPy array of the corresponding function values.

  • num_train (int) – The number of training points sampled inside the domain.

  • num_test (int) – The number of points sampled inside the domain for testing.

  • train_distribution (string) – The distribution to sample training points. One of the following: “uniform” (equispaced grid), “pseudo” (pseudorandom), “LHS” (Latin hypercube sampling), “Halton” (Halton sequence), “Hammersley” (Hammersley sequence), or “Sobol” (Sobol sequence).

  • online (bool) – If True, resample the pseudorandom training points every training step; otherwise, use the same training points throughout training.
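
Example

A minimal usage sketch (the target function and the point counts are illustrative):

import numpy as np
import deepxde as dde

geom = dde.geometry.Interval(0, 1)
data = dde.data.Function(geom, lambda x: np.sin(np.pi * x), num_train=64, num_test=100)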

losses(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses, i.e., constraints.

test()[source]

Return a test dataset.

train_next_batch(batch_size=None)[source]

Return a training dataset of size batch_size.

deepxde.data.function_spaces module

class deepxde.data.function_spaces.Chebyshev(N=100, M=1)[source]

Bases: FunctionSpace

Chebyshev polynomial.

p(x) = sum_{i=0}^{N-1} a_i T_i(x), where T_i is the Chebyshev polynomial of the first kind. Note: The domain of x is scaled from [-1, 1] to [0, 1].

Parameters:
  • N (int)

  • M (float) – M > 0. The coefficients a_i are randomly sampled from [-M, M].
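
Example

A minimal usage sketch (N, M, and the evaluation grid are illustrative):

import numpy as np
from deepxde.data.function_spaces import Chebyshev

space = Chebyshev(N=5, M=1)
feats = space.random(2)                  # (2, 5) coefficients drawn from [-1, 1]
xs = np.linspace(0, 1, num=10)[:, None]  # points in the scaled domain [0, 1]
ys = space.eval_batch(feats, xs)         # (2, 10) function values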

eval_batch(features, xs)[source]

Evaluate a list of functions at a list of points.

Parameters:
  • features – A NumPy array of shape (n_functions, n_features). A list of the feature vectors of the functions to be evaluated.

  • xs – A NumPy array of shape (n_points, dim). A list of points to be evaluated.

Returns:

A NumPy array of shape (n_functions, n_points). The values of different functions at different points.

eval_one(feature, x)[source]

Evaluate the function at one point.

Parameters:
  • feature – The feature vector of the function to be evaluated.

  • x – The point to be evaluated.

Returns:

The function value at x.

Return type:

float

random(size)[source]

Generate feature vectors of random functions.

Parameters:

size (int) – The number of random functions to generate.

Returns:

A NumPy array of shape (size, n_features).

class deepxde.data.function_spaces.FunctionSpace[source]

Bases: ABC

Function space base class.

Example

import numpy as np
import deepxde as dde

space = dde.data.GRF()
feats = space.random(10)
xs = np.linspace(0, 1, num=100)[:, None]
y = space.eval_batch(feats, xs)

abstract eval_batch(features, xs)[source]

Evaluate a list of functions at a list of points.

Parameters:
  • features – A NumPy array of shape (n_functions, n_features). A list of the feature vectors of the functions to be evaluated.

  • xs – A NumPy array of shape (n_points, dim). A list of points to be evaluated.

Returns:

A NumPy array of shape (n_functions, n_points). The values of different functions at different points.

abstract eval_one(feature, x)[source]

Evaluate the function at one point.

Parameters:
  • feature – The feature vector of the function to be evaluated.

  • x – The point to be evaluated.

Returns:

The function value at x.

Return type:

float

abstract random(size)[source]

Generate feature vectors of random functions.

Parameters:

size (int) – The number of random functions to generate.

Returns:

A NumPy array of shape (size, n_features).

class deepxde.data.function_spaces.GRF(T=1, kernel='RBF', length_scale=1, N=1000, interp='cubic')[source]

Bases: FunctionSpace

Gaussian random field (Gaussian process) in 1D.

The random sampling algorithm is based on Cholesky decomposition of the covariance matrix.

Parameters:
  • T (float) – T > 0. The domain is [0, T].

  • kernel (str) – Name of the kernel function. “RBF” (radial-basis function kernel, squared-exponential kernel, Gaussian kernel), “AE” (absolute exponential kernel), or “ExpSineSquared” (Exp-Sine-Squared kernel, periodic kernel).

  • length_scale (float) – The length scale of the kernel.

  • N (int) – The size of the covariance matrix.

  • interp (str) – The interpolation to interpolate the random function. “linear”, “quadratic”, or “cubic”.

eval_batch(features, xs)[source]

Evaluate a list of functions at a list of points.

Parameters:
  • features – A NumPy array of shape (n_functions, n_features). A list of the feature vectors of the functions to be evaluated.

  • xs – A NumPy array of shape (n_points, dim). A list of points to be evaluated.

Returns:

A NumPy array of shape (n_functions, n_points). The values of different functions at different points.

eval_one(feature, x)[source]

Evaluate the function at one point.

Parameters:
  • feature – The feature vector of the function to be evaluated.

  • x – The point to be evaluated.

Returns:

The function value at x.

Return type:

float

random(size)[source]

Generate feature vectors of random functions.

Parameters:

size (int) – The number of random functions to generate.

Returns:

A NumPy array of shape (size, n_features).

class deepxde.data.function_spaces.GRF2D(kernel='RBF', length_scale=1, N=100, interp='splinef2d')[source]

Bases: FunctionSpace

Gaussian random field in [0, 1]x[0, 1].

The random sampling algorithm is based on Cholesky decomposition of the covariance matrix.

Parameters:
  • kernel (str) – The kernel function. “RBF” (radial-basis function) or “AE” (absolute exponential).

  • length_scale (float) – The length scale of the kernel.

  • N (int) – The size of the covariance matrix.

  • interp (str) – The interpolation to interpolate the random function. “linear” or “splinef2d”.

Example

import matplotlib.pyplot as plt
import numpy as np
import deepxde as dde

space = dde.data.GRF2D(length_scale=0.1)
features = space.random(3)
x = np.linspace(0, 1, num=500)
y = np.linspace(0, 1, num=500)
xv, yv = np.meshgrid(x, y)
sensors = np.vstack((np.ravel(xv), np.ravel(yv))).T
u = space.eval_batch(features, sensors)
for ui in u:
    plt.figure()
    plt.imshow(np.reshape(ui, (len(y), len(x))))
    plt.colorbar()
plt.show()

eval_batch(features, xs)[source]

Evaluate a list of functions at a list of points.

Parameters:
  • features – A NumPy array of shape (n_functions, n_features). A list of the feature vectors of the functions to be evaluated.

  • xs – A NumPy array of shape (n_points, dim). A list of points to be evaluated.

Returns:

A NumPy array of shape (n_functions, n_points). The values of different functions at different points.

eval_one(feature, x)[source]

Evaluate the function at one point.

Parameters:
  • feature – The feature vector of the function to be evaluated.

  • x – The point to be evaluated.

Returns:

The function value at x.

Return type:

float

random(size)[source]

Generate feature vectors of random functions.

Parameters:

size (int) – The number of random functions to generate.

Returns:

A NumPy array of shape (size, n_features).

class deepxde.data.function_spaces.GRF_KL(T=1, kernel='RBF', length_scale=1, num_eig=10, N=100, interp='cubic')[source]

Bases: FunctionSpace

Gaussian random field (Gaussian process) in 1D.

The random sampling algorithm is based on truncated Karhunen-Loeve (KL) expansion.

Parameters:
  • T (float) – T > 0. The domain is [0, T].

  • kernel (str) – The kernel function. “RBF” (radial-basis function) or “AE” (absolute exponential).

  • length_scale (float) – The length scale of the kernel.

  • num_eig (int) – The number of eigenfunctions in KL expansion to be kept.

  • N (int) – Each eigenfunction is discretized at N points in [0, T].

  • interp (str) – The interpolation to interpolate the random function. “linear”, “quadratic”, or “cubic”.
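
Example

A minimal sketch of evaluating the KL eigenfunctions (the sensor grid is illustrative):

import numpy as np
from deepxde.data.function_spaces import GRF_KL

space = GRF_KL(T=1, length_scale=0.2, num_eig=10)
sensors = np.linspace(0, 1, num=100)[:, None]
phi = space.bases(sensors)  # the num_eig eigenfunctions evaluated at the sensors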

bases(sensors)[source]

Evaluate the eigenfunctions at a list of points sensors.

eval_batch(features, xs)[source]

Evaluate a list of functions at a list of points.

Parameters:
  • features – A NumPy array of shape (n_functions, n_features). A list of the feature vectors of the functions to be evaluated.

  • xs – A NumPy array of shape (n_points, dim). A list of points to be evaluated.

Returns:

A NumPy array of shape (n_functions, n_points). The values of different functions at different points.

eval_one(feature, x)[source]

Evaluate the function at one point.

Parameters:
  • feature – The feature vector of the function to be evaluated.

  • x – The point to be evaluated.

Returns:

The function value at x.

Return type:

float

random(size)[source]

Generate feature vectors of random functions.

Parameters:

size (int) – The number of random functions to generate.

Returns:

A NumPy array of shape (size, n_features).

class deepxde.data.function_spaces.PowerSeries(N=100, M=1)[source]

Bases: FunctionSpace

Power series.

p(x) = sum_{i=0}^{N-1} a_i x^i

Parameters:
  • N (int)

  • M (float) – M > 0. The coefficients a_i are randomly sampled from [-M, M].

eval_batch(features, xs)[source]

Evaluate a list of functions at a list of points.

Parameters:
  • features – A NumPy array of shape (n_functions, n_features). A list of the feature vectors of the functions to be evaluated.

  • xs – A NumPy array of shape (n_points, dim). A list of points to be evaluated.

Returns:

A NumPy array of shape (n_functions, n_points). The values of different functions at different points.

eval_one(feature, x)[source]

Evaluate the function at one point.

Parameters:
  • feature – The feature vector of the function to be evaluated.

  • x – The point to be evaluated.

Returns:

The function value at x.

Return type:

float

random(size)[source]

Generate feature vectors of random functions.

Parameters:

size (int) – The number of random functions to generate.

Returns:

A NumPy array of shape (size, n_features).

deepxde.data.function_spaces.wasserstein2(space1, space2)[source]

Compute the 2-Wasserstein (W2) metric to measure the distance between two GRFs.
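
Example

A minimal usage sketch (the two length scales are illustrative):

from deepxde.data.function_spaces import GRF, wasserstein2

space1 = GRF(length_scale=0.1)
space2 = GRF(length_scale=0.2)
print(wasserstein2(space1, space2))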

deepxde.data.helper module

deepxde.data.helper.one_function(dim_outputs)[source]
deepxde.data.helper.zero_function(dim_outputs)[source]

deepxde.data.ide module

class deepxde.data.ide.IDE(geometry, ide, bcs, quad_deg, kernel=None, num_domain=0, num_boundary=0, train_distribution='Hammersley', anchors=None, solution=None, num_test=None)[source]

Bases: PDE

IDE solver.

The current version only supports 1D problems with the integral int_0^x K(x, t) y(t) dt.

Parameters:

kernel – The kernel function K(x, t), a callable mapping (x, t) to R.
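
Example

A minimal construction sketch, assuming the TensorFlow backend (the kernel, the residual y' + y = int_0^x K(x, t) y(t) dt, and the quadrature degree are illustrative; x is internally augmented with quadrature points, hence the slice back to the training points):

import numpy as np
import deepxde as dde
from deepxde.backend import tf  # assumes the TensorFlow backend

geom = dde.geometry.Interval(0, 5)
kernel = lambda x, t: np.exp(t - x)  # illustrative K(x, t)

def ide(x, y, int_mat):
    # int_mat @ y approximates int_0^x K(x, t) y(t) dt at the training points.
    int_term = tf.matmul(int_mat, y)
    dy_x = dde.grad.jacobian(y, x)
    return (dy_x + y)[: tf.size(int_term)] - int_term

data = dde.data.IDE(geom, ide, [], quad_deg=16, kernel=kernel, num_domain=16)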

get_int_matrix(training)[source]
losses_test(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses for test dataset, i.e., constraints.

losses_train(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses for training dataset, i.e., constraints.

quad_points(X)[source]
test()[source]

Return a test dataset.

test_points()[source]
train_next_batch(batch_size=None)[source]

Return a training dataset of size batch_size.

deepxde.data.mf module

class deepxde.data.mf.MfDataSet(X_lo_train=None, X_hi_train=None, y_lo_train=None, y_hi_train=None, X_hi_test=None, y_hi_test=None, fname_lo_train=None, fname_hi_train=None, fname_hi_test=None, col_x=None, col_y=None, standardize=False)[source]

Bases: Data

Multifidelity function approximation from a data set.

Parameters:
  • col_x – List of integers.

  • col_y – List of integers.

losses_test(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses for test dataset, i.e., constraints.

losses_train(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses for training dataset, i.e., constraints.

test()[source]

Return a test dataset.

train_next_batch(batch_size=None)[source]

Return a training dataset of size batch_size.

class deepxde.data.mf.MfFunc(geom, func_lo, func_hi, num_lo, num_hi, num_test, dist_train='uniform')[source]

Bases: Data

Multifidelity function approximation.
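
Example

A minimal usage sketch (the low- and high-fidelity functions and the point counts are illustrative):

import numpy as np
import deepxde as dde
from deepxde.data.mf import MfFunc

geom = dde.geometry.Interval(0, 1)
func_lo = lambda x: 0.5 * np.sin(8 * np.pi * x)  # cheap low-fidelity model
func_hi = lambda x: np.sin(8 * np.pi * x)        # expensive high-fidelity model
data = MfFunc(geom, func_lo, func_hi, num_lo=100, num_hi=10, num_test=200)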

losses(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses, i.e., constraints.

test()[source]

Return a test dataset.

train_next_batch(batch_size=None)[source]

Return a training dataset of size batch_size.

deepxde.data.pde module

class deepxde.data.pde.PDE(geometry, pde, bcs, num_domain=0, num_boundary=0, train_distribution='Hammersley', anchors=None, exclusions=None, solution=None, num_test=None, auxiliary_var_function=None)[source]

Bases: Data

ODE or time-independent PDE solver.

Parameters:
  • geometry – Instance of Geometry.

  • pde – A global PDE or a list of PDEs. None if no global PDE.

  • bcs – A boundary condition or a list of boundary conditions. Use [] if no boundary condition.

  • num_domain (int) – The number of training points sampled inside the domain.

  • num_boundary (int) – The number of training points sampled on the boundary.

  • train_distribution (string) – The distribution to sample training points. One of the following: “uniform” (equispaced grid), “pseudo” (pseudorandom), “LHS” (Latin hypercube sampling), “Halton” (Halton sequence), “Hammersley” (Hammersley sequence), or “Sobol” (Sobol sequence).

  • anchors – A NumPy array of training points, in addition to the num_domain and num_boundary sampled points.

  • exclusions – A NumPy array of points to be excluded from training.

  • solution – The reference solution.

  • num_test – The number of points sampled inside the domain for testing PDE loss. The testing points for BCs/ICs are the same set of points used for training. If None, then the training points will be used for testing.

  • auxiliary_var_function – A function that inputs train_x or test_x and outputs auxiliary variables.

Warning

The testing points include both points inside the domain and points on the boundary, and the two sets may not have the same density, so the testing points as a whole may not be uniformly distributed. As a result, if you have a reference solution (solution) and would like to compute a metric such as

Model.compile(metrics=["l2 relative error"])

then the metric may not be very accurate. To compute the metric more accurately, you can sample points manually, and then use Model.predict() to predict the solution on these points and compute the metric:

x = geom.uniform_points(num, boundary=True)
y_true = ...
y_pred = model.predict(x)
error = dde.metrics.l2_relative_error(y_true, y_pred)

train_x_all

A NumPy array of points for PDE training. train_x_all is unordered and contains no duplicate points. If a PDE is defined, train_x_all is used as the PDE training points.

train_x_bc

A NumPy array of the training points for BCs. train_x_bc is constructed from train_x_all at the first step of training; by default it is not updated when train_x_all changes. To update train_x_bc, set it to None, call bc_points(), and then update the loss function via model.compile().

num_bcs

num_bcs[i] is the number of points for bcs[i].

Type:

list

train_x

A NumPy array of the points fed into the network for training. train_x is ordered from BC points (train_x_bc) to PDE points (train_x_all), and may contain duplicate points.

train_aux_vars

Auxiliary variables that associate with train_x.

test_x

A NumPy array of the points fed into the network for testing, ordered from BCs to PDE. The BC points are exactly the same points as in train_x_bc.

test_aux_vars

Auxiliary variables that associate with test_x.
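
Example

For orientation, a minimal construction sketch (the Poisson-type residual u'' = 2, the zero Dirichlet BC, and the point counts are illustrative):

import deepxde as dde

geom = dde.geometry.Interval(-1, 1)

def pde(x, y):
    # Residual of u'' = 2 (illustrative).
    return dde.grad.hessian(y, x) - 2

bc = dde.icbc.DirichletBC(geom, lambda x: 0, lambda _, on_boundary: on_boundary)
data = dde.data.PDE(geom, pde, bc, num_domain=16, num_boundary=2, num_test=100)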

add_anchors(anchors)[source]

Add new points for training PDE losses.

The BC points will not be updated.

bc_points()[source]
losses(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses, i.e., constraints.

replace_with_anchors(anchors)[source]

Replace the current PDE training points with anchors.

The BC points will not be changed.

resample_train_points(pde_points=True, bc_points=True)[source]

Resample the training points for PDE and/or BC.

test()[source]

Return a test dataset.

test_points()[source]
train_next_batch(batch_size=None)[source]

Return a training dataset of size batch_size.

train_points()[source]
class deepxde.data.pde.TimePDE(geometryxtime, pde, ic_bcs, num_domain=0, num_boundary=0, num_initial=0, train_distribution='Hammersley', anchors=None, exclusions=None, solution=None, num_test=None, auxiliary_var_function=None)[source]

Bases: PDE

Time-dependent PDE solver.

Parameters:

num_initial (int) – The number of training points sampled on the initial location.
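
Example

A minimal construction sketch (the heat-equation residual, the zero BC/IC, and the point counts are illustrative):

import deepxde as dde

geom = dde.geometry.Interval(0, 1)
timedomain = dde.geometry.TimeDomain(0, 1)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

def pde(x, y):
    # Residual of u_t = u_xx; column 0 of x is space, column 1 is time.
    dy_t = dde.grad.jacobian(y, x, i=0, j=1)
    dy_xx = dde.grad.hessian(y, x, i=0, j=0)
    return dy_t - dy_xx

bc = dde.icbc.DirichletBC(geomtime, lambda x: 0, lambda _, on_boundary: on_boundary)
ic = dde.icbc.IC(geomtime, lambda x: 0, lambda _, on_initial: on_initial)
data = dde.data.TimePDE(geomtime, pde, [bc, ic], num_domain=400, num_boundary=40, num_initial=20)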

train_points()[source]

deepxde.data.pde_operator module

class deepxde.data.pde_operator.PDEOperator(pde, function_space, evaluation_points, num_function, function_variables=None, num_test=None)[source]

Bases: Data

PDE solution operator.

Parameters:
  • pde – Instance of dde.data.PDE or dde.data.TimePDE.

  • function_space – Instance of dde.data.FunctionSpace.

  • evaluation_points – A NumPy array of shape (n_points, dim). Discretize the input function sampled from function_space using pointwise evaluations at a set of points as the input of the branch net.

  • num_function (int) – The number of functions for training.

  • function_variables – None or a list of integers. The functions in the function_space may not have the same domain as the PDE. For example, the PDE is defined on a spatio-temporal domain (x, t), but the function is the IC, which is a function of x only. In this case, we need to specify the variables of the function by function_variables=[0], where 0 indicates the first variable x. If None, then we assume the domains of the function and the PDE are the same.

  • num_test – The number of functions for testing PDE loss. The testing functions for BCs/ICs are the same functions used for training. If None, then the training functions will be used for testing.

train_bc

A tuple of three NumPy arrays (v, x, vx) fed into PIDeepONet for training BCs/ICs.

num_bcs

num_bcs[i] is the number of points for bcs[i].

Type:

list

train_x

A tuple of two NumPy arrays (v, x) fed into PIDeepONet for training. v is the function input to the branch net; x is the point input to the trunk net. train_x is ordered from BCs/ICs (train_bc) to PDEs.

train_aux_vars

v(x), i.e., the value of v evaluated at x.
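
Example

A minimal construction sketch mirroring DeepXDE's operator-learning demos (an antiderivative-type operator du/dx = v(x); the function space, sensor locations, and counts are illustrative, and pde_data, space, and eval_pts are local names):

import numpy as np
import deepxde as dde

geom = dde.geometry.TimeDomain(0, 1)

def pde(x, u, v):
    # Residual of du/dx = v(x), where v is the input function (illustrative).
    return dde.grad.jacobian(u, x) - v

ic = dde.icbc.IC(geom, lambda x: 0, lambda _, on_initial: on_initial)
pde_data = dde.data.PDE(geom, pde, ic, num_domain=20, num_boundary=2)

space = dde.data.GRF(length_scale=0.2)         # input function space
eval_pts = np.linspace(0, 1, num=50)[:, None]  # branch-net sensor locations
data = dde.data.PDEOperator(pde_data, space, eval_pts, num_function=100)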

bc_inputs(func_feats, func_vals)[source]
gen_inputs(func_feats, func_vals, points)[source]
losses(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses, i.e., constraints.

resample_train_points(pde_points=True, bc_points=True)[source]

Resample the training points for the operator.

test()[source]

Return a test dataset.

train_next_batch(batch_size=None)[source]

Return a training dataset of the size batch_size.

class deepxde.data.pde_operator.PDEOperatorCartesianProd(pde, function_space, evaluation_points, num_function, function_variables=None, num_test=None, batch_size=None)[source]

Bases: Data

PDE solution operator with data in the format of Cartesian product.

Parameters:
  • pde – Instance of dde.data.PDE or dde.data.TimePDE.

  • function_space – Instance of dde.data.FunctionSpace.

  • evaluation_points – A NumPy array of shape (n_points, dim). Discretize the input function sampled from function_space using pointwise evaluations at a set of points as the input of the branch net.

  • num_function (int) – The number of functions for training.

  • function_variables – None or a list of integers. The functions in the function_space may not have the same domain as the PDE. For example, the PDE is defined on a spatio-temporal domain (x, t), but the function is the IC, which is a function of x only. In this case, we need to specify the variables of the function by function_variables=[0], where 0 indicates the first variable x. If None, then we assume the domains of the function and the PDE are the same.

  • num_test – The number of functions for testing PDE loss. The testing functions for BCs/ICs are the same functions used for training. If None, then the training functions will be used for testing.

  • batch_size – Integer or None.

train_x

A tuple of two NumPy arrays (v, x) fed into PIDeepONet for training. v is the function input to the branch net and has the shape (N1, dim1); x is the point input to the trunk net and has the shape (N2, dim2).

train_aux_vars

v(x), i.e., the value of v evaluated at x, has the shape (N1, N2).
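
Example

Construction mirrors PDEOperator, with the data kept in the aligned Cartesian-product format; a sketch reusing pde_data, space, and eval_pts from the PDEOperator example above (batch_size and num_test are illustrative):

data = dde.data.PDEOperatorCartesianProd(
    pde_data, space, eval_pts, num_function=100, num_test=50, batch_size=32
)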

losses_test(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses for test dataset, i.e., constraints.

losses_train(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses for training dataset, i.e., constraints.

test()[source]

Return a test dataset.

train_next_batch(batch_size=None)[source]

Return a training dataset of size batch_size.

deepxde.data.quadruple module

class deepxde.data.quadruple.Quadruple(X_train, y_train, X_test, y_test)[source]

Bases: Data

Dataset with each data point as a quadruple.

The tuple of the first three elements is the input, and the fourth element is the output. This dataset can be used with the network MIONet for operator learning.

Parameters:
  • X_train – A tuple of three NumPy arrays.

  • y_train – A NumPy array.

losses(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses, i.e., constraints.

test()[source]

Return a test dataset.

train_next_batch(batch_size=None)[source]

Return a training dataset of size batch_size.

class deepxde.data.quadruple.QuadrupleCartesianProd(X_train, y_train, X_test, y_test)[source]

Bases: Data

Cartesian product input data format for the MIONet architecture.

This dataset can be used with the network MIONetCartesianProd for operator learning.

Parameters:
  • X_train – A tuple of three NumPy arrays. The first element has the shape (N1, dim1), the second element has the shape (N1, dim2), and the third element has the shape (N2, dim3).

  • y_train – A NumPy array of shape (N1, N2).
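
Example

A minimal shape-checking sketch (all sizes are illustrative):

import numpy as np
from deepxde.data.quadruple import QuadrupleCartesianProd

N1, dim1, dim2, N2, dim3 = 50, 100, 100, 40, 1
X = (np.random.rand(N1, dim1), np.random.rand(N1, dim2), np.random.rand(N2, dim3))
y = np.random.rand(N1, N2)
data = QuadrupleCartesianProd(X_train=X, y_train=y, X_test=X, y_test=y)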

losses(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses, i.e., constraints.

test()[source]

Return a test dataset.

train_next_batch(batch_size=None)[source]

Return a training dataset of size batch_size.

deepxde.data.sampler module

class deepxde.data.sampler.BatchSampler(num_samples, shuffle=True)[source]

Bases: object

Samples a mini-batch of indices.

The indices are repeated indefinitely. It has the same effect as:

indices = tf.data.Dataset.range(num_samples)
indices = indices.repeat().shuffle(num_samples).batch(batch_size)
iterator = iter(indices)
batch_indices = iterator.get_next()

However, tf.data.Dataset.__iter__() is only supported inside of tf.function or when eager execution is enabled. tf.data.Dataset.make_one_shot_iterator() supports graph mode, but is too slow.

This class is not implemented as a Python Iterator, so that it can support a dynamic batch size.

Parameters:
  • num_samples (int) – The number of samples.

  • shuffle (bool) – Set to True to have the indices reshuffled at every epoch.

property epochs_completed
get_next(batch_size)[source]

Returns the indices of the next batch.

Parameters:

batch_size (int) – The number of elements to combine in a single batch.
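
Example

A minimal usage sketch:

from deepxde.data.sampler import BatchSampler

sampler = BatchSampler(10, shuffle=True)
idx = sampler.get_next(4)  # e.g., 4 shuffled indices drawn from 0..9
idx = sampler.get_next(4)  # continues the epoch; reshuffles once exhausted
print(sampler.epochs_completed)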

deepxde.data.triple module

class deepxde.data.triple.Triple(X_train, y_train, X_test, y_test)[source]

Bases: Data

Dataset with each data point as a triple.

The pair of the first two elements is the input, and the third element is the output. This dataset can be used with the network DeepONet for operator learning.

Parameters:
  • X_train – A tuple of two NumPy arrays.

  • y_train – A NumPy array.

References

L. Lu, P. Jin, G. Pang, Z. Zhang, & G. E. Karniadakis. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3, 218–229, 2021.
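
Example

A minimal shape-checking sketch (all sizes are illustrative; the two input arrays are aligned row by row):

import numpy as np
from deepxde.data.triple import Triple

n = 50  # number of data points
X = (np.random.rand(n, 100), np.random.rand(n, 1))  # (branch, trunk) inputs
y = np.random.rand(n, 1)
data = Triple(X_train=X, y_train=y, X_test=X, y_test=y)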

losses(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses, i.e., constraints.

test()[source]

Return a test dataset.

train_next_batch(batch_size=None)[source]

Return a training dataset of size batch_size.

class deepxde.data.triple.TripleCartesianProd(X_train, y_train, X_test, y_test)[source]

Bases: Data

Dataset with each data point as a triple, where the input pair is formed from the Cartesian product of the first two arrays. Expanding this Cartesian product recovers a plain Triple dataset.

This dataset can be used with the network DeepONetCartesianProd for operator learning.

Parameters:
  • X_train – A tuple of two NumPy arrays. The first element has the shape (N1, dim1), and the second element has the shape (N2, dim2).

  • y_train – A NumPy array of shape (N1, N2).
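
Example

A minimal shape-checking sketch (all sizes are illustrative):

import numpy as np
from deepxde.data.triple import TripleCartesianProd

N1, dim1, N2, dim2 = 50, 100, 40, 1
X = (np.random.rand(N1, dim1), np.random.rand(N2, dim2))
y = np.random.rand(N1, N2)
data = TripleCartesianProd(X_train=X, y_train=y, X_test=X, y_test=y)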

losses(targets, outputs, loss_fn, inputs, model, aux=None)[source]

Return a list of losses, i.e., constraints.

test()[source]

Return a test dataset.

train_next_batch(batch_size=None)[source]

Return a training dataset of the size batch_size.