deepxde.nn.paddle

deepxde.nn.paddle.deeponet module

class deepxde.nn.paddle.deeponet.DeepONet(layer_sizes_branch, layer_sizes_trunk, activation, kernel_initializer, use_bias=True)[source]

Bases: NN

Deep operator network.

Lu et al. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nat Mach Intell, 2021.

Parameters:
  • layer_sizes_branch – A list of integers defining the widths of a fully-connected network, or (dim, f), where dim is the input dimension and f is a network function. The widths of the last layers of the branch and trunk nets must be equal.

  • layer_sizes_trunk (list) – A list of integers defining the widths of a fully-connected network.

  • activation – If activation is a string, the same activation is used in both the trunk and branch nets. If activation is a dict, the trunk net uses activation["trunk"] and the branch net uses activation["branch"].
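
A minimal construction sketch (assuming the paddle backend is active, e.g. via DDE_BACKEND=paddle; the sensor count, widths, activation, and initializer below are illustrative choices, not defaults):

    import deepxde as dde

    # Branch net: an input function sampled at 100 sensors -> 40 -> 40.
    # Trunk net: a 1D query coordinate -> 40 -> 40.
    # The last widths (40 and 40) match, as required above.
    net = dde.nn.DeepONet(
        [100, 40, 40],
        [1, 40, 40],
        "relu",
        "Glorot normal",
    )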

forward(inputs)[source]

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters:
  • *inputs (tuple) – unpacked tuple arguments

  • **kwargs (dict) – unpacked dict arguments

class deepxde.nn.paddle.deeponet.DeepONetCartesianProd(layer_sizes_branch, layer_sizes_trunk, activation, kernel_initializer, regularization=None)[source]

Bases: NN

Deep operator network for dataset in the format of Cartesian product.

Parameters:
  • layer_sizes_branch – A list of integers defining the widths of a fully-connected network, or (dim, f), where dim is the input dimension and f is a network function. The widths of the last layers of the branch and trunk nets must be equal.

  • layer_sizes_trunk (list) – A list of integers defining the widths of a fully-connected network.

  • activation – If activation is a string, the same activation is used in both the trunk and branch nets. If activation is a dict, the trunk net uses activation["trunk"] and the branch net uses activation["branch"].
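
A minimal sketch of the Cartesian-product format (assuming the paddle backend is active; the shapes and hyperparameters are illustrative): the branch input stacks N input functions sampled at m sensors, the trunk input stacks P query locations, and the network output has shape (N, P).

    import numpy as np
    import deepxde as dde

    # Branch net: m = 100 sensors -> 40 -> 40; trunk net: 1D coordinate -> 40 -> 40.
    net = dde.nn.DeepONetCartesianProd(
        [100, 40, 40],
        [1, 40, 40],
        "relu",
        "Glorot normal",
    )
    v = np.random.rand(8, 100).astype("float32")         # N = 8 input functions
    x = np.linspace(0, 1, 50, dtype="float32")[:, None]  # P = 50 query points
    # When (v, x) is fed to the network (as paddle tensors, e.g. inside
    # dde.Model), the output has shape (N, P) = (8, 50).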

forward(inputs)[source]

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters:
  • *inputs (tuple) – unpacked tuple arguments

  • **kwargs (dict) – unpacked dict arguments

deepxde.nn.paddle.fnn module

class deepxde.nn.paddle.fnn.FNN(layer_sizes, activation, kernel_initializer)[source]

Bases: NN

Fully-connected neural network.
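
A minimal sketch (assuming the paddle backend is active; the widths, activation, and initializer are illustrative):

    import deepxde as dde

    # 2 inputs -> three hidden layers of 50 neurons -> 1 output.
    net = dde.nn.FNN([2, 50, 50, 50, 1], "tanh", "Glorot uniform")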

forward(inputs)[source]

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters:
  • *inputs (tuple) – unpacked tuple arguments

  • **kwargs (dict) – unpacked dict arguments

class deepxde.nn.paddle.fnn.PFNN(layer_sizes, activation, kernel_initializer)[source]

Bases: NN

Parallel fully-connected network that uses independent sub-networks for each network output.

Parameters:
  • layer_sizes – A nested list that defines the architecture of the neural network (how the layers are connected). If layer_sizes[i] is an int, it represents one layer shared by all the outputs; if layer_sizes[i] is a list, it represents len(layer_sizes[i]) sub-layers, each of which is exclusively used by one output. Note that len(layer_sizes[i]) should equal the number of outputs. Every number specifies the number of neurons in that layer.

  • activation – A string representing the activation used in the fully-connected net.

  • kernel_initializer – Initializer for the kernel weights matrix.
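
A sketch of the nested layer_sizes format (assuming the paddle backend is active; the widths are illustrative):

    import deepxde as dde

    # Shared 2-dimensional input, then two parallel sub-networks of two
    # 32-neuron hidden layers each (one per output), and 2 outputs.
    net = dde.nn.PFNN([2, [32, 32], [32, 32], 2], "tanh", "Glorot uniform")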

forward(inputs)[source]

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters:
  • *inputs (tuple) – unpacked tuple arguments

  • **kwargs (dict) – unpacked dict arguments

deepxde.nn.paddle.msffn module

class deepxde.nn.paddle.msffn.MsFFN(layer_sizes, activation, kernel_initializer, sigmas, dropout_rate=0)[source]

Bases: NN

Multi-scale Fourier feature networks.

Parameters:

  • sigmas – List of standard deviations of the distributions of the Fourier feature embeddings.

References

S. Wang, H. Wang, & P. Perdikaris. On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks. Computer Methods in Applied Mechanics and Engineering, 384, 113938, 2021.
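
A minimal sketch (assuming the paddle backend is active; the widths, activation, initializer, and sigmas are illustrative):

    import deepxde as dde

    # Two Fourier feature embeddings of the 1D input, sampled with standard
    # deviations 1 and 10, each processed by the same hidden layers.
    net = dde.nn.MsFFN([1, 100, 100, 1], "tanh", "Glorot uniform", sigmas=[1, 10])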

forward(inputs)[source]

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters:
  • *inputs (tuple) – unpacked tuple arguments

  • **kwargs (dict) – unpacked dict arguments

class deepxde.nn.paddle.msffn.STMsFFN(layer_sizes, activation, kernel_initializer, sigmas_x, sigmas_t, dropout_rate=0)[source]

Bases: MsFFN

Spatio-temporal multi-scale Fourier feature networks.

References

S. Wang, H. Wang, & P. Perdikaris. On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks. Computer Methods in Applied Mechanics and Engineering, 384, 113938, 2021.
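
A minimal sketch (assuming the paddle backend is active; the inputs are (x, t), so the first layer size is 2, and the widths and sigmas are illustrative):

    import deepxde as dde

    # Separate Fourier feature embeddings for the spatial and temporal inputs.
    net = dde.nn.STMsFFN(
        [2, 100, 100, 1], "tanh", "Glorot uniform",
        sigmas_x=[1, 10], sigmas_t=[1, 10],
    )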

forward(inputs)[source]

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters:
  • *inputs (tuple) – unpacked tuple arguments

  • **kwargs (dict) – unpacked dict arguments

deepxde.nn.paddle.nn module

class deepxde.nn.paddle.nn.NN[source]

Bases: Layer

Base class for all neural network modules.

apply_feature_transform(transform)[source]

Compute the features by applying a transform to the network inputs, i.e., features = transform(inputs). Then, outputs = network(features).

apply_output_transform(transform)[source]

Apply a transform to the network outputs, i.e., outputs = transform(inputs, outputs).
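
A sketch combining both transforms on an FNN (assuming the paddle backend is active; the transforms are illustrative, not defaults). The first layer width (2) matches the two transformed features, while the raw input x is 1D:

    import paddle
    import deepxde as dde

    net = dde.nn.FNN([2, 50, 50, 1], "tanh", "Glorot uniform")

    # features = transform(inputs): feed [sin(x), cos(x)] to the first layer.
    net.apply_feature_transform(
        lambda x: paddle.concat([paddle.sin(x), paddle.cos(x)], axis=1)
    )

    # outputs = transform(inputs, outputs): multiply by x so that y(0) = 0
    # holds exactly (a hard constraint).
    net.apply_output_transform(lambda x, y: x * y)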