deepxde.nn.pytorch

deepxde.nn.pytorch.deeponet module

class deepxde.nn.pytorch.deeponet.DeepONet(layer_sizes_branch, layer_sizes_trunk, activation, kernel_initializer)[source]

Bases: deepxde.nn.pytorch.nn.NN

Deep operator network.

Parameters:
  • layer_sizes_branch – A list of integers as the width of a fully connected network, or (dim, f) where dim is the input dimension and f is a network function. The width of the last layer in the branch and trunk net should be equal.
  • layer_sizes_trunk (list) – A list of integers as the width of a fully connected network.
  • activation – If activation is a string, then the same activation is used in both trunk and branch nets. If activation is a dict, then the trunk net uses activation["trunk"] and the branch net uses activation["branch"].
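The branch and trunk nets must end in the same width because their outputs are combined by an inner product over that shared last dimension. A minimal numpy sketch of the combination (illustration only, not DeepXDE's implementation; the random arrays stand in for trained network outputs):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 5, 10                                # N paired samples, p latent features
branch_out = rng.standard_normal((N, p))    # branch net output for N inputs u
trunk_out = rng.standard_normal((N, p))     # trunk net output for N locations y

# G(u)(y) ~ sum_k b_k(u) * t_k(y): one scalar output per (u, y) pair.
outputs = np.sum(branch_out * trunk_out, axis=1, keepdims=True)
print(outputs.shape)  # (5, 1)
```

This is why the last entries of layer_sizes_branch and layer_sizes_trunk must match: the sum runs over that common dimension p.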
forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class deepxde.nn.pytorch.deeponet.DeepONetCartesianProd(layer_sizes_branch, layer_sizes_trunk, activation, kernel_initializer, regularization=None)[source]

Bases: deepxde.nn.pytorch.nn.NN

Deep operator network for dataset in the format of Cartesian product.

Parameters:
  • layer_sizes_branch – A list of integers as the width of a fully connected network, or (dim, f) where dim is the input dimension and f is a network function. The width of the last layer in the branch and trunk net should be equal.
  • layer_sizes_trunk (list) – A list of integers as the width of a fully connected network.
  • activation – If activation is a string, then the same activation is used in both trunk and branch nets. If activation is a dict, then the trunk net uses activation["trunk"] and the branch net uses activation["branch"].
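In the Cartesian-product format the branch and trunk inputs are not paired one-to-one: every input function is evaluated at every trunk location, so the combination becomes a matrix product. A shape-level sketch (illustration only, with random arrays standing in for network outputs):

```python
import numpy as np

rng = np.random.default_rng(0)
n_funcs, n_points, p = 3, 7, 10
branch_out = rng.standard_normal((n_funcs, p))   # one row per input function u
trunk_out = rng.standard_normal((n_points, p))   # one row per location y

# Every function paired with every location: an (n_funcs, n_points) grid.
outputs = branch_out @ trunk_out.T
print(outputs.shape)  # (3, 7)
```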
forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class deepxde.nn.pytorch.deeponet.PODDeepONet(pod_basis, layer_sizes_branch, activation, kernel_initializer, layer_sizes_trunk=None, regularization=None)[source]

Bases: deepxde.nn.pytorch.nn.NN

Deep operator network with proper orthogonal decomposition (POD) for dataset in the format of Cartesian product.

Parameters:
  • pod_basis – POD basis used in the trunk net.
  • layer_sizes_branch – A list of integers as the width of a fully connected network, or (dim, f) where dim is the input dimension and f is a network function. The width of the last layer in the branch and trunk net should be equal.
  • activation – If activation is a string, then the same activation is used in both trunk and branch nets. If activation is a dict, then the trunk net uses activation["trunk"] and the branch net uses activation["branch"].
  • layer_sizes_trunk (list) – A list of integers as the width of a fully connected network. If None, then only use POD basis as the trunk net.
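When layer_sizes_trunk is None, the precomputed POD basis plays the role of the trunk output: the prediction is a linear combination of POD modes with coefficients produced by the branch net. A shape-level sketch (illustration only; the arrays stand in for a real POD basis and trained branch outputs):

```python
import numpy as np

rng = np.random.default_rng(0)
n_funcs, n_points, n_modes = 3, 50, 8
pod_basis = rng.standard_normal((n_points, n_modes))  # modes on a fixed output grid
branch_out = rng.standard_normal((n_funcs, n_modes))  # per-function mode coefficients

# Each output is a branch-weighted sum of POD modes at the grid points.
outputs = branch_out @ pod_basis.T
print(outputs.shape)  # (3, 50)
```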

References

L. Lu, X. Meng, S. Cai, Z. Mao, S. Goswami, Z. Zhang, & G. E. Karniadakis. A comprehensive and fair comparison of two neural operators (with practical extensions) based on FAIR data. arXiv preprint arXiv:2111.05512, 2021.

forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

deepxde.nn.pytorch.fnn module

class deepxde.nn.pytorch.fnn.FNN(layer_sizes, activation, kernel_initializer)[source]

Bases: deepxde.nn.pytorch.nn.NN

Fully-connected neural network.

forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class deepxde.nn.pytorch.fnn.PFNN(layer_sizes, activation, kernel_initializer)[source]

Bases: deepxde.nn.pytorch.nn.NN

Parallel fully-connected network that uses independent sub-networks for each network output.

Parameters:
  • layer_sizes – A nested list that defines the architecture of the neural network (how the layers are connected). If layer_sizes[i] is an int, it represents one layer shared by all the outputs; if layer_sizes[i] is a list, it represents len(layer_sizes[i]) sub-layers, each of which is exclusively used by one output. Note that len(layer_sizes[i]) should equal the number of outputs. Every number specifies the number of neurons in that layer.
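A small helper makes the nested-list convention concrete. The function below is hypothetical (not part of DeepXDE); it only interprets a layer_sizes spec the way the parameter description reads:

```python
def describe_pfnn(layer_sizes):
    """Hypothetical helper: interpret a PFNN layer_sizes spec.

    Returns (n_outputs, layout), where layout lists each hidden layer as
    ("shared", width) or ("split", widths_per_output).
    """
    n_outputs = layer_sizes[-1]
    layout = []
    for sizes in layer_sizes[1:-1]:  # skip the input and output entries
        if isinstance(sizes, int):
            layout.append(("shared", sizes))
        else:
            if len(sizes) != n_outputs:
                raise ValueError("each sub-layer list needs one entry per output")
            layout.append(("split", list(sizes)))
    return n_outputs, layout

# Two shared layers of width 32, then one split layer giving each of the
# 3 outputs its own width-16 sub-layer.
n_out, layout = describe_pfnn([2, 32, 32, [16, 16, 16], 3])
print(n_out, layout)  # 3 [('shared', 32), ('shared', 32), ('split', [16, 16, 16])]
```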
forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

deepxde.nn.pytorch.mionet module

class deepxde.nn.pytorch.mionet.MIONetCartesianProd(layer_sizes_branch1, layer_sizes_branch2, layer_sizes_trunk, activation, kernel_initializer, regularization=None, trunk_last_activation=False, merge_operation='mul', layer_sizes_merger=None, output_merge_operation='mul', layer_sizes_output_merger=None)[source]

Bases: deepxde.nn.pytorch.nn.NN

MIONet with two input functions for Cartesian product format.

forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class deepxde.nn.pytorch.mionet.PODMIONet(pod_basis, layer_sizes_branch1, layer_sizes_branch2, activation, kernel_initializer, layer_sizes_trunk=None, regularization=None, trunk_last_activation=False, merge_operation='mul', layer_sizes_merger=None)[source]

Bases: deepxde.nn.pytorch.nn.NN

MIONet with two input functions and proper orthogonal decomposition (POD) for Cartesian product format.

forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

deepxde.nn.pytorch.nn module

class deepxde.nn.pytorch.nn.NN[source]

Bases: torch.nn.modules.module.Module

Base class for all neural network modules.

apply_feature_transform(transform)[source]

Compute the features by applying a transform to the network inputs, i.e., features = transform(inputs). Then, outputs = network(features).

apply_output_transform(transform)[source]

Apply a transform to the network outputs, i.e., outputs = transform(inputs, outputs).
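The two transforms wrap the network on opposite sides: the feature transform sees only the raw inputs, while the output transform sees both the raw inputs and the network outputs, which is what allows, e.g., hard-enforcing a boundary condition. A sketch of the ordering (illustration only; the stand-in functions below are not DeepXDE APIs):

```python
import numpy as np

def network(x):                      # stand-in for the trained network
    return np.sum(x, axis=1, keepdims=True)

def feature_transform(x):            # e.g. lift inputs to periodic features
    return np.concatenate([np.sin(x), np.cos(x)], axis=1)

def output_transform(x, y):          # e.g. force y = 0 on the boundary x = 0
    return x[:, :1] * y

# Effective forward pass after both transforms are applied:
inputs = np.array([[0.0], [1.0]])
outputs = output_transform(inputs, network(feature_transform(inputs)))
print(outputs[0, 0])  # 0.0: the output transform pins the value at x = 0
```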

auxiliary_vars

Any additional variables needed.

Type: Tensors
num_trainable_parameters()[source]

Evaluate the number of trainable parameters for the NN.
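For a fully connected network the value returned here should match the usual weights-plus-biases count. A quick sanity check (illustration only; fnn_param_count is a hypothetical helper, not a DeepXDE function):

```python
def fnn_param_count(layer_sizes):
    # Each layer has n_in * n_out weights plus n_out biases, i.e. (n_in + 1) * n_out.
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))

print(fnn_param_count([2, 20, 20, 1]))  # 60 + 420 + 21 = 501
```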