deepxde.nn.pytorch
deepxde.nn.pytorch.deeponet module
- class deepxde.nn.pytorch.deeponet.DeepONet(layer_sizes_branch, layer_sizes_trunk, activation, kernel_initializer, num_outputs=1, multi_output_strategy=None)[source]
Bases: NN
Deep operator network.
- Parameters:
layer_sizes_branch – A list of integers as the width of a fully connected network, or (dim, f) where dim is the input dimension and f is a network function. The width of the last layer in the branch and trunk net should be the same for all strategies except “split_branch” and “split_trunk”.
layer_sizes_trunk (list) – A list of integers as the width of a fully connected network.
activation – If activation is a string, then the same activation is used in both trunk and branch nets. If activation is a dict, then the trunk net uses the activation activation[“trunk”], and the branch net uses the activation activation[“branch”].
num_outputs (integer) – Number of outputs. In case of multiple outputs, i.e., num_outputs > 1, multi_output_strategy below should be set.
multi_output_strategy (str or None) – One of None, “independent”, “split_both”, “split_branch” or “split_trunk”. It only makes sense to set this when there are multiple outputs.
None
Classical implementation of DeepONet with a single output. Cannot be used with num_outputs > 1.
independent
Use num_outputs independent DeepONets, and each DeepONet outputs only one function.
split_both
Split the outputs of both the branch net and the trunk net into num_outputs groups, and then the kth group outputs the kth solution.
split_branch
Split the branch net and share the trunk net. The width of the last layer in the branch net should be equal to the one in the trunk net multiplied by the number of outputs.
split_trunk
Split the trunk net and share the branch net. The width of the last layer in the trunk net should be equal to the one in the branch net multiplied by the number of outputs.
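Example. A minimal usage sketch, assuming the PyTorch backend is active; the layer widths and input shapes below are illustrative, not taken from this documentation:

    import torch
    from deepxde.nn.pytorch.deeponet import DeepONet

    # Branch net: input function sampled at 100 sensor points (illustrative).
    # Trunk net: 1D coordinates where the output function is evaluated.
    # The last widths of the branch and trunk nets (128) match, as required here.
    net = DeepONet(
        layer_sizes_branch=[100, 128, 128],
        layer_sizes_trunk=[1, 128, 128],
        activation="relu",
        kernel_initializer="Glorot normal",
    )

    # Non-Cartesian format: branch and trunk inputs share the batch dimension.
    v = torch.rand(32, 100)  # 32 sampled input functions
    x = torch.rand(32, 1)    # one evaluation point per function
    y = net((v, x))          # expected shape: (32, 1)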
- forward(inputs)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class deepxde.nn.pytorch.deeponet.DeepONetCartesianProd(layer_sizes_branch, layer_sizes_trunk, activation, kernel_initializer, num_outputs=1, multi_output_strategy=None)[source]
Bases: NN
Deep operator network for dataset in the format of Cartesian product.
- Parameters:
layer_sizes_branch – A list of integers as the width of a fully connected network, or (dim, f) where dim is the input dimension and f is a network function. The width of the last layer in the branch and trunk net should be the same for all strategies except “split_branch” and “split_trunk”.
layer_sizes_trunk (list) – A list of integers as the width of a fully connected network.
activation – If activation is a string, then the same activation is used in both trunk and branch nets. If activation is a dict, then the trunk net uses the activation activation[“trunk”], and the branch net uses the activation activation[“branch”].
num_outputs (integer) – Number of outputs. In case of multiple outputs, i.e., num_outputs > 1, multi_output_strategy below should be set.
multi_output_strategy (str or None) – One of None, “independent”, “split_both”, “split_branch” or “split_trunk”. It only makes sense to set this when there are multiple outputs.
None
Classical implementation of DeepONet with a single output. Cannot be used with num_outputs > 1.
independent
Use num_outputs independent DeepONets, and each DeepONet outputs only one function.
split_both
Split the outputs of both the branch net and the trunk net into num_outputs groups, and then the kth group outputs the kth solution.
split_branch
Split the branch net and share the trunk net. The width of the last layer in the branch net should be equal to the one in the trunk net multiplied by the number of outputs.
split_trunk
Split the trunk net and share the branch net. The width of the last layer in the trunk net should be equal to the one in the branch net multiplied by the number of outputs.
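Example. A minimal sketch of the Cartesian product input format, assuming the PyTorch backend is active; shapes and widths are illustrative:

    import torch
    from deepxde.nn.pytorch.deeponet import DeepONetCartesianProd

    net = DeepONetCartesianProd(
        layer_sizes_branch=[100, 128, 128],
        layer_sizes_trunk=[1, 128, 128],
        activation={"branch": "relu", "trunk": "tanh"},
        kernel_initializer="Glorot normal",
    )

    # Cartesian product format: every input function is evaluated at the same
    # set of locations, so the branch and trunk batch sizes may differ.
    v = torch.rand(32, 100)  # 32 input functions sampled at 100 sensors
    x = torch.rand(50, 1)    # 50 shared evaluation points
    y = net((v, x))          # expected shape: (32, 50)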
- forward(inputs)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class deepxde.nn.pytorch.deeponet.PODDeepONet(pod_basis, layer_sizes_branch, activation, kernel_initializer, layer_sizes_trunk=None, regularization=None)[source]
Bases: NN
Deep operator network with proper orthogonal decomposition (POD) for dataset in the format of Cartesian product.
- Parameters:
pod_basis – POD basis used in the trunk net.
layer_sizes_branch – A list of integers as the width of a fully connected network, or (dim, f) where dim is the input dimension and f is a network function. The width of the last layer in the branch and trunk net should be equal.
activation – If activation is a string, then the same activation is used in both trunk and branch nets. If activation is a dict, then the trunk net uses the activation activation[“trunk”], and the branch net uses the activation activation[“branch”].
layer_sizes_trunk (list) – A list of integers as the width of a fully connected network. If None, then only the POD basis is used as the trunk net.
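Example. A minimal sketch, assuming the PyTorch backend is active and that pod_basis is a tensor of shape (number of evaluation locations, number of POD modes); the random placeholder below stands in for a real precomputed basis:

    import torch
    from deepxde.nn.pytorch.deeponet import PODDeepONet

    # Placeholder POD basis: 50 evaluation locations, 8 retained modes.
    pod_basis = torch.rand(50, 8)

    # With no trunk net, the branch net maps the sampled input function to the
    # 8 POD coefficients, so its last width equals the number of modes.
    net = PODDeepONet(
        pod_basis,
        layer_sizes_branch=[100, 128, 8],
        activation="tanh",
        kernel_initializer="Glorot normal",
    )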
- forward(inputs)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
deepxde.nn.pytorch.fnn module
- class deepxde.nn.pytorch.fnn.FNN(layer_sizes, activation, kernel_initializer, regularization=None)[source]
Bases: NN
Fully-connected neural network.
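Example. A minimal sketch, assuming the PyTorch backend is active; the architecture is illustrative:

    import torch
    from deepxde.nn.pytorch.fnn import FNN

    # 2 inputs, two hidden layers of 32 neurons with tanh activation, 1 output.
    net = FNN([2, 32, 32, 1], "tanh", "Glorot normal")
    y = net(torch.rand(16, 2))  # expected shape: (16, 1)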
- forward(inputs)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class deepxde.nn.pytorch.fnn.PFNN(layer_sizes, activation, kernel_initializer)[source]
Bases: NN
Parallel fully-connected network that uses independent sub-networks for each network output.
- Parameters:
layer_sizes – A nested list that defines the architecture of the neural network (how the layers are connected). If layer_sizes[i] is an int, it represents one layer shared by all the outputs; if layer_sizes[i] is a list, it represents len(layer_sizes[i]) sub-layers, each of which is exclusively used by one output. Note that len(layer_sizes[i]) should equal the number of outputs. Every number specifies the number of neurons in that layer.
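Example. A minimal sketch of the nested layer_sizes format, assuming the PyTorch backend is active; the widths are illustrative:

    from deepxde.nn.pytorch.fnn import PFNN

    # 2 inputs; the first hidden layer (32 neurons) is shared by both outputs;
    # the second hidden layer is split into two sub-layers of 16 neurons, one
    # per output; the final entry gives the 2 network outputs.
    net = PFNN([2, 32, [16, 16], 2], "tanh", "Glorot normal")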
- forward(inputs)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
deepxde.nn.pytorch.mionet module
- class deepxde.nn.pytorch.mionet.MIONetCartesianProd(layer_sizes_branch1, layer_sizes_branch2, layer_sizes_trunk, activation, kernel_initializer, regularization=None, trunk_last_activation=False, merge_operation='mul', layer_sizes_merger=None, output_merge_operation='mul', layer_sizes_output_merger=None)[source]
Bases: NN
MIONet with two input functions for Cartesian product format.
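Example. A minimal sketch based on the constructor signature above, assuming the PyTorch backend is active; all sizes are illustrative, and the matching last widths reflect the default “mul” merge operations:

    from deepxde.nn.pytorch.mionet import MIONetCartesianProd

    # Two input functions, sampled at 100 and 80 sensor points respectively;
    # the trunk net takes 2D evaluation coordinates.
    net = MIONetCartesianProd(
        layer_sizes_branch1=[100, 128, 128],
        layer_sizes_branch2=[80, 128, 128],
        layer_sizes_trunk=[2, 128, 128],
        activation="tanh",
        kernel_initializer="Glorot normal",
    )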
- forward(inputs)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class deepxde.nn.pytorch.mionet.PODMIONet(pod_basis, layer_sizes_branch1, layer_sizes_branch2, activation, kernel_initializer, layer_sizes_trunk=None, regularization=None, trunk_last_activation=False, merge_operation='mul', layer_sizes_merger=None)[source]
Bases: NN
MIONet with two input functions and proper orthogonal decomposition (POD) for Cartesian product format.
- forward(inputs)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
deepxde.nn.pytorch.nn module
- class deepxde.nn.pytorch.nn.NN[source]
Bases: Module
Base class for all neural network modules.
- apply_feature_transform(transform)[source]
Compute the features by applying a transform to the network inputs, i.e., features = transform(inputs). Then, outputs = network(features).
- apply_output_transform(transform)[source]
Apply a transform to the network outputs, i.e., outputs = transform(inputs, outputs).
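Example. A minimal sketch of the two transform hooks, using FNN from this package; the transforms themselves are illustrative, assuming the PyTorch backend is active:

    import torch
    from deepxde.nn.pytorch.fnn import FNN

    net = FNN([2, 32, 32, 1], "tanh", "Glorot normal")

    # Rescale the inputs before they enter the first layer.
    net.apply_feature_transform(lambda x: 2.0 * x - 1.0)

    # Force the outputs to vanish on the plane x[:, 0] == 0.
    net.apply_output_transform(lambda x, y: x[:, 0:1] * y)

    y = net(torch.rand(16, 2))  # both transforms are applied inside forward()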
- property auxiliary_vars
Any additional variables needed.
- Type: Tensors