deepxde.nn.tensorflow_compat_v1
deepxde.nn.tensorflow_compat_v1.deeponet module
- class deepxde.nn.tensorflow_compat_v1.deeponet.DeepONet(layer_sizes_branch, layer_sizes_trunk, activation, kernel_initializer, regularization=None, dropout_rate=0, use_bias=True, stacked=False, trainable_branch=True, trainable_trunk=True, num_outputs=1, multi_output_strategy=None)[source]
Bases:
NN
Deep operator network.
- Parameters:
layer_sizes_branch – A list of integers as the width of a fully connected network, or (dim, f) where dim is the input dimension and f is a network function. The width of the last layer in the branch and trunk net should be the same for all strategies except “split_branch” and “split_trunk”.
layer_sizes_trunk (list) – A list of integers as the width of a fully connected network.
activation – If activation is a string, then the same activation is used in both trunk and branch nets. If activation is a dict, then the trunk net uses the activation activation[“trunk”], and the branch net uses activation[“branch”].
dropout_rate – If dropout_rate is a float between 0 and 1, then the same rate is used in both trunk and branch nets. If dropout_rate is a dict, then the trunk net uses the rate dropout_rate[“trunk”], and the branch net uses dropout_rate[“branch”]. Both dropout_rate[“trunk”] and dropout_rate[“branch”] should be a float or a list of floats. The list length should be len(layer_sizes_trunk) - 1 for the trunk net and len(layer_sizes_branch) - 2 for the branch net.
trainable_branch – Boolean.
trainable_trunk – Boolean or a list of booleans.
num_outputs (integer) – Number of outputs. In case of multiple outputs, i.e., num_outputs > 1, multi_output_strategy below should be set.
multi_output_strategy (str or None) – None, “independent”, “split_both”, “split_branch”, or “split_trunk”. It should be set when there are multiple outputs.
None
Classical implementation of DeepONet with a single output. Cannot be used with num_outputs > 1.
independent
Use num_outputs independent DeepONets, and each DeepONet outputs only one function.
split_both
Split the outputs of both the branch net and the trunk net into num_outputs groups, and then the kth group outputs the kth solution.
split_branch
Split the branch net and share the trunk net. The width of the last layer in the branch net should be equal to the one in the trunk net multiplied by the number of outputs.
split_trunk
Split the trunk net and share the branch net. The width of the last layer in the trunk net should be equal to the one in the branch net multiplied by the number of outputs.
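A minimal usage sketch (the layer widths, activation, and kernel initializer are illustrative; it assumes the tensorflow.compat.v1 backend is selected, so that dde.nn resolves to this module):

    import deepxde as dde

    # Branch net: input function sampled at 100 sensor points; trunk net: 1-D
    # evaluation coordinates. The last layers of the branch and trunk nets
    # have the same width (40), as required for the single-output case.
    net = dde.nn.DeepONet(
        [100, 40, 40],    # layer_sizes_branch
        [1, 40, 40],      # layer_sizes_trunk
        "relu",           # activation
        "Glorot normal",  # kernel_initializer
    )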
- property inputs
Return the net inputs (placeholders).
- property outputs
Return the net outputs (tf.Tensor).
- property targets
Return the targets of the net outputs (placeholders).
- class deepxde.nn.tensorflow_compat_v1.deeponet.DeepONetCartesianProd(layer_sizes_branch, layer_sizes_trunk, activation, kernel_initializer, regularization=None, dropout_rate=0, num_outputs=1, multi_output_strategy=None)[source]
Bases:
NN
Deep operator network for datasets in the format of a Cartesian product.
- Parameters:
layer_sizes_branch – A list of integers as the width of a fully connected network, or (dim, f) where dim is the input dimension and f is a network function. The width of the last layer in the branch and trunk net should be the same for all strategies except “split_branch” and “split_trunk”.
layer_sizes_trunk (list) – A list of integers as the width of a fully connected network.
activation – If activation is a string, then the same activation is used in both trunk and branch nets. If activation is a dict, then the trunk net uses the activation activation[“trunk”], and the branch net uses activation[“branch”].
dropout_rate – If dropout_rate is a float between 0 and 1, then the same rate is used in both trunk and branch nets. If dropout_rate is a dict, then the trunk net uses the rate dropout_rate[“trunk”], and the branch net uses dropout_rate[“branch”]. Both dropout_rate[“trunk”] and dropout_rate[“branch”] should be a float or a list of floats. The list length should be len(layer_sizes_trunk) - 1 for the trunk net and len(layer_sizes_branch) - 2 for the branch net.
num_outputs (integer) – Number of outputs. In case of multiple outputs, i.e., num_outputs > 1, multi_output_strategy below should be set.
multi_output_strategy (str or None) – None, “independent”, “split_both”, “split_branch”, or “split_trunk”. It should be set when there are multiple outputs.
None
Classical implementation of DeepONet with a single output. Cannot be used with num_outputs > 1.
independent
Use num_outputs independent DeepONets, and each DeepONet outputs only one function.
split_both
Split the outputs of both the branch net and the trunk net into num_outputs groups, and then the kth group outputs the kth solution.
split_branch
Split the branch net and share the trunk net. The width of the last layer in the branch net should be equal to the one in the trunk net multiplied by the number of outputs.
split_trunk
Split the trunk net and share the branch net. The width of the last layer in the trunk net should be equal to the one in the branch net multiplied by the number of outputs.
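A sketch with two outputs using the “split_branch” strategy (the sizes are illustrative and assume the tensorflow.compat.v1 backend is selected): the last branch layer (64) equals the last trunk layer (32) times num_outputs (2).

    import deepxde as dde

    net = dde.nn.DeepONetCartesianProd(
        [50, 64, 64],   # branch: input functions sampled at 50 sensors
        [2, 64, 32],    # trunk: 2-D evaluation coordinates
        "tanh",
        "Glorot normal",
        num_outputs=2,
        multi_output_strategy="split_branch",
    )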
- property inputs
Return the net inputs (placeholders).
- property outputs
Return the net outputs (tf.Tensor).
- property targets
Return the targets of the net outputs (placeholders).
deepxde.nn.tensorflow_compat_v1.fnn module
- class deepxde.nn.tensorflow_compat_v1.fnn.FNN(layer_sizes, activation, kernel_initializer, regularization=None, dropout_rate=0, batch_normalization=None, layer_normalization=None, kernel_constraint=None, use_bias=True)[source]
Bases:
NN
Fully-connected neural network.
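A minimal usage sketch (sizes are illustrative; assumes the tensorflow.compat.v1 backend is selected):

    import deepxde as dde

    # 2 inputs, three hidden layers of width 50, 1 output.
    net = dde.nn.FNN([2] + [50] * 3 + [1], "tanh", "Glorot uniform")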
- property inputs
Return the net inputs (placeholders).
- property outputs
Return the net outputs (tf.Tensor).
- property targets
Return the targets of the net outputs (placeholders).
- class deepxde.nn.tensorflow_compat_v1.fnn.PFNN(layer_sizes, activation, kernel_initializer, regularization=None, dropout_rate=0, batch_normalization=None)[source]
Bases:
FNN
Parallel fully-connected neural network that uses independent sub-networks for each network output.
- Parameters:
layer_sizes – A nested list defining the architecture of the neural network (how the layers are connected). If layer_sizes[i] is an int, it represents one layer shared by all the outputs; if layer_sizes[i] is a list, it represents len(layer_sizes[i]) sub-layers, each of which is used exclusively by one output. Note that len(layer_sizes[i]) should equal the number of outputs. Each number specifies the number of neurons in that layer.
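An illustrative layer_sizes for a network with two outputs (assuming the tensorflow.compat.v1 backend is selected): the first hidden layer is shared, the next two hidden layers are split into one sub-network per output, and the last entry is the number of outputs.

    import deepxde as dde

    net = dde.nn.PFNN([2, 40, [40, 40], [40, 40], 2], "tanh", "Glorot normal")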
deepxde.nn.tensorflow_compat_v1.mfnn module
- class deepxde.nn.tensorflow_compat_v1.mfnn.MfNN(layer_sizes_low_fidelity, layer_sizes_high_fidelity, activation, kernel_initializer, regularization=None, residue=False, trainable_low_fidelity=True, trainable_high_fidelity=True)[source]
Bases:
NN
Multifidelity neural networks.
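A minimal usage sketch (sizes are illustrative; assumes the tensorflow.compat.v1 backend is selected):

    import deepxde as dde

    # Low-fidelity net: 1 input, two hidden layers of width 20, 1 output.
    # High-fidelity net: two hidden layers of width 10 and 1 output.
    net = dde.nn.MfNN(
        [1] + [20] * 2 + [1],
        [10] * 2 + [1],
        "tanh",
        "Glorot uniform",
    )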
- property inputs
Return the net inputs (placeholders).
- property outputs
Return the net outputs (tf.Tensor).
- property targets
Return the targets of the net outputs (placeholders).
deepxde.nn.tensorflow_compat_v1.mionet module
- class deepxde.nn.tensorflow_compat_v1.mionet.MIONet(layer_sizes_branch1, layer_sizes_branch2, layer_sizes_trunk, activation, kernel_initializer, regularization=None)[source]
Bases:
NN
Multiple-input operator network with two input functions.
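A minimal usage sketch (sizes are illustrative; assumes the tensorflow.compat.v1 backend is selected):

    import deepxde as dde

    # Two branch nets, one per input function (each sampled at 100 points),
    # and one trunk net taking 1-D coordinates; all three end with width 100.
    net = dde.nn.MIONet(
        [100, 200, 100],  # layer_sizes_branch1
        [100, 200, 100],  # layer_sizes_branch2
        [1, 200, 100],    # layer_sizes_trunk
        "relu",
        "Glorot normal",
    )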
- property inputs
Return the net inputs (placeholders).
- property outputs
Return the net outputs (tf.Tensor).
- property targets
Return the targets of the net outputs (placeholders).
deepxde.nn.tensorflow_compat_v1.msffn module
- class deepxde.nn.tensorflow_compat_v1.msffn.MsFFN(layer_sizes, activation, kernel_initializer, sigmas, regularization=None, dropout_rate=0, batch_normalization=None, layer_normalization=None, kernel_constraint=None, use_bias=True)[source]
Bases:
FNN
Multi-scale Fourier feature networks.
- Parameters:
sigmas – List of standard deviations of the distribution of Fourier feature embeddings.
References
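A minimal usage sketch (sizes and sigmas are illustrative; assumes the tensorflow.compat.v1 backend is selected):

    import deepxde as dde

    # Two Fourier feature embeddings of the input, with standard deviations
    # 1 and 10, each followed by the same fully connected layers.
    net = dde.nn.MsFFN([1, 100, 100, 100, 1], "tanh", "Glorot uniform", sigmas=[1, 10])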
- class deepxde.nn.tensorflow_compat_v1.msffn.STMsFFN(layer_sizes, activation, kernel_initializer, sigmas_x, sigmas_t, regularization=None, dropout_rate=0, batch_normalization=None, layer_normalization=None, kernel_constraint=None, use_bias=True)[source]
Bases:
MsFFN
Spatio-temporal multi-scale Fourier feature networks.
References
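A minimal usage sketch (sizes and sigmas are illustrative; assumes the tensorflow.compat.v1 backend is selected):

    import deepxde as dde

    # Separate Fourier feature embeddings for the spatial input (sigmas_x)
    # and the temporal input (sigmas_t).
    net = dde.nn.STMsFFN(
        [2, 100, 100, 100, 1], "tanh", "Glorot uniform", sigmas_x=[1], sigmas_t=[1, 10]
    )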
deepxde.nn.tensorflow_compat_v1.nn module
- class deepxde.nn.tensorflow_compat_v1.nn.NN[source]
Bases:
object
Base class for all neural network modules.
- apply_feature_transform(transform)[source]
Compute the features by applying a transform to the network inputs, i.e., features = transform(inputs). Then, outputs = network(features).
- apply_output_transform(transform)[source]
Apply a transform to the network outputs, i.e., outputs = transform(inputs, outputs).
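A sketch of both transforms on an FNN (the transforms themselves are illustrative; assumes the tensorflow.compat.v1 backend is selected):

    import deepxde as dde
    from deepxde.backend import tf

    net = dde.nn.FNN([1] + [50] * 3 + [1], "tanh", "Glorot uniform")
    # Feature transform: feed both x and sin(x) to the first layer.
    net.apply_feature_transform(lambda x: tf.concat([x, tf.sin(x)], axis=1))
    # Output transform: hard-constrain the output to vanish at x = 0.
    net.apply_output_transform(lambda x, y: x * y)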
- property auxiliary_vars
Return additional variables needed (placeholders).
- property built
- feed_dict(training, inputs, targets=None, auxiliary_vars=None)[source]
Construct a feed_dict to feed values to TensorFlow placeholders.
- property inputs
Return the net inputs (placeholders).
- num_trainable_parameters()[source]
Evaluate the number of trainable parameters for the NN.
Note that this function returns the number of trainable parameters for the whole tf.Session, so the result will not be correct if several nets are defined within the same tf.Session.
- property outputs
Return the net outputs (tf.Tensor).
- property targets
Return the targets of the net outputs (placeholders).
deepxde.nn.tensorflow_compat_v1.resnet module
- class deepxde.nn.tensorflow_compat_v1.resnet.ResNet(input_size, output_size, num_neurons, num_blocks, activation, kernel_initializer, regularization=None)[source]
Bases:
NN
Residual neural network.
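A minimal usage sketch (sizes are illustrative; assumes the tensorflow.compat.v1 backend is selected):

    import deepxde as dde

    # 2 inputs, 1 output, 3 residual blocks of 50 neurons each.
    net = dde.nn.ResNet(2, 1, 50, 3, "tanh", "Glorot normal")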
- property inputs
Return the net inputs (placeholders).
- property outputs
Return the net outputs (tf.Tensor).
- property targets
Return the targets of the net outputs (placeholders).