deepxde

deepxde.callbacks module

class deepxde.callbacks.Callback[source]

Bases: object

Callback base class.

model

instance of Model. Reference to the model being trained.

init()[source]

Init after setting a model.

on_batch_begin()[source]

Called at the beginning of every batch.

on_batch_end()[source]

Called at the end of every batch.

on_epoch_begin()[source]

Called at the beginning of every epoch.

on_epoch_end()[source]

Called at the end of every epoch.

on_predict_begin()[source]

Called at the beginning of prediction.

on_predict_end()[source]

Called at the end of prediction.

on_train_begin()[source]

Called at the beginning of model training.

on_train_end()[source]

Called at the end of model training.

set_model(model)[source]
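
A minimal sketch of a custom callback, using only the hooks documented above. It assumes the Model exposes its current step as model.train_state.step (an assumption; set_model attaches the model before training starts):

    import deepxde as dde

    class StepLogger(dde.callbacks.Callback):
        # Hypothetical callback: print the current step at the end of each epoch.
        def on_epoch_end(self):
            # self.model is the Model instance attached via set_model().
            print("step:", self.model.train_state.step)
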
class deepxde.callbacks.CallbackList(callbacks=None)[source]

Bases: Callback

Container abstracting a list of callbacks.

Parameters:

callbacks – List of Callback instances.

append(callback)[source]
on_batch_begin()[source]

Called at the beginning of every batch.

on_batch_end()[source]

Called at the end of every batch.

on_epoch_begin()[source]

Called at the beginning of every epoch.

on_epoch_end()[source]

Called at the end of every epoch.

on_predict_begin()[source]

Called at the beginning of prediction.

on_predict_end()[source]

Called at the end of prediction.

on_train_begin()[source]

Called at the beginning of model training.

on_train_end()[source]

Called at the end of model training.

set_model(model)[source]
class deepxde.callbacks.DropoutUncertainty(period=1000)[source]

Bases: Callback

Uncertainty estimation via MC dropout.

References

Y. Gal & Z. Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. International Conference on Machine Learning, 2016.

Warning

This cannot be used together with other techniques that have different behaviors during training and testing, such as batch normalization.

on_epoch_end()[source]

Called at the end of every epoch.

on_train_end()[source]

Called at the end of model training.
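
A minimal usage sketch. It assumes model is a compiled dde.Model whose network was built with dropout (e.g., the dropout_rate argument of dde.nn.FNN, which is documented outside this section):

    import deepxde as dde

    # Network with dropout, so that MC-dropout sampling is meaningful.
    net = dde.nn.FNN([1] + [32] * 2 + [1], "tanh", "Glorot uniform", dropout_rate=0.01)
    # ... build `data`, then: model = dde.Model(data, net); model.compile(...)
    uncertainty = dde.callbacks.DropoutUncertainty(period=1000)
    losshistory, train_state = model.train(iterations=10000, callbacks=[uncertainty])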

class deepxde.callbacks.EarlyStopping(min_delta=0, patience=0, baseline=None, monitor='loss_train')[source]

Bases: Callback

Stop training when a monitored quantity (training or testing loss) has stopped improving. The quantity is checked only at each validation step, i.e., every display_every iterations as set in Model.train.

Parameters:
  • min_delta – Minimum change in the monitored quantity to qualify as an improvement, i.e., an absolute change of less than min_delta counts as no improvement.

  • patience – Number of epochs with no improvement after which training will be stopped.

  • baseline – Baseline value for the monitored quantity to reach. Training will stop if the model doesn’t show improvement over the baseline.

  • monitor – The loss function that is monitored. Either ‘loss_train’ or ‘loss_test’.

get_monitor_value()[source]
on_epoch_end()[source]

Called at the end of every epoch.

on_train_begin()[source]

Called at the beginning of model training.

on_train_end()[source]

Called at the end of model training.
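
A usage sketch, assuming model is a compiled dde.Model; patience is counted in epochs, and the check happens only at validation steps, matching the note above:

    # Stop if loss_train has not improved by at least 1e-4 for 2000 epochs.
    early = dde.callbacks.EarlyStopping(min_delta=1e-4, patience=2000)
    model.train(iterations=50000, display_every=1000, callbacks=[early])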

class deepxde.callbacks.FirstDerivative(x, component_x=0, component_y=0)[source]

Bases: OperatorPredictor

Generates the first-order derivative of the outputs with respect to the inputs.

Parameters:
  • x – The input data.

  • component_x – The component (column) of the input with respect to which to differentiate.

  • component_y – The component (column) of the output to differentiate.
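
A usage sketch, assuming model is a compiled dde.Model with one input and one output component:

    import numpy as np

    x = np.linspace(-1, 1, 11)[:, None]
    dy_dx = dde.callbacks.FirstDerivative(x, component_x=0, component_y=0)
    model.train(iterations=10000, callbacks=[dy_dx])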

class deepxde.callbacks.ModelCheckpoint(filepath, verbose=0, save_better_only=False, period=1, monitor='train loss')[source]

Bases: Callback

Save the model after every epoch.

Parameters:
  • filepath (string) – Prefix of the filenames used to save the model.

  • verbose – Verbosity mode, 0 or 1.

  • save_better_only – If True, only save a better model according to the quantity monitored. The model is checked only at each validation step, i.e., every display_every iterations as set in Model.train.

  • period – Interval (number of epochs) between checkpoints.

  • monitor – The loss function that is monitored. Either ‘train loss’ or ‘test loss’.

get_monitor_value()[source]
on_epoch_end()[source]

Called at the end of every epoch.
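
A usage sketch, assuming model is a compiled dde.Model; with save_better_only=True, a checkpoint is written only when the monitored loss improves at a validation step:

    checker = dde.callbacks.ModelCheckpoint(
        "model/model.ckpt", save_better_only=True, period=1000
    )
    model.train(iterations=10000, callbacks=[checker])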

class deepxde.callbacks.MovieDumper(filename, x1, x2, num_points=100, period=1, component=0, save_spectrum=False, y_reference=None)[source]

Bases: Callback

Dump a movie to show the training progress of the function along a line.

Parameters:
  • x1 – One end point of the line.

  • x2 – The other end point of the line.

  • save_spectrum – If True, also dump the spectrum of the Fourier transform.

on_epoch_end()[source]

Called at the end of every epoch.

on_train_begin()[source]

Called at the beginning of model training.

on_train_end()[source]

Called at the end of model training.
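
A usage sketch on the line from x = -1 to x = 1 in 1D. Here func is a hypothetical reference solution, and model is assumed to be a compiled dde.Model:

    movie = dde.callbacks.MovieDumper(
        "movie", [-1], [1], period=100, save_spectrum=True, y_reference=func
    )
    model.train(iterations=10000, callbacks=[movie])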

class deepxde.callbacks.OperatorPredictor(x, op, period=1, filename=None, precision=2)[source]

Bases: Callback

Generates operator values for the input samples.

Parameters:
  • x – The input data.

  • op – The operator with inputs (x, y).

  • period (int) – Interval (number of epochs) between checking values.

  • filename (string) – Output the values to the file filename. The file is kept open to allow instances to be re-used. If None, output to the screen.

  • precision (int) – The precision of variables to display.

get_value()[source]
init()[source]

Init after setting a model.

on_epoch_end()[source]

Called at the end of every epoch.

on_predict_end()[source]

Called at the end of prediction.

on_train_begin()[source]

Called at the beginning of model training.

on_train_end()[source]

Called at the end of model training.
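
A usage sketch: track dy/dx at a few fixed points during training. The operator receives backend tensors (inputs, outputs); the helper dde.grad.jacobian is from outside this section and is assumed here. model is assumed to be a compiled dde.Model:

    import numpy as np

    def op(inputs, outputs):
        # First derivative of the output with respect to the input.
        return dde.grad.jacobian(outputs, inputs)

    x = np.linspace(0, 1, 5)[:, None]
    monitor = dde.callbacks.OperatorPredictor(x, op, period=1000, filename="op.dat")
    model.train(iterations=10000, callbacks=[monitor])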

class deepxde.callbacks.PDEPointResampler(period=100, pde_points=True, bc_points=False)[source]

Bases: Callback

Resample the training points for PDE and/or BC losses every given period.

Parameters:
  • period – How often to resample the training points (default is 100 iterations).

  • pde_points – If True, resample the training points for PDE losses (default is True).

  • bc_points – If True, resample the training points for BC losses (default is False; currently only supported by the PyTorch and PaddlePaddle backends).

on_epoch_end()[source]

Called at the end of every epoch.

on_train_begin()[source]

Called at the beginning of model training.
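
A usage sketch, assuming model is a compiled dde.Model over dde.data.PDE; this is also the recommended alternative to batch_size in Model.train for mini-batching PDE training points:

    resampler = dde.callbacks.PDEPointResampler(period=100)
    model.train(iterations=10000, callbacks=[resampler])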

class deepxde.callbacks.Timer(available_time)[source]

Bases: Callback

Stop training when the training time reaches the threshold. The timer starts at the first call of on_train_begin.

Parameters:

available_time (float) – Total time (in minutes) available for the training.

on_epoch_end()[source]

Called at the end of every epoch.

on_train_begin()[source]

Called at the beginning of model training.
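
A usage sketch: cap a long run at roughly one hour of training time (model assumed to be a compiled dde.Model):

    timer = dde.callbacks.Timer(available_time=60)  # minutes
    model.train(iterations=10**7, callbacks=[timer])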

class deepxde.callbacks.VariableValue(var_list, period=1, filename=None, precision=2)[source]

Bases: Callback

Get the variable values.

Parameters:
  • var_list – A TensorFlow Variable or a list of TensorFlow Variables.

  • period (int) – Interval (number of epochs) between checking values.

  • filename (string) – Output the values to the file filename. The file is kept open to allow instances to be re-used. If None, output to the screen.

  • precision (int) – The precision of variables to display.

get_value()[source]

Return the variable values.

on_epoch_end()[source]

Called at the end of every epoch.

on_train_begin()[source]

Called at the beginning of model training.

on_train_end()[source]

Called at the end of model training.
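
A usage sketch for an inverse problem: an unknown coefficient C is declared as a dde.Variable, registered via external_trainable_variables in Model.compile (documented below), and its trajectory is written to a file during training. Building data and net with C in the PDE residual is assumed:

    C = dde.Variable(2.0)  # initial guess for the unknown parameter
    # ... build `data` and `net` with C appearing in the PDE residual
    model = dde.Model(data, net)
    model.compile("adam", lr=1e-3, external_trainable_variables=[C])
    variable = dde.callbacks.VariableValue([C], period=1000, filename="variables.dat")
    model.train(iterations=50000, callbacks=[variable])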

deepxde.config module

deepxde.config.default_float()[source]

Returns the default float type, as a string.

deepxde.config.disable_xla_jit()[source]

Disables just-in-time compilation with XLA.

  • For backend TensorFlow 1.x, by default, compiles with XLA when running on GPU. XLA compilation can only be enabled when running on GPU.

  • For backend TensorFlow 2.x, by default, compiles with XLA when running on GPU. If compilation with XLA makes your code slower on GPU, in addition to calling disable_xla_jit, you may simultaneously try XLA with auto-clustering via

    $ TF_XLA_FLAGS=--tf_xla_auto_jit=2 path/to/your/program

  • Backend JAX always uses XLA.

  • Backends PyTorch and PaddlePaddle do not support XLA.

This is equivalent to enable_xla_jit(False).

deepxde.config.enable_xla_jit(mode=True)[source]

Enables just-in-time compilation with XLA.

  • For backend TensorFlow 1.x, by default, compiles with XLA when running on GPU. XLA compilation can only be enabled when running on GPU.

  • For backend TensorFlow 2.x, by default, compiles with XLA when running on GPU. If compilation with XLA makes your code slower on GPU, in addition to calling disable_xla_jit, you may simultaneously try XLA with auto-clustering via

    $ TF_XLA_FLAGS=--tf_xla_auto_jit=2 path/to/your/program

  • Backend JAX always uses XLA.

  • Backends PyTorch and PaddlePaddle do not support XLA.

Parameters:

mode (bool) – Whether to enable (True) or disable (False) just-in-time compilation with XLA.
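
For example, to opt out of XLA compilation:

    import deepxde as dde

    dde.config.disable_xla_jit()  # same as dde.config.enable_xla_jit(False)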

deepxde.config.set_default_autodiff(value)[source]

Sets the default automatic differentiation mode.

The default automatic differentiation uses reverse mode.

Parameters:

value (String) – ‘reverse’ or ‘forward’.

deepxde.config.set_default_float(value)[source]

Sets the default float type.

The default floating point type is ‘float32’.

Parameters:

value (String) – ‘float16’, ‘float32’, or ‘float64’.

deepxde.config.set_parallel_scaling(scaling_mode)[source]

Sets the scaling mode for data parallel acceleration. Weak scaling involves increasing the problem size proportionally with the number of processors, while strong scaling involves keeping the problem size fixed and increasing the number of processors.

Parameters:

scaling_mode (str) – Either ‘weak’ or ‘strong’.

deepxde.config.set_random_seed(seed)[source]

Sets all random seeds for the program (Python random, NumPy, and backend), and configures the program to run deterministically.

You can use this to make the program fully deterministic. This means that if the program is run multiple times with the same inputs on the same hardware, it will have the exact same outputs each time. This is useful for debugging models, and for obtaining fully reproducible results.

  • For backend TensorFlow 2.x: Results might change if you run the model several times in the same terminal.

Warning

Determinism generally comes at the expense of performance, so your model may run slower when it is enabled.

Parameters:

seed (int) – The desired seed.
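
A typical setup sketch: call these once at the top of a script, before building the data, network, and model:

    import deepxde as dde

    dde.config.set_default_float("float64")  # e.g., for problems needing high precision
    dde.config.set_random_seed(42)           # fully reproducible runs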

deepxde.losses module

deepxde.losses.get(identifier)[source]

Retrieves a loss function.

Parameters:

identifier – A loss identifier. String name of a loss function, or a loss function.

Returns:

A loss function.
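
A small sketch of the two accepted identifier kinds (a callable is assumed to be returned unchanged):

    loss_fn = dde.losses.get("MSE")  # string name -> loss function
    same_fn = dde.losses.get(dde.losses.mean_squared_error)  # callable passes through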

deepxde.losses.mean_absolute_error(y_true, y_pred)[source]
deepxde.losses.mean_absolute_percentage_error(y_true, y_pred)[source]
deepxde.losses.mean_l2_relative_error(y_true, y_pred)[source]
deepxde.losses.mean_squared_error(y_true, y_pred)[source]
deepxde.losses.softmax_cross_entropy(y_true, y_pred)[source]
deepxde.losses.zero(*_)[source]

deepxde.metrics module

deepxde.metrics.absolute_percentage_error_std(y_true, y_pred)[source]
deepxde.metrics.accuracy(y_true, y_pred)[source]
deepxde.metrics.get(identifier)[source]
deepxde.metrics.l2_relative_error(y_true, y_pred)[source]
deepxde.metrics.max_absolute_percentage_error(y_true, y_pred)[source]
deepxde.metrics.mean_absolute_percentage_error(y_true, y_pred)[source]
deepxde.metrics.mean_l2_relative_error(y_true, y_pred)[source]

Compute the average L2 relative error along the first axis.

deepxde.metrics.mean_squared_error(y_true, y_pred)[source]
deepxde.metrics.nanl2_relative_error(y_true, y_pred)[source]

Return the L2 relative error, treating NaN (Not a Number) values as zero.
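
Metrics are typically selected by string name in Model.compile (documented below), but can also be called directly on NumPy arrays; model is assumed to be a dde.Model:

    import numpy as np

    model.compile("adam", lr=1e-3, metrics=["l2 relative error"])
    # Or directly:
    err = dde.metrics.l2_relative_error(np.array([1.0, 2.0]), np.array([1.1, 1.9]))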

deepxde.model module

class deepxde.model.LossHistory[source]

Bases: object

append(step, loss_train, loss_test, metrics_test)[source]
class deepxde.model.Model(data, net)[source]

Bases: object

A Model trains a NN on a Data.

Parameters:
  • data – deepxde.data.Data instance.

  • net – deepxde.nn.NN instance.

compile(optimizer, lr=None, loss='MSE', metrics=None, decay=None, loss_weights=None, external_trainable_variables=None)[source]

Configures the model for training.

Parameters:
  • optimizer – String name of an optimizer, or a backend optimizer class instance.

  • lr (float) – The learning rate. For L-BFGS, use dde.optimizers.set_LBFGS_options to set the hyperparameters.

  • loss – If the same loss is used for all errors, then loss is a String name of a loss function or a loss function. If different errors use different losses, then loss is a list whose size is equal to the number of errors.

  • metrics – List of metrics to be evaluated by the model during training.

  • decay (tuple) – Name and parameters of the decay schedule applied to the initial learning rate; the available schedules are backend-specific.

  • loss_weights – A list specifying scalar coefficients (Python floats) to weight the loss contributions. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients.

  • external_trainable_variables – A trainable dde.Variable object or a list of trainable dde.Variable objects. The unknown parameters in the physics systems that need to be recovered. If the backend is tensorflow.compat.v1, external_trainable_variables is ignored, and all trainable dde.Variable objects are automatically collected.
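
A compile sketch. The decay tuple below follows the (‘inverse time’, decay_steps, decay_rate) form, one backend-dependent option (an assumption here, since the schedules are backend-specific); the loss_weights values are illustrative:

    model.compile(
        "adam",
        lr=1e-3,
        metrics=["l2 relative error"],
        decay=("inverse time", 1000, 0.3),
        loss_weights=[1, 100],  # e.g., weight the BC loss 100x the PDE loss
    )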

predict(x, operator=None, callbacks=None)[source]

Generates predictions for the input samples. If operator is None, returns the network output, otherwise returns the output of the operator.

Parameters:
  • x – The network inputs. A Numpy array or a tuple of Numpy arrays.

  • operator – A function takes arguments (inputs, outputs) or (inputs, outputs, auxiliary_variables) and outputs a tensor. inputs and outputs are the network input and output tensors, respectively. auxiliary_variables is the output of auxiliary_var_function(x) in dde.data.PDE. operator is typically chosen as the PDE (used to define dde.data.PDE) to predict the PDE residual.

  • callbacks – List of dde.callbacks.Callback instances. List of callbacks to apply during prediction.
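
A sketch of both modes, where X is a NumPy array of inputs and pde is the same residual function used to define dde.data.PDE, as described above:

    y = model.predict(X)                       # network output at X
    residual = model.predict(X, operator=pde)  # PDE residual at X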

print_model()[source]

Prints all trainable variables.

restore(save_path, device=None, verbose=0)[source]

Restore all variables from a disk file.

Parameters:
  • save_path (string) – Path where model was previously saved.

  • device (string, optional) – Device to load the model on (e.g., “cpu”, “cuda:0”, …). By default, the model is loaded on the device it was saved from.

save(save_path, protocol='backend', verbose=0)[source]

Saves all variables to a disk file.

Parameters:
  • save_path (string) – Prefix of the filenames used to save the model.

  • protocol (string) –

    If protocol is “backend”, save using the backend-specific method.

    If protocol is “pickle”, save using the Python pickle module. Only the protocol “backend” supports restore().

Returns:

Path where model is saved.

Return type:

string
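
A save/restore round-trip sketch; the returned path is assumed to encode the current training step (the exact file extension is backend-specific), so keep it for restore:

    path = model.save("model/ckpt")  # e.g., after model.train(...)
    # ... later, with a Model compiled the same way:
    model.restore(path, verbose=1)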

state_dict()[source]

Returns a dictionary containing all variables.

train(iterations=None, batch_size=None, display_every=1000, disregard_previous_best=False, callbacks=None, model_restore_path=None, model_save_path=None, epochs=None)[source]

Trains the model.

Parameters:
  • iterations (Integer) – Number of iterations to train the model, i.e., number of times the network weights are updated.

  • batch_size

    Integer, tuple, or None.

    • If you solve PDEs via dde.data.PDE or dde.data.TimePDE, do not use batch_size; instead, use dde.callbacks.PDEPointResampler (see its entry above).

    • For DeepONet in the format of Cartesian product, if batch_size is an Integer, it is the batch size for the branch input. To also use mini-batches for the trunk net input, set batch_size to a tuple, where the first number is the batch size for the branch net input and the second is the batch size for the trunk net input.

  • display_every (Integer) – Print the loss and metrics every display_every steps.

  • disregard_previous_best – If True, disregard the previous saved best model.

  • callbacks – List of dde.callbacks.Callback instances. List of callbacks to apply during training.

  • model_restore_path (String) – Path where parameters were previously saved.

  • model_save_path (String) – Prefix of filenames created for the checkpoint.

  • epochs (Integer) – Deprecated alias of iterations. This will be removed in a future version.
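
An end-to-end sketch putting Model.compile and Model.train together on a 1D Poisson problem. The geometry, boundary-condition, data, and network classes (dde.geometry.Interval, dde.icbc.DirichletBC, dde.data.PDE, dde.nn.FNN) and dde.grad.hessian come from other DeepXDE modules and are assumptions here:

    import deepxde as dde

    geom = dde.geometry.Interval(-1, 1)

    def pde(x, y):
        # Residual of -y'' = 2, i.e., y'' + 2.
        return dde.grad.hessian(y, x) + 2

    bc = dde.icbc.DirichletBC(geom, lambda x: 0, lambda x, on_boundary: on_boundary)
    data = dde.data.PDE(geom, pde, bc, num_domain=16, num_boundary=2)
    net = dde.nn.FNN([1, 32, 32, 1], "tanh", "Glorot uniform")

    model = dde.Model(data, net)
    model.compile("adam", lr=1e-3)
    losshistory, train_state = model.train(iterations=10000, display_every=1000)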

class deepxde.model.TrainState[source]

Bases: object

disregard_best()[source]
set_data_test(X_test, y_test, test_aux_vars=None)[source]
set_data_train(X_train, y_train, train_aux_vars=None)[source]
update_best()[source]