Laplace equation on a disk

Problem setup

We will solve a Laplace equation in a polar coordinate system:

\[r\frac{\partial y}{\partial r} + r^2\frac{\partial^2 y}{\partial r^2} + \frac{\partial^2 y}{\partial \theta^2} = 0, \qquad r \in [0, 1], \quad \theta \in [0, 2\pi]\]

(this is the polar-coordinate Laplacian \(\frac{\partial^2 y}{\partial r^2} + \frac{1}{r}\frac{\partial y}{\partial r} + \frac{1}{r^2}\frac{\partial^2 y}{\partial \theta^2}\) multiplied through by \(r^2\), which avoids the \(1/r\) singularity at the origin)

with the Dirichlet boundary condition

\[y(1,\theta) = \cos(\theta)\]

and the periodic boundary condition

\[y(r, \theta +2\pi) = y(r, \theta).\]

The reference solution is \(y=r\cos(\theta)\).
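It is easy to verify that this satisfies the PDE and both boundary conditions:

\[\frac{\partial y}{\partial r} = \cos(\theta), \qquad \frac{\partial^2 y}{\partial r^2} = 0, \qquad \frac{\partial^2 y}{\partial \theta^2} = -r\cos(\theta),\]

so the residual is \(r\cos(\theta) + r^2 \cdot 0 - r\cos(\theta) = 0\), and \(y(1, \theta) = \cos(\theta)\) as required.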

Implementation

This section goes through the implementation of a solver for the above-described Laplace equation step by step.

First, the DeepXDE, NumPy (np), and TensorFlow (tf) modules are imported:

import deepxde as dde
import numpy as np
from deepxde.backend import tf

We begin by defining the computational geometry. In the \((r, \theta)\) coordinates, the disk is the rectangle \([0, 1] \times [0, 2\pi]\), so we can use the built-in class Rectangle as follows:

geom = dde.geometry.Rectangle(xmin=[0, 0], xmax=[1, 2 * np.pi])

Next, we express the PDE residual of the Laplace equation:

def pde(x, y):
    dy_r = dde.grad.jacobian(y, x, i=0, j=0)
    dy_rr = dde.grad.hessian(y, x, i=0, j=0)
    dy_thetatheta = dde.grad.hessian(y, x, i=1, j=1)
    return x[:, 0:1] * dy_r + x[:, 0:1] ** 2 * dy_rr + dy_thetatheta

The first argument to pde is a 2-dimensional vector, where the first component (x[:, 0:1]) is the \(r\)-coordinate and the second component (x[:, 1:]) is the \(\theta\)-coordinate. The second argument is the network output, i.e., the solution \(y(r, \theta)\). Here, dde.grad.jacobian(y, x, i=0, j=0) computes \(\partial y/\partial r\), while dde.grad.hessian(y, x, i=0, j=0) and dde.grad.hessian(y, x, i=1, j=1) compute \(\partial^2 y/\partial r^2\) and \(\partial^2 y/\partial \theta^2\), respectively.

Next, we consider the Dirichlet boundary condition. We need to implement a function that returns True for points where the condition applies and False elsewhere. In our case, boundary returns True for points that are on the boundary of the rectangle domain and satisfy \(r = 1\); otherwise, it returns False. (Note that because of round-off errors, it is often wise to use dde.utils.isclose to test whether two floating-point values are equivalent.)

def boundary(x, on_boundary):
    return on_boundary and dde.utils.isclose(x[0], 1)

The argument x to boundary is the network input, a \(d\)-dimensional vector, where \(d = 2\) in this case. To facilitate the implementation of boundary, a boolean on_boundary is used as the second argument. If the point \((r, \theta)\) (the first argument) is on the boundary of the rectangle geometry that we created above, then on_boundary is True; otherwise, on_boundary is False.
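For example, with a few hand-picked points (illustrative only; during training DeepXDE supplies both arguments itself when filtering the sampled points):

boundary(np.array([1.0, np.pi]), True)   # True: on the boundary and r = 1
boundary(np.array([0.0, np.pi]), True)   # False: on the boundary, but r = 0
boundary(np.array([0.5, np.pi]), False)  # False: an interior point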

The boundary function defined above can equivalently be written as a lambda function and passed to DirichletBC as the third argument; the second argument is the function \(\cos(\theta)\) that gives the boundary values. Thus, the Dirichlet boundary condition is

bc_rad = dde.icbc.DirichletBC(
    geom,
    lambda x: np.cos(x[:, 1:2]),
    lambda x, on_boundary: on_boundary and dde.utils.isclose(x[0], 1),
)

Now, we have specified the geometry, PDE residual, and boundary condition. We then define the PDE problem as

data = dde.data.PDE(
    geom, pde, bc_rad, num_domain=2540, num_boundary=80, solution=solution
)

The number 2540 is the number of training residual points sampled inside the domain, and the number 80 is the number of training points sampled on the boundary. The argument solution is the reference solution, which is used to compute the error of our numerical solution; we define it as follows:

def solution(x):
    r, theta = x[:, 0:1], x[:, 1:]
    return r * np.cos(theta)

Next, we choose the network. Here, we use a fully connected neural network of depth 4 (i.e., 3 hidden layers) and width 20:

net = dde.nn.FNN([2] + [20] * 3 + [1], "tanh", "Glorot normal")

If we rewrite this problem in Cartesian coordinates, the input variables take the form \([r\sin(\theta), r\cos(\theta)]\). We use these as input features so that the underlying physical constraint is satisfied by construction: the network is automatically periodic in the \(\theta\) coordinate with period \(2\pi\).

def feature_transform(x):
    return tf.concat(
        [x[:, 0:1] * tf.sin(x[:, 1:2]), x[:, 0:1] * tf.cos(x[:, 1:2])], axis=1
    )

Then we apply feature_transform to the network inputs:

net.apply_feature_transform(feature_transform)
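As a quick sanity check (not part of the solver itself), the two features indeed coincide at \(\theta\) and \(\theta + 2\pi\), so any network built on top of them is \(2\pi\)-periodic in \(\theta\):

import numpy as np

r, theta = 0.5, 1.0
f1 = [r * np.sin(theta), r * np.cos(theta)]
f2 = [r * np.sin(theta + 2 * np.pi), r * np.cos(theta + 2 * np.pi)]
print(np.allclose(f1, f2))  # True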

Now, we have the PDE problem and the network. We build a Model and choose the optimizer and learning rate:

model = dde.Model(data, net)
model.compile("adam", lr=1e-3, metrics=["l2 relative error"])

We then train the model for 15000 iterations:

losshistory, train_state = model.train(iterations=15000)

We also save and plot the best trained result and loss history.

dde.saveplot(losshistory, train_state, issave=True, isplot=True)
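After training, the model can be evaluated at arbitrary \((r, \theta)\) points with model.predict. As a sketch (the grid sizes here are arbitrary), one can sample the rectangle on a polar grid and map the points back to Cartesian coordinates to visualize the solution on the disk:

# Evaluate the trained model on a polar grid (illustrative grid sizes)
r = np.linspace(0, 1, 50)
theta = np.linspace(0, 2 * np.pi, 100)
R, Theta = np.meshgrid(r, theta)
X = np.vstack((R.ravel(), Theta.ravel())).T
y_pred = model.predict(X)  # predicted y(r, theta), shape (5000, 1)
# Cartesian coordinates of the grid points for plotting on the disk
x_cart = R.ravel() * np.cos(Theta.ravel())
y_cart = R.ravel() * np.sin(Theta.ravel())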

Complete code

"""Backend supported: tensorflow.compat.v1, tensorflow, pytorch, paddle"""
import deepxde as dde
import numpy as np
# Import tf if using backend tensorflow.compat.v1 or tensorflow
from deepxde.backend import tf
# Import torch if using backend pytorch
# import torch
# Import paddle if using backend paddle
# import paddle


def pde(x, y):
    dy_r = dde.grad.jacobian(y, x, i=0, j=0)
    dy_rr = dde.grad.hessian(y, x, i=0, j=0)
    dy_thetatheta = dde.grad.hessian(y, x, i=1, j=1)
    return x[:, 0:1] * dy_r + x[:, 0:1] ** 2 * dy_rr + dy_thetatheta


def solution(x):
    r, theta = x[:, 0:1], x[:, 1:]
    return r * np.cos(theta)


geom = dde.geometry.Rectangle(xmin=[0, 0], xmax=[1, 2 * np.pi])
bc_rad = dde.icbc.DirichletBC(
    geom,
    lambda x: np.cos(x[:, 1:2]),
    lambda x, on_boundary: on_boundary and dde.utils.isclose(x[0], 1),
)
data = dde.data.PDE(
    geom, pde, bc_rad, num_domain=2540, num_boundary=80, solution=solution
)

net = dde.nn.FNN([2] + [20] * 3 + [1], "tanh", "Glorot normal")

# Use [r*sin(theta), r*cos(theta)] as features,
# so that the network is automatically periodic along the theta coordinate.
# Backend tensorflow.compat.v1 or tensorflow
def feature_transform(x):
    return tf.concat(
        [x[:, 0:1] * tf.sin(x[:, 1:2]), x[:, 0:1] * tf.cos(x[:, 1:2])], axis=1
    )
# Backend pytorch
# def feature_transform(x):
#     return torch.cat(
#         [x[:, 0:1] * torch.sin(x[:, 1:2]), x[:, 0:1] * torch.cos(x[:, 1:2])], dim=1
#     )
# Backend paddle
# def feature_transform(x):
#     return paddle.concat(
#         [x[:, 0:1] * paddle.sin(x[:, 1:2]), x[:, 0:1] * paddle.cos(x[:, 1:2])], axis=1
#     )

net.apply_feature_transform(feature_transform)

model = dde.Model(data, net)
model.compile("adam", lr=1e-3, metrics=["l2 relative error"])
losshistory, train_state = model.train(iterations=15000)
dde.saveplot(losshistory, train_state, issave=True, isplot=True)