
DeepONet tutorial for Advection Problem

Open dario-coscia opened this issue 2 years ago • 2 comments

In this PR I uploaded a new tutorial for the DeepONet class since it was missing.

When writing the tutorial I noticed some bugs which I corrected. Here I report a list of the bugs:

  • The aggregation (MIONet._aggregator) assumed its input was already stacked, but stacking is not always possible (e.g. net1 output shape [T, N, d], net2 output shape [T, d], with the aggregation performing einsum('TNd,Td->TNd', ...)). To avoid this problem, MIONet._aggregator now takes a tuple of tensors as input (which is also simpler for the user to understand).
  • I replaced .reshape with .unsqueeze in output_ = self._reduction(aggregated).reshape(-1, 1), since the aim of the operation is to add an extra dimension, not to reshape the tensor.
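The two fixes above can be sketched as follows (a minimal illustration, not the actual PINA code; the aggregator function and the example shapes are hypothetical):

```python
import torch

# Sketch of the aggregation fix: the aggregator receives a tuple of
# tensors (not a pre-stacked tensor), so outputs of different shapes
# can still be combined, e.g. via einsum.
def aggregator(outputs):
    out1, out2 = outputs                   # e.g. shapes [T, N, d] and [T, d]
    return torch.einsum('TNd,Td->TNd', out1, out2)

out1 = torch.rand(5, 7, 3)                 # net1 output, shape [T, N, d]
out2 = torch.rand(5, 3)                    # net2 output, shape [T, d]
print(aggregator((out1, out2)).shape)      # torch.Size([5, 7, 3])

# Sketch of the reshape fix: .unsqueeze adds an extra dimension while
# preserving the existing ones, whereas .reshape(-1, 1) flattens first.
x = torch.rand(4, 3)
print(x.reshape(-1, 1).shape)              # torch.Size([12, 1])
print(x.unsqueeze(-1).shape)               # torch.Size([4, 3, 1])
```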

Finally, I removed the consistency check on the network outputs before aggregation and added a warning message stating that consistency is up to the user to verify. This is not because I believe the check should be the user's responsibility; it is a temporary fix, since the way we were checking consistency is wrong. Indeed, consider what we were doing:

# check trunk/branch nets consistency
shapes = []
for key, value in networks.items():
    check_consistency(value, (str, int))     # value: the input variables
    check_consistency(key, torch.nn.Module)  # key: the network
    input_ = torch.rand(10, len(value))      # probe with a 2D dummy input
    shapes.append(key(input_).shape[-1])

if not all(map(lambda x: x == shapes[0], shapes)):
    raise ValueError('The passed networks have not the same '
                     'output dimension.')

This is wrong if the user specifies a custom network whose input is not of the form [N, len(value)]. For example, consider the following net:

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(2, 100)

    def forward(self, x):
        x_ = x[:, 0, :]  # assumes a three dimensional input
        return self.layer(x_)

Because of the 2D dummy input, the line shapes.append(key(input_).shape[-1]) raises an error, since we try to slice a two dimensional tensor (input_ = torch.rand(10, len(value))) as if it were a three dimensional one (x[:, 0, :]).
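A minimal, self-contained reproduction of the failure (the class is copied from above; the 2D probe mimics the old consistency check):

```python
import torch

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(2, 100)

    def forward(self, x):
        x_ = x[:, 0, :]  # assumes a three dimensional input
        return self.layer(x_)

net = Net()

# A 3D input of shape [N, S, 2] works fine:
print(net(torch.rand(10, 4, 2)).shape)  # torch.Size([10, 100])

# The 2D probe used by the old consistency check raises instead:
try:
    net(torch.rand(10, 2))
except IndexError as err:
    print('old check fails:', err)
```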

The problem of checking network consistency is more general than just MIONet or DeepONet: FNO has the same issue. Maybe we should open a dedicated PR for this problem.

dario-coscia avatar Nov 28 '23 13:11 dario-coscia

I commented the checks out because they are only correct for 2D inputs, and it is still not clear what the data format (2D/3D?) is for all models.

dario-coscia avatar Dec 03 '23 08:12 dario-coscia

@ndem0 can we merge, or should we wait until the input format is fixed?

dario-coscia avatar Dec 07 '23 14:12 dario-coscia