Valter Fallenius
**Found a bottleneck: the attention layer** I have found a potential cause of bug #22: the axial attention layer appears to be a bottleneck. I...
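A quick back-of-the-envelope check makes the suspicion plausible (a sketch of the asymptotic cost, not profiling data of the actual layer; the 448x448 resolution is the one quoted later in the thread): full self-attention over an H x W grid computes on the order of (HW)^2 pairwise scores, while axial attention, which attends along each row and each column separately, computes on the order of HW(H+W).

```python
# Rough comparison of full vs. axial self-attention over an H x W grid.
# Illustrative arithmetic only, not a measurement of the network's layer.

def full_attention_scores(h: int, w: int) -> int:
    """Pairwise scores for full attention: every cell attends to every cell."""
    n = h * w
    return n * n

def axial_attention_scores(h: int, w: int) -> int:
    """Axial attention: each cell attends along its row and its column only."""
    return h * w * (h + w)

h = w = 448  # spatial resolution mentioned in the thread
full = full_attention_scores(h, w)
axial = axial_attention_scores(h, w)
print(f"full : {full:.3e} score entries")
print(f"axial: {axial:.3e} score entries")
print(f"ratio: {full / axial:.0f}x")  # → 224x at 448x448
```

So even in its cheaper axial form the attention is by far the most expensive part of the model at this resolution, which is consistent with it showing up as a bottleneck.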
# Pull Request

## Description

Converted to pytorch-lightning for easy parallelization. Needs work on the forward pass. How do we create a forward pass dependent on a random lead time...
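One common pattern for the lead-time question (a sketch of one option, e.g. MetNet-style conditioning, not what this PR implements; `encode_lead_time` and the batch layout are hypothetical names): sample a lead time per training step, encode it as a one-hot vector that conditions the network, and supervise against the target frame at that lead time.

```python
import random

# Hypothetical sketch: condition a forward pass on a randomly sampled lead time.
# The helpers and data layout are illustrative, not the PR's code.

def encode_lead_time(lead_time: int, max_lead_time: int) -> list[int]:
    """One-hot vector that can be tiled and concatenated as extra input channels."""
    onehot = [0] * max_lead_time
    onehot[lead_time] = 1
    return onehot

def sample_training_step(targets: list, rng: random.Random):
    """Pick a random lead time; return its encoding and the matching target frame."""
    lead_time = rng.randrange(len(targets))
    return encode_lead_time(lead_time, len(targets)), targets[lead_time]

rng = random.Random(0)
targets = [f"frame_t+{i + 1}" for i in range(5)]  # placeholder target frames
onehot, target = sample_training_step(targets, rng)
print(onehot, target)
```

In a LightningModule this sampling would typically live in `training_step`, with the one-hot vector broadcast over the spatial dimensions and concatenated to the input channels before calling `forward`.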
**Training loss is not decreasing** I have implemented the network with pytorch lightning on the PR's "lightning" branch and tried to find any bugs. The network compiles without issues and...
## Detailed Description

I am running on a slightly downsampled dataset with spatial dimensions 448x448. The shape of my input tensor is (None, t = 7, c = 75,...
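For scale, simple arithmetic on the shape quoted above (assuming float32 and that the trailing spatial dimensions are 448x448) shows a single sample is already sizable before any attention maps are materialized:

```python
# Memory estimate for one input sample of shape (t=7, c=75, 448, 448),
# assuming float32 (4 bytes per element). Illustrative arithmetic only.

t, c, h, w = 7, 75, 448, 448
bytes_per_float32 = 4

elements = t * c * h * w
size_bytes = elements * bytes_per_float32
print(f"{elements:,} elements ≈ {size_bytes / 2**20:.0f} MiB per sample")
# → 105,369,600 elements ≈ 402 MiB per sample
```

At roughly 400 MiB per sample in float32, even small batch sizes put real pressure on GPU memory, before counting activations and gradients.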