Hinode

Results: 8 comments by Hinode

Hi @jacobbieker, I just re-pulled the master branch and followed your "backward()" example to train the model. My model = MetNet(hidden_dim=32, forecast_steps=18, input_channels=7, output_channels=101, sat_channels=1, input_size=128). And when starting...

@jacobbieker BTW, my model summary is below (does this make sense? Why are most of the parameters in the "DownSampler" layer?)

=====================================
Layer (type:depth-idx)        Param #
=====================================
MetNet                        --
├─MetNetPreprocessor:...
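A convolutional downsampler tends to dominate a summary like this simply because wide convolutions carry many weights. A quick way to see it is to count conv-layer parameters directly (a hypothetical sketch; the channel sizes below are illustrative, not MetNet's actual DownSampler configuration):

```python
def conv2d_params(in_ch, out_ch, k, bias=True):
    """Parameter count of a 2-D convolution: k*k weights per in/out channel pair, plus bias."""
    return out_ch * in_ch * k * k + (out_ch if bias else 0)

# Hypothetical DownSampler-like stack of wide 3x3 convolutions.
stack = [(71, 160, 3), (160, 256, 3), (256, 256, 3), (256, 256, 3)]
total = sum(conv2d_params(i, o, k) for i, o, k in stack)
print(total)  # 1651456
```

With channel widths in the hundreds, each 3x3 conv contributes hundreds of thousands of parameters, which easily outweighs the rest of the network in a per-layer summary.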

Hi @ValterFallenius, interesting. Based on your analysis, the model looks quite different from the original paper.

@ValterFallenius regarding your initial error message, "RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 39.59 GiB total capacity; 36.22 GiB already allocated; 6.19 MiB free; 37.53...
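The numbers in that message are self-consistent: with 37.53 GiB reserved on a 39.59 GiB card and only 6.19 MiB free, a 20 MiB request cannot be served. Since activations saved for backprop grow linearly with batch size, a large effective batch is the usual culprit. A back-of-the-envelope check (the per-sample cost below is illustrative, not measured):

```python
def activation_memory_gib(per_sample_gib, batch_size):
    """Activations saved for backprop scale linearly with batch size."""
    return per_sample_gib * batch_size

# If each sample's saved activations cost ~0.6 GiB (an assumed figure),
# a batch of 60 needs ~36 GiB, close to the 36.22 GiB "already allocated"
# in the error, and over budget once weights and the CUDA context are
# added on a 39.59 GiB card.
print(activation_memory_gib(0.6, 60))  # 36.0
```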

> @Hinode I don't think someone else was using the GPU; rather, the backprop kept gradients for a batch size that was too big, effectively 60 as I mentioned. >...
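One standard workaround for this failure mode is gradient accumulation: run several small micro-batches and combine their gradients before stepping, so peak activation memory tracks the micro-batch size while the resulting gradient matches the large batch. A framework-free toy with a mean-squared-error loss shows the equivalence (hypothetical data):

```python
def grad_mse(w, xs, ys):
    """Gradient of mean squared error for y ~ w*x, averaged over the batch."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.2]
w = 0.5

# Full-batch gradient (what a single batch of 4 would compute at once).
full = grad_mse(w, xs, ys)

# Gradient accumulation: two micro-batches of 2, averaged with weights
# proportional to micro-batch size.
g1 = grad_mse(w, xs[:2], ys[:2])
g2 = grad_mse(w, xs[2:], ys[2:])
accumulated = (2 * g1 + 2 * g2) / 4

assert abs(full - accumulated) < 1e-9
```

The same idea applies in PyTorch by calling `backward()` on each micro-batch and stepping the optimizer only every N micro-batches, with the loss scaled by 1/N.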

> std Thanks for sharing Casper's reply. It sounds like MetNet development will end, since he left Google ...

> @Hinode I don't think someone else was using the GPU; rather, the backprop kept gradients for a batch size that was too big, effectively 60 as I mentioned. >...

I agree that this is not an issue. But how do users set up 8 axial attention layers (4 in the x-direction and 4 in the y-direction, alternating so no two consecutive layers attend along the same axis)?
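Assuming the layers are stored as an ordered list, that constraint can be expressed by building the stack so the attention axis alternates. This is a pure-Python sketch of the ordering only, not the attention math; a real implementation would construct one attention module per entry (e.g. via the axial-attention package):

```python
def axial_layer_order(num_layers):
    """Alternate attention axes so no two consecutive layers share an axis."""
    if num_layers % 2 != 0:
        raise ValueError("expected an even number of layers")
    return ["x" if i % 2 == 0 else "y" for i in range(num_layers)]

layers = axial_layer_order(8)
print(layers)  # ['x', 'y', 'x', 'y', 'x', 'y', 'x', 'y']
```

This yields 4 x-direction and 4 y-direction layers with each x layer separated from the next by a y layer, matching the interleaving described above.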