Soumick Chatterjee, PhD
Hi @mmuckley, I was about to file an issue for a memory leak. I'm not sure about the issue from @834799106, though. I have created a small piece of code...
I also got a similar behaviour while using the Data Module.

```python
from fastmri.data.transforms import UnetDataTransform
from fastmri.pl_modules import FastMriDataModule
from fastmri.data import SliceDataset
from fastmri.data.subsample import create_mask_for_mask_type
import os
...
```
Sorry @mmuckley, I also got the same problem after running the memory profiler. I used two different versions of PyTorch. Unlike @834799106, I am using more recent versions...
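As a stdlib-only way to check for per-iteration growth without a full memory profiler, a sketch like the following can help. Note that `make_item` is a hypothetical stand-in for a dataset's `__getitem__`, not fastMRI code:

```python
import tracemalloc

def make_item(i):
    # Hypothetical stand-in for a dataset __getitem__ that
    # allocates a fresh buffer on every access.
    return bytearray(1024 * 1024)  # 1 MiB per item

tracemalloc.start()
snapshots = []
for i in range(5):
    item = make_item(i)
    del item  # drop the reference; a real leak keeps memory growing anyway
    current, peak = tracemalloc.get_traced_memory()
    snapshots.append(current)
tracemalloc.stop()

# If memory is released between iterations, the traced "current" size
# stays roughly flat instead of growing by ~1 MiB per loop.
growth = snapshots[-1] - snapshots[0]
print(f"growth over 5 iterations: {growth} bytes")
```

If the same loop over the real `SliceDataset` shows `current` climbing steadily, the references are being retained somewhere between iterations.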
> ```python
> sd = SliceDataset(
>     root=val_path,
>     challenge="multicoil",
>     transform=UnetDataTransform("multicoil", mask_func=mask_func, use_seed=False),
> )
> dl = DataLoader(sd, batch_size=1, shuffle=False, num_workers=10)
>
> for e in tqdm(sd):
> ...
> Hello @soumickmj, I copied the wrong code. The paste I showed was with the dataloader, not the dataset. This is what I get with the dataset. You can see the...
Thanks @mmuckley, I have tested with your Python 3.9 conda env and it worked without a problem. Your yml was missing fastMRI, so I installed it using `pip install git+https://github.com/facebookresearch/fastMRI.git` ...
@mmuckley This is really strange! That bare-minimum environment had almost nothing in it. While running the code, fastMRI did not throw any errors about missing packages. Still, we all got...
Haha, yes, I can confirm that too. I tried with my old work environment, running PyTorch nightly (1.11dev2), and saw the same behaviour. So the problem is with the DataTransformations...
I might have found the source: the conversion from a numpy array to a PyTorch tensor. I did not test using your transforms, though, but with my own transform, and they were showing a...
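For context on why that conversion can retain memory: `torch.from_numpy` wraps the numpy buffer without copying, so the resulting tensor keeps the whole source array alive, whereas `torch.tensor(arr)` makes an independent copy. The same buffer-sharing principle can be illustrated with numpy alone (a sketch, no torch required):

```python
import numpy as np

a = np.zeros((1024, 1024), dtype=np.float32)  # ~4 MiB buffer
view = a[::2, ::2]                            # no copy: shares a's memory

# The view holds a reference to the original array, so the full
# 4 MiB buffer cannot be freed while `view` is alive.
assert view.base is a
assert np.shares_memory(view, a)

copied = np.array(a[::2, ::2], copy=True)     # independent buffer
assert copied.base is None
```

If a transform hands back a tensor that shares the buffer of an array freshly read from HDF5, that buffer cannot be reclaimed until the tensor itself is released, which can look exactly like a per-slice leak.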
Thanks @mmuckley, I also stumbled upon the same root cause of our problem. For me, it got solved by building and installing h5py from the git repo. I will check out...