gebbissimo
@a-jahani : I downloaded your npy file, thank you very much for sharing. My results are slightly different from yours, see images below. My current suspicion is that the checkpoint...
+1, I have been wondering the same. Also, could you tell us whether you used the KITTI raw data (backprojected LiDAR) or the KITTI depth dataset (generated through post-processing by Uhrig2017)?
Doesn't the "long-winded" approach have the benefit of checking out the to-be-edited commit? That can be necessary if later commits change similar lines.
I have the same issue. Given the following notebook

```py
from tqdm.auto import tqdm
import time

for x in tqdm(range(3)):
    print(x)
    time.sleep(1)
```

when executing it using `papermill --log-output --progress-bar...
It was an out-of-memory error, but I don't have the logs anymore. I found it a bit strange that it only happened after several hours. Didn't have other tasks...
@tchaton : Thanks, I'll give it a try next time, wasn't aware of that!
@tchaton : Thanks for the super quick reply and for pointing me to the `StreamingDataLoader`. Somehow missed this - maybe this will already resolve the issue. If not, I will try...
I started using the `StreamingDataLoader` and faced some small obstacles: 1) Inside the `LightningDataModule`, I moved the litdata dataset creation from `__init__` to e.g. `train_dataloader`, based on https://github.com/Lightning-AI/litdata/issues/250. Otherwise, data...
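To make point 1) concrete, here is a minimal sketch of the deferred-creation pattern, with plain Python stand-ins: the class and attribute names are illustrative, and in the real setup the dataset would be a litdata `StreamingDataset` and the loader a `StreamingDataLoader`.

```python
class TrainDataModule:
    """Minimal stand-in for a LightningDataModule (names are hypothetical)."""

    def __init__(self, data_dir: str):
        self.data_dir = data_dir
        # Dataset is deliberately NOT created here (cf. litdata issue #250);
        # at __init__ time the worker/rank context is not yet available.
        self.train_dataset = None

    def train_dataloader(self):
        # Create the dataset lazily, when the trainer actually asks for a
        # dataloader. Placeholder stands in for StreamingDataset(self.data_dir).
        if self.train_dataset is None:
            self.train_dataset = list(range(10))
        # Placeholder stands in for StreamingDataLoader(self.train_dataset).
        return iter(self.train_dataset)
```

The point is only the structure: construction happens inside `train_dataloader`, so every fresh dataloader request sees the current process context.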
Thanks again for the rapid answer @tchaton. You recommend setting `drop_last=False`. However, in that case the (combined) dataloader has zero length, since `num_gpus * batch_size * num_workers > each_dataset_size = 49`....
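For illustration, here is the arithmetic behind the zero length. The dataset size (49) is from the report above; `num_gpus`, `batch_size`, and `num_workers` are hypothetical values chosen so that their product exceeds 49, and the sharding model is a simplifying assumption, not litdata's exact implementation.

```python
each_dataset_size = 49
num_gpus, batch_size, num_workers = 4, 8, 2  # hypothetical values

# Assume the samples are first sharded evenly across (gpu, worker) pairs:
samples_per_shard = each_dataset_size // (num_gpus * num_workers)  # 49 // 8 = 6

# If each shard must then yield full batches, no shard has enough samples
# for even one batch, so the combined loader reports length zero:
batches_per_shard = samples_per_shard // batch_size  # 6 // 8 = 0
print(batches_per_shard)  # 0
```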
https://github.com/voxel51/fiftyone/issues/3673 is related, but doesn't have a solution.