BiaPy
Training dataset sharding
Currently, the training dataset is replicated in full on every spawned worker. We should shard it across workers, as is already done in "by chunks" inference, so that memory usage stays bounded when training with large datasets.
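A minimal sketch of one way this could work, assuming the training samples are individual files on disk and each distributed rank should only load its own slice. The helper names and the direct use of `torch.distributed` below are illustrative assumptions, not BiaPy's actual API:

```python
import os
import torch.distributed as dist

def shard_file_list(all_files, rank, world_size):
    """Return only the slice of files assigned to this rank.

    A strided split keeps per-rank shard sizes balanced even when
    len(all_files) is not divisible by world_size.
    """
    return all_files[rank::world_size]

def build_training_shard(data_dir):
    """Collect the training files that this worker should load.

    Only the returned subset is read into memory on this rank, so
    per-worker memory scales roughly as len(all_files) / world_size,
    mirroring how "by chunks" inference processes one piece at a time.
    """
    all_files = sorted(os.listdir(data_dir))
    if dist.is_available() and dist.is_initialized():
        rank = dist.get_rank()
        world_size = dist.get_world_size()
    else:
        rank, world_size = 0, 1
    return shard_file_list(all_files, rank, world_size)
```

When the dataset object itself is cheap to build, `torch.utils.data.distributed.DistributedSampler` gives a similar per-rank partition of indices; note, however, that it only splits which indices are drawn, so it does not by itself avoid replication if the `Dataset` preloads all samples into memory.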