Daniel Franco-Barranco

14 issues opened by Daniel Franco-Barranco

The idea is to have more options apart from ``DATA.TRAIN.MINIMUM_FOREGROUND_PER``. Intensity-based measurements seem to be useful for users, e.g. mean, min/max, std (see [forum.sc thread](https://forum.image.sc/t/neuron-cell-detection-biapy/94198/10)).

enhancement
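A minimal sketch of what intensity-based filtering could look like; the function name and threshold parameters below are hypothetical, not part of BiaPy's current configuration:

```python
import numpy as np

def keep_patch(patch: np.ndarray,
               mean_range=(10.0, 250.0),
               max_std=60.0) -> bool:
    """Return True if the patch passes the intensity-based criteria."""
    mean, std = float(patch.mean()), float(patch.std())
    if not (mean_range[0] <= mean <= mean_range[1]):
        return False
    if std > max_std:
        return False
    return True

# Example: discard nearly empty or saturated patches before training
# patches = [p for p in patches if keep_patch(p)]
```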

The [imgaug](https://github.com/aleju/imgaug) project is no longer being updated. We should implement the transformations currently used through imgaug ourselves to remove the dependency completely. The transformations are these: - [x] #63 - [x] Random...

enhancement
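As a rough illustration (not BiaPy's actual implementation), one of the simpler imgaug transformations could be replaced with a NumPy-only version along these lines:

```python
import numpy as np

def random_rot_flip(img: np.ndarray, mask: np.ndarray, rng=None):
    """Apply the same random 90-degree rotation and flips to an image and its mask."""
    rng = rng or np.random.default_rng()
    k = int(rng.integers(0, 4))        # number of 90-degree rotations
    img, mask = np.rot90(img, k), np.rot90(mask, k)
    if rng.random() < 0.5:             # horizontal flip
        img, mask = np.fliplr(img), np.fliplr(mask)
    if rng.random() < 0.5:             # vertical flip
        img, mask = np.flipud(img), np.flipud(mask)
    return img.copy(), mask.copy()
```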

Add the option to retrain BMZ models, not just to run inference with them. There are two options we could take: - Load the models directly as TorchScript and work with that (possible...

enhancement
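A heavily hedged sketch of the TorchScript route, assuming the exported model is trainable and that `dataloader` is defined elsewhere; the file name and loss are placeholders:

```python
import torch

model = torch.jit.load("bmz_model.pt")   # hypothetical exported BMZ model
model.train()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.BCEWithLogitsLoss()

for imgs, masks in dataloader:            # dataloader assumed to exist
    optimizer.zero_grad()
    loss = loss_fn(model(imgs), masks)
    loss.backward()
    optimizer.step()
```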

We need to implement the `after_merge_patches_by_chunks_proccess_patch` function ([here](https://github.com/danifranco/BiaPy/blob/97603fbf4b94a22112c6504105fd13cb06d9a954/engine/instance_seg.py#L373)). Instance creation needs to be done by patches and then merged. Example in [cellpose code](https://github.com/MouseLand/cellpose/blob/main/cellpose/contrib/distributed_segmentation.py).

enhancement
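A very reduced sketch of per-patch instance labeling with a running label offset; stitching instances that cross patch borders (which the cellpose distributed code handles) is intentionally omitted:

```python
import numpy as np
from skimage.measure import label

def label_patch(prob_patch: np.ndarray, offset: int, thr: float = 0.5):
    """Label one patch of a probability map, keeping labels unique across patches."""
    instances = label(prob_patch > thr)
    n = int(instances.max())
    instances[instances > 0] += offset
    return instances, offset + n
```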

Add support for reading `.nii.gz` data. Something was already done in the [load_ct_data_from_dir](https://github.com/danifranco/BiaPy/blob/befef89d03043489b80dc3b2f4da7d0645248836/utils/util.py#L980) function. It needs to be incorporated into `load_data_from_dir` and `load_3d_images_from_dir`, the functions for reading 2D and 3D images respectively. We can...

enhancement
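For reference, a minimal way to read a `.nii.gz` volume is via nibabel (assuming we are fine adding it as a dependency):

```python
import nibabel as nib
import numpy as np

def load_nii(path: str) -> np.ndarray:
    """Load a .nii / .nii.gz volume as a float32 NumPy array."""
    img = nib.load(path)
    return np.asarray(img.get_fdata(), dtype=np.float32)
```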

We need to rethink how the final prediction is reconstructed when doing inference by chunks, as right now we use a lot of disk space (and time) with the mask that needs...

enhancement
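One possible direction, sketched here with illustrative shapes and paths only, is to accumulate the merged prediction and its overlap weights directly in on-disk Zarr arrays instead of a full in-memory mask:

```python
import numpy as np
import zarr

out = zarr.open("pred.zarr", mode="w", shape=(256, 2048, 2048),
                chunks=(64, 256, 256), dtype="float32")
weights = zarr.open("weights.zarr", mode="w", shape=out.shape,
                    chunks=out.chunks, dtype="float32")

def add_patch(patch: np.ndarray, z: slice, y: slice, x: slice):
    """Accumulate one overlapping patch and its weight for later normalization."""
    out[z, y, x] = out[z, y, x] + patch
    weights[z, y, x] = weights[z, y, x] + 1.0
```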

It would be nice to create a CSV file with the calculated metrics for each test sample. Also, at this point I think we need to unify the names of...

enhancement
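A small sketch of how such a per-sample CSV could be written; the metric names here are only examples:

```python
import csv

def save_metrics_csv(per_sample: dict, path: str = "test_metrics.csv"):
    """per_sample maps filename -> {'IoU': ..., 'F1': ...} (example metric names)."""
    metric_names = sorted({m for metrics in per_sample.values() for m in metrics})
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["sample"] + metric_names)
        for name, metrics in per_sample.items():
            writer.writerow([name] + [metrics.get(m, "") for m in metric_names])
```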

Currently the detection masks are not created by chunks as is done in the instance segmentation workflow. The whole image is loaded into memory and then dumped into the...

enhancement
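A rough sketch of writing the detection mask slab by slab into a chunked HDF5 dataset instead of materializing the whole volume in memory; shapes are illustrative and the point-painting step is left out:

```python
import h5py
import numpy as np

with h5py.File("detection_mask.h5", "w") as f:
    dset = f.create_dataset("mask", shape=(256, 2048, 2048),
                            chunks=(64, 256, 256), dtype="uint8")
    for z0 in range(0, dset.shape[0], 64):
        depth = min(64, dset.shape[0] - z0)
        block = np.zeros((depth,) + dset.shape[1:], dtype="uint8")
        # ...paint the detected points that fall inside this slab...
        dset[z0:z0 + depth] = block
```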

Currently the dataset is replicated for each worker that is spawned. We should split the dataset as is done in "by chunks" inference in order to save memory when...

enhancement
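One way this could look (a sketch only, with a hypothetical `load_sample` helper) is to let each DataLoader worker keep a disjoint shard of the file list:

```python
from torch.utils.data import IterableDataset, get_worker_info

class ShardedFiles(IterableDataset):
    """Each worker iterates only over its own slice of the file list."""

    def __init__(self, files):
        self.files = files

    def __iter__(self):
        info = get_worker_info()
        wid, nworkers = (info.id, info.num_workers) if info else (0, 1)
        for path in self.files[wid::nworkers]:   # disjoint shard per worker
            yield load_sample(path)              # load_sample assumed elsewhere
```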

Useful for fine-tuning just a part of the model.

enhancement
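A minimal sketch of freezing everything but one part of a PyTorch model; `decoder` is a hypothetical submodule name:

```python
import torch

def freeze_except(model: torch.nn.Module, trainable_prefix: str = "decoder"):
    """Disable gradients for every parameter not under `trainable_prefix`."""
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith(trainable_prefix)

# Only the still-trainable parameters go to the optimizer:
# optimizer = torch.optim.Adam(
#     (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```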