Training dataset size
Hi!
Thanks a lot for the great work here.
Would you mind sharing how big the dataset you used to train your model was?
Also, which strategies did you use to create the 440x440 images you used for training?
Simply resizing the original pics + alpha mattes, or random cropping to preserve aspect ratios?
Thanks a lot again and have a great one!
Hey @FraPochetti,
thanks for your interest. The augmentation pipeline is part of the configuration files (dataset.yaml). The published model was trained on roughly 30k fairly noisy samples. Even though the synthesized dataset was quite noisy, the model is still able to pick up the details, but you will most likely not reach visually perfect quality with this approach.
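To illustrate the cropping question above, here is a minimal sketch of one plausible strategy: a random square crop applied identically to the image and its alpha matte, followed by a resize to 440x440. The function name and the nearest-neighbour resize are illustrative assumptions, not the project's actual pipeline (which lives in dataset.yaml).

```python
import numpy as np

def random_crop_resize(image, alpha, out_size=440):
    """Hypothetical augmentation: take a random square crop (so the
    content inside the crop keeps its aspect ratio), then resize it to
    out_size x out_size. The same crop and resize are applied to the
    image and its alpha matte so they stay aligned."""
    h, w = image.shape[:2]
    side = min(h, w)
    # pick a random top-left corner for the square crop
    top = np.random.randint(0, h - side + 1)
    left = np.random.randint(0, w - side + 1)
    img_crop = image[top:top + side, left:left + side]
    alpha_crop = alpha[top:top + side, left:left + side]
    # nearest-neighbour resize via index sampling (no external deps)
    idx = np.arange(out_size) * side // out_size
    return img_crop[idx][:, idx], alpha_crop[idx][:, idx]
```

In practice the resize would typically be done with an image library (e.g. bilinear for the image, and often nearest or bilinear for the matte), but the alignment idea is the same: whatever geometric transform is applied to the image must be applied to the matte as well.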
Hi @dennisbappert, in your project, are the training masks binary (0/255) or alpha mattes (0-255)? Thanks, best regards.