Excuse me, may I ask when the segmentation training code will be updated?
Hey! With the help of #109 provided by @fefespn, I finished the training code, and it works on my datasets. You can find it in the 'dev' branch of my repository: https://github.com/Sirlanri/Efficientvit/blob/dev/train_seg.py (remember to fork the full repository, because I made a lot of changes; you can refer to the commit history). I'm not ready to officially release the training code, so there is some nonstandard code and Chinese comments in it. Please bear with that 😝
@Sirlanri Thanks for the fork.
I have some questions and I would be grateful if you can answer them.
I would like to train efficientvit for segmentation on my custom dataset, with custom number of classes. Can you please provide some instructions on how to use your training script (https://github.com/Sirlanri/Efficientvit/blob/dev/train_seg.py) to do the training?
Can I choose any of the pre-trained segmentation models to start with?
How can I set the batch size for training on the GPU?
How can I evaluate the model on my test dataset after training? Is there another script for that?
How can I convert the resulting model to TensorRT to run on, e.g., a Jetson Orin AGX?
Thank you.
@mzahana Hi there! Happy to help.
Question 1: In theory, yes. However, my current code doesn't support this out of the box; you need to manually load the pre-trained .pth model weights in the code.
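(Not from the repo, just a common pattern.) When the custom dataset has a different number of classes, the pre-trained segmentation head won't match the new model's shapes, so a typical trick is to drop the mismatched entries and load the rest with `load_state_dict(..., strict=False)`. A minimal, dependency-free sketch of that filtering step (the checkpoint name and `model` below are hypothetical):

```python
from collections import namedtuple


def filter_compatible_weights(pretrained_sd, model_sd):
    """Keep pretrained entries whose key and tensor shape match the model.

    Mismatched entries (e.g. the segmentation head when the number of
    classes differs) are dropped, so load_state_dict(..., strict=False)
    leaves them at their fresh initialization instead of crashing.
    """
    kept, dropped = {}, []
    for key, value in pretrained_sd.items():
        if key in model_sd and getattr(value, "shape", None) == getattr(model_sd[key], "shape", None):
            kept[key] = value
        else:
            dropped.append(key)
    return kept, dropped


# Typical PyTorch usage (checkpoint path and keys are assumptions):
#   ckpt = torch.load("efficientvit_seg_pretrained.pth", map_location="cpu")
#   kept, dropped = filter_compatible_weights(ckpt["state_dict"], model.state_dict())
#   model.load_state_dict(kept, strict=False)

# Self-contained demo: a namedtuple stands in for a tensor (only .shape matters).
T = namedtuple("T", "shape")
pretrained = {"backbone.w": T((64, 3)), "head.w": T((19, 64))}   # 19 classes
model_sd = {"backbone.w": T((64, 3)), "head.w": T((5, 64))}      # 5 custom classes
kept, dropped = filter_compatible_weights(pretrained, model_sd)
# backbone weights survive; the 19-class head is dropped and retrained
```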
Question 2: This is quite simple. Please refer to this file: https://github.com/Sirlanri/Efficientvit/blob/dev/configs/seg/train_seg_configs_sample.py. It contains many configuration parameters, including batch_size.
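For orientation only, a training config of this kind usually boils down to a handful of hyperparameters like the sketch below. These names and values are illustrative assumptions, not copied from `train_seg_configs_sample.py`; check that file for the actual parameter names:

```python
# Illustrative training config -- names/values are assumptions, not the
# real contents of configs/seg/train_seg_configs_sample.py.
train_config = {
    "batch_size": 8,      # lower this if the GPU runs out of memory
    "num_classes": 5,     # set to your dataset's class count
    "lr": 1e-4,           # initial learning rate
    "epochs": 100,
    "image_size": 512,    # input resolution fed to the network
}
```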
Question 3: I don't have a dedicated evaluation script. But you can change model.train() to model.eval() in the training source code to run inference and inspect metrics such as loss or IoU.
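Switching `model.train()` to `model.eval()` (ideally inside `torch.no_grad()`) is the standard PyTorch evaluation pattern. The IoU itself can be computed per class once the predicted logits are argmax-ed to class ids; a minimal, pure-Python sketch over flattened class-id lists (the metric itself, not the repo's code):

```python
def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over flat lists of per-pixel class ids.

    Classes absent from both pred and target are skipped so they don't
    distort the average.
    """
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious) if ious else 0.0


# Tiny demo: class 0 -> IoU 1/2, class 1 -> IoU 2/3, mean = 7/12
score = mean_iou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2)
```

In a real loop you would flatten `pred = logits.argmax(dim=1)` and the label tensor to lists (or keep everything in tensors) and average over the test set.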
Question 4: I'm sorry, I haven't worked in that area and can't help you there. But I don't think it should be too difficult; I believe you can solve the deployment problem.
😉