Javier Ródenas
Hi @yuan3ee, you need to make a few changes in some files; check this issue: https://github.com/jfzhang95/pytorch-deeplab-xception/issues/117#issuecomment-530279794. Hope it helps. Javier
What I did so far:
```
import numpy as np
import torch
import torch.nn as nn
from sklearn.utils import class_weight

# Balanced class weights computed from the target labels
class_weights = class_weight.compute_class_weight(
    class_weight='balanced',
    classes=np.unique(target_values.numpy()),
    y=target_values.numpy())
class_weights = torch.tensor(class_weights, dtype=torch.float)

# Weighted cross-entropy loss
train_loss_fn = nn.CrossEntropyLoss(weight=class_weights).cuda()
```
Note that I am changing the loss...
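In case it helps, here is a minimal sketch of how the weighted loss is used in a training step. `model`, `optimizer`, and `train_loader` are placeholders for a standard PyTorch setup, not names from the original repo:

```python
# Minimal sketch of a training step with the weighted loss.
# `model`, `optimizer`, and `train_loader` are assumed placeholders.
import torch

def train_one_epoch(model, optimizer, train_loader, loss_fn, device="cuda"):
    model.train()
    running_loss = 0.0
    for inputs, targets in train_loader:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        logits = model(inputs)            # (N, C, H, W) for segmentation
        loss = loss_fn(logits, targets)   # CrossEntropyLoss expects class indices
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    return running_loss / len(train_loader)
```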
@Anwarvic, first of all, thanks for your answer. Answering your questions:

- [x] As **outpath** I have _./SpeakerRecognition/Speaker-Recognition/Merged_Arabic_Corpus_of_Isolated_Words/_ and as **sample_rate** I have **44100** (default value).
- [x] On the...
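For what it's worth, this is a small sketch (assuming `soundfile` is installed and the corpus really lives under that outpath) that I can use to confirm the files are actually at 44100 Hz:

```python
# Quick check of the actual sample rate of the merged corpus .wav files.
# The directory below mirrors the outpath mentioned above; adjust as needed.
import soundfile as sf
from pathlib import Path

corpus_dir = Path("./SpeakerRecognition/Speaker-Recognition/Merged_Arabic_Corpus_of_Isolated_Words")
for wav_path in sorted(corpus_dir.glob("**/*.wav"))[:5]:  # inspect a few files
    info = sf.info(str(wav_path))
    print(wav_path.name, info.samplerate, info.channels, info.duration)
```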
Thanks for replying so fast. I will check the object name again and give some feedback. The symmetry issue is interesting; I will be careful with it. In my case I don't want...
The object name matches the object in the configuration exactly. I didn't explain myself properly: the target belief maps show a white point, while the generated output belief maps are completely...
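To illustrate what I mean, here is a minimal sketch of how I compare them side by side. The names `target_belief` and `output_belief` are placeholders for the belief maps produced during training, assumed to be `(num_keypoints, H, W)` arrays:

```python
# Side-by-side comparison of target vs. predicted belief maps for one keypoint.
# `target_belief` and `output_belief` are placeholder (num_keypoints, H, W) arrays.
import matplotlib.pyplot as plt
import numpy as np

def show_belief_maps(target_belief, output_belief, keypoint=0):
    fig, axes = plt.subplots(1, 2, figsize=(8, 4))
    axes[0].imshow(np.asarray(target_belief[keypoint]), cmap="gray")
    axes[0].set_title("target belief map")
    axes[1].imshow(np.asarray(output_belief[keypoint]), cmap="gray")
    axes[1].set_title("output belief map")
    for ax in axes:
        ax.axis("off")
    plt.show()
```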
Find below all the files generated from one example (images attached):

- **001589.png**
- **001589 cs**
- **001589 depth 16**
- **001589 depth cm 8**
- **001589...**
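For reference, this is roughly how I list everything that was written for a single frame prefix; the dataset directory below is a placeholder:

```python
# List every file generated for one frame prefix (e.g. "001589").
# `dataset_dir` is a placeholder path; adjust to the actual export folder.
from pathlib import Path

dataset_dir = Path("./my_ndds_export")
frame_prefix = "001589"
for f in sorted(dataset_dir.glob(f"{frame_prefix}*")):
    print(f.name, f.stat().st_size, "bytes")
```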
Thanks for your feedback. Let me explain my confusion a little. First of all, I am using a dataset I generated myself with NDDS. **Object settings:** ``` { "exported_object_classes": [ "TESTEE"...
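This is a small sketch of the check I run to make sure the class name exported by NDDS matches the one in my training config. The paths and the per-frame key names are taken from my own export and may differ in other setups:

```python
# Compare the class names exported by NDDS with the name used in the training config.
# Paths and key names are assumptions based on my export; adjust to your setup.
import json
from pathlib import Path

export_dir = Path("./my_ndds_export")

with open(export_dir / "_object_settings.json") as f:
    exported_classes = json.load(f)["exported_object_classes"]
print("Exported classes:", exported_classes)

# Check one per-frame annotation as well.
with open(export_dir / "001589.json") as f:
    frame_objects = [obj["class"] for obj in json.load(f)["objects"]]
print("Classes in frame 001589:", frame_objects)
```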
I am using an RTX 2080 Ti (12 GB). On the other hand, this is the data I am currently using (some examples attached). My experience with this environment is that...
I was testing different parameters with the same type of data:

- Green: learning_rate 0.0001 and batch_size 16
- Blue: learning_rate 0.001 and batch_size 8

**TOTAL TRAIN LOSS** (plot attached)

**AFFINITY TRAIN LOSS** ...
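For reference, the comparison plots were produced roughly like this. It is a sketch that assumes the per-epoch losses have already been collected into Python lists; the variable names are placeholders:

```python
# Plot the total train loss of two runs on the same axes to compare hyperparameters.
# `loss_run_green` and `loss_run_blue` are placeholder lists of per-epoch losses.
import matplotlib.pyplot as plt

def compare_runs(loss_run_green, loss_run_blue):
    plt.figure(figsize=(6, 4))
    plt.plot(loss_run_green, color="green", label="lr=0.0001, batch_size=16")
    plt.plot(loss_run_blue, color="blue", label="lr=0.001, batch_size=8")
    plt.xlabel("epoch")
    plt.ylabel("total train loss")
    plt.legend()
    plt.tight_layout()
    plt.show()
```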
I can try with a non-symmetrical object. Another possible issue here could be the color. I mention that because in another GitHub issue I saw someone who was playing...