Daniel Bermuth

40 comments by Daniel Bermuth

I ran another test with the code from directly before the refactoring (#188a6f2c1ee53dc79acf8abceaf729b5f9a05e7a). This time one epoch took 4 min on average and the whole training took 1:45 h. | Dataset |...

I had some time to run some more tests today (with master from about two days ago). This time an epoch took about 4:30 min on average. I also tried different...

@tilmankamp Any updates on the accuracy problem?

Were there important changes to the augmentations in between? I didn't check for that. I didn't run further tests, just the ones above. For my own training runs I still use...

I might have found a reason for the accuracy problem. First, I misunderstood the augmentation flag description, and the _pitch_ and _tempo_ flags are not converted correctly. Second, the new...
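The flag-conversion issue above can be illustrated with a small parser. This is a sketch under assumptions: the `name[key=value,...]` syntax and the `center~radius` range notation mimic DeepSpeech-style augmentation flags, but the parser itself is hypothetical, not the project's actual code. The point is that numeric values must be converted to floats (and ranges to numeric pairs) rather than kept as strings.

```python
import re

def parse_augmentation(flag):
    """Parse an augmentation flag of the (assumed) form
    'name[key=value,...]' into (name, params), converting numeric
    values to float instead of leaving them as strings."""
    match = re.fullmatch(r"(\w+)(?:\[(.*)\])?", flag)
    if not match:
        raise ValueError("bad augmentation flag: " + flag)
    name, body = match.group(1), match.group(2) or ""
    params = {}
    for pair in filter(None, body.split(",")):
        key, _, value = pair.partition("=")
        if "~" in value:
            # 'center~radius' ranges become a pair of floats
            center, radius = value.split("~")
            params[key] = (float(center), float(radius))
        else:
            try:
                params[key] = float(value)  # the conversion that must not be skipped
            except ValueError:
                params[key] = value  # non-numeric values stay as strings
    return name, params
```

For example, `parse_augmentation("pitch[p=0.1,pitch=1.1~0.2]")` yields `("pitch", {"p": 0.1, "pitch": (1.1, 0.2)})`.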

@ftyers I'm working on it right now :) But the approach I'm suggesting is a bit different from yours: it uses both steps. My transfer-learning workflow would look like this: 1....

> So, you freeze all layers apart from the last one, and then train the last layer. Then when that has trained, you train all the layers?

Exactly. First test...
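The two-stage scheme quoted above can be sketched as selecting which variables stay trainable in each stage. This is a minimal illustration, not DeepSpeech code; the `layer_<i>/...` variable naming is an assumption made for the example.

```python
def select_trainable(var_names, n_frozen):
    """Return the variable names that remain trainable when the first
    n_frozen layers are frozen. Assumes illustrative 'layer_<i>/...'
    naming; in a real graph this would filter tf.Variable objects."""
    trainable = []
    for name in var_names:
        layer = int(name.split("/")[0].split("_")[1])
        if layer > n_frozen:
            trainable.append(name)
    return trainable

variables = ["layer_1/w", "layer_2/w", "layer_3/w"]
# Stage 1: freeze everything below the last layer, train only the last one.
stage1 = select_trainable(variables, n_frozen=2)
# Stage 2: unfreeze and fine-tune all layers.
stage2 = select_trainable(variables, n_frozen=0)
```

In a real setup the stage-1 list would be passed to the optimizer as its `var_list`, so gradients are only computed for the unfrozen layers.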

I ran another test; this time I tried your approach of dropping and reinitializing the last layer. (As noted above, I normally don't drop the layer when training German,...
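Dropping and reinitializing the last layer amounts to excluding it from the checkpoint restore, so it keeps its fresh random initialization. A minimal sketch, assuming hypothetical `layer_<i>/...` names (in DeepSpeech the output layer would be the last one, but the names here are illustrative):

```python
def restore_map(checkpoint_vars, drop_layers):
    """Build the name -> value map to restore from a source checkpoint,
    excluding the dropped layers so they are freshly initialized.
    checkpoint_vars: dict of variable name -> tensor value."""
    return {name: value for name, value in checkpoint_vars.items()
            if not any(name.startswith(d + "/") for d in drop_layers)}

ckpt = {"layer_5/w": [0.1], "layer_6/w": [0.2], "layer_6/b": [0.0]}
# Drop the last layer: it is absent from the restore map and therefore
# keeps its random initialization when training starts.
restored = restore_map(ckpt, drop_layers=["layer_6"])
```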

> I don't see why you need a new flag for `load_frozen_graph`

The problem is that when loading the checkpoint for the second training, the frozen layers have no variables for...
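The checkpoint-loading problem described above can be sketched as a restore plan: variables present in the checkpoint are restored, while those missing from it (such as optimizer slot variables that were never created for layers frozen during the first stage) must be initialized instead. The `<var>/Adam` slot naming is an assumption for illustration, not the exact layout DeepSpeech uses.

```python
def plan_restore(model_vars, checkpoint_vars):
    """Split the model's variables into those restorable from the
    checkpoint and those that must be (re)initialized, e.g. optimizer
    slot variables absent because their layer was frozen earlier."""
    restore = [v for v in model_vars if v in checkpoint_vars]
    init = [v for v in model_vars if v not in checkpoint_vars]
    return restore, init

model = ["layer_1/w", "layer_1/w/Adam", "layer_2/w", "layer_2/w/Adam"]
# layer_1 was frozen in stage 1, so its Adam slot never reached the checkpoint.
ckpt = {"layer_1/w", "layer_2/w", "layer_2/w/Adam"}
restore, init = plan_restore(model, ckpt)
```

Without such filtering, a plain restore would fail on the missing slot variables, which is one motivation for handling this case explicitly.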

1. I currently used this for my German training (0 layers dropped, 1 frozen). I think this makes sense for all languages with the same alphabet as English and similar...