Francis Lata
@mthrok I don’t see recent updates on this issue. I can take a look at it if that doesn’t conflict with anyone else’s work?
Sounds good @cantabile-kwok. I’ll go through the current setup you have so that we’re working from the same one. Then feel free to get in touch with me if you have...
Thanks for that. I’ve been looking at the [Coqui TTS tokenizer class](https://github.com/coqui-ai/TTS/blob/dev/TTS/tts/utils/text/tokenizer.py#L10), which will handle phonemization if needed. It takes care of almost everything, including text normalization. So I’ll give...
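Roughly how I’m thinking of wiring it up (just a sketch; the constructor arguments and the `Graphemes` defaults below are my assumptions, not a final integration):

```python
# Rough sketch only -- the arguments here are placeholder choices, not a final config.
from TTS.tts.utils.text.tokenizer import TTSTokenizer
from TTS.tts.utils.text.characters import Graphemes

# Grapheme-based tokenizer; phonemization could be enabled by passing
# use_phonemes=True together with a phonemizer instance.
tokenizer = TTSTokenizer(
    use_phonemes=False,
    characters=Graphemes(),
)

# text_to_ids runs the cleaning step (if a cleaner is set) and returns token ids.
token_ids = tokenizer.text_to_ids("Hello world!")
print(token_ids)
```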
> The dice score should be the same as the reference no?

This one is an odd one because I was looking at the old PR that introduced this eval...
Yeah, I agree. The only place I found `0.86630` was in [Dell's blog regarding MLPerf](https://infohub.delltechnologies.com/en-us/p/mlperf-tm-inference-4-0-on-dell-poweredge-server-with-intel-r-5th-generation-xeon-r-cpu/), and that blog is fairly recent (last month).
@wozeparrot I did a quick run of MLPerf's inference script and it looks like they are getting `0.86172` for the mean DICE score. It is interesting that we are getting a higher...
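For anyone reproducing the numbers, this is the kind of mean DICE computation I’m comparing against (a minimal NumPy sketch, not the exact MLPerf reference code; class count and the toy inputs are just for illustration):

```python
import numpy as np

def mean_dice(prediction: np.ndarray, target: np.ndarray, num_classes: int = 3, eps: float = 1e-6) -> float:
    """Mean DICE over foreground classes for integer label volumes of the same shape.

    Classes start at 1; class 0 is treated as background and skipped.
    """
    scores = []
    for cls in range(1, num_classes):
        pred_mask = prediction == cls
        target_mask = target == cls
        intersection = np.logical_and(pred_mask, target_mask).sum()
        denom = pred_mask.sum() + target_mask.sum()
        scores.append((2.0 * intersection + eps) / (denom + eps))
    return float(np.mean(scores))

# Toy usage on random label volumes (the real evaluation runs over the KiTS19 cases).
pred = np.random.randint(0, 3, size=(64, 64, 64))
gt = np.random.randint(0, 3, size=(64, 64, 64))
print(mean_dice(pred, gt))
```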
I'll close this for now and come back to it once the training code has been merged to see where the difference is between tinygrad and the reference implementation.
@geohot thanks for the heads up! The only thing I haven’t tried is training on an AMD GPU, so I might give it a quick run...
I have added multi-GPU support for this one. I’ll do some more testing to get an accurate training time and will let you know once I’m ready for...
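Roughly what the multi-GPU path looks like (a simplified sketch of sharding the batch across devices with tinygrad’s `Tensor.shard`; the device list, batch size, and axis here are just for illustration, the real run reads them from the environment):

```python
from tinygrad import Tensor, Device

# Hypothetical two-device list for illustration; the actual run configures this via env vars.
GPUS = tuple(f"{Device.DEFAULT}:{i}" for i in range(2))

# Split the batch dimension across the devices so each GPU sees a slice of the batch.
x = Tensor.rand(4, 1, 128, 128, 128)
x_sharded = x.shard(GPUS, axis=0)

print(x_sharded.device)  # tuple of the devices the tensor is split across
```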
**Recent update:** I ran this training script on my M3 Max using `BS=1` at the original input size of `(128, 128, 128)` and it trains just fine. Synced with @chaosagent...