[TTS] Bug fix - sample rate was being ignored in VocoderDataset
Signed-off-by: Paarth Neekhara [email protected]
What does this PR do?
Fixes a bug where the sample rate was being ignored in VocoderDataset when loading only the audio and audio lengths, without a precomputed mel.
Collection: tts
Changelog
- Fix VocoderDataset so the sample_rate argument is respected when loading only audio and audio lengths (no precomputed mel).
- Adjust the number of samples read at the original sampling rate so that n_segments samples remain at the target sampling rate.
Usage
- Usage example (a fuller hedged sketch follows the snippet):

```python
dataset = VocoderDataset(manifest_filepath=manifest_path, sample_rate=16000)
```
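A slightly fuller sketch, for context; the import path, the n_segments argument, and the (audio, audio_len) return value are assumptions and may differ across NeMo versions:

```python
from nemo.collections.tts.torch.data import VocoderDataset  # assumed import path

dataset = VocoderDataset(
    manifest_filepath=manifest_path,  # JSON-lines manifest with audio_filepath / duration fields
    sample_rate=16000,                # audio is resampled to this rate when loaded
    n_segments=8192,                  # assumed: segment length in samples at the target rate
)

# With no precomputed mel, each item is assumed to be (audio, audio_len);
# after this fix, audio_len reflects the requested 16 kHz sample rate.
audio, audio_len = dataset[0]
```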
Before your PR is "Ready for review"
Pre checks:
- [ ] Make sure you have read and followed the Contributor guidelines
- [ ] Did you write any new necessary tests?
- [ ] Did you add or update any necessary documentation?
- [ ] Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex, etc.)
- [ ] Reviewer: Does the PR have correct import guards for all optional libraries?
PR Type:
- [ ] New Feature
- [x] Bugfix
- [ ] Documentation
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed. The Contributor guidelines list specific people who can review PRs to various areas.
Additional Information
- Related to # (issue)
@paarthneekhara, thanks for your PR! Please address this error:
File "/opt/conda/lib/python3.8/site-packages/nemo/collections/tts/losses/hifigan_losses.py", line 69, in forward
loss += torch.mean(torch.abs(rl - gl))
RuntimeError: The size of tensor a (1882) must match the size of tensor b (1878) at non-singleton dimension 2
@XuesongYang @redoctopus I think I fixed the issue. I adjusted the number of samples read at the original sampling rate so that we end up with n_segments samples at the target sampling rate after resampling. Let me know if this looks ok.
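For clarity, a minimal sketch of the adjustment described above (the numbers and variable names are illustrative, not the actual NeMo code):

```python
import math

orig_sr = 22050    # sample rate of the audio file on disk (illustrative)
target_sr = 16000  # sample_rate requested from VocoderDataset (illustrative)
n_segments = 8192  # segment length, in samples, needed at the target rate (illustrative)

# Take proportionally more samples at the original rate so that, after
# resampling to target_sr, approximately n_segments samples remain.
n_segments_at_orig_sr = math.ceil(n_segments * orig_sr / target_sr)
```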
It would be great if we had a unit test capturing this bug fix.
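A rough pytest sketch of such a test; the import path, constructor arguments, and the (audio, audio_len) return signature are assumptions and may need adjusting to the actual VocoderDataset API:

```python
import json

import numpy as np
import soundfile as sf


def test_vocoder_dataset_respects_sample_rate(tmp_path):
    # Hypothetical test: write a 1-second 22050 Hz wav, load it through
    # VocoderDataset at 16000 Hz, and check the returned audio length.
    orig_sr, target_sr, duration = 22050, 16000, 1.0

    t = np.linspace(0, duration, int(orig_sr * duration), endpoint=False)
    wav_path = tmp_path / "test.wav"
    sf.write(wav_path, (0.1 * np.sin(2 * np.pi * 440 * t)).astype(np.float32), orig_sr)

    manifest_path = tmp_path / "manifest.json"
    manifest_path.write_text(
        json.dumps({"audio_filepath": str(wav_path), "duration": duration}) + "\n"
    )

    # Import path and return signature are assumptions; adjust to the actual API.
    from nemo.collections.tts.torch.data import VocoderDataset

    dataset = VocoderDataset(manifest_filepath=str(manifest_path), sample_rate=target_sr)
    audio, audio_len = dataset[0]

    # Before the fix the audio stayed at 22050 samples; after it, ~16000.
    assert abs(int(audio_len) - int(target_sr * duration)) <= 1
```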