fairseq
calculation of the perplexity score
❓ Questions and Help
Before asking:
- search the issues. → couldn't find an answer
- search the docs. → couldn't find an answer
What is your question?
Why is the perplexity score calculated as `2**avg_nll_loss` instead of the usual `exp(avg_nll_loss)`?
https://github.com/facebookresearch/fairseq/blob/bedb259bf34a9fc22073c13a1cee23192fa70ef3/fairseq_cli/eval_lm.py#L200
Was this a deliberate choice made by the fairseq team, or is there some other reason behind it?
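For what it's worth, the two formulas agree whenever the averaged loss has already been converted from nats to bits before exponentiating, since `2**(x / ln 2) == exp(x)`. The sketch below (my own illustration, not code from fairseq; the variable names are hypothetical) shows the identity numerically:

```python
import math

# Hypothetical average negative log-likelihood in nats (natural log base).
nll_nats = 3.5

# Perplexity via the natural exponential.
ppl_exp = math.exp(nll_nats)

# If the same loss is first converted to base 2 (bits per token) by
# dividing by ln(2), then 2 ** loss yields the identical perplexity.
nll_bits = nll_nats / math.log(2)
ppl_pow2 = 2 ** nll_bits

assert math.isclose(ppl_exp, ppl_pow2)
print(ppl_exp)
```

So if the loss that `eval_lm` averages is reported in base 2, `2**avg_nll_loss` would be mathematically equivalent to `exp` of the natural-log loss.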
cc @b-dickson @zorant
What's your environment?
- fairseq Version (e.g., 1.0 or main): 0.12.2
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed fairseq (pip, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information: