Hemanth Sai

Results: 39 comments of Hemanth Sai

Does this issue have something to do with the operating system? I'm facing it on Windows.

Hi @msaroufim, I would like to implement language translation using an encoder-decoder architecture. Can I take this?

@msaroufim, can I use the transformers library for the tokenizers?
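For reference, a minimal sketch of how a Hugging Face tokenizer could plug in here; the checkpoint name is only an illustrative choice, not necessarily the one this task would use:

```python
# Minimal sketch: Helsinki-NLP/opus-mt-en-de is just an example checkpoint
# for an en->de translation setup, not the one chosen for this task.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")

batch = tokenizer(
    ["The weather is nice today."],
    return_tensors="pt",
    padding=True,
    truncation=True,
)
print(batch["input_ids"].shape)  # (batch_size, seq_len) token ids for the encoder
```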

Wanted to give an update on my task. I have finished preparing the dataset (tokenization, data loading, etc.) for the translation task and will start on the positional embeddings and other layers.
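For context, here is a minimal sketch of the sinusoidal positional encoding from "Attention Is All You Need"; the default sizes and dropout below are assumptions, not the settings used in the task:

```python
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Adds fixed sinusoidal position information to token embeddings."""

    def __init__(self, d_model: int, max_len: int = 5000, dropout: float = 0.1):
        super().__init__()
        self.dropout = nn.Dropout(p=dropout)
        position = torch.arange(max_len).unsqueeze(1)                 # (max_len, 1)
        div_term = torch.exp(
            torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model)
        )
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)                  # even dimensions
        pe[:, 1::2] = torch.cos(position * div_term)                  # odd dimensions
        self.register_buffer("pe", pe.unsqueeze(0))                   # (1, max_len, d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model)
        x = x + self.pe[:, : x.size(1)]
        return self.dropout(x)
```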

I'm seeing a lot of NaN values when I print the attn_output_weights from nn.MultiheadAttention in the decoder block. Is this expected, or is it due to a fault in the...
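For what it's worth, one common source of NaN attention weights (not necessarily the cause here) is a row that is masked everywhere: softmax over a row of all -inf scores returns NaN, e.g. when a key_padding_mask masks every position of a fully padded sequence. A tiny reproduction:

```python
import torch
import torch.nn.functional as F

scores = torch.tensor([[0.5, 1.0],
                       [0.2, 0.3]])
mask = torch.tensor([[0.0, float("-inf")],
                     [float("-inf"), float("-inf")]])  # second row fully masked

print(F.softmax(scores + mask, dim=-1))
# tensor([[1., 0.],
#         [nan, nan]])  <- the fully masked row becomes NaN
```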

```python
import torch

def generate_square_subsequent_mask(seq_len):
    # Boolean lower-triangular matrix: True where attention is allowed.
    mask = (torch.triu(torch.ones(seq_len, seq_len)) == 1).transpose(0, 1)
    # Convert to an additive mask: 0.0 for allowed positions, -inf for future positions.
    mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
    return mask
```
I'm unable to understand this way of generating masks...
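Printing the mask for a small seq_len may make it clearer: row i has 0.0 at positions up to i and -inf afterwards, and since the mask is added to the attention scores before the softmax, each position can only attend to itself and earlier positions. A quick standalone check (repeating the two lines from the function above):

```python
import torch

seq_len = 3
mask = (torch.triu(torch.ones(seq_len, seq_len)) == 1).transpose(0, 1)
mask = mask.float().masked_fill(mask == 0, float("-inf")).masked_fill(mask == 1, 0.0)
print(mask)
# tensor([[0., -inf, -inf],
#         [0., 0., -inf],
#         [0., 0., 0.]])
```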

Are there any loss functions still left to do?

I think the utterances.json1 file is missing a closing quotation mark on the line reported in the error. Maybe it can be fixed by adding the quotation mark and saving the file.
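If it helps, a small sketch for pinpointing the broken line, assuming the file is in JSON Lines format (one object per line); the filename is taken from the comment above:

```python
import json

# Assumes utterances.json1 is a JSON Lines file: one JSON object per line.
with open("utterances.json1", encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        try:
            json.loads(line)
        except json.JSONDecodeError as err:
            # An unterminated string here usually means a missing closing quotation mark.
            print(f"line {lineno}: {err}")
```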

Hello, I think they have fixed the dataset, since I was able to run the code without any issues.