grinderino
Hi, adding `.numpy()` after the `get_vocab_size()` calls will solve your issue:

```python
embed_pt = PositionalEmbedding(vocab_size=tokenizers.pt.get_vocab_size().numpy(), d_model=512)
embed_en = PositionalEmbedding(vocab_size=tokenizers.en.get_vocab_size().numpy(), d_model=512)
```
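For context on why this works (a minimal sketch, not the TensorFlow API itself): `get_vocab_size()` returns a scalar tensor rather than a plain Python `int`, while the embedding layer expects an integer for `vocab_size`. Calling `.numpy()` extracts the underlying scalar value, much like `int(...)` or `.item()` on a 0-d NumPy value, which stands in for the tensor here:

```python
import numpy as np

# A NumPy scalar standing in for the value returned by
# tokenizers.pt.get_vocab_size(); a scalar tf.Tensor behaves
# similarly for this purpose (this is an analogy, not TF code).
vocab_size_scalar = np.int64(7765)

# The raw scalar object is not a plain Python int, so library code
# that checks for (or requires) an int can reject it:
print(isinstance(vocab_size_scalar, int))        # False

# Extracting the Python int fixes that, analogous to calling
# .numpy() on the scalar tensor before passing it along:
vocab_size = int(vocab_size_scalar)
print(isinstance(vocab_size, int))               # True
```

The same idea applies to the fix above: converting the tensor to a concrete number before handing it to `PositionalEmbedding` avoids the type mismatch.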