Urchade Zaratiana
Hi @AleKitsune98, can you send me a screenshot of the error?
Have you downloaded the correct dataset, @AleKitsune98? Here is the link: the `NER_datasets` (validation data from the paper) can be obtained from https://drive.google.com/file/d/1T-5IbocGka35I7X3CE6yKe5N_Xg2lVKT/view
The training evaluation is intended for benchmark comparison on the data I linked above, but you can modify the code by looking at [this link](https://github.com/urchade/GLiNER/blob/main/examples/finetuning/trainer.py) if you have your own JSON evaluation data.
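As a starting point, here is a minimal sketch of loading such a JSON evaluation set; the file name and the sample format are assumptions, so adapt them to your own data:

```python
import json

# Load a custom evaluation set ("my_eval.json" is a placeholder path)
with open("my_eval.json", "r") as f:
    eval_data = json.load(f)

# Each sample is expected to mirror the training format, e.g.
# {"tokenized_text": [...], "ner": [[start, end, "type"], ...]}
print(f"Loaded {len(eval_data)} evaluation samples")
```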
you can create batches like this

```python
# Sample text data
all_text = ["sample text 1", "sample text 2", "sample text 3"]  # ..., "sample text n"

# Define the batch size (example value; tune it to your memory budget)
batch_size = 8

# Split the texts into chunks of `batch_size`
batches = [all_text[i:i + batch_size] for i in range(0, len(all_text), batch_size)]
```
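You can then feed each batch to the model; this sketch uses `GLiNER.from_pretrained` and `predict_entities` from the README, while the model name and label set are just examples:

```python
from gliner import GLiNER

# Example model and labels; swap in your own checkpoint and entity types
model = GLiNER.from_pretrained("urchade/gliner_base")
labels = ["person", "organization", "location"]

for batch in batches:
    for text in batch:
        # predict_entities processes one text at a time
        entities = model.predict_entities(text, labels)
        print(entities)
```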
you can try the automatic mixed precision (AMP) module in PyTorch for inference. For me it helps speed up training, but I have not tried it for inference

```python
import torch
from torch.cuda.amp import autocast

# Run the forward pass under autocast; `model` and `inputs` are
# placeholders for your CUDA model and input batch
with torch.no_grad(), autocast():
    output = model(inputs)
```
Ok, that's weird but ok 😅 Did you try `model.to('cuda')` instead of `model.cuda()`?
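For reference, a minimal sketch of the `.to()` variant, assuming `model` is your loaded GLiNER model; the CPU fallback is just a convenience:

```python
import torch

# Pick the device explicitly and move the model there;
# equivalent to model.cuda() when CUDA is available
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
```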
Ok, this is due to a problem in the data loader. You can add exception handling in the training loop to avoid stopping the training, as in the sketch below.
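A minimal self-contained sketch of that pattern; the model, optimizer, and data here are dummies standing in for your actual training setup:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader

# Dummy model, optimizer, and data; substitute your GLiNER setup
model = nn.Linear(10, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
train_loader = DataLoader(torch.randn(32, 10), batch_size=8)

for step, batch in enumerate(train_loader):
    try:
        loss = model(batch).mean()  # placeholder loss computation
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    except Exception as e:
        # Skip the offending batch instead of crashing the whole run
        print(f"Skipping batch {step} due to error: {e}")
        continue
```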
Hi, I will try to investigate the problem in detail, sorry for the delay @micaelakaplan @vatsaldin
For the CoNLL-03 dataset, you can fix the labels by setting a `label` key for each sample; that's how I did it for supervised fine-tuning: `{'tokenized_text': ['Reiterates', 'previous', '"', 'buy', '"',...`
> what can be done for normal dataset? @urchade or if you could release a fix.

What do you mean? You just have to add a `label` key in each sample of your dataset.
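A hedged sketch of adding that key, assuming each sample stores its entity spans under an `ner` key as `[start, end, type]` triples (the `ner` format and the toy annotation here are assumptions for illustration):

```python
# One sample in the assumed format: a token list plus
# [start, end, type] entity spans under an "ner" key
train_data = [
    {
        "tokenized_text": ["Reiterates", "previous", '"', "buy", '"'],
        "ner": [[3, 3, "action"]],  # toy annotation for illustration
    }
]

# Add the `label` key listing the entity types present in each sample
for sample in train_data:
    sample["label"] = sorted({span[2] for span in sample["ner"]})

print(train_data[0]["label"])  # -> ['action']
```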