Christina Du
Hi, Luheng: Thanks for your great work! I encountered some strange errors during training. I used the following command to start training your model: `python python/train.py --config=./config/srl_config.json --model=./output --train=./sample_data/sentences_with_gold.txt --dev=./sample_data/sentences_with_gold.txt...
Hi, thanks for your great work! I am wondering how to set the TRAIN_PATH mentioned in the training command. I tried to set it to the path that...
Hi Ms. Strubell, thanks for your great work! I would like to try LISA with other state-of-the-art word embedding approaches (e.g. ELMo) to improve the accuracy on a private...
Hello Ms. Strubell :-) I am trying to train and evaluate your LISA model on the CoNLL-05 dataset. I followed the recipe in this post [https://github.com/strubell/preprocess-conll05](url) for preprocessing the CoNLL-2005 dataset and I...
Hi, thanks for your great work! I am trying to replicate your experiments with Llama-2 chat models, but I find the evaluation results on NQ are quite different from the...
## Question Hi @gsarti, I find that the `attribute()` function causes a CUDA out-of-memory error when the input length exceeds about 2500 tokens. I used Llama2-7b / Mistral-7b models...
## Question Hi, I would like to use inseq to get attributions from the Llama3.1-8b-instruct model, but I got the following error when initialising the inseq model: This is my...