hongjin-su
Let me try to integrate!
Sure, I could add this!
The performance of prompt retrieval is measured by the LLM's results on downstream tasks. At the time, the paper used GPT-J. Should we switch to a more up-to-date model, e.g., Llama3-8B or...
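For reference, downstream scoring of this kind often boils down to comparing the LLM's outputs against gold answers. Here is a minimal stdlib-only sketch using exact match as the metric; the helper name and the data are illustrative, not from the actual evaluation harness:

```python
# Minimal sketch: scoring a prompt-retrieval result by the downstream
# LLM's exact-match accuracy. All names and data here are illustrative.

def exact_match_accuracy(predictions, references):
    """Fraction of model outputs that exactly match the gold answers."""
    assert len(predictions) == len(references)
    matches = sum(p.strip().lower() == r.strip().lower()
                  for p, r in zip(predictions, references))
    return matches / len(predictions)

# Hypothetical outputs from an LLM prompted with retrieved demonstrations.
preds = ["positive", "Negative", "positive"]
golds = ["positive", "negative", "neutral"]
score = exact_match_accuracy(preds, golds)  # 2 of 3 exact matches
```

Swapping GPT-J for a newer model would change `preds`, but the scoring step itself stays the same.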
I created a [PR](https://github.com/embeddings-benchmark/mteb/pull/608) to include 10 tasks for prompt retrieval. Feel free to check it out!
Thanks a lot for your interest in INSTRUCTOR! Like other LLMs, INSTRUCTOR is sensitive to the instructions, and this sensitivity may be worsened by its small size. I would say...
Sorry for the late reply! In our training and evaluation, we were not very strict about punctuation. We would be glad to make it more consistent in future versions!
Hi, thanks a lot for your interest in the INSTRUCTOR model! Could you provide a short script so that I can reproduce the error?
Hi, thanks a lot for your interest in the INSTRUCTOR model! From these blog posts ([1](https://www.philschmid.de/optimize-sentence-transformers#4-apply-dynamic-quantization-using-ortquantizer-from-optimum), [2](https://stackoverflow.com/questions/69718379/running-pytorch-quantized-model-on-cuda-gpu)), it seems that dynamic quantization is not supported on GPUs. I am not sure...
Hi, the instructions are included in the evaluation. You may refer to Table 1 in our [paper](https://arxiv.org/abs/2212.09741).
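Concretely, each input to the encoder is an `[instruction, text]` pair. A small stdlib-only sketch of preparing such inputs (the instruction wording below is just an example in the paper's style):

```python
# Sketch: pairing an instruction with each sentence, matching the
# [[instruction, text], ...] input format used by InstructorEmbedding's
# encode() call. The instruction string here is illustrative.

def with_instruction(instruction, sentences):
    """Attach the same task instruction to every sentence."""
    return [[instruction, s] for s in sentences]

inputs = with_instruction(
    "Represent the scientific title for retrieval:",
    ["Attention Is All You Need", "Deep Residual Learning"],
)
# inputs can then be passed to model.encode(inputs)
```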
Hi, could you share the script you use to print out the sentences? Also, make sure you have correctly installed the InstructorEmbedding library.
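As a quick way to rule out an installation problem before debugging further, a stdlib-only check like this can confirm the package is importable:

```python
# Quick check that a package is visible to the current interpreter,
# using only the standard library.
import importlib.util

def is_installed(package):
    """Return True if `package` can be found by this Python environment."""
    return importlib.util.find_spec(package) is not None

if not is_installed("InstructorEmbedding"):
    print("InstructorEmbedding not found; try: pip install InstructorEmbedding")
```

If the check fails inside your script but `pip show InstructorEmbedding` succeeds in the terminal, the script is likely running under a different Python environment.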