FlagEmbedding
LangChain integration with bge-m3, or llama-index?
Not sure how to generate sparse embeddings when using LangChain.
Sorry, we also don't know how to use sparse vectors in LangChain. You can use sparse vectors with Vespa and Milvus.
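Even without LangChain support, you can generate the sparse (lexical) weights directly with FlagEmbedding's `BGEM3FlagModel` and convert them into the index/value arrays that stores like Milvus, Qdrant, or Vespa expect. A minimal sketch; `to_sparse_vector` is a hypothetical helper name, and the model call is shown in comments so the snippet stays self-contained:

```python
def to_sparse_vector(lexical_weights):
    """Convert BGE-M3 lexical weights ({token_id: weight}) into
    parallel index/value lists, the format most sparse-vector
    stores (Milvus, Qdrant, Vespa) accept."""
    indices = [int(token_id) for token_id in lexical_weights]
    values = [float(lexical_weights[token_id]) for token_id in lexical_weights]
    return indices, values


# Producing the weights with FlagEmbedding (assumes `pip install FlagEmbedding`):
# from FlagEmbedding import BGEM3FlagModel
# model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)
# out = model.encode(["What is BGE-M3?"], return_dense=True, return_sparse=True)
# indices, values = to_sparse_vector(out["lexical_weights"][0])
```

The `lexical_weights` entry of `encode()`'s output is one `{token_id: weight}` dict per input text, so the conversion above is all that is needed per document.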
I think this works for llama-index @staoxiao: https://github.com/run-llama/llama_index/blob/main/llama-index-integrations/indices/llama-index-indices-managed-colbert/llama_index/indices/managed/colbert/retriever.py
https://docs.llamaindex.ai/en/stable/examples/vector_stores/qdrant_hybrid/?h=sparse#customizing-sparse-vector-generation
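The Qdrant hybrid docs linked above allow custom sparse-vector functions, so BGE-M3's lexical weights can be plugged in there. A hedged sketch, assuming the `sparse_doc_fn`/`sparse_query_fn` hooks from that page; `make_bge_m3_sparse_fn` is a hypothetical helper, and `model` is expected to expose `BGEM3FlagModel`'s `encode()` interface:

```python
def make_bge_m3_sparse_fn(model):
    """Build a callable matching the sparse-fn contract from the
    llama-index Qdrant hybrid docs: a batch of texts in, a tuple of
    (index lists, value lists) out, one pair per text."""
    def sparse_fn(texts):
        # Ask the model for sparse (lexical) weights only.
        out = model.encode(
            list(texts),
            return_dense=False,
            return_sparse=True,
            return_colbert_vecs=False,
        )
        batch_indices, batch_values = [], []
        for weights in out["lexical_weights"]:  # one {token_id: weight} per text
            batch_indices.append([int(t) for t in weights])
            batch_values.append([float(w) for w in weights.values()])
        return batch_indices, batch_values
    return sparse_fn


# Wiring it up (sketch; assumes llama-index-vector-stores-qdrant is installed):
# from FlagEmbedding import BGEM3FlagModel
# from llama_index.vector_stores.qdrant import QdrantVectorStore
# sparse_fn = make_bge_m3_sparse_fn(BGEM3FlagModel("BAAI/bge-m3"))
# store = QdrantVectorStore(
#     "my_collection", client=client, enable_hybrid=True,
#     sparse_doc_fn=sparse_fn, sparse_query_fn=sparse_fn,
# )
```

The same function can serve both document and query encoding, since BGE-M3 uses one model for both sides.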