
[Issue]: QdrantRetrieveUserProxyAgent is missing support for text-embedding-ada-002 embedding model

Open Halpph opened this issue 2 years ago • 2 comments

Describe the issue

Issue Overview: In this GitHub issue, the proposal for implementing QdrantRetrieveUserProxyAgent was successfully executed. However, upon attempting to use the feature, it was discovered that the current implementation relies on qdrant_client, which in turn depends on fastembed. Consequently, only the specific set of models listed in SUPPORTED_EMBEDDING_MODELS is supported:

SUPPORTED_EMBEDDING_MODELS: Dict[str, Tuple[int, models.Distance]] = {
    "BAAI/bge-base-en": (768, models.Distance.COSINE),
    "sentence-transformers/all-MiniLM-L6-v2": (384, models.Distance.COSINE),
    "BAAI/bge-small-en": (384, models.Distance.COSINE),
    "BAAI/bge-small-en-v1.5": (384, models.Distance.COSINE),
    "BAAI/bge-base-en-v1.5": (768, models.Distance.COSINE),
    "intfloat/multilingual-e5-large": (1024, models.Distance.COSINE),
}
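Because fastembed only knows these models, any other model name fails the lookup. A minimal, dependency-free sketch of that gate (the dict is reproduced here with plain strings in place of qdrant_client's models.Distance enum, and vector_params is an illustrative helper, not AutoGen's actual code):

```python
from typing import Dict, Tuple

# Mirror of the SUPPORTED_EMBEDDING_MODELS mapping quoted above:
# model name -> (vector dimension, distance metric name).
SUPPORTED_EMBEDDING_MODELS: Dict[str, Tuple[int, str]] = {
    "BAAI/bge-base-en": (768, "Cosine"),
    "sentence-transformers/all-MiniLM-L6-v2": (384, "Cosine"),
    "BAAI/bge-small-en": (384, "Cosine"),
    "BAAI/bge-small-en-v1.5": (384, "Cosine"),
    "BAAI/bge-base-en-v1.5": (768, "Cosine"),
    "intfloat/multilingual-e5-large": (1024, "Cosine"),
}

def vector_params(model_name: str) -> Tuple[int, str]:
    """Return (dimension, distance) for a supported model, or raise."""
    if model_name not in SUPPORTED_EMBEDDING_MODELS:
        # This is effectively the error path hit when requesting
        # a model such as text-embedding-ada-002.
        raise ValueError(f"Unsupported embedding model: {model_name!r}")
    return SUPPORTED_EMBEDDING_MODELS[model_name]

print(vector_params("BAAI/bge-small-en"))  # -> (384, 'Cosine')
# vector_params("text-embedding-ada-002") would raise ValueError
```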

Enhancement Proposal: It is suggested that support be extended to additional models beyond the current list. A reference implementation, inspired by the approach taken in Issue 253, is provided below:

from typing import List

from litellm import embedding as test_embedding
from qdrant_client.http.models import (
    FieldCondition,
    Filter,
    MatchText,
    SearchRequest,
)

# Embed the query texts with the OpenAI model via litellm.
embed_response = test_embedding(model='text-embedding-ada-002', input=query_texts)

all_embeddings: List[List[float]] = []
for item in embed_response['data']:
    all_embeddings.append(item['embedding'])

# Build one search request per query embedding, filtering on page content.
search_queries: List[SearchRequest] = []
for embedding in all_embeddings:
    search_queries.append(
        SearchRequest(
            vector=embedding,
            filter=Filter(
                must=[
                    FieldCondition(
                        key="page_content",
                        match=MatchText(
                            text=search_string,
                        )
                    )
                ]
            ),
            limit=n_results,
            with_payload=True,
        )
    )

# `client` is an existing QdrantClient instance.
search_response = client.search_batch(
    collection_name="{your collection name}",
    requests=search_queries,
)

This adds a dependency on litellm, but I think a contribution toward this enhancement would greatly benefit the community by expanding the set of supported models and enhancing the overall utility of QdrantRetrieveUserProxyAgent.

Steps to reproduce

No response

Screenshots and logs

No response

Additional Information

No response

Halpph avatar Jan 16 '24 15:01 Halpph

We are currently understaffed on the RAG front. Would you be willing to submit a PR to fix this issue?

ekzhu avatar Jan 16 '24 19:01 ekzhu

We've come up with a complete pull request for this issue using any general embedding function that returns a list of embeddings. We'll post our pull request in the next few hours.
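For context, a "general embedding function that returns a list of embeddings" could have a shape like the following sketch. The callable name and the dummy vectors are purely illustrative, not the actual API of the pull request:

```python
from typing import Callable, List

# Type alias for the kind of callable described above: it takes a list of
# texts and returns one embedding (a list of floats) per input text.
EmbeddingFunction = Callable[[List[str]], List[List[float]]]

def fake_embedding_function(texts: List[str]) -> List[List[float]]:
    # Stand-in for a real model call (e.g. litellm's embedding()):
    # returns a deterministic 4-dimensional vector per input text.
    return [[float(len(t)), 1.0, 0.0, 0.0] for t in texts]

vectors = fake_embedding_function(["hello", "world!"])
print(len(vectors), len(vectors[0]))  # -> 2 4
```

Any model, including text-embedding-ada-002, can then be plugged in by wrapping its client call in a function with this signature.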

ykim-isabel avatar Jan 17 '24 19:01 ykim-isabel

> We've come up with a complete pull request for this issue using any general embedding function that returns a list of embeddings. We'll post our pull request in the next few hours.

Is this implemented?

vitorsabbagh avatar Feb 06 '24 17:02 vitorsabbagh

We'll submit the draft pull request for review.

ykim-isabel avatar Feb 16 '24 11:02 ykim-isabel

I have also just tried to use the ada embedding model by performing the vectorization of chunks outside of AutoGen, using LlamaIndex, and then querying the populated Qdrant database with the notebook example at https://github.com/microsoft/autogen/blob/main/notebook/agentchat_qdrant_RetrieveChat.ipynb, using "embedding_model": "default" and docs_path: None.

However, the error message says that embedding_model must be one of the 12 options listed at https://qdrant.github.io/fastembed/examples/Supported_Models/, and ada is not among them.

joshkyh avatar Feb 19 '24 23:02 joshkyh