
Feature request: ConversationalRetrieval and definition of chain_type

Open 94bb494nd41f opened this issue 2 years ago • 8 comments

I think ConversationalRetrieval would be a great feature for this awesome project: https://python.langchain.com/docs/modules/chains/popular/chat_vector_db Unfortunately I am not good enough to make it work; I don't know how to pass the "memory" around.

Additionally, I think some users would benefit from a "threads" option for ctransformers, as well as from configurable chain_type and search_type for the retriever.

94bb494nd41f avatar Jun 27 '23 11:06 94bb494nd41f

I will look into this. If the performance is good, I will make it as default, otherwise will add a config option to enable this. It will also solve #22

marella avatar Jun 29 '23 17:06 marella

Looking forward to this @marella . Is this still on your to do?

Ananderz avatar Jul 19 '23 10:07 Ananderz

Hi, yes. I was out of station with a slow internet for the past few days, so the progress has slowed down. I will start looking into the pending issues next week.

marella avatar Jul 29 '23 10:07 marella

Great to have you back @marella

Ananderz avatar Jul 31 '23 11:07 Ananderz

@marella I've been trying to implement this function on my own. I think I might almost be there and have it functional. The problem is that so far it only remembers the answer, not the prompt itself. Let me know if you want to see my changes.

Ananderz avatar Aug 14 '23 13:08 Ananderz

Great! Please share link to your repo/branch.

marella avatar Aug 20 '23 19:08 marella

I did this @marella

Here is my chains.py:

from typing import Any, Callable, Dict, Optional

from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

from .llms import get_llm
from .vectorstores import get_vectorstore

def get_retrieval_qa(
    config: Dict[str, Any],
    *,
    callback: Optional[Callable[[str], None]] = None,
) -> ConversationalRetrievalChain:
    db = get_vectorstore(config)
    retriever = db.as_retriever(**config["retriever"])
    llm = get_llm(config, callback=callback)
    memory = ConversationBufferMemory(
        memory_key="chat_history",
        return_messages=True,
        input_key="question",
        output_key="answer",
    )
    return ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=retriever,
        return_source_documents=True,
        memory=memory,
    )

In index.html I changed this:

answer.innerText = res.result; --> answer.innerText = res.answer;

In ui.py I changed this:

res["result"] = data["result"] --> res["answer"] = data["answer"]
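The key renames are needed because RetrievalQA puts its text under "result" while ConversationalRetrievalChain puts it under "answer". If you want the UI code to tolerate either chain, a small defensive helper (hypothetical, not part of chatdocs) could look like this:

```python
def extract_answer(res):
    """Return the answer text from a chain response dict.

    RetrievalQA responses use the "result" key, while
    ConversationalRetrievalChain responses use "answer".
    """
    return res.get("answer", res.get("result"))
```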

Ananderz avatar Aug 24 '23 12:08 Ananderz

It repeats the question before it gives an answer, and then the repeated question is removed.

Ananderz avatar Aug 24 '23 12:08 Ananderz