langchain-chat-websockets

Async generation not implemented for this LLM.

Open akashAD98 opened this issue 2 years ago • 1 comments

I tried Mistral and Llama 2 7B via CTransformers and get this error. Is there any way to add support for this? How can we implement it with WebSockets?

    from langchain.llms import CTransformers

    # GGUF model pulled from the Hugging Face Hub
    streaming_llm = CTransformers(
        model='TheBloke/Mistral-7B-v0.1-GGUF',
        model_file='mistral-7b-v0.1.Q4_K_M.gguf',
        model_type='mistral',
    )

    # local GGML model file
    streaming_llm = CTransformers(
        model='llama-2-7b.ggmlv3.q5_0.bin',
        model_type='llama',
        config={'max_new_tokens': 128, 'temperature': 0.01},
    )
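The "Async generation not implemented" error is raised because the LLM wrapper only provides synchronous generation. One common workaround (a sketch, not the project's actual code) is to run the blocking call in a worker thread and hand each token to the event loop through an `asyncio.Queue`; here `blocking_generate` is a hypothetical stand-in for a streaming CTransformers call, not the real API.

```python
import asyncio

def blocking_generate(prompt, on_token):
    # Hypothetical stand-in for a blocking, token-streaming LLM call.
    for tok in ["Hello", " ", "world"]:
        on_token(tok)

async def agenerate(prompt):
    loop = asyncio.get_running_loop()
    queue: asyncio.Queue = asyncio.Queue()
    sentinel = object()

    def on_token(tok):
        # Called from the worker thread: hand the token to the event loop.
        loop.call_soon_threadsafe(queue.put_nowait, tok)

    def run():
        try:
            blocking_generate(prompt, on_token)
        finally:
            # Always signal completion, even if generation raises.
            loop.call_soon_threadsafe(queue.put_nowait, sentinel)

    task = loop.run_in_executor(None, run)
    tokens = []
    while True:
        item = await queue.get()
        if item is sentinel:
            break
        tokens.append(item)  # here you would `await websocket.send_text(item)`
    await task
    return "".join(tokens)

print(asyncio.run(agenerate("hi")))
```

Inside a WebSocket handler, the `tokens.append(...)` line is where each token would be forwarded to the client as it arrives.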

Can we use TGI (Text Generation Inference) here instead, by passing its endpoint URL? Would that be supported?

akashAD98 avatar Oct 18 '23 13:10 akashAD98

Mistral 7B rulez

Imho it's the outdated langchain version pinned in this project that's causing the issue.

Did you find a workaround or a different solution? I also need to proxy SSE over to a WebSocket in my app.
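Proxying SSE to a WebSocket mostly comes down to parsing the `data:` lines of the event stream and forwarding each payload. A minimal sketch, assuming an OpenAI-style `[DONE]` terminator; `sse_lines` stands in for an aiohttp/httpx streaming response and `send` for something like `websocket.send_text`:

```python
import asyncio

async def sse_lines():
    # Stand-in for lines read from a streaming HTTP (SSE) response.
    for line in ["data: Hello", "", "data: world", "", "data: [DONE]", ""]:
        yield line

async def proxy_sse_to_ws(send):
    # Forward each SSE data payload to the WebSocket-like `send` callable.
    async for line in sse_lines():
        if line.startswith("data: "):
            payload = line[len("data: "):]
            if payload == "[DONE]":  # assumed end-of-stream marker
                break
            await send(payload)

async def main():
    out = []
    async def send(text):
        out.append(text)
    await proxy_sse_to_ws(send)
    return out

print(asyncio.run(main()))
```

In a real app, `send` would be the WebSocket framework's send method and `sse_lines` would iterate over the upstream HTTP response.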

janfilips avatar Nov 06 '23 20:11 janfilips