Imran Ullah
> condense_question_prompt ?!
> ```python
> qa = ConversationalRetrievalChain.from_llm(
>     ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"),
>     retriever=vectorstore.as_retriever(search_kwargs={"k": 3}),
>     return_source_documents=True,
>     verbose=True,
>     chain_type="stuff",
>     get_chat_history=lambda h: h,
>     combine_docs_chain_kwargs={"prompt": base_template},
>     memory=memory,
> )
> ```

@shivanipatel7, you can try it this way. It works. This...
I think this library is poorly maintained; no one cares about keeping the documentation up to date.
> Hi,
>
> I've uploaded some notebooks illustrating how to fine-tune LayoutLMv2/LayoutXLM for relation extraction here: https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutXLM

Hey @NielsRogge, I want to get the output in JSON format. How I...
> Hi @imrankh46 Thanks for the issue! We are aware of the issue, for now the solution is to pass `device_map={"":0}` when calling `PeftModel.from_pretrained`, we will work on a proper...
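The workaround quoted above can be sketched as follows. This is a minimal sketch, not the maintainers' exact code: the base model and adapter IDs are placeholders, and the imports are done lazily inside the function so the sketch does not require a GPU environment just to load.

```python
# Sketch of the suggested workaround: pass device_map={"": 0} when loading
# the PEFT adapter so every module is placed on GPU 0. The empty-string key
# means "all remaining (unassigned) modules".
DEVICE_MAP = {"": 0}

def load_peft_model(base_model_id: str, adapter_id: str):
    # Lazy imports keep this sketch importable without transformers/peft installed.
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    # Load the base model with the same device map, then attach the adapter.
    base = AutoModelForCausalLM.from_pretrained(base_model_id, device_map=DEVICE_MAP)
    return PeftModel.from_pretrained(base, adapter_id, device_map=DEVICE_MAP)
```

A hypothetical call would look like `load_peft_model("my-org/base-llm", "my-org/lora-adapter")`; the key point is that the `device_map` argument goes to `PeftModel.from_pretrained`, not only to the base model.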
> @YSLLYW Can you try installing `accelerate` from source?
>
> ```shell
> pip install git+https://github.com/huggingface/accelerate
> ```

I already solved the issue. Thanks!
> +1, got the same error when running alpaca lora

Why am I getting this error? The same code works on Colab but not in a Kaggle notebook.
> +1 It used to work on Kaggle, I literally changed nothing about the env, and suddenly I discovered it broke today.

So what is the solution? Leave it.