Nason

Results: 10 issues for "Nason"

Hi there! First, I wanted to say fantastic work; I'm looking forward to hopefully implementing this on some projects. I've just run your example code: `python evaluate.py --target-variable='income' --train-data-path=./data/adult_processed_train.csv --test-data-path=./data/adult_processed_test.csv...`

I am trying to follow this guide on evaluating agents (https://python.langchain.com/en/latest/use_cases/evaluation/generic_agent_evaluation.html), but I'm seeing the following error: `ImportError: cannot import name 'ChainManagerMixin' from 'langchain.callbacks.base'`. I am using langchain==0.0.154 with...
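An `ImportError` like this usually means the installed langchain predates the symbol being imported. As a hedged sketch (plain Python, not langchain-specific), one can probe whether an installed module actually exposes a name before relying on it; `has_symbol` is a hypothetical helper, not part of langchain:

```python
import importlib


def has_symbol(module_name: str, symbol: str) -> bool:
    """Return True if module_name imports cleanly and defines symbol."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, symbol)


# With langchain==0.0.154 installed, this would reveal whether
# ChainManagerMixin exists in that release:
# has_symbol("langchain.callbacks.base", "ChainManagerMixin")
```

If the symbol is missing, upgrading langchain to a release that defines it (or pinning the guide's tested version) is the usual resolution.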

I have tried to run a Mistral model with the search API, but the web results don't seem to be making it to the model. I'm hosting the model through...

support
websearch

**Description:** I encountered an error when trying to run the `run_prewriting.py` script with the `gpt-3.5-turbo` engine. I followed the setup instructions in the README, including:

- Creating and activating a...

### What is the issue?

**Description**

When attempting to run the `llama3:instruct` model using the `ollama run` command, I encountered an error indicating that the executable `ollama_llama_server` could not be...

bug
macos

I have an hour-long meeting which I would like to transcribe. Looking at the text and SRT output, I can see that only the first 11 minutes have been...

I have an hour-long meeting which I would like to transcribe. I've attempted to do so with:

```
import whisper_timestamped as whisper

audio = whisper.load_audio("/content/Meeting Recording.mp4")
model = whisper.load_model("medium",...
```
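When a transcript cuts off early, a common workaround is to split the recording into fixed-length chunks and transcribe each span separately, then stitch the results. The sketch below only computes the chunk boundaries; `chunk_spans` is a hypothetical helper, not part of whisper_timestamped:

```python
def chunk_spans(total_seconds: float, chunk_seconds: float, overlap: float = 0.0):
    """Return (start, end) offsets in seconds that cover the whole
    recording, optionally overlapping so boundary words are not lost."""
    spans = []
    start = 0.0
    while start < total_seconds:
        end = min(start + chunk_seconds, total_seconds)
        spans.append((start, end))
        if end >= total_seconds:
            break
        start = end - overlap
    return spans


# A one-hour meeting in 10-minute chunks yields six spans,
# the last ending at 3600.0 seconds:
# chunk_spans(3600, 600)
```

Each span can then be cut from the audio (e.g. with ffmpeg) and fed to the model independently, which also makes it obvious which chunk, if any, fails.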

**env:** transformers==4.35.2, ctransformers==0.2.27+cu121

```
from ctransformers import AutoModelForCausalLM, AutoTokenizer

model_name = "/home/me/project/search_engine/text-generation-webui/models/OpenHermes-2.5-Mistral-7B-GGUF/openhermes-2.5-mistral-7b.Q5_K_M.gguf"

def load_model(model_name: str):
    model = AutoModelForCausalLM.from_pretrained(model_name, hf=True)
    tokenizer = AutoTokenizer.from_pretrained(model)
    return model, tokenizer

# unpack in the same order load_model returns: (model, tokenizer)
model, tokenizer = load_model(model_name)
```
...

## 📝 Describe the Output Issue

Performance is extremely slow on an RTX 3090, taking 5-10 minutes to process a single 10-page PDF. The GPU has 24GB of VRAM but is only using...

# VLM Pipeline Hangs Indefinitely During Document Processing

### Bug

The VLM pipeline with Granite-Docling hangs indefinitely during document processing on an NVIDIA RTX 3090 GPU. The process gets stuck at...

bug