"Query failed: 'NoneType' object is not iterable" Error when starting Verba Chat
Description
Application is up and running, but Verba Chat is not working. It is showing "Something went wrong: 'NoneType' object is not iterable", although Verba variables are available.
If I look at the logs, I see the following (I've included some lines above and below for context):
```
INFO: 127.0.0.1:64363 - "POST /api/set_config HTTP/1.1" 200 OK
INFO: ('127.0.0.1', 64374) - "WebSocket /ws/generate_stream" [accepted]
INFO: connection open
INFO: 127.0.0.1:64363 - "POST /api/suggestions HTTP/1.1" 200 OK
✔ Received query: What is verba?
⚠ Query failed: 'NoneType' object is not iterable
INFO: 127.0.0.1:64386 - "POST /api/query HTTP/1.1" 200 OK
```
Is this a bug or a feature?
- [x] Bug
Steps to Reproduce
Follow the steps: set up the `.env` file and run the project.
Same here.
Have you tried to add documents?
I have tried it, but no luck. I have attached an image of this issue.
```
(INFO) Importing...
(INFO) Importing 1 files with BasicReader
(INFO) Importing Building REST APIs with Flask.pdf
(SUCCESS) Loaded 1 documents in 0.7s
(INFO) Starting Chunking with TokenChunker
(SUCCESS) Chunking completed with 102 chunks in 0.1s
(INFO) Starting Embedding with ADAEmbedder
(ERROR) Embedding not successful: Chunk mismatch for fdf24eb7-2c09-439b-b7ce-4169a3c1e49f 0 != 102
(ERROR) Chunk mismatch for fdf24eb7-2c09-439b-b7ce-4169a3c1e49f 0 != 102
```
I added a document successfully. My test involved just uploading a PDF and then trying to ask a simple question about it. Got the error when trying to chat.
I do notice that https://github.com/weaviate/Verba/issues/171 and https://github.com/weaviate/Verba/issues/167 seem to be the very same issue.
@Badhansen try clicking the greyed out button under "Select an Embedder". That should allow you to select an embedder and should fix that issue.
I still get the `Query failed: 'NoneType' object is not iterable` problem even after successful embedding.
@cha0s It's working. Thank you.
@cha0s Did you manage to solve your issue after embedding?
@zotttttttt After embedding, it's working. Now I can successfully load the document.
Sadly, no, it doesn't work for me. If I can't find some other solution that works, I may try to debug it.
Hi! @cha0s Can you follow the steps below? I think it will work for you as well.

- Check whether `llama3` is installed and running in the background. You can check the image; it's running and answering questions.
- Set up the `.env` file from the `.env.example` file.
- Re-install by running `pip install -e .`.
- As you have already mentioned, select the embedding option.

Hope it works for you this time. Thanks.
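The first step above can be sketched as a quick Python check. This is a hypothetical helper, not part of Verba; it assumes Ollama's standard `GET /api/tags` endpoint on the default port 11434, which lists the locally pulled models:

```python
import json
import urllib.request

def ollama_has_model(tags: dict, name: str) -> bool:
    """Check a parsed /api/tags response for an installed model by name prefix.

    Ollama reports names with a tag suffix (e.g. "llama3:latest"), so we
    match on the prefix rather than requiring an exact string.
    """
    return any(m["name"].startswith(name) for m in tags.get("models", []))

def check_ollama(url: str = "http://localhost:11434") -> bool:
    """Query a running Ollama server and report whether llama3 is pulled."""
    with urllib.request.urlopen(f"{url}/api/tags") as resp:
        return ollama_has_model(json.load(resp), "llama3")

# Example (requires a running Ollama server):
#   check_ollama()  # True only if `ollama pull llama3` has been run
```

If this returns `False` (or the request fails outright), Verba has nothing to generate with, which matches the symptom above.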
Let me know if this fix helps for now! I'm looking into debugging this
@thomashacker It's working
- llama is running, the embedder is working fine
- I'm using docker compose
- Embedding works fine. Chat is broken.
I notice that every time I refresh it forces me back to GPT3, even though I only put a llama model and no OpenAI key:
That seems like a bug and a possible culprit. I always set it back to ollama before I test. It persists as "Ollama" until next time I f5.
My docker compose looks like:

```yaml
environment:
  - WEAVIATE_URL_VERBA=http://weaviate:8080
  - OLLAMA_URL=<MY OLLAMA URL>
  - OLLAMA_MODEL=llama3
```

(Yes, the Ollama URL works; I use it for other AI apps I am researching.)
It would help if the logs actually had a backtrace, for instance. My log:

```
May 21 06:11:42 HOSTNAME docker-compose[2950244]: verba_1 | INFO: IP:59578 - "POST /api/suggestions HTTP/1.1" 200 OK
May 21 06:11:42 HOSTNAME docker-compose[2950244]: verba_1 | ✔ Received query: MY QUERY
May 21 06:11:42 HOSTNAME docker-compose[2950244]: verba_1 | ⚠ Query failed: 'NoneType' object is not iterable
May 21 06:11:42 HOSTNAME docker-compose[2950244]: verba_1 | INFO: IP:59578 - "POST /api/query HTTP/1.1" 200 OK
```
This project looks so interesting! It's a shame that it's broken for me with little clue of where to start looking.
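For what it's worth, the error string is just Python's generic message for iterating over `None`. A minimal reproduction of the failure mode (this is an illustrative sketch, not Verba's actual code): if some retrieval step returns `None` instead of an empty list, iterating over it raises exactly this `TypeError`, and catching the exception and logging only `str(e)` would explain why there is no traceback in the logs:

```python
def fake_retrieve(query: str):
    """Stand-in for a retrieval step that returns None instead of a list
    (e.g. when no embedder/generator is actually configured)."""
    return None

def answer(query: str) -> str:
    chunks = fake_retrieve(query)
    # Iterating over None raises: TypeError: 'NoneType' object is not iterable
    return " ".join(c["text"] for c in chunks)

try:
    answer("What is verba?")
except TypeError as e:
    # Logging only str(e) reproduces the log line seen above,
    # with no traceback to point at the failing call site.
    print(f"Query failed: {e}")
```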
I have the same problem with NoneType. Ollama is running through docker and Verba is running through docker. I tried different models in Ollama, but I still get an error.
Watching the new issues, I believe my error may be related to https://github.com/weaviate/Verba/issues/184
After seeing this I cleared all my documents and tried to embed them again. This time it worked. I suspect when they were first embedded they were not using the ollama model.
I believe that may have been why f5'ing the page kept resetting the generator. That also seems to have gone away after re-embedding the documents.
I had the same issue, but then realized that autocorrect had messed up when I wrote llama3: it wrote llame3 instead. Please make sure that you have set the model correctly:

```
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3
```

Then try again.
> Watching the new issues, I believe my error may be related to #184
>
> After seeing this I cleared all my documents and tried to embed them again. This time it worked. I suspect when they were first embedded they were not using the ollama model.
>
> I believe that may have been why f5'ing the page kept resetting the generator. That also seems to have gone away after re-embedding the documents.
@cha0s I'm glad to know that it's working for you.
Thanks a lot everyone for the feedback, we'll make sure to update the README and make the error logs more useful!
Same problem here: it doesn't work, and I get the same type error. I can't upload docs; I just get errors that mean nothing. Running in Docker, the documentation is disjointed, and I've wasted enough time.
Hello @Benniepie! Running in Docker has some issues. You can try using a virtual env instead; it works there.
The virtual env option doesn't work on Windows machines due to the embedded DB problem on Windows, so we are forced to use Docker. Is there any solution to this error? Are there folks using Docker who are not having issues?
Hi! @zbalsara21 and @Benniepie, you can use this PR. I think the issue is fixed with this PR. Thanks.
PR Link: https://github.com/weaviate/Verba/pull/204
Hello, I had this error but I've finally managed to fix it. I got it to work with both OpenAI GPT3 and Ollama (llama3 and mxbai-embed-large).
The problem for me was that Verba wasn't able to reach an LLM. For GPT3, I needed to add `OPENAI_BASE_URL=https://api.openai.com/v1` to my environment.
For Ollama I found that I just hadn't pulled the images. I had Ollama in a docker compose with weaviate and verba so had to manually pull the images. I added a persistent volume to pull into.
```yaml
ollama:
  image: ollama/ollama:latest
  volumes:
    - YOUR_DATA_DIR/ollama_data:/root/.ollama
  ports:
    - 11434:11434
```
Then ran:

```shell
docker exec -it verba-ollama-1 ollama pull mxbai-embed-large
docker exec -it verba-ollama-1 ollama pull llama3
```
You could add the command directly in the docker compose yaml though.
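To sketch that variant: one common (but untested here, so treat it as an assumption about the `ollama/ollama` image) pattern is to override the entrypoint so the server starts and the models are pulled in one go:

```yaml
ollama:
  image: ollama/ollama:latest
  volumes:
    - YOUR_DATA_DIR/ollama_data:/root/.ollama
  ports:
    - 11434:11434
  entrypoint: ["/bin/sh", "-c"]
  command: ["ollama serve & sleep 5 && ollama pull llama3 && ollama pull mxbai-embed-large && wait"]
```

The `sleep` is a crude wait for the server to come up before pulling; the persistent volume means the pulls only download once.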
Now I can add docs and use the chat with either model with no errors.
If this is common, maybe a validation check that all the needed models are available would be good :)
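That validation could look roughly like this (a sketch, not Verba code; the `required` names below are just examples from this thread):

```python
def missing_models(installed: list[str], required: list[str]) -> list[str]:
    """Return the required models that are not installed.

    Ollama lists models with a ':tag' suffix (e.g. "llama3:latest"),
    so compare on the base name before the colon.
    """
    bases = {name.split(":", 1)[0] for name in installed}
    return [r for r in required if r.split(":", 1)[0] not in bases]

# Example startup check: only the embedder is missing here
missing = missing_models(
    installed=["llama3:latest"],
    required=["llama3", "mxbai-embed-large"],
)
# missing == ["mxbai-embed-large"]
```

Failing fast with a message like "run `ollama pull <model>`" would be far friendlier than the current `NoneType` error at query time.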
I was able to solve this issue with the following steps:

Install the embedding model as well (if you are using Ollama, you need this installed):

```
ollama pull mxbai-embed-large
```

Then add this to your `.env`:

```
OLLAMA_EMBED_MODEL=mxbai-embed-large
```

It was simple, but the YouTube tutorial doesn't mention that you need to specify `OLLAMA_EMBED_MODEL`.
@fcanfora Thanks! You're definitely right, adding a validation step seems super useful here, we'll add it to the list 🚀