
"Query failed: 'NoneType' object is not iterable" Error when starting Verba Chat

Open Badhansen opened this issue 1 year ago • 26 comments

Description

Application is up and running, but Verba Chat is not working. It is showing "Something went wrong: 'NoneType' object is not iterable", although Verba variables are available.

If I look at the logs, I see the following (I've included a few lines above and below for context):

INFO:     127.0.0.1:64363 - "POST /api/set_config HTTP/1.1" 200 OK
INFO:     ('127.0.0.1', 64374) - "WebSocket /ws/generate_stream" [accepted]
INFO:     connection open
INFO:     127.0.0.1:64363 - "POST /api/suggestions HTTP/1.1" 200 OK
✔ Received query: What is verba?
⚠ Query failed: 'NoneType' object is not iterable
INFO:     127.0.0.1:64386 - "POST /api/query HTTP/1.1" 200 OK
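For context, the message itself is Python's generic complaint about iterating over `None`. A minimal sketch (not Verba's actual code) of how it arises, assuming some upstream step such as a generator or embedder lookup silently returned `None`:

```python
def run_query(chunks):
    # If retrieval or generation silently failed upstream,
    # `chunks` arrives here as None instead of a list of results.
    return [c.upper() for c in chunks]

try:
    run_query(None)
except TypeError as err:
    print(f"Query failed: {err}")  # Query failed: 'NoneType' object is not iterable
```

So the log line points at a missing component (generator, embedder, or retriever) rather than at the query text itself.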

Is this a bug or a feature?

  • [x] Bug

Steps to Reproduce

Set up the .env file and run the project (see attached screenshot: Screenshot 2024-05-20 at 1 20 20 AM).

Badhansen avatar May 20 '24 00:05 Badhansen

Same here.

cha0s avatar May 20 '24 07:05 cha0s

Have you tried to add documents?

zotttttttt avatar May 20 '24 10:05 zotttttttt

Have you tried to add documents?

I have tried it but no luck. I have attached an image of this issue.

(INFO) Importing...
(INFO) Importing 1 files with BasicReader
(INFO) Importing Building REST APIs with Flask.pdf
(SUCCESS) Loaded 1 documents in 0.7s
(INFO) Starting Chunking with TokenChunker
(SUCCESS) Chunking completed with 102 chunks in 0.1s
(INFO) Starting Embedding with ADAEmbedder
(ERROR) Embedding not successful: Chunk mismatch for fdf24eb7-2c09-439b-b7ce-4169a3c1e49f 0 != 102
(ERROR) Chunk mismatch for fdf24eb7-2c09-439b-b7ce-4169a3c1e49f 0 != 102
Screenshot 2024-05-20 at 12 19 52 PM

Badhansen avatar May 20 '24 11:05 Badhansen

I added a document successfully. My test involved just uploading a PDF and then trying to ask a simple question about it. Got the error when trying to chat.

I do notice that https://github.com/weaviate/Verba/issues/171 and https://github.com/weaviate/Verba/issues/167 seem to be the very same issue.

cha0s avatar May 20 '24 11:05 cha0s

@Badhansen try clicking the greyed out button under "Select an Embedder". That should allow you to select an embedder and should fix that issue.

I still get the Query failed: 'NoneType' object is not iterable problem even after successful embedding.

cha0s avatar May 20 '24 11:05 cha0s

@cha0s It's working. Thank you.

Screenshot 2024-05-20 at 6 45 23 PM

Badhansen avatar May 20 '24 17:05 Badhansen

@cha0s Did you manage to solve your issue after embedding?

Badhansen avatar May 20 '24 17:05 Badhansen

@zotttttttt After embedding, it's working. Now I can successfully load the document.

Badhansen avatar May 20 '24 17:05 Badhansen

Sadly, no, it doesn't work for me. If I can't find some other solution that works, I may try to debug it.

cha0s avatar May 20 '24 17:05 cha0s

Hi @cha0s! Can you try the steps below? I think it will work for you as well.

  1. Check whether llama3 is installed and running in the background. You can see in the attached image that it's running and answering questions. Screenshot 2024-05-20 at 7 01 48 PM

  2. Set up the .env file from the .env.example file.

  3. Re-install the project by running:

     pip install -e .

  4. As you have already mentioned, select the embedding option.

Hope this time it will work for you. Thanks.

Badhansen avatar May 20 '24 18:05 Badhansen

Let me know if this fix helps for now! I'm looking into debugging this.

thomashacker avatar May 20 '24 18:05 thomashacker

@thomashacker It's working

Badhansen avatar May 20 '24 19:05 Badhansen

  1. llama is running, the embedder is working fine
  2. I'm using docker compose
  3. ''
  4. Embedding works fine. Chat is broken.

image

I notice that every time I refresh it forces me back to GPT3, even though I only put a llama model and no OpenAI key:

image

That seems like a bug and a possible culprit. I always set it back to Ollama before I test. It persists as "Ollama" until the next time I F5.

My docker compose looks like:

    environment:
      - WEAVIATE_URL_VERBA=http://weaviate:8080
      - OLLAMA_URL=<MY OLLAMA URL>
      - OLLAMA_MODEL=llama3

(Yes, the ollama URL works, I use it for other AI apps I am researching.)
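As a hedged sketch of the kind of startup check that would surface a missing variable here (the `missing_vars` helper is hypothetical, not part of Verba):

```python
REQUIRED = ("WEAVIATE_URL_VERBA", "OLLAMA_URL", "OLLAMA_MODEL")

def missing_vars(env, required=REQUIRED):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not env.get(name)]

# Example with a dict standing in for os.environ:
env = {"WEAVIATE_URL_VERBA": "http://weaviate:8080",
       "OLLAMA_URL": "http://ollama:11434"}
print(missing_vars(env))  # ['OLLAMA_MODEL']
```

Calling `missing_vars(os.environ)` at startup and logging the result would make misconfigurations like this much easier to diagnose.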

It would help if the logs actually had a backtrace, for instance. My log:

May 21 06:11:42 HOSTNAME docker-compose[2950244]: verba_1     | INFO:     IP:59578 - "POST /api/suggestions HTTP/1.1" 200 OK
May 21 06:11:42 HOSTNAME docker-compose[2950244]: verba_1     | ✔ Received query: MY QUERY
May 21 06:11:42 HOSTNAME docker-compose[2950244]: verba_1     | ⚠ Query failed: 'NoneType' object is not iterable
May 21 06:11:42 HOSTNAME docker-compose[2950244]: verba_1     | INFO:     IP:59578 - "POST /api/query HTTP/1.1" 200 OK

This project looks so interesting! It's a shame that it's broken for me with little clue of where to start looking.

cha0s avatar May 21 '24 11:05 cha0s

I have the same problem with NoneType. Ollama is running through docker and Verba is running through docker. I tried different models in Ollama, but I still get an error.

AntipatternCorp avatar May 21 '24 11:05 AntipatternCorp

Watching the new issues, I believe my error may be related to https://github.com/weaviate/Verba/issues/184

After seeing this I cleared all my documents and tried to embed them again. This time it worked. I suspect when they were first embedded they were not using the ollama model.

I believe that may have been why f5'ing the page kept resetting the generator. That also seems to have gone away after re-embedding the documents.

cha0s avatar May 21 '24 22:05 cha0s

I had the same issue but then realized that autocorrect messed up when I wrote llama3 and instead, it wrote llame3. Please make sure that you have set the model correctly:

OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3

Then try again.

eddieespinal avatar May 22 '24 00:05 eddieespinal

Watching the new issues, I believe my error may be related to #184

After seeing this I cleared all my documents and tried to embed them again. This time it worked. I suspect when they were first embedded they were not using the ollama model.

I believe that may have been why f5'ing the page kept resetting the generator. That also seems to have gone away after re-embedding the documents.

@cha0s I'm glad to know that it's working for you.

Badhansen avatar May 22 '24 01:05 Badhansen

Thanks a lot everyone for the feedback, we'll make sure to update the README and make the error logs more useful!

thomashacker avatar May 22 '24 10:05 thomashacker

Same problem here - doesn't work, I get the same type error. Can't upload docs - just getting errors that mean nothing. Running in Docker, documentation disjointed, I've wasted enough time.

Benniepie avatar May 27 '24 15:05 Benniepie

Hello @Benniepie! Running in Docker has some known issues. You can try using a virtual env instead; that works for me.

Badhansen avatar May 27 '24 23:05 Badhansen

The virtual env option doesn't work on a Windows machine due to the embedded DB problem on Windows, so we are forced to use Docker. Is there any solution to this error? Are there folks using Docker and not having issues?

zbalsara21 avatar May 28 '24 03:05 zbalsara21

Hi @zbalsara21 and @Benniepie, you can use this PR; I think it fixes the issue. Thanks.

PR Link: https://github.com/weaviate/Verba/pull/204

Badhansen avatar May 28 '24 10:05 Badhansen

Hello, I had this error but I've finally managed to fix it. I got it to work with both OpenAI GPT3 and Ollama (llama3 and mxbai-embed-large).

The problem for me was that Verba wasn't able to reach an LLM. For GPT3 I needed to add OPENAI_BASE_URL=https://api.openai.com/v1 to my environment.

For Ollama I found that I just hadn't pulled the models. I had Ollama in a docker compose with weaviate and verba, so I had to pull them manually. I added a persistent volume to pull into.

  ollama:
    image: ollama/ollama:latest
    volumes:
      - YOUR_DATA_DIR/ollama_data:/root/.ollama
    ports:
      - 11434:11434

Then ran

docker exec -it verba-ollama-1 ollama pull mxbai-embed-large
docker exec -it verba-ollama-1 ollama pull llama3

You could add the command directly in the docker compose yaml though.

Now I can add docs and use the chat with either model with no errors.

If this is common, maybe a validation step checking that all the needed models are available would be good :)
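A sketch of such a validation, assuming a reachable Ollama instance: `GET /api/tags` is Ollama's model-listing endpoint, and the `find_missing` helper is hypothetical, not Verba code:

```python
import json
from urllib.request import urlopen

def find_missing(available, required):
    """Pure check: which required models are absent from the pulled list.

    Ollama reports names like 'llama3:latest', so compare base names.
    """
    bases = {name.split(":")[0] for name in available}
    return [m for m in required if m.split(":")[0] not in bases]

def pulled_models(ollama_url):
    """Ask a running Ollama instance which models it has pulled."""
    with urlopen(f"{ollama_url}/api/tags") as resp:
        return [m["name"] for m in json.load(resp).get("models", [])]

# Offline example against a fake tag list (no server needed):
print(find_missing(["llama3:latest"], ["llama3", "mxbai-embed-large"]))
# ['mxbai-embed-large']
```

Running a check like this at startup (and refusing to serve queries until it passes) would have turned the opaque NoneType error into an actionable message.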

fcanfora avatar Jun 02 '24 19:06 fcanfora

I was able to solve this issue with the following steps:

Also install the embedding model (if you are using Ollama, you need this installed): ollama pull mxbai-embed-large

Then add this to your .env: OLLAMA_EMBED_MODEL=mxbai-embed-large

It was simple, but the YouTube tutorial doesn't mention that you need to specify OLLAMA_EMBED_MODEL.
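For reference, the Ollama-related .env entries mentioned across this thread, collected in one place (values are examples, adjust for your setup):

```
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3
OLLAMA_EMBED_MODEL=mxbai-embed-large
```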

lucasrocha7111 avatar Jun 05 '24 04:06 lucasrocha7111


@fcanfora Thanks! You're definitely right, adding a validation step seems super useful here, we'll add it to the list 🚀

thomashacker avatar Jun 06 '24 10:06 thomashacker