
docker-compose file error

JunChenMoCode opened this issue

ERROR An error occurred: 1 error(s) decoding:

      * 'services[wren-ai-service].ports[0]' expected a map, got 'string'

JunChenMoCode avatar Feb 11 '25 04:02 JunChenMoCode
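For reference (not part of the thread): Compose accepts each `ports` entry either as a short-syntax string or a long-syntax map, and an entry whose shape doesn't match what the decoder expects produces exactly this "expected a map, got 'string'" error. A minimal sketch of both valid forms, with placeholder port numbers:

```yaml
services:
  wren-ai-service:
    ports:
      # short syntax: each entry is a plain "HOST:CONTAINER" string
      - "5555:5555"
      # long syntax: each entry is a map
      - target: 5555
        published: 5556
        protocol: tcp
```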

@202252197 hi, have you tried launching Wren AI using the release artifact?

Please check the official installation method here: https://docs.getwren.ai/oss/installation#using-wren-ai-launcher

cyyeh avatar Feb 11 '25 04:02 cyyeh

I used the link provided above to install Wren AI and got the same bug! It occurs when you try to use your own local model instead of the default GPT API.

Ahmed-ao avatar Feb 21 '25 16:02 Ahmed-ao

I guess you didn't create the .env file with all the parameters. Use this file as an example: https://github.com/Canner/WrenAI/blob/main/docker/.env.example

joanteixi avatar Feb 25 '25 21:02 joanteixi
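As an aside (not from the thread): a quick stdlib Python sketch to catch a missing key in a .env file before docker compose interpolates it. The key names below are placeholders for illustration; check docker/.env.example for the real list.

```python
from pathlib import Path

# Hypothetical required keys -- consult docker/.env.example for the actual list.
REQUIRED_KEYS = {"PLATFORM", "PROJECT_DIR", "OPENAI_API_KEY"}

def missing_env_keys(path, required):
    """Return required keys absent from a dotenv-style file, sorted."""
    defined = set()
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        # skip blanks and comments; keep everything left of the first '='
        if line and not line.startswith("#") and "=" in line:
            defined.add(line.split("=", 1)[0].strip())
    return sorted(required - defined)

# demo file with one key missing
Path("demo.env").write_text("PLATFORM=linux/amd64\n# comment\nOPENAI_API_KEY=sk-test\n")
print(missing_env_keys("demo.env", REQUIRED_KEYS))  # -> ['PROJECT_DIR']
```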

I used your .env file and it worked. However, there is a problem with the YAML config file; it doesn't connect to the local model. I'm trying to use deepseek-r1:14b locally with Ollama, and I followed the instructions provided here: https://docs.getwren.ai/oss/installation/custom_llm

YAML config:

```yaml
type: llm
provider: litellm_llm
timeout: 600
models:
  - model: openai/deepseek-r1:14b
    api_base: http://docker.host.internal:11434/v1
    api_key_name: LLM_OLLAMA_API_KEY
    kwargs:
      temperature: 0.8
      n: 1
      # for better consistency of llm response
      seed: 0
      max_tokens: 4096
      response_format:
        type: text
  - model: openai/deepseek-r1:14b
    api_base: http://docker.host.internal:11434/v1
    api_key_name: LLM_OLLAMA_API_KEY
    kwargs:
      temperature: 0.8
      n: 1
      # for better consistency of llm response
      seed: 0
      max_tokens: 4096
      response_format:
        type: text
---
type: embedder
provider: litellm_embedder
models:
  - model: openai/nomic-embed-text
    dimension: 768
    url: http://host.docker.internal:11434/v1
    timeout: 120
---
type: engine
provider: wren_ui
endpoint: http://localhost:3000
---
type: engine
provider: wren_ibis
endpoint: http://localhost:8000
source: bigquery
manifest: ''  # base64 encoded string of the MDL
connection_info: ''  # base64 encoded string of the connection info
---
type: engine
provider: wren_engine
endpoint: http://localhost:8080
manifest: ''
---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 768
timeout: 120
recreate_index: true
---
type: pipeline
pipes:
  - name: deepseek_pipline
    llm: litellm_llm.openai/deepseek-r1:14b
    embedder: litellm_embedder.openai/nomic-embed-text
    # other pipeline configurations
  - name: db_schema_indexing
    embedder: litellm_embedder.openai/nomic-embed-text
    document_store: qdrant
  - name: historical_question_indexing
    embedder: litellm_embedder.openai/nomic-embed-text
    document_store: qdrant
  - name: table_description_indexing
    embedder: litellm_embedder.openai/nomic-embed-text
    document_store: qdrant
  - name: db_schema_retrieval
    llm: litellm_llm.openai/deepseek-r1:14b
    embedder: litellm_embedder.openai/nomic-embed-text
    document_store: qdrant
  - name: historical_question_retrieval
    embedder: litellm_embedder.openai/nomic-embed-text
    document_store: qdrant
  - name: sql_generation
    llm: litellm_llm.openai/deepseek-r1:14b
    engine: wren_ui
  - name: sql_correction
    llm: litellm_llm.openai/deepseek-r1:14b
    engine: wren_ui
  - name: followup_sql_generation
    llm: litellm_llm.openai/deepseek-r1:14b
    engine: wren_ui
  - name: sql_summary
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: sql_answer
    llm: litellm_llm.openai/deepseek-r1:14b
    engine: wren_ui
  - name: sql_breakdown
    llm: litellm_llm.openai/deepseek-r1:14b
    engine: wren_ui
  - name: sql_expansion
    llm: litellm_llm.openai/deepseek-r1:14b
    engine: wren_ui
  - name: sql_explanation
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: sql_regeneration
    llm: litellm_llm.openai/deepseek-r1:14b
    engine: wren_ui
  - name: semantics_description
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: relationship_recommendation
    llm: litellm_llm.openai/deepseek-r1:14b
    engine: wren_ui
  - name: question_recommendation
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: question_recommendation_db_schema_retrieval
    llm: litellm_llm.openai/deepseek-r1:14b
    embedder: litellm_embedder.openai/nomic-embed-text
    document_store: qdrant
  - name: question_recommendation_sql_generation
    llm: litellm_llm.openai/deepseek-r1:14b
    engine: wren_ui
  - name: chart_generation
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: chart_adjustment
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: intent_classification
    llm: litellm_llm.openai/deepseek-r1:14b
    embedder: litellm_embedder.openai/nomic-embed-text
    document_store: qdrant
  - name: data_assistance
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: sql_pairs_indexing
    document_store: qdrant
    embedder: litellm_embedder.openai/nomic-embed-text
  - name: sql_pairs_deletion
    document_store: qdrant
    embedder: litellm_embedder.openai/nomic-embed-text
  - name: sql_pairs_retrieval
    document_store: qdrant
    embedder: litellm_embedder.openai/nomic-embed-text
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: preprocess_sql_data
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: sql_executor
    engine: wren_ui
  - name: sql_question_generation
    llm: litellm_llm.openai/deepseek-r1:14b
  - name: sql_generation_reasoning
    llm: litellm_llm.openai/deepseek-r1:14b
---
settings:
  host: 127.0.0.1
  port: 5556
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  query_cache_maxsize: 1000
  allow_using_db_schemas_without_pruning: false
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: INFO
  development: false
```

Ahmed-ao avatar Feb 26 '25 21:02 Ahmed-ao
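As an aside: before handing a multi-document config like the one above to Wren AI, a rough stdlib Python sketch (not the service's own validation) can confirm which `---`-separated sections declare a top-level `type`:

```python
import re

def section_types(config_text):
    """Return the top-level `type:` value of each `---`-separated YAML document."""
    types = []
    for doc in re.split(r"(?m)^---\s*$", config_text):
        for line in doc.splitlines():
            # top level only: the line must start with 'type:' (no indentation)
            m = re.match(r"^type:\s*(\S+)", line)
            if m:
                types.append(m.group(1))
                break
    return types

sample = """type: llm
provider: litellm_llm
---
type: embedder
provider: litellm_embedder
---
settings:
  port: 5556
"""
print(section_types(sample))  # -> ['llm', 'embedder']
```

A section missing from the output (like `settings` above, which legitimately has no `type`) is a quick signal of where indentation or a separator went wrong.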

Hi @Ahmed-ao, here is my config.yaml for the Ollama model and embedder. I think you can base yours on it and modify it for your setup.

```yaml
models:
- api_base: http://host.docker.internal:11434/
  kwargs:
    n: 1
    temperature: 0
  model: ollama/phi4
provider: litellm_llm
timeout: 120
type: llm
---
models:
- api_base: http://host.docker.internal:11434/
  model: ollama/nomic-embed-text
  timeout: 120
provider: litellm_embedder
type: embedder
```

paopa avatar Mar 03 '25 10:03 paopa