
How to configure WrenAI to use DeepSeek (or other AI providers)

Open · starascendin opened this issue 10 months ago · 3 comments

Hi friends, I've tried following the example deepseek.config.yaml, and I cannot get it to work.

Below is my config file. What am I not doing correctly?

I'm using the docker-compose file in `./docker`, and I've added the API keys to `.env`.

Running it with `docker-compose --env-file .env up -d`.
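For reference, roughly what I did (a sketch; file names and paths follow the comments in the example config, adjust if your layout differs):

```bash
# sketch of my setup steps (paths taken from the comments in the config below)
cp deepseek.config.yaml ~/.wrenai/config.yaml    # rename the example to config.yaml
echo "DEEPSEEK_API_KEY=<your_api_key>" >> .env   # DeepSeek key for the LLM models
echo "OPENAI_API_KEY=<your_api_key>" >> .env     # OpenAI key for the embedding model
docker-compose --env-file .env up -d             # run from ./docker with the env file
```

And here is the config file itself: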

```yaml
# you should rename this file to config.yaml and put it in ~/.wrenai
# please pay attention to the comments starting with # and adjust the config accordingly, 3 steps basically:
# 1. you need to use your own llm and embedding models
# 2. you need to use the correct pipe definitions based on https://raw.githubusercontent.com/canner/WrenAI/<WRENAI_VERSION_NUMBER>/docker/config.example.yaml
# 3. you need to fill in correct llm and embedding models in the pipe definitions

type: llm
provider: litellm_llm
models:
  # put DEEPSEEK_API_KEY=<your_api_key> in ~/.wrenai/.env
  - api_base: https://api.deepseek.com/v1
    model: deepseek/deepseek-reasoner
    alias: default
    timeout: 120
    kwargs:
      n: 1
      temperature: 0
      response_format:
        type: text
  - api_base: https://api.deepseek.com/v1
    model: deepseek/deepseek-chat
    timeout: 120
    kwargs:
      n: 1
      temperature: 0
      response_format:
        type: text
  - api_base: https://api.deepseek.com/v1
    model: deepseek/deepseek-coder
    timeout: 120
    kwargs:
      n: 1
      temperature: 0
      response_format:
        type: json_object

---
type: embedder
provider: litellm_embedder
models:
  # define OPENAI_API_KEY=<api_key> in ~/.wrenai/.env if you are using openai embedding model
  # please refer to LiteLLM documentation for more details: https://docs.litellm.ai/docs/providers
  - model: text-embedding-3-large # put your embedding model name here, if it is not openai embedding model, should be <provider>/<model_name>
    alias: default
    api_base: https://api.openai.com/v1 # change this according to your embedding model
    timeout: 120

---
type: engine
provider: wren_ui
endpoint: http://wren-ui:3000

---
type: engine
provider: wren_ibis
endpoint: http://wren-ibis:8000

---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 3072 # put your embedding model dimension here
timeout: 120
recreate_index: true

---
# please change the llm and embedder names to the ones you want to use
# the format of llm and embedder should be <provider>.<model_name> such as litellm_llm.gpt-4o-2024-08-06
# the pipes may be not the latest version, please refer to the latest version: https://raw.githubusercontent.com/canner/WrenAI/<WRENAI_VERSION_NUMBER>/docker/config.example.yaml
---
type: pipeline
pipes:
  - name: db_schema_indexing
    embedder: litellm_embedder.text-embedding-3-large
    document_store: qdrant
  - name: historical_question_indexing
    embedder: litellm_embedder.text-embedding-3-large
    document_store: qdrant
  - name: table_description_indexing
    embedder: litellm_embedder.text-embedding-3-large
    document_store: qdrant
  - name: db_schema_retrieval
    llm: litellm_llm.deepseek-chat
    embedder: litellm_embedder.text-embedding-3-large
    document_store: qdrant
  - name: historical_question_retrieval
    embedder: litellm_embedder.text-embedding-3-large
    document_store: qdrant
  - name: sql_generation
    llm: litellm_llm.deepseek-chat
    engine: wren_ui
  - name: sql_correction
    llm: litellm_llm.deepseek-chat
    engine: wren_ui
  - name: followup_sql_generation
    llm: litellm_llm.deepseek-chat
    engine: wren_ui
  - name: sql_summary
    llm: litellm_llm.deepseek-chat
  - name: sql_answer
    llm: litellm_llm.deepseek-chat
  - name: sql_breakdown
    llm: litellm_llm.deepseek-chat
    engine: wren_ui
  - name: sql_expansion
    llm: litellm_llm.deepseek-chat
    engine: wren_ui
  - name: semantics_description
    llm: litellm_llm.deepseek-chat
  - name: relationship_recommendation
    llm: litellm_llm.deepseek-chat
    engine: wren_ui
  - name: question_recommendation
    llm: litellm_llm.deepseek-chat
  - name: question_recommendation_db_schema_retrieval
    llm: litellm_llm.deepseek-chat
    embedder: litellm_embedder.text-embedding-3-large
    document_store: qdrant
  - name: question_recommendation_sql_generation
    llm: litellm_llm.deepseek-chat
    engine: wren_ui
  - name: intent_classification
    llm: litellm_llm.deepseek-chat
    embedder: litellm_embedder.text-embedding-3-large
    document_store: qdrant
  - name: data_assistance
    llm: litellm_llm.deepseek-chat
  - name: sql_pairs_indexing
    document_store: qdrant
    embedder: litellm_embedder.text-embedding-3-large
  - name: sql_pairs_retrieval
    document_store: qdrant
    embedder: litellm_embedder.text-embedding-3-large
    llm: litellm_llm.deepseek-chat
  - name: preprocess_sql_data
    llm: litellm_llm.deepseek-chat
  - name: sql_executor
    engine: wren_ui
  - name: chart_generation
    llm: litellm_llm.deepseek-chat
  - name: chart_adjustment
    llm: litellm_llm.deepseek-chat
  - name: sql_question_generation
    llm: litellm_llm.deepseek-chat
  - name: sql_generation_reasoning
    llm: litellm_llm.deepseek-chat
  - name: sql_regeneration
    llm: litellm_llm.deepseek-chat
    engine: wren_ui


---
settings:
  engine_timeout: 30
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  allow_using_db_schemas_without_pruning: false
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: DEBUG
  development: false
```

starascendin · Mar 20 '25 22:03

Hi, has this problem been solved?

zhenglu696 · Mar 27 '25 09:03

How do I configure it to use lm_studio? Please share an example config.yaml!

Marsedward · Mar 31 '25 06:03

@zhenglu696 @starascendin

Please set `WREN_AI_SERVICE_VERSION=0.19.2` in `~/.wrenai/.env` and use this config YAML as an example:

https://github.com/Canner/WrenAI/blob/main/wren-ai-service/docs/config_examples/config.deepseek.yaml
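For example, `~/.wrenai/.env` would then contain something like this (the API key value is a placeholder; keep any other variables you already have):

```bash
# ~/.wrenai/.env
WREN_AI_SERVICE_VERSION=0.19.2
DEEPSEEK_API_KEY=<your_api_key>
```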

cyyeh · Apr 07 '25 04:04