
405 for POST queries from browser, even with CORS ALLOW ORIGIN *

Open scenaristeur opened this issue 2 years ago • 7 comments

I've been debugging for 4 hours. LocalAI is running on a laptop CPU, and I want to POST to the API from a Vue.js app. Everything works fine from Node:

const axios = require("axios");

console.log("\nHEALTH");
axios
  .get("http://localhost:8080/readyz")
  .then(function (response) {
    // on request success
    console.log("readyz:", response.data);
  })
  .catch(function (error) {
    // on request failure
    console.log(error);
  })
  .finally(function () {
    // in all cases
  });

console.log("\nModels");
axios
  .get("http://localhost:8080/v1/models")
  .then(function (response) {
    // on request success
    console.log("models:", response.data);
  })
  .catch(function (error) {
    // on request failure
    console.log(error);
  })
  .finally(function () {
    // in all cases
  });

console.log("\nText Completion");
axios
  .post("http://localhost:8080/v1/chat/completions", {
    model: "ggml-gpt4all-j",
    messages: [{ role: "user", content: "Say this is a test!" }],
    temperature: 0.7,
  })
  .then(function (response) {
    console.log(response.data);
    console.log(response.data.choices[0].message);
  })
  .catch(function (error) {
    console.log(error);
  });

console.log("\nImage Generation");
axios
  .post("http://localhost:8080/v1/images/generations", {
    prompt: "A cute baby sea otter",
    size: "256x256",
  })
  .then(function (response) {
    console.log(response.data);
  })
  .catch(function (error) {
    console.log(error);
  });

This gives me the chat completion response and the image generation:

HEALTH

Models

Text Completion

Image Generation
readyz: OK
models: {
  object: 'list',
  data: [
    { id: 'animagine-xl', object: 'model' },
    { id: 'text-embedding-ada-002', object: 'model' },
    { id: 'camembert-large', object: 'model' },
    { id: 'stablediffusion', object: 'model' },
    {
      id: 'thebloke__vigogne-2-7b-chat-ggml__vigogne-2-7b-chat.ggmlv3.q8_0.bin',
      object: 'model'
    },
    { id: 'camembert-large', object: 'model' },
    { id: 'ggml-gpt4all-j', object: 'model' }
  ]
}
{
  object: 'chat.completion',
  model: 'ggml-gpt4all-j',
  choices: [ { index: 0, finish_reason: 'stop', message: [Object] } ],
  usage: { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 }
}
{
  role: 'assistant',
  content: "I'm sorry, I don't understand what you mean. Can you please provide more context or clarify your question?"
}
{
  data: [
    {
      embedding: null,
      index: 0,
      url: 'http://localhost:8080/generated-images/b643464075870.png'
    }
  ],
  usage: { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 }
}
But in the browser, only the GET requests work; the POSTs fail with both plain fetch and axios.

With axios I get a 405 CORS error, even after setting "CORS_ALLOW_ORIGINS=*" in .env and in docker-compose. With fetch I get a 422 Unprocessable Entity error. Has anyone succeeded in doing this? Could someone give me a simple browser example for getting a chat completion and an image generation from a browser page?

scenaristeur avatar Oct 04 '23 14:10 scenaristeur

Hi @scenaristeur. Here are all the examples we have: https://github.com/go-skynet/LocalAI/tree/master/examples. Can any of them help you with this?

Aisuko avatar Oct 09 '23 01:10 Aisuko

Hi @scenaristeur. According to api.go, the CORS env variable must be set to true for the CORS_ALLOW_ORIGINS=... value you set to take effect.
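
If that is the case, a minimal .env sketch would look like the following (a sketch only; the exact variable names may differ between LocalAI versions, so check api.go for the version you run):

```shell
# Hypothetical .env sketch: per api.go, CORS must itself be enabled
# before the allowed-origins list is honored.
CORS=true
CORS_ALLOW_ORIGINS=*
```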

Vozhak avatar Jan 29 '24 21:01 Vozhak

@scenaristeur have you found any solution for this?

gustavostz avatar Mar 04 '24 20:03 gustavostz

No, I haven't had time for this. Do you have the same issue?

scenaristeur avatar Mar 04 '24 22:03 scenaristeur

Yes, same problem. I will probably download the LLM directly to avoid this kind of frustration.

gustavostz avatar Mar 04 '24 22:03 gustavostz

Add --cors at the end of the command to resolve this issue.

sudo docker run --net=host --name local-ai -ti -v $PWD/models:/models localai/localai:latest-aio-cpu --models-path /models --context-size 700 --threads 4 --cors
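
With CORS enabled this way, a plain browser-side call should work. Here is a sketch (assuming the server started by the command above is reachable at http://localhost:8080 and the ggml-gpt4all-j model from the original post is installed):

```javascript
// Request body matching the node script from the original post.
const body = {
  model: "ggml-gpt4all-j",
  messages: [{ role: "user", content: "Say this is a test!" }],
  temperature: 0.7,
};

// Browser fetch; the Content-Type header matters, since sending the
// body without it is a common cause of 422 Unprocessable Entity.
fetch("http://localhost:8080/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(body),
})
  .then((res) => res.json())
  .then((data) => console.log(data.choices[0].message.content))
  .catch((err) => console.error("request failed:", err));
```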


ReasonDuan avatar Aug 09 '24 08:08 ReasonDuan