
Project Is Neither Completely Local Nor Private

Open • joeyame opened this issue 1 year ago • 7 comments

The description claims the following:

  • 💻 Completely LOCAL (no GPU need, any computer can run )
  • 🔐 Completely PRIVATE (all thing runing locally)

Both points are lies. While the search is performed locally, piping all that data into OpenAI's servers is neither local nor private in the slightest. It's a cool idea, but at this point this repo doesn't provide anything more than just using OpenAI's interface directly.

You do not get to claim that everything is local and private when you depend on an external web API. That goes against the whole meaning of those two words.

joeyame avatar Apr 08 '24 06:04 joeyame

I apologize if our description has caused any misunderstanding. You are right that relying on an external web API, in this case, OpenAI's, negates the notion of being entirely local and private.

This project was initially conceived as a proof of concept, an experiment built in about three hours of coding and then open-sourced. The original privacy concern was mostly about search, so I used a free online GPT-3.5 endpoint so that most users could run this without fancy hardware.

I'm working diligently to improve the project and plan to introduce more configuration options in the near future. These will allow users to choose a local LLM deployment (like llama.cpp or Ollama) for increased privacy and localization.
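For example, a minimal sketch of what that could look like, assuming Ollama's OpenAI-compatible endpoint on its default port (the model name and prompt below are placeholders):

```python
# Sketch only: route the answering step to a local model instead of
# OpenAI's servers. Assumes Ollama is running locally with its
# OpenAI-compatible API (default http://localhost:11434/v1) and that
# a model such as `mistral` has been pulled via `ollama pull mistral`.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local Ollama, not api.openai.com
    api_key="ollama",                      # placeholder; Ollama ignores it
)

answer = client.chat.completions.create(
    model="mistral",
    messages=[
        {"role": "user", "content": "Answer using these search results: ..."},
    ],
)
print(answer.choices[0].message.content)
```

Because the wire format stays the same, switching between a remote API and a local server is essentially just a base-URL change.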

Thanks for the reminder. I'll keep updating and improving this project.

nashsu avatar Apr 08 '24 20:04 nashsu

Actually, I have already finished designing the whole new UI and am about to finish development. With the new web UI you can configure the system to use a locally running LLM.

[demo image of the new web UI]

nashsu avatar Apr 08 '24 20:04 nashsu

Yes, please support LLaMA and Mistral! Open models ASAP! I will not use OpenAI because I want to be able to search for anything!

keithorange avatar Apr 09 '24 03:04 keithorange

Why not just use Flowise + Ollama? Flowise itself already has web scraping / searching ability, and Ollama can host any LLM, including Mistral, Gemma, etc.

I don't see what this project is doing that those two together cannot already.

automaton82 avatar Apr 09 '24 13:04 automaton82

@nashsu

Can you add a guide on running it without Docker?

i486 avatar Apr 09 '24 16:04 i486

> Why not just use Flowise + Ollama? Flowise itself already has web scraping / searching ability, and Ollama can host any LLM, including Mistral, Gemma, etc.
>
> I don't see what this project is doing that those two together cannot already.

Flowise is so confusing compared to simple, clean Python APIs! But if you say so, I will now learn Flowise!

keithorange avatar Apr 10 '24 00:04 keithorange

> > Why not just use Flowise + Ollama? Flowise itself already has web scraping / searching ability, and Ollama can host any LLM, including Mistral, Gemma, etc. I don't see what this project is doing that those two together cannot already.
>
> Flowise is so confusing compared to simple, clean Python APIs! But if you say so, I will now learn Flowise!

Hm, not sure; I guess that's a matter of opinion. Here are the exact docs on doing Web Scrape QnA in Flowise; just swap the ChatGPT LLM with Ollama or LocalAI and you're good:

https://docs.flowiseai.com/use-cases/web-scrape-qna

The same flow is in the 'marketplace' in Flowise, and there are videos documenting how to do it too.
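For reference, a minimal sketch of calling a deployed Flowise chatflow from Python over its prediction API (the host, port, and chatflow ID below are placeholders for your own deployment):

```python
# Sketch only: query a Flowise chatflow (e.g. the Web Scrape QnA flow
# with its LLM node swapped to Ollama or LocalAI) via the prediction API.
# The URL and chatflow ID are placeholders.
import requests

FLOWISE_URL = "http://localhost:3000/api/v1/prediction/<your-chatflow-id>"

resp = requests.post(
    FLOWISE_URL,
    json={"question": "What does the scraped site say about pricing?"},
    timeout=120,
)
resp.raise_for_status()
print(resp.json())
```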

automaton82 avatar Apr 12 '24 02:04 automaton82