Two new wiki addition suggestions: configuring CrewAI with LLM providers, and local model quality/experiences.
A feature request for the wiki:
- A section on configuring CrewAI with different LLM providers, as well as with local model servers such as LM-Studio, llama.cpp, etc. So far I have gotten local models working with Ollama, mistral-medium on the Mistral AI platform, and mistral-small on Anyscale; nevertheless, I had to search through the Discord community for hints to configure the latter. A rough configuration sketch follows after this list.
- A section on local models: which ones work, how well they work, which models to avoid, and so forth. In my limited testing so far, it's obvious some open-source models punch well above their weight. A section like this could shorten the time to develop a useful implementation and potentially hasten adoption.
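For reference, here is a minimal sketch of what worked for me, assuming `crewai` and `langchain-openai` are installed and that CrewAI agents accept a LangChain chat model via the `llm` parameter. The URLs, ports, and model names below are examples from my setup, not recommendations:

```python
from crewai import Agent
from langchain_openai import ChatOpenAI

# Any OpenAI-compatible endpoint can be wired in the same way: just swap
# base_url, api_key, and model. In my testing, Ollama exposes one at
# http://localhost:11434/v1 and LM-Studio at http://localhost:1234/v1;
# hosted providers such as Anyscale or Mistral use their own URLs and keys.
local_llm = ChatOpenAI(
    base_url="http://localhost:11434/v1",  # example: Ollama's OpenAI-compatible API
    api_key="not-needed-locally",          # the client requires a value; unused locally
    model="mistral",                       # example model name pulled in Ollama
)

researcher = Agent(
    role="Researcher",
    goal="Summarize a topic",
    backstory="An analyst backed by a local model.",
    llm=local_llm,
)
```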
@JRBobDobbs do you want to leave some hints on how to include llama.cpp? I am using the Python version.
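One possible route, assuming the `llama-cpp-python` package with its bundled OpenAI-compatible server (`pip install "llama-cpp-python[server]"`); the model path and port below are examples:

```python
# First start the server in a separate terminal (path is an example):
#   python -m llama_cpp.server --model ./models/mistral-7b-instruct.Q4_K_M.gguf
# By default it listens on http://localhost:8000/v1.
from crewai import Agent
from langchain_openai import ChatOpenAI

llama_cpp_llm = ChatOpenAI(
    base_url="http://localhost:8000/v1",  # llama_cpp.server's default address
    api_key="sk-no-key-required",         # placeholder; the server ignores it
    model="local-model",                  # placeholder; the server loads one model
)

agent = Agent(
    role="Writer",
    goal="Draft short summaries",
    backstory="Runs entirely on a local GGUF model via llama.cpp.",
    llm=llama_cpp_llm,
)
```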
100%! We have a PR coming soon to add this to the new docs; we will retire the wiki in favor of a proper documentation website.