Rajakumar05032000 — 6 issues/pull requests

Added llama-cpp-python support for local inference. -> It allows running models locally, eliminating the need for an Ollama server. -> Made the necessary configuration changes in config.toml. -> Used a Singleton...
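The Singleton pattern mentioned above is a natural fit here, since a local GGUF model is expensive to load and should be loaded exactly once per process. Below is a minimal, hypothetical sketch of that pattern; the class name, `model_path`, and the commented-out `llama_cpp.Llama` call are assumptions for illustration, not the actual devika code.

```python
import threading

class LocalLLM:
    """Singleton wrapper so the (large) local model is loaded only once.

    Hypothetical sketch: the real PR presumably wraps llama_cpp.Llama,
    with model_path coming from config.toml.
    """
    _instance = None
    _lock = threading.Lock()

    def __new__(cls, model_path: str = "models/model.gguf"):
        with cls._lock:                      # thread-safe first construction
            if cls._instance is None:
                cls._instance = super().__new__(cls)
                # The actual integration would do something like:
                #   from llama_cpp import Llama
                #   cls._instance.model = Llama(model_path=model_path)
                cls._instance.model_path = model_path
            return cls._instance

a = LocalLLM()
b = LocalLLM()
print(a is b)  # → True: both names refer to the single shared instance
```

Every call site then gets the same already-loaded model instead of paying the load cost repeatedly.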

changes requested

Issues: -> Even though token usage was being recorded in the DB smoothly after the recent code changes, it wasn't showing up properly in the UI or in the logs. ![1](https://github.com/stitionai/devika/assets/38426657/d55e3e16-cf02-40bf-bc29-55392ef75f82) ![2](https://github.com/stitionai/devika/assets/38426657/bed6d232-1e84-49ac-a41c-cc814b9a3a11) Fixed...
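The gap described above is "persisted to the DB but never surfaced." One way to keep DB writes and log output from drifting apart is to do both in a single helper, so every recorded count is also visible in the logs. This is a self-contained sketch with an in-memory SQLite table; the table and function names are illustrative, not devika's actual schema.

```python
import sqlite3
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("tokens")

# Hypothetical schema; the real table/column names in devika may differ.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE token_usage (project TEXT, tokens INTEGER)")

def record_token_usage(project: str, tokens: int) -> int:
    """Persist the token count AND emit it to the logs in one place,
    so the DB and the UI/logs cannot silently disagree."""
    db.execute("INSERT INTO token_usage VALUES (?, ?)", (project, tokens))
    db.commit()
    total = db.execute(
        "SELECT SUM(tokens) FROM token_usage WHERE project = ?", (project,)
    ).fetchone()[0]
    log.info("token usage for %s: +%d (running total %d)", project, tokens, total)
    return total

record_token_usage("demo", 120)
print(record_token_usage("demo", 80))  # → 200
```

The UI can then read the same `SUM(tokens)` query, so all three views (DB, logs, UI) derive from one source of truth.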

**Fixes Groq client** Issues: -> The groq package and the groq_client file had the same class name. -> The __init__ method had api_key as an argument, and it was not an...

Changes: -> Added support for all the Groq models available (as of 27-Mar-2024). -> Renamed variables to accommodate the changes. **Tested thoroughly**, works smoothly as expected! Thank you!
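The class-name clash described above (the SDK's class and the project's wrapper sharing one name) is typically resolved by aliasing the SDK class at import time. The sketch below shows that pattern; the names, the env-var lookup, and the fallback stub are assumptions for illustration and are not taken from the actual PR.

```python
import os

# Alias the SDK class on import so the project's own wrapper can keep
# the convenient short name without shadowing the SDK.
try:
    from groq import Groq as GroqSDK  # real SDK class, aliased to avoid the clash
except ImportError:
    # Fallback stub so this sketch stays runnable without the groq package.
    class GroqSDK:
        def __init__(self, api_key: str):
            self.api_key = api_key

class Groq:
    """Project-side wrapper: reads the key from the environment/config
    instead of taking api_key as a constructor argument."""
    def __init__(self):
        api_key = os.environ.get("GROQ_API_KEY", "")
        self.client = GroqSDK(api_key=api_key)

os.environ["GROQ_API_KEY"] = "dummy-key"
wrapper = Groq()
print(wrapper.client.api_key)  # → dummy-key
```

Moving the key lookup inside the wrapper also removes the awkward `api_key` constructor argument the original issue mentions.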

Issues: -> The current logging method is not efficient; it floods the log with the same messages (as some APIs were called every second). -> It becomes difficult to analyse...

Changes: -> Updated the UI Dockerfile. The UI Docker image previously consumed 1.42 GB; it is now optimised down to 176 MB. -> Used a multi-stage build (following industry...
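A multi-stage build achieves that kind of size drop by doing the heavy compilation in a throwaway builder image and copying only the static output into a small runtime image. The Dockerfile below is a generic sketch of the technique for a Node-based UI; the base images, paths, and commands are illustrative assumptions, not the actual devika Dockerfile.

```dockerfile
# Stage 1: build the UI with the full Node toolchain (large image, discarded later)
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: serve only the static build output from a tiny base image;
# none of node_modules or the toolchain ends up in the final image.
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
```

Only the final stage is shipped, which is why the resulting image can shrink from gigabytes down to the size of the runtime plus the built assets.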