Ashpreet
@yvrjsharma this is so cool! Yes, let's build one
@jacobweiss2305 use:

```
string_response = python_assistant.run("What is the average rating of movies?")
```

If you want to return the string response in markdown, you'll need to set it on the...
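A minimal sketch of how that might look end to end, assuming a `markdown` flag on the assistant and a `stream=False` parameter on `run()` (both names are assumptions here, not verified against the library):

```
from phi.assistant.python import PythonAssistant

# Sketch: markdown=True asks the assistant to format its response as markdown (assumed flag)
python_assistant = PythonAssistant(markdown=True)

# stream=False returns the full response as a single string instead of a stream (assumed parameter)
string_response = python_assistant.run("What is the average rating of movies?", stream=False)
print(string_response)
```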
@jacobweiss2305 I think you're onto something very cool here. I created a cookbook which might help you get more out of this. Try this code:

```
from phi.assistant.python import PythonAssistant...
```
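For reference, a cookbook-style setup might look roughly like this (the CSV path, description, and the `pip_install` flag are illustrative assumptions, not the exact cookbook contents):

```
from phi.assistant.python import PythonAssistant
from phi.file.local.csv import CsvFile

# Sketch: give the assistant a CSV file it can load and analyse with python
python_assistant = PythonAssistant(
    files=[
        CsvFile(
            path="IMDB-Movie-Data.csv",  # illustrative path, not the cookbook's exact dataset
            description="Contains information about movies from IMDB.",
        )
    ],
    pip_install=True,      # let it install pandas etc. as needed (assumed flag)
    show_tool_calls=True,  # print the function calls it makes
)

python_assistant.print_response("What is the average rating of movies?")
```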
@CharlieGreenman plans, yes; in the works, no. Still looking for a partner to build out an equivalent nodejs library
Our team is Python-heavy, which is why we started with Python; we don't have much nodejs expertise
@jacobweiss2305 try setting `Assistant(..., tool_call_limit=5)`; after the `tool_call_limit` is reached, it will deactivate the function calls
We currently set it to 20, but imo that number is a bit high; somewhere around 7-10 is a good number
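A quick sketch of how that looks on an Assistant (the DuckDuckGo tool is just for illustration, any tool works):

```
from phi.assistant import Assistant
from phi.tools.duckduckgo import DuckDuckGo  # illustrative tool choice

# Sketch: cap the number of tool calls; once the limit is hit, function calling is deactivated
assistant = Assistant(
    tools=[DuckDuckGo()],
    tool_call_limit=5,
    show_tool_calls=True,
)
assistant.print_response("What's happening in France?")
```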
@balavenkatesh3322 can you share how you're currently running inference on it? Does the API to the LLM accept the OpenAI spec?
@balavenkatesh-ai reading this [link](https://learn.microsoft.com/en-us/azure/databricks/large-language-models/llm-serving-intro), it looks like Databricks supports the OpenAI spec. Try this and let me know if it works:

```
from os import getenv
from phi.assistant import Assistant
from...
```
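A sketch of what the full snippet might look like, assuming the Databricks serving endpoint accepts the OpenAI spec and that `OpenAIChat` can be pointed at it via `base_url` (the endpoint URL, model name, and env var below are placeholders, not verified values):

```
from os import getenv

from phi.assistant import Assistant
from phi.llm.openai import OpenAIChat

# Sketch: point the OpenAI-compatible client at the Databricks serving endpoint
assistant = Assistant(
    llm=OpenAIChat(
        model="databricks-dbrx-instruct",  # placeholder endpoint/model name
        api_key=getenv("DATABRICKS_TOKEN"),  # placeholder env var
        base_url="https://<workspace-url>/serving-endpoints",  # placeholder, format may need adjusting
    ),
)
assistant.print_response("Share a 2 sentence horror story.")
```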
Testing, I think we need to format the base URL