Alejandro Companioni
@krrishdholakia Thanks for reviewing @devanshrj's PR to integrate Not Diamond into LiteLLM. Don't your `Router`s also benefit from load balancing, since they're added as callbacks? Can you confirm whether I understand...
Hey there @krrishdholakia and @ishaan-jaff - curious if you've had a chance to discuss this integration internally? I can pick this back up in the coming days, or support your...
Checking in as another affected party here. Thanks for investigating!
Hi @xingyaoww @neubig, just caught this issue. While our LLMConfigs [accept prices](https://github.com/Not-Diamond/notdiamond-python/blob/main/notdiamond/llms/config.py#L71-L72), they only help tune cost tradeoffs. You won't have to provide that parameter for public models - we...
> Thanks @acompa , I do think we'd be interested in at least running an evaluation where we use NotDiamond as a backend and see if the results are better/cheaper...
Excellent. As you begin your evaluation, note that we offer two approaches to AI model routing: Our [out-of-the-box router](https://www.notdiamond.ai/features#SOTA-model-routing) has been trained on generalist, cross-domain data (including coding and non-coding...
> Thanks for the suggestion. We will look at this as a potential extension of LLMManager. That's exciting to hear. I'm happy to take on this work if you're open...
Getting close here. Test failures for 3.8 and 3.12 seem unrelated to this change. @baskaryan Am I correct in assuming that any new dependency in Langchain's extended_testing_deps.txt must support `python>=3.8.1,
Hey there @efriis - we've got this booked for our sprint starting Monday. Will push some updated commits next week. Does that work for your timeline?
@efriis Think we're finally good to go here, thanks to your advice. :) Assuming the Vercel check doesn't fail, do you need anything else from me?