easyllm
Adds support for setting additional client args to address issue #44. Essentially, I added a module-level `client_args` dictionary, similar to `huggingface.api_key` and `.prompt_builder`. A user can put headers, cookies, etc.,...
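A minimal sketch of how this might look from the caller's side, assuming `client_args` is forwarded as keyword arguments to the underlying `InferenceClient` (the dictionary keys below are illustrative, not taken from the PR):

```python
from easyllm.clients import huggingface

huggingface.api_key = "hf_xxx"  # existing module-level config
# New module-level dict added by this PR; assumed to be passed through
# to huggingface_hub.InferenceClient on construction.
huggingface.client_args = {
    "headers": {"X-Custom-Header": "value"},  # extra HTTP headers (illustrative)
    "cookies": {"session": "abc123"},         # cookies for the endpoint (illustrative)
}

response = huggingface.ChatCompletion.create(
    model="meta-llama/Llama-2-70b-chat-hf",
    messages=[{"role": "user", "content": "Hello!"}],
)
```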
When I use `model="meta.llama3-8b-instruct-v1:0"` it says: `ValueError: Model meta.llama3-8b-instruct-v1:0 is not supported. Supported models are: ['anthropic.claude-v2']`. Isn't there support for other models like llama-3 and mistral? Also, the chat format...
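For reference, a minimal repro against the Bedrock client, contrasting the one model the error message lists as supported with the rejected one (a sketch based on the error text, not on the library's source):

```python
from easyllm.clients import bedrock

# Works: the only model the error message reports as supported.
response = bedrock.ChatCompletion.create(
    model="anthropic.claude-v2",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Raises the ValueError quoted above: llama-3 is not in the supported list.
# bedrock.ChatCompletion.create(
#     model="meta.llama3-8b-instruct-v1:0",
#     messages=[{"role": "user", "content": "Hello!"}],
# )
```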
See:
- https://cloud.google.com/vertex-ai/docs/generative-ai/sdk-for-llm/llm-sdk-overview-ref#load_a_foundation_model
- https://cloud.google.com/vertex-ai/docs/generative-ai/chat/test-chat-prompts
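For context, the linked pages show the Vertex AI SDK pattern that such support would presumably wrap; a minimal sketch (project and location are placeholders):

```python
import vertexai
from vertexai.language_models import ChatModel

vertexai.init(project="my-project", location="us-central1")  # placeholder values

# Load a foundation model and run a chat prompt, per the linked docs.
chat_model = ChatModel.from_pretrained("chat-bison@001")
chat = chat_model.start_chat()
print(chat.send_message("Hello!").text)
```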
Hello, if we update the pydantic library to the latest version (v2), we hit a `model_dump` problem. Thanks
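For context, the v1/v2 API split that typically causes this (which side easyllm trips on is not stated in the issue):

```python
from pydantic import BaseModel

class Message(BaseModel):
    role: str
    content: str

m = Message(role="user", content="hi")

print(m.model_dump())  # pydantic v2 method; AttributeError on v1
print(m.dict())        # pydantic v1 method; deprecated (warns) on v2
```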
So I'm building a class that can alternate between the huggingface and sagemaker clients, and I declare all my `os.environ` settings at the top of the class like so: ```...
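The snippet is truncated above; a minimal sketch of the pattern being described might look like the following (the environment variable names are assumptions, not taken from the issue):

```python
import os

# Credentials set before the clients are imported, since easyllm reads
# some configuration from the environment (variable names assumed).
os.environ["HUGGINGFACE_TOKEN"] = "hf_xxx"
os.environ["AWS_REGION"] = "us-east-1"

from easyllm.clients import huggingface, sagemaker


class LLMRouter:
    """Dispatches chat requests to the huggingface or sagemaker client."""

    def __init__(self, backend: str = "huggingface"):
        self.client = huggingface if backend == "huggingface" else sagemaker

    def chat(self, model: str, messages: list) -> dict:
        return self.client.ChatCompletion.create(model=model, messages=messages)
```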
I am using the `meta-llama/Llama-2-70b-chat-hf` model on a data frame with 3000 rows, each containing a 500-token text. But after 10 rows are processed, I get the following error...
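The error text is cut off above; if it turns out to be a transient timeout or rate limit, wrapping each row in a retry with backoff is one way to narrow it down (a sketch, not a confirmed fix):

```python
import time

import pandas as pd
from easyllm.clients import huggingface

df = pd.DataFrame({"text": ["..."] * 3000})  # placeholder for the real data

def generate(text: str, retries: int = 3) -> str:
    for attempt in range(retries):
        try:
            response = huggingface.ChatCompletion.create(
                model="meta-llama/Llama-2-70b-chat-hf",
                messages=[{"role": "user", "content": text}],
            )
            return response["choices"][0]["message"]["content"]
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(2 ** attempt)  # back off before retrying

df["output"] = df["text"].apply(generate)
```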
I'm hitting a `botocore` dependency even though I'm just installing the package normally. Either it should be added as a dependency or specified as an install extra, e.g. `easyllm[aws]`. Rolling back to `v0.5.0` solved the...
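A sketch of the packaging change being suggested, assuming a `setup.py`-based build (easyllm's actual build configuration may differ):

```python
from setuptools import find_packages, setup

setup(
    name="easyllm",
    packages=find_packages(),
    # Keep the base install light; pull in the AWS dependencies only
    # when requested: pip install "easyllm[aws]"
    extras_require={
        "aws": ["boto3", "botocore"],
    },
)
```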
I'm using `huggingface.ChatCompletion` and need to be able to provide some cookies to the `InferenceClient`. I don't see a way to pass that in via `create()`, which is where the...
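For context, `huggingface_hub.InferenceClient` does accept cookies at construction time; what seems to be missing is a way for easyllm to forward them. Bypassing easyllm, the direct call would look like this (the endpoint URL is hypothetical):

```python
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="https://my-protected-endpoint.example.com",  # hypothetical endpoint
    cookies={"session": "abc123"},  # cookies passed straight to the client
)
```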
Hello, I have noticed that the interface returns the same generations regardless of the number of responses requested (n > 1). Easy reproduction: ```python from easyllm.clients import huggingface # helper...
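The repro is cut off above; a complete version along the same lines might be (model name and prompt are assumptions):

```python
from easyllm.clients import huggingface

huggingface.prompt_builder = "llama2"  # helper that formats chat messages

response = huggingface.ChatCompletion.create(
    model="meta-llama/Llama-2-70b-chat-hf",
    messages=[{"role": "user", "content": "Tell me a joke."}],
    n=2,
    temperature=0.9,
)

# Reported bug: both choices come back with identical text.
for choice in response["choices"]:
    print(choice["message"]["content"])
```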
... like OpenAI function calling or jsonformer?