Alisamar Husain
Closes: #23 
Summary of Changes
- Added more information to the `LLMResult` returned by `OpenAI`
- Return info in callback handler
- More detailed token usage in `OpenAICallbackHandler`

Fixes: #1337 Related: #873...
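As a usage illustration (not taken from the change itself), here is a minimal sketch of reading the richer token counts back out through LangChain's `get_openai_callback` context manager, which wraps an `OpenAICallbackHandler`; the model name is just an example, and `prompt_tokens` / `completion_tokens` are assumed to be the attributes this change adds alongside the existing total:

```python
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback

llm = OpenAI(model_name="text-davinci-003", temperature=0)  # example model

# The context manager attaches an OpenAICallbackHandler to the run, so the
# usage reported by the OpenAI API is accumulated on `cb` for every call
# made inside the block.
with get_openai_callback() as cb:
    llm("Tell me a joke")
    print(cb.total_tokens)       # total tokens across all calls in the block
    print(cb.prompt_tokens)      # prompt-side tokens (per this change)
    print(cb.completion_tokens)  # completion-side tokens (per this change)
    print(cb.total_cost)         # estimated cost in USD
```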
The LLM discards a lot of information when returning the `LLMResult` and only returns a few keys. In the case of the `OpenAI` LLM especially, only the total token usage...
I've created an example which provides an HTTP interface to LLaMA using [Crow](https://github.com/CrowCpp/Crow). Crow ships as a single header file, which I've committed. The library also depends on Boost, so...
I'm trying to use the `inputs_embeds` parameter to run the LLaMA model. This is part of my code.

```python
# INPUT = ...embedding of a sequence, ensuring that there are...
```
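For reference, a minimal sketch of passing `inputs_embeds` to a Hugging Face LLaMA model instead of `input_ids`; the checkpoint name is a placeholder, and `generate()` support for `inputs_embeds` depends on the transformers version:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-7b"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

# Build the token embeddings explicitly instead of letting the model do the
# lookup internally. Shape: (batch, seq_len, hidden_size)
inputs_embeds = model.get_input_embeddings()(input_ids)

with torch.no_grad():
    # The forward pass accepts inputs_embeds in place of input_ids
    outputs = model(inputs_embeds=inputs_embeds)
    next_token = outputs.logits[:, -1, :].argmax(dim=-1)

# Newer transformers releases also accept inputs_embeds in generate()
generated = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=20)
```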
I wrote a test to see how fast a simple model can run on Apple M1 chips, using `tch`. The actual model, called `SanityModel`, is just a simple 4 layer...
2.0 Milestone https://reactjs.org/docs/error-boundaries.html
Tests can be set up with GitHub Actions to test the following:
- Is the `yarn build` successful?
- Is the linting passing without errors?
- Is the Docker build...