UmerHA
# Integration with Azure Cognitive Search Allows using Azure Cognitive Search as a retriever (and thus solves enhancement issue #3317). ## Who can review? Community members can review the...
# Streaming only final output of agent (#2483) As requested in issue #2483, this callback allows streaming only the final output of an agent (i.e., not the intermediate steps)....
# Adds Reflexion (solves #2316) Introduces the concept of trials to AgentExecutor and implements Reflexion as a retry method. I.e., when the agent fails, it thinks about what it could have...
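The trial/reflection loop described above can be sketched in a few lines. This is an illustrative stand-in, not the PR's actual code: `run_agent` and `reflect` are hypothetical callables representing the agent run and the self-critique LLM call.

```python
# Hypothetical sketch of the trials idea: when a run fails, ask the model
# to reflect on the failure, then retry with that reflection in context.
# `run_agent` and `reflect` are assumed interfaces, not LangChain's API.

def run_with_reflexion(task, run_agent, reflect, max_trials=3):
    reflections = []  # accumulated self-critiques across trials
    for trial in range(max_trials):
        result = run_agent(task, reflections)
        if result["success"]:
            return result["output"]
        # On failure, generate a reflection and feed it into the next trial
        reflections.append(reflect(task, result["output"]))
    return None  # all trials exhausted
```

The key design point is that reflections accumulate, so each retry sees the critiques of every previous failed attempt.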
# Added SmartGPT workflow by providing a SmartLLM wrapper around LLMs Edit: As @hwchase17 suggested, this should be a chain, not an LLM. I have adapted the PR. It is used...
Fix #5027: Make ChatOpenAI models work with prompts created via ChatPromptTemplate.from_role_strings
# Make `ChatOpenAI` models work with prompts created via `ChatPromptTemplate.from_role_strings` As described in #5027, `ChatOpenAI` models currently don't work with prompts created via `ChatPromptTemplate.from_role_strings`. The reason is that `ChatPromptTemplate.from_role_strings` creates...
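The underlying mismatch can be illustrated without LangChain: chat APIs accept only a fixed set of roles, so free-form role strings have to be mapped onto them. The mapping below is an assumption for illustration (it mirrors the OpenAI chat message format), not the actual fix in the PR.

```python
# Illustrative sketch: normalize free-form role strings into the fixed
# roles a chat API expects. ROLE_MAP and to_openai_messages are
# hypothetical helpers, not LangChain code.

ROLE_MAP = {
    "human": "user",
    "ai": "assistant",
    "system": "system",
}

def to_openai_messages(role_strings):
    """Convert (role, content) pairs into OpenAI-style message dicts."""
    return [
        {"role": ROLE_MAP.get(role, role), "content": content}
        for role, content in role_strings
    ]
```

Unknown roles pass through unchanged here; a real implementation would decide whether to reject or coerce them.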
### System Info LangChain version = 0.0.167, Python version = 3.11.0, System = Windows 11 (using Jupyter) ### Who can help? - @hwchase17 - @agola11 - @UmerHA (I have a...
# Add caching to BaseChatModel Fixes #1644 (Sidenote: While testing, I noticed we have multiple implementations of Fake LLMs, used for testing. I consolidated them.) ## Who can review? Community...
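The caching idea can be sketched with a minimal stand-in: key the cache on the serialized messages plus model parameters, so identical calls skip the expensive generation. `CachedChatModel` and the fake generate function are illustrative assumptions, not LangChain's `BaseChatModel`.

```python
# Sketch of response caching for a chat model, keyed on the serialized
# messages and parameters. A fake generate function stands in for a
# real LLM; this is not LangChain's actual implementation.

import json

class CachedChatModel:
    def __init__(self, generate_fn, params=None):
        self._generate = generate_fn  # underlying (possibly fake) LLM call
        self._params = params or {}
        self._cache = {}
        self.calls = 0                # count real generations, for testing

    def invoke(self, messages):
        # Stable key: identical messages + params always hash the same way
        key = json.dumps({"messages": messages, "params": self._params},
                         sort_keys=True)
        if key not in self._cache:
            self.calls += 1
            self._cache[key] = self._generate(messages)
        return self._cache[key]
```

A fake LLM like the one consolidated in the PR is exactly what makes this kind of cache easy to test: the call counter shows whether the second invocation hit the cache.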
# Make FinalStreamingStdOutCallbackHandler more robust by ignoring new lines & white spaces `FinalStreamingStdOutCallbackHandler` doesn't work out of the box with `ChatOpenAI`, as it tokenizes slightly differently from `OpenAI`. The response...
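The robustness trick can be sketched generically: detect the final-answer marker in a token stream even when the model emits it with extra newlines or surrounding whitespace, by comparing whitespace-stripped tokens against the marker sequence. The function and marker below are illustrative assumptions, not the handler's actual code.

```python
# Sketch: stream only tokens after a "Final Answer :" marker, comparing
# tokens with whitespace stripped so "\nFinal" or " Answer" still match.
# ANSWER_PREFIX_TOKENS and stream_final_output are hypothetical names.

ANSWER_PREFIX_TOKENS = ["Final", "Answer", ":"]

def stream_final_output(tokens):
    """Yield only the tokens that come after the final-answer marker."""
    window = []        # last few whitespace-stripped tokens
    streaming = False
    for token in tokens:
        if streaming:
            yield token
            continue
        stripped = token.strip()
        if stripped:   # ignore pure-whitespace tokens entirely
            window.append(stripped)
            window = window[-len(ANSWER_PREFIX_TOKENS):]
        if window == ANSWER_PREFIX_TOKENS:
            streaming = True
```

Real tokenizers may also glue the marker into fewer tokens (e.g. `"Answer:"` as one piece), which a production handler would have to normalize as well; this sketch only covers the whitespace case named in the PR title.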
Allows using Azure OpenAI instead of OpenAI (solves #325)
```
gpt-engineer [project_path] --azure --engine [your_azureopenai_model_name]
```
To be consistent with Azure OpenAI terminology, the model is called an `engine` instead....
Token usage is now tracked and logged into the file `memory/logs/token_usage`. Solves #322 How it works:
- every time `ai.next` is called, the token usage is computed and tracked
- ...
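The accounting scheme described above can be sketched with the standard library: each step's usage is added to a running total and appended to a log file. The class name and the crude word-count proxy for tokens are illustrative assumptions; the real PR would use the model's actual token counts.

```python
# Rough sketch of per-call token accounting: each step's usage is added
# to a running total and appended to a log file. TokenUsageLogger and
# the word-split "token" count are stand-ins for illustration.

from pathlib import Path

class TokenUsageLogger:
    def __init__(self, log_path):
        self.log_path = Path(log_path)
        self.total = 0

    def record(self, prompt, completion):
        # Crude proxy: count whitespace-separated words as tokens
        used = len(prompt.split()) + len(completion.split())
        self.total += used
        self.log_path.parent.mkdir(parents=True, exist_ok=True)
        with self.log_path.open("a") as f:
            f.write(f"step used {used} tokens, total {self.total}\n")
        return used
```

Appending one line per call keeps the log a simple audit trail: the last line always carries the cumulative total.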