Amazon Bedrock Client for AutoGen
## Why are these changes needed?
Amazon Bedrock offers a broad set of capabilities for building generative AI applications with security, privacy, and responsible AI. Bedrock provides access to many of the latest open-weight and closed-weight models for inference, letting users manage the complete infrastructure through AWS. This addition extends AutoGen to support models hosted on Bedrock.
API Documentation: https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html
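For reference, the Converse API linked above takes a request shaped roughly like the sketch below. This is a hedged illustration only: the payload is built as a plain dict (no AWS call, no credentials, no boto3), and `build_converse_request` is a hypothetical helper, not part of the client code in this PR.

```python
# Sketch of a Converse-style request body, following the shape described
# in the API reference above; built as a plain dict so no AWS call is made.
def build_converse_request(model_id: str, user_text: str) -> dict:
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": user_text}]},
        ],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.7},
    }

request = build_converse_request(
    "meta.llama3-1-8b-instruct-v1:0", "Hello, Bedrock!"
)
print(request["messages"][0]["content"][0]["text"])  # Hello, Bedrock!
```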
For testing:
```python
# Authentication parameters:
# aws_region (mandatory)
# aws_access_key (or environment variable: AWS_ACCESS_KEY)
# aws_secret_key (or environment variable: AWS_SECRET_KEY)
# aws_session_token (or environment variable: AWS_SESSION_TOKEN)
# aws_profile_name

config_list = [
    {
        "api_type": "bedrock",
        "model": "meta.llama3-1-8b-instruct-v1:0",
        "aws_region_name": "us-west-2",
        "aws_access_key": "",
        "aws_secret_key": "",
        "price": [0.003, 0.015],
    }
]
```
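As a rough illustration of the environment-variable fallback described in the comments above, blank credential fields could be resolved like this. Note this is a hypothetical sketch (`resolve_credentials` and the example values are not part of the client's actual code, whose resolution logic may differ):

```python
import os

# Hypothetical sketch of the env-var fallback described above;
# the real client's credential resolution may differ.
ENV_FALLBACKS = {
    "aws_access_key": "AWS_ACCESS_KEY",
    "aws_secret_key": "AWS_SECRET_KEY",
    "aws_session_token": "AWS_SESSION_TOKEN",
}

def resolve_credentials(config: dict) -> dict:
    """Fill missing or blank auth fields from environment variables."""
    resolved = dict(config)
    for field, env_var in ENV_FALLBACKS.items():
        if not resolved.get(field):
            resolved[field] = os.environ.get(env_var, "")
    if not resolved.get("aws_region_name"):
        raise ValueError("aws_region_name is mandatory")
    return resolved

# Hypothetical placeholder values, for illustration only.
os.environ["AWS_ACCESS_KEY"] = "example-key"
os.environ["AWS_SECRET_KEY"] = "example-secret"

cfg = resolve_credentials({
    "api_type": "bedrock",
    "model": "meta.llama3-1-8b-instruct-v1:0",
    "aws_region_name": "us-west-2",
    "aws_access_key": "",
    "aws_secret_key": "",
})
print(cfg["aws_access_key"])  # example-key
```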
## Related issue number
## Checks
- [X] I've included any doc changes needed for https://microsoft.github.io/autogen/. See https://microsoft.github.io/autogen/docs/Contribute#documentation to build and test documentation locally.
- [X] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [X] I've made sure all auto checks have passed.
I've committed the first full draft of the client class, largely based on (Discord) @astroalek and @Christian T's code, thanks!
Still plenty of testing to do (I have not tested images). Streaming is not currently supported, and I think we can go without it for the first round.
Codecov Report
Attention: Patch coverage is 16.04938% with 204 lines in your changes missing coverage. Please review.
Project coverage is 20.05%. Comparing base (6279247) to head (467c5fe). Report is 111 commits behind head on main.
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##             main    #3232       +/-   ##
===========================================
- Coverage   32.90%   20.05%   -12.86%
===========================================
  Files          94      102        +8
  Lines       10235    11012      +777
  Branches     2193     2526      +333
===========================================
- Hits         3368     2208     -1160
- Misses       6580     8582     +2002
+ Partials      287      222       -65
```
| Flag | Coverage Δ | |
|---|---|---|
| unittests | 20.01% <16.04%> (-12.90%) | :arrow_down: |
Updated to support images in the request; example:
```python
# This tests a multimodal model describing an image.
altmodel_llm_config = {
    "config_list": [
        {
            "api_type": "bedrock",
            "model": "anthropic.claude-3-sonnet-20240229-v1:0",
            "aws_region_name": "us-east-1",
            "aws_access_key_id": "",
            "aws_secret_access_key": "",
            "cache_seed": None,
        }
    ]
}

import autogen
from autogen import Agent, AssistantAgent, ConversableAgent, UserProxyAgent
from autogen.agentchat.contrib.capabilities.vision_capability import VisionCapability
from autogen.agentchat.contrib.img_utils import get_pil_image, pil_to_data_uri
from autogen.agentchat.contrib.multimodal_conversable_agent import MultimodalConversableAgent
from autogen.code_utils import content_str

image_agent = MultimodalConversableAgent(
    name="image-explainer",
    max_consecutive_auto_reply=10,
    llm_config=altmodel_llm_config,
)

user_proxy = autogen.UserProxyAgent(
    name="User_proxy",
    system_message="A human admin.",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=0,
    code_execution_config={
        "use_docker": False
    },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
)

# Ask the question with an image
result = user_proxy.initiate_chat(
    image_agent,
    message="""What's the breed of this dog?
<img https://th.bing.com/th/id/R.422068ce8af4e15b0634fe2540adea7a?rik=y4OcXBE%2fqutDOw&pid=ImgRaw&r=0>.""",
)

print(result.summary)
```
@Hk669, I can't add you as a reviewer, but if you are able to review the code it would be great.
Hey @wenngong, @joris-swapfiets, if you are able to help test this dedicated Amazon Bedrock client class it would be appreciated :).
@marklysze, tested, the bedrock client works fine in my testing.
Oh, I just stumbled upon this PR after we finished implementing a custom Bedrock agent for a client (heavily modifying the anthropic.py file); our agents, running in Lambda, now use the Bedrock API to connect to any model without issues.
I will wait until this implementation is finished so we can compare approaches.
Hey @Bateristico, thanks for the comment. Sounds good, if you do notice any areas of improvement, please feel free to shout them out. :)
Thanks so much @wenngong! If you get a chance to approve it, that would be great :)
@wenngong, thanks for your review, I've updated accordingly. @Hk669, I'll review tests again when you have had a chance to note which ones can be removed.
Thanks for approving @wenngong, @Hk669 - are you happy to keep tests as is or would you like to have some removed? If you are happy to keep as is I'll mark as approved :).
Looks good to me, thanks for the efforts @marklysze
Thanks @Hk669! I'll approve on your behalf :)