
AnthropicVertex stream chat generation is taking too much time

Open · DhruvThu opened this issue 1 year ago

Recently, I have started using AnthropicVertex instead of the direct Anthropic client. When I generate data through the AnthropicVertex client, it takes around 2s before streaming starts, whereas the direct Anthropic client does not take nearly this long. The 2s delay is also not consistent: sometimes it is much larger, going up to 6-10s, and in the worst case up to 20s. Is there some kind of queueing going on? I am using the same code given in the Vertex AI Anthropic notebook to generate responses. Is there any workaround I need to apply to get responses as fast as with the direct Anthropic client? If someone could guide me on this, it would be really helpful.
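(For reference, a rough sketch of how such a time-to-first-event measurement could be taken; the region, project, and model id below are placeholders, not the values from my setup:)

import time
from anthropic import AnthropicVertex

# Sketch only: region, project_id, and the model id are placeholders.
client = AnthropicVertex(region="us-east5", project_id="my-project")

start = time.time()
stream = client.messages.create(
    model="claude-3-5-sonnet@20240620",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
first_event = next(iter(stream))  # blocks until the first SSE event arrives
print(first_event.message.id)     # first event is message_start, which carries the message id
print(f"time to first event: {time.time() - start:.2f}s")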

Thanks !!

DhruvThu · Jun 27 '24

Hey @DhruvThu, can you share a few ids from the responses you get back on vertex requests? Or share a few request ids? This will help us debug.

aaron-lerner · Jun 28 '24

Thanks for responding. I am using streaming responses from Vertex Anthropic, and these are some of the message ids I got in the first chunk: msg_01D2jNpu4rUZMXUvwtpipMnx, msg_01CMjRdPAhDQaWELbrgSirS8

DhruvThu · Jun 29 '24

Hmm, message ids from vertex should look like msg_vrtx_.... The ids you shared are for the direct (1P) Anthropic API.

aaron-lerner · Jul 01 '24

Could you check this one: msg_vrtx_01AaDL52fwpTrqFftLMxxQ1e? Sorry about the previous ones. For this message it took around 2.4s before streaming started. The streaming response through the direct Anthropic API took around 0.89s; its message id is msg_01AWgnspZ2w5NhzE92uL7VZ9.

The code I am using is as follows:

import time

import google.auth
from anthropic import Anthropic, AnthropicVertex

# google_credentials_info (a service-account info dict) and the Message type
# are defined elsewhere in my codebase.


class AnthropicLLM:
    def __init__(self, anthropic_client: Anthropic, anthropic_vertex_client: AnthropicVertex) -> None:
        self.anthropic = anthropic_client
        credentials, project_id = google.auth.load_credentials_from_dict(
            google_credentials_info,
            scopes=["https://www.googleapis.com/auth/cloud-platform"],
        )
        # Override the SDK's internal credentials with the ones loaded above.
        anthropic_vertex_client._credentials = credentials
        self.vertex_anthropic = anthropic_vertex_client
        self.messages = Messages(self)


class Messages:
    def __init__(self, client: AnthropicLLM) -> None:
        self.client = client

    def create(self, model: str, messages: list[Message], temperature: float, system: str, stream: bool, max_tokens: int, tool_choice: str, tools: list[dict]):
        # Convert "@" in the model id to "-".
        model = model.replace("@", "-")
        st = time.time()
        if not tools:
            response = self.client.vertex_anthropic.messages.create(
                model=model,
                messages=messages,
                temperature=temperature,
                system=system,
                stream=True,
                max_tokens=max_tokens,
            )
            print(response)
            # Time elapsed until messages.create() returned the stream object.
            print(time.time() - st)
            return response
        else:
            return self.client.vertex_anthropic.messages.create(
                model=model,
                messages=messages,
                temperature=temperature,
                system=system,
                stream=stream,
                max_tokens=max_tokens,
                tool_choice=tool_choice,
                tools=tools,
            )
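For reference, this is roughly how a wrapper like this might be invoked, continuing from the snippet above (the API key, region, project id, and model id are placeholders):

llm = AnthropicLLM(
    anthropic_client=Anthropic(api_key="sk-ant-..."),  # placeholder key
    anthropic_vertex_client=AnthropicVertex(region="us-east5", project_id="my-project"),
)

stream = llm.messages.create(
    model="claude-3-5-sonnet@20240620",   # placeholder model id
    messages=[{"role": "user", "content": "Hello"}],
    temperature=0.2,
    system="You are a helpful assistant.",
    stream=True,
    max_tokens=256,
    tool_choice=None,
    tools=[],
)
for event in stream:
    pass  # consume the streamed events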

DhruvThu · Jul 02 '24

Hey @DhruvThu, we've identified the root cause of this issue. While we work on a fix, you can work around it by explicitly passing an access_token, e.g. AnthropicVertex(access_token=access_token).
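For example, one way to fetch a token up front and pass it in (a sketch using google-auth; the scope, region, and project values are assumptions):

import google.auth
import google.auth.transport.requests
from anthropic import AnthropicVertex

# Fetch and refresh credentials once, then hand the resulting token to the
# client so the SDK does not have to refresh credentials on every request.
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(google.auth.transport.requests.Request())

client = AnthropicVertex(
    access_token=credentials.token,
    region="us-east5",        # placeholder region
    project_id=project_id,
)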

RobertCraigie · Jul 03 '24

Hey, thanks for the response. I'll try it with the access token.

DhruvThu · Jul 04 '24

This will be fixed in the next release, v0.30.2, https://github.com/anthropics/anthropic-sdk-python/pull/573.

Note that you will still see a delay when making the very first request with an AnthropicVertex instance, as we need to fetch the access token, but subsequent requests will use the cached token.
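(As an aside, if that first-request delay matters, one general option is to warm the client up at application startup with a cheap request so the token fetch does not land on a user-facing call; a sketch with placeholder region, project, and model id:)

from anthropic import AnthropicVertex

# Hypothetical warm-up at startup: the first (cheap) request triggers the
# token fetch, so later user-facing requests reuse the cached token.
client = AnthropicVertex(region="us-east5", project_id="my-project")  # placeholders

client.messages.create(
    model="claude-3-5-sonnet@20240620",   # placeholder model id
    max_tokens=1,
    messages=[{"role": "user", "content": "ping"}],
)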

RobertCraigie · Jul 08 '24