
`experimental_StreamData` is not streaming data in realtime

Open logan-anderson opened this issue 2 years ago • 7 comments

Description

I was using the HN chat and wanted to add the `experimental_StreamData` feature, so I followed the docs.

The issue

I want to stream information to the frontend about what the backend is doing (i.e. searching Hacker News), but the data is not streamed until the LLM starts responding. I would expect that when I call `data.append` the data is streamed right away.

How to reproduce

  1. Clone and set up the demo repo.
  2. Run `yarn dev`.
  3. Go to localhost:3000, ask it about Hacker News, and see that the data does not get streamed to the frontend until the LLM starts responding. I would expect `data.append` to be streamed when it is called.

See video demo for more info: https://www.loom.com/share/c98313137f174638a1d1decd400778c0?sid=a5479f18-2739-4440-b0ef-25303cb5bfc9

Code example

Github Repo: https://github.com/logan-anderson/experimental_StreamData-vercel-ai-issue

Relevant code block:

  const data = new experimental_StreamData();
  const stream = OpenAIStream(initialResponse, {
    onFinal: () => {
      data.close();
    },
    experimental_streamData: true,
    experimental_onFunctionCall: async (
      { name, arguments: args },
      createFunctionCallMessages,
    ) => {
      console.log("appending Data");
      // The data should be streamed when `data.append` is called (before `runFunction`)
      data.append({ message: "Searching Hacker News..." });
      const result = await runFunction(name, args);
      const newMessages = createFunctionCallMessages(result);
      data.append({ message: "Done Searching Hacker news" });
      // Issue: the data is not streamed until the LLM starts streaming its response
      return openai.chat.completions.create({
        model: "gpt-3.5-turbo-1106",
        stream: true,
        messages: [...messages, ...newMessages],
      });
    },
  });
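
For context, the handler then returns the stream together with the data object in the usual `experimental_StreamData` pattern (roughly; see the repo for the exact code):

  // Roughly how the handler returns the response (see the repo for the exact code):
  // passing `data` as the third argument sends appended values on the same response.
  return new StreamingTextResponse(stream, {}, data);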

Additional context

No response

logan-anderson avatar Dec 13 '23 01:12 logan-anderson

@logan-anderson How are you? I haven't tested it yet, but in the documentation example `data.append` is called outside the stream callbacks, correct? Moving it out doesn't solve your problem, but it helps to better understand the flow.
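
For reference, a minimal sketch of that documented pattern (the route shape and model id here are assumptions, not the demo repo's code), where `data.append` is called before the stream is returned rather than inside `experimental_onFunctionCall`:

import OpenAI from "openai";
import {
  OpenAIStream,
  StreamingTextResponse,
  experimental_StreamData,
} from "ai";

const openai = new OpenAI();

export async function POST(req: Request) {
  const { messages } = await req.json();

  const response = await openai.chat.completions.create({
    model: "gpt-3.5-turbo-1106",
    stream: true,
    messages,
  });

  const data = new experimental_StreamData();
  // Appended up front, outside any callback, as in the docs example.
  data.append({ status: "started" });

  const stream = OpenAIStream(response, {
    experimental_streamData: true,
    onFinal() {
      // Close the data stream once the LLM response has finished.
      data.close();
    },
  });

  // Passing `data` as the third argument interleaves the appended values
  // into the same response stream the client reads.
  return new StreamingTextResponse(stream, {}, data);
}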

tgonzales avatar Dec 13 '23 08:12 tgonzales

I was also looking into making this work, but I think it doesn't work that way yet. See this PR, starting with this comment: https://github.com/vercel/ai/pull/425#issuecomment-1682841115

rafalzawadzki avatar Dec 13 '23 20:12 rafalzawadzki

I think it is because the data is streamed together with the LLM response. But I would also love to see whether it is possible to stream the data before the LLM responds. Maybe you could look at how `streamData` works under the hood, starting from my open issue https://github.com/vercel/ai/issues/751

nabilfatih avatar Dec 26 '23 12:12 nabilfatih

@logan-anderson I concur 100%: I expected it (and need it) to be real time. It seems that with `append` the provided value is pushed into the internal buffer (`this.data`). However, this action alone doesn't cause the data to be immediately processed or sent through the stream.
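
A simplified sketch of the behavior being described (illustrative only, not the actual `experimental_StreamData` source):

// Illustrative only: `append` just buffers; nothing reaches the HTTP
// response until the wrapped LLM stream is piped and the buffer drained.
class BufferedStreamData {
  private data: unknown[] = [];

  append(value: unknown) {
    // Pushes into the in-memory buffer; no bytes are written yet.
    this.data.push(value);
  }

  // Called only while the LLM stream is being transformed, so values
  // appended before the second completion call appear "late" on the client.
  drainInto(controller: ReadableStreamDefaultController<string>) {
    for (const value of this.data) {
      controller.enqueue(JSON.stringify(value));
    }
    this.data = [];
  }
}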

IdoPesok avatar Dec 30 '23 16:12 IdoPesok

A workaround for getting real-time messages is to not use the stream data at all: use a PubSub service instead, have the client subscribe to the chat ID, and have the chat API handler publish messages to that chat ID.
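
A rough sketch of that approach (the `PubSub` interface, channel naming, and helper names here are hypothetical; substitute Redis, Ably, Pusher, or similar):

// A hypothetical PubSub client interface; substitute your provider's client.
interface PubSub {
  publish(channel: string, message: string): Promise<void>;
  subscribe(channel: string, onMessage: (message: string) => void): () => void;
}

// Server side (chat API handler): publish progress keyed by chat ID
// instead of relying on experimental_StreamData.
async function reportProgress(pubsub: PubSub, chatId: string, message: string) {
  await pubsub.publish(`chat:${chatId}`, JSON.stringify({ message }));
}

// Client side: subscribe to the same chat ID and render messages as they arrive.
function listenForProgress(pubsub: PubSub, chatId: string, render: (message: string) => void) {
  return pubsub.subscribe(`chat:${chatId}`, (raw) => {
    const { message } = JSON.parse(raw) as { message: string };
    render(message);
  });
}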

IdoPesok avatar Jan 01 '24 01:01 IdoPesok

@IdoPesok Did you manage to get `experimental_StreamData` working, or did you go with some other PubSub service?

mlewandowskim avatar Mar 16 '24 02:03 mlewandowskim

In a time of agents and agent tools, this is absolutely crucial.

Not being able to keep the user informed about what the agent is doing in the 10-15 seconds it might spend invoking different tools almost renders the data stream useless.

I know this feature is experimental, but we really cannot see an LLM future without some sort of data stream, and it needs to work as soon as the background operations start.

A big upvote from us.

proemian avatar Apr 23 '24 13:04 proemian

Fixed in 3.1.11: https://github.com/vercel/ai/releases/tag/ai%403.1.11

lgrammel avatar May 23 '24 15:05 lgrammel