
Possibility to not use Experimental_StreamData with useChat

Open nabilfatih opened this issue 2 years ago • 2 comments

Description

On the backend I'm using this AI SDK and want to add additional object data to the response, but I see in the docs that experimental_StreamData only works with const { data } = useChat({}).

I'm using fetch on the frontend for my own logic, so how can I read that data properly?

Frontend:

const data = response.response.body;
if (!data) throw new Error("No data");

const reader = data.getReader();
const decoder = new TextDecoder();
let done = false;

setIsGenerating(true);

while (!done) {
  try {
    if (stopGenerating.current) {
      response.abortController.abort(); // Abort the fetch request
      await reader.closed; // Wait until the reader settles
      done = true;
      break;
    }
    const { value, done: doneReading } = await reader.read();
    done = doneReading;
    if (doneReading) reader.releaseLock();
    // { stream: true } keeps multi-byte characters intact across chunk boundaries
    const chunkValue = decoder.decode(value, { stream: !doneReading });
    if (chunkValue) {
      // the logic
    }
  } catch (error) {
    done = true;
  }
}
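One thing to watch out for in the loop above: the chunks returned by reader.read() are not aligned to line boundaries, so if the response is line-oriented you need to buffer partial lines before processing them. A minimal sketch (the onChunk helper name is hypothetical, not part of the SDK):

```typescript
// Buffer incoming text chunks and emit only complete lines.
// Chunks may end mid-line, so the trailing partial line is kept for the next call.
let buffer = "";

function onChunk(chunkValue: string, handleLine: (line: string) => void): void {
  buffer += chunkValue;
  const lines = buffer.split("\n");
  buffer = lines.pop() ?? ""; // keep the trailing partial line in the buffer
  for (const line of lines) {
    if (line) handleLine(line);
  }
}
```

Each complete line can then be fed to whatever parsing "the logic" needs.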

Backend:

const response = await openai.chat.completions.create({
  model: model,
  messages: messages,
  temperature: 0.6,
  stream: true,
  functions: functionCall,
  function_call: functionName, // can be auto, none, or the name of the function
});

const stream = OpenAIStream(response, {
  experimental_onFunctionCall: async (
    { name, arguments: args },
    createFunctionCallMessages
  ) => {
    const resultFunction = await callFunction(userId, chatId, name, args);
    const newMessages = createFunctionCallMessages(resultFunction);
    return openai.chat.completions.create({
      messages: [...messages, ...newMessages],
      stream: true,
      temperature: 0.6,
      model: model,
      functions: functionCall,
    });
  },
  onStart: async () => {},
});

return new StreamingTextResponse(stream);

nabilfatih avatar Nov 17 '23 19:11 nabilfatih

Using experimental data opts you into a new streaming protocol that uses prefixed lines to distinguish between message types. You'll need to implement a parser for this on your end. I would look at parse-complex-response.ts and its tests to see how to support it, but we consider it internal and will be changing it over time. However, we should probably create a StreamingTextResponse that parses it on the server and only sends plain text over the network.
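To make the "prefixed lines" idea concrete: each line in that protocol carries a type code before a colon, followed by a JSON payload. A minimal sketch of parsing one such line, assuming a CODE:JSON shape (the exact codes and framing are internal to the SDK and may change, as noted above):

```typescript
// Parse one line of the form "<code>:<json>", e.g. 0:"hello" or 2:[{"x":1}].
// Returns null for lines that don't match the expected shape.
function parseStreamLine(line: string): { code: string; value: unknown } | null {
  const idx = line.indexOf(":");
  if (idx === -1) return null;
  const code = line.slice(0, idx);
  const json = line.slice(idx + 1);
  try {
    return { code, value: JSON.parse(json) };
  } catch {
    return null; // not valid JSON after the prefix
  }
}
```

A dispatcher on the parsed code (text vs. data) would then replace the plain decode-and-append logic in the fetch loop.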

Is there a reason you're using fetch and not our provided hooks?

MaxLeiter avatar Nov 20 '23 18:11 MaxLeiter

Thank you for the explanation—I'll definitely look into how it works.

My application has similar capabilities to ChatGPT, such as regenerating and editing messages at specific points in the conversation. I'm also able to view previous messages before regenerating, which allows for a more tailored solution for placing the response. However, I haven't found a solution for this in the current SDK (yet). But please, correct me if I'm mistaken 😄. I'd be thrilled to discover that this SDK offers the functionality I'm looking for.
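For what it's worth, useChat does return setMessages and reload, which can cover regenerating from a specific point: truncate the history, then reload. A sketch of the truncation step (the Message shape is simplified and the overall wiring is an assumption, not a documented recipe):

```typescript
// Drop the message with the given id and everything after it,
// so a subsequent reload regenerates from that point in the conversation.
type Message = { id: string; role: string; content: string };

function truncateAt(messages: Message[], id: string): Message[] {
  const idx = messages.findIndex((m) => m.id === id);
  return idx === -1 ? messages : messages.slice(0, idx);
}
```

Usage would be along the lines of setMessages(truncateAt(messages, targetId)) followed by reload().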

nabilfatih avatar Nov 20 '23 19:11 nabilfatih