
Copilot Chat is slow

Open · aiday-mar opened this issue 2 years ago · 27 comments

Originally posted by @rafales.

Sometimes Copilot Chat requests take a long time to resolve. We should look into how to reduce this time as much as possible. @rafales mentioned the requests take so long that he would rather ask ChatGPT directly for an answer.

aiday-mar avatar Dec 05 '23 10:12 aiday-mar

I have the same problem.

nhattruong0000 avatar Dec 05 '23 12:12 nhattruong0000

Is this because GPT-4 is slower than GPT-4 Turbo?

hidaris avatar Dec 05 '23 15:12 hidaris

Extremely slow for me as well since yesterday.

alejoar avatar Dec 05 '23 18:12 alejoar

Extremely slow for me too; autocompletions, however, are as fast as always.

Raulvalverdeleal avatar Dec 05 '23 23:12 Raulvalverdeleal

Please share your logs via Cmd/Ctrl + Shift + U (opens the Output panel) -> GitHub Copilot Chat

lramos15 avatar Dec 06 '23 13:12 lramos15

@lramos15 here's a log:

2023-12-06T18:32:52.105Z [INFO] [chat fetch] engine https://api.githubcopilot.com/chat
2023-12-06T18:32:52.105Z [INFO] [chat fetch] modelMaxTokenWindow 4096
2023-12-06T18:32:52.105Z [INFO] [chat fetch] chat model gpt-4
2023-12-06T18:32:52.105Z [INFO] [chat fetch] engine https://api.githubcopilot.com/chat
2023-12-06T18:32:52.105Z [INFO] [chat fetch] modelMaxTokenWindow 4096
2023-12-06T18:32:52.105Z [INFO] [chat fetch] chat model gpt-4
2023-12-06T18:32:54.923Z [INFO] [chat fetch] request.response: [https://api.githubcopilot.com/chat/completions], took 2816 ms
2023-12-06T18:32:55.037Z [INFO] [chat fetch] request.response: [https://api.githubcopilot.com/chat/completions], took 2928 ms
2023-12-06T18:32:57.136Z [INFO] [streamMessages] message 0 returned. finish reason: [stop]
2023-12-06T18:32:57.138Z [INFO] [streamChoices] request done: headerRequestId: [d0cbd06a-9ad6-45e8-be65-f78b57a0389b] model deployment ID: [x3b0892d9e5fc]
2023-12-06T18:33:00.243Z [INFO] [streamMessages] message 0 returned. finish reason: [stop]
2023-12-06T18:33:00.243Z [INFO] [streamChoices] request done: headerRequestId: [98bf1327-2c06-42a8-9c1f-25fd3fe8bfc9] model deployment ID: [x3b0892d9e5fc]
2023-12-06T18:34:07.132Z [INFO] [chat fetch] engine https://api.githubcopilot.com/chat
2023-12-06T18:34:07.132Z [INFO] [chat fetch] modelMaxTokenWindow 4096
2023-12-06T18:34:07.132Z [INFO] [chat fetch] chat model gpt-4
2023-12-06T18:34:07.305Z [INFO] [chat fetch] engine https://api.githubcopilot.com/chat
2023-12-06T18:34:07.305Z [INFO] [chat fetch] modelMaxTokenWindow 8192
2023-12-06T18:34:07.305Z [INFO] [chat fetch] chat model gpt-3.5
2023-12-06T18:34:07.310Z [INFO] [chat fetch] engine https://api.githubcopilot.com/chat
2023-12-06T18:34:07.310Z [INFO] [chat fetch] modelMaxTokenWindow 8192
2023-12-06T18:34:07.310Z [INFO] [chat fetch] chat model gpt-3.5
2023-12-06T18:34:08.124Z [INFO] [chat fetch] request.response: [https://api.githubcopilot.com/chat/completions], took 813 ms
2023-12-06T18:34:08.126Z [INFO] [streamMessages] message 0 returned. finish reason: [stop]
2023-12-06T18:34:08.127Z [INFO] [streamChoices] request done: headerRequestId: [959453f2-01a0-4fc4-b9be-2728c161c598] model deployment ID: [x338b9c029b38]
2023-12-06T18:34:08.615Z [INFO] [chat fetch] request.response: [https://api.githubcopilot.com/chat/completions], took 1304 ms
2023-12-06T18:34:08.618Z [INFO] [streamMessages] message 0 returned. finish reason: [stop]
2023-12-06T18:34:08.620Z [INFO] [streamChoices] request done: headerRequestId: [b411a205-0813-46ed-bcd8-e757a3de1bcb] model deployment ID: [x338b9c029b38]
2023-12-06T18:34:08.670Z [INFO] [chat fetch] request.response: [https://api.githubcopilot.com/chat/completions], took 1537 ms
2023-12-06T18:34:15.313Z [INFO] [streamMessages] message 0 returned. finish reason: [stop]
2023-12-06T18:34:15.314Z [INFO] [streamChoices] request done: headerRequestId: [ea32b584-98df-4ef2-9b31-9469a148d723] model deployment ID: [x3b0892d9e5fc]
2023-12-06T18:35:30.633Z [INFO] [chat fetch] engine https://api.githubcopilot.com/chat
2023-12-06T18:35:30.633Z [INFO] [chat fetch] modelMaxTokenWindow 8192
2023-12-06T18:35:30.633Z [INFO] [chat fetch] chat model gpt-3.5
2023-12-06T18:35:31.115Z [INFO] [chat fetch] request.response: [https://api.githubcopilot.com/chat/completions], took 481 ms
2023-12-06T18:35:31.118Z [INFO] [streamMessages] message 0 returned. finish reason: [stop]
2023-12-06T18:35:31.121Z [INFO] [streamChoices] request done: headerRequestId: [6ab589d7-d653-4060-b754-ff80a7a3aee9] model deployment ID: [x338b9c029b38]

A couple comments:

  • I cleared all the logs and this is for a single query
  • The time span in these logs is around 3 minutes; however, before the first log message appeared I waited for around 1 minute

I recorded a video as well of the interaction, in case that helps. Let me know if you want me to upload that as well.

alejoar avatar Dec 06 '23 18:12 alejoar
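As an aside, the per-request latencies in a log like the one above can be pulled out with a short script. This is just a sketch: the line format is inferred from the pasted snippet and may differ between extension versions.

```python
import re

# Matches completed-fetch lines like:
#   2023-12-06T18:32:54.923Z [INFO] [chat fetch] request.response:
#   [https://api.githubcopilot.com/chat/completions], took 2816 ms
PATTERN = re.compile(
    r"^(\S+) \[INFO\] \[chat fetch\] request\.response: \[(\S+)\], took (\d+) ms$"
)

def request_durations(log_text):
    """Return (timestamp, url, milliseconds) for every completed chat fetch."""
    return [
        (m.group(1), m.group(2), int(m.group(3)))
        for line in log_text.splitlines()
        if (m := PATTERN.match(line.strip()))
    ]
```

Pasting the Output panel contents into this shows which requests the server actually answered slowly, as opposed to time the client spent idle before logging anything.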

> A couple comments:
>
>   • I cleared all the logs and this is for a single query
>   • The time span in these logs is around 3 minutes; however, before the first log message appeared I waited for around 1 minute
>
> I recorded a video as well of the interaction, in case that helps. Let me know if you want me to upload that as well.

These logs are a bit weird. What version of the extension are you using? Does this happen all the time for you?

lramos15 avatar Dec 06 '23 20:12 lramos15

@lramos15 I'm using version v0.10.2 Preview.

Yes, it's happening all the time since the day before yesterday.

alejoar avatar Dec 06 '23 20:12 alejoar

> @lramos15 I'm using version v0.10.2 Preview.
>
> Yes, it's happening all the time since the day before yesterday.

Can you try Insiders and the latest pre-release? Then provide those logs, as they have a bit more verbosity.

lramos15 avatar Dec 06 '23 20:12 lramos15

@lramos15 I don't have time to install insiders right now, but I switched to the pre-release version.

My first interaction with Copilot worked instantly, but unfortunately the second and later interactions went back to being slow.

Here's the log for a single interaction:

2023-12-06T20:43:57.211Z [INFO] [chat fetch] engine https://api.githubcopilot.com/chat
2023-12-06T20:43:57.211Z [INFO] [chat fetch] modelMaxTokenWindow 4096
2023-12-06T20:43:57.211Z [INFO] [chat fetch] chat model gpt-4
2023-12-06T20:43:57.212Z [INFO] [chat fetch] engine https://api.githubcopilot.com/chat
2023-12-06T20:43:57.212Z [INFO] [chat fetch] modelMaxTokenWindow 4096
2023-12-06T20:43:57.212Z [INFO] [chat fetch] chat model gpt-4
2023-12-06T20:43:57.212Z [INFO] [chat fetch] engine https://api.githubcopilot.com/chat
2023-12-06T20:43:57.212Z [INFO] [chat fetch] modelMaxTokenWindow 4096
2023-12-06T20:43:57.212Z [INFO] [chat fetch] chat model gpt-4
2023-12-06T20:43:59.438Z [INFO] [chat fetch] request.response: [https://api.githubcopilot.com/chat/completions], took 2225 ms
2023-12-06T20:44:00.170Z [INFO] [chat fetch] request.response: [https://api.githubcopilot.com/chat/completions], took 2956 ms
2023-12-06T20:44:03.233Z [INFO] [chat fetch] request.response: [https://api.githubcopilot.com/chat/completions], took 6020 ms
2023-12-06T20:44:15.443Z [INFO] [streamMessages] message 0 returned. finish reason: [stop]
2023-12-06T20:44:15.444Z [INFO] [streamChoices] request done: headerRequestId: [7b71e358-0375-4e9f-8226-65611bf43742] model deployment ID: [x3b0892d9e5fc]
2023-12-06T20:44:17.070Z [INFO] [streamMessages] message 0 returned. finish reason: [stop]
2023-12-06T20:44:17.072Z [INFO] [streamChoices] request done: headerRequestId: [62d9cb81-4ce2-4f26-a387-360eb4bb023e] model deployment ID: [x3b0892d9e5fc]
2023-12-06T20:44:19.884Z [INFO] [streamMessages] message 0 returned. finish reason: [stop]
2023-12-06T20:44:19.885Z [INFO] [streamChoices] request done: headerRequestId: [0bc5817f-e6e8-4031-ad78-99c195461385] model deployment ID: [x3b0892d9e5fc]
2023-12-06T20:45:30.698Z [INFO] [chat fetch] engine https://api.githubcopilot.com/chat
2023-12-06T20:45:30.698Z [INFO] [chat fetch] modelMaxTokenWindow 8192
2023-12-06T20:45:30.698Z [INFO] [chat fetch] chat model gpt-3.5
2023-12-06T20:45:30.699Z [INFO] [chat fetch] engine https://api.githubcopilot.com/chat
2023-12-06T20:45:30.699Z [INFO] [chat fetch] modelMaxTokenWindow 8192
2023-12-06T20:45:30.699Z [INFO] [chat fetch] chat model gpt-3.5
2023-12-06T20:45:30.700Z [INFO] [chat fetch] engine https://api.githubcopilot.com/chat
2023-12-06T20:45:30.700Z [INFO] [chat fetch] modelMaxTokenWindow 8192
2023-12-06T20:45:30.700Z [INFO] [chat fetch] chat model gpt-3.5
2023-12-06T20:45:31.108Z [INFO] [chat fetch] request.response: [https://api.githubcopilot.com/chat/completions], took 404 ms
2023-12-06T20:45:31.111Z [INFO] [streamMessages] message 0 returned. finish reason: [stop]
2023-12-06T20:45:31.114Z [INFO] [streamChoices] request done: headerRequestId: [0400ee11-5d49-49bb-bb63-aab8a61dc16b] model deployment ID: [x338b9c029b38]
2023-12-06T20:45:31.151Z [INFO] [chat fetch] request.response: [https://api.githubcopilot.com/chat/completions], took 448 ms
2023-12-06T20:45:31.152Z [INFO] [streamMessages] message 0 returned. finish reason: [stop]
2023-12-06T20:45:31.153Z [INFO] [streamChoices] request done: headerRequestId: [b2e2cb98-82d5-4c5c-9a13-abbfda8869e6] model deployment ID: [x338b9c029b38]
2023-12-06T20:45:31.462Z [INFO] [chat fetch] request.response: [https://api.githubcopilot.com/chat/completions], took 758 ms
2023-12-06T20:45:31.464Z [INFO] [streamMessages] message 0 returned. finish reason: [stop]
2023-12-06T20:45:31.466Z [INFO] [streamChoices] request done: headerRequestId: [daed98a5-600f-4f63-8b6b-28dac6ff01bf] model deployment ID: [x338b9c029b38]

Again, I waited around two minutes before the first log line appeared this time.

After Copilot was done answering, it kept "Thinking..." for a while.

Everything after 2023-12-06T20:44:19.885Z showed up after Copilot was long done answering.

alejoar avatar Dec 06 '23 20:12 alejoar
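The interesting part of a log like this is often not the "took N ms" figures but the silent stretches between consecutive lines (e.g. the gap between 20:44:19.885Z and 20:45:30.698Z above). A small helper, assuming only that every line starts with an ISO-8601 timestamp as in the pasted logs, can surface those:

```python
from datetime import datetime

def parse_ts(line):
    # Each log line starts with a timestamp like 2023-12-06T20:44:19.885Z.
    return datetime.strptime(line.split()[0], "%Y-%m-%dT%H:%M:%S.%fZ")

def largest_gaps(log_text, top=3):
    """Return the longest pauses between consecutive log lines,
    as (seconds, line_before_the_pause) pairs, biggest first."""
    lines = [l for l in log_text.splitlines() if l.strip()]
    gaps = [((parse_ts(b) - parse_ts(a)).total_seconds(), a)
            for a, b in zip(lines, lines[1:])]
    return sorted(gaps, key=lambda g: g[0], reverse=True)[:top]
```

Running this over the log above would point at the minute-plus pauses where nothing was logged at all, which matches the "waited two minutes before the first line" observation.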

> I don't have time to install insiders right now, but I switched to the pre-release version.

Pre-release unfortunately doesn't work in stable. You're probably getting pretty much the same version you had before, since they're pinned to a given VS Code version. I will try to see if I can reproduce at all in the stable builds; when you do have time to install Insiders, that would be greatly appreciated.

lramos15 avatar Dec 06 '23 20:12 lramos15

@lramos15 I just installed it because I'm wasting more time without copilot anyways 😬

So with the Insiders version it works absolutely fine, it seems. I haven't tested deeply or opened any of the codebases I'm using with the stable version yet, but I might migrate to Insiders for now.

I also ONLY installed Copilot and Copilot Chat; I wonder if some of my settings or other extensions are interfering somehow?

I'll post an update here if I find the issue again on insiders. Let me know if you still want the logs even if I can't replicate with insiders.

alejoar avatar Dec 06 '23 21:12 alejoar

No need for logs if you cannot replicate. Thanks for confirming @alejoar. We should have a new version of the chat extension out by the end of the week which will have the insiders changes so hopefully that'll fix your issue.

lramos15 avatar Dec 06 '23 21:12 lramos15

If anyone still has the slowness and can provide logs, please let me know, as I believe this has been fixed.

lramos15 avatar Dec 20 '23 14:12 lramos15

This is not exclusive to VS; Copilot is extremely slow to respond even if you go through the browser, so it seems like a server-side issue.

Kobi-Blade avatar Jan 08 '24 18:01 Kobi-Blade

> This is not exclusive to VS; Copilot is extremely slow to respond even if you go through the browser, so it seems like a server-side issue.

How are you accessing Copilot through the browser?

lramos15 avatar Jan 08 '24 18:01 lramos15

> This is not exclusive to VS; Copilot is extremely slow to respond even if you go through the browser, so it seems like a server-side issue.
>
> How are you accessing Copilot through the browser?

You can also access Copilot directly on Windows 11. To answer your question directly: here.

Copilot is extremely slow to answer pretty much everywhere, so the issue is not exclusive to VS.

Kobi-Blade avatar Jan 08 '24 23:01 Kobi-Blade

Any suggestions on improving response time?

[DEBUG] [proxy-socket-factory] [2024-01-10T09:52:14.765Z] Attempting to establish connection to proxy
[DEBUG] [proxy-socket-factory] [2024-01-10T09:52:14.766Z] Socket Connect returned status code,200
[DEBUG] [proxy-socket-factory] [2024-01-10T09:52:14.766Z] Successfully established tunneling connection to proxy
[DEBUG] [getCompletions] [2024-01-10T09:53:08.474Z] Requesting completion at position 36:1, between "class PassiveMoveSourceComponent : public Echo::NodeActorComponent\r\n{" and "\r\n DECLAREACTORCOMTYPE_H(PassiveMoveSourceComponent);\r\n".
[INFO] [default] [2024-01-10T09:53:08.584Z] [fetchCompletions] engine https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex
[DEBUG] [proxy-socket-factory] [2024-01-10T09:53:08.585Z] Attempting to establish connection to proxy
[DEBUG] [proxy-socket-factory] [2024-01-10T09:53:08.586Z] Socket Connect returned status code,200
[DEBUG] [proxy-socket-factory] [2024-01-10T09:53:08.586Z] Successfully established tunneling connection to proxy
[INFO] [default] [2024-01-10T09:53:09.723Z] request.response: [https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions] took 1139 ms
[INFO] [streamChoices] [2024-01-10T09:53:09.723Z] solution 0 returned. finish reason: [stop]
[INFO] [streamChoices] [2024-01-10T09:53:09.723Z] request done: headerRequestId: [feb43cec-2b9c-4177-a6c9-81d51e018ec4] model deployment ID: [x1c5e8d1294d6]
[INFO] [RequestProposalsAsync] 0 completions available.
[DEBUG] [getCompletions] [2024-01-10T09:53:16.495Z] Requesting completion at position 39:7, between "\r\npublic:" and "\r\n virtual void init(Echo::ActorComponentInfo* info) override;\r\n".
[INFO] [default] [2024-01-10T09:53:16.599Z] [fetchCompletions] engine https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex
[DEBUG] [proxy-socket-factory] [2024-01-10T09:53:16.600Z] Attempting to establish connection to proxy
[DEBUG] [proxy-socket-factory] [2024-01-10T09:53:16.601Z] Socket Connect returned status code,200
[DEBUG] [proxy-socket-factory] [2024-01-10T09:53:16.601Z] Successfully established tunneling connection to proxy
[INFO] [default] [2024-01-10T09:53:17.578Z] request.response: [https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions] took 979 ms
[INFO] [streamChoices] [2024-01-10T09:53:17.579Z] solution 0 returned. finish reason: [stop]
[INFO] [RequestProposalsAsync] 0 completions available.
[INFO] [streamChoices] [2024-01-10T09:53:17.579Z] request done: headerRequestId: [30070293-6dc7-499f-835d-fe9abb11b19d] model deployment ID: [x1c5e8d1294d6]
[DEBUG] [proxy-socket-factory] [2024-01-10T09:53:23.517Z] Attempting to establish connection to proxy
[DEBUG] [proxy-socket-factory] [2024-01-10T09:53:23.518Z] Socket Connect returned status code,200
[DEBUG] [proxy-socket-factory] [2024-01-10T09:53:23.518Z] Successfully established tunneling connection to proxy
[DEBUG] [proxy-socket-factory] [2024-01-10T09:53:24.731Z] Attempting to establish connection to proxy
[DEBUG] [proxy-socket-factory] [2024-01-10T09:53:24.732Z] Socket Connect returned status code,200
[DEBUG] [proxy-socket-factory] [2024-01-10T09:53:24.732Z] Successfully established tunneling connection to proxy

luciferxiaozhi avatar Jan 10 '24 09:01 luciferxiaozhi

@luciferxiaozhi Those logs are from Copilot and not Copilot Chat. This repository only focuses on the chat experience.

lramos15 avatar Jan 10 '24 13:01 lramos15

I'm experiencing this issue too. It has not been fixed server-side. I used to be able to get 1000+ lines of output in a few seconds; it is now taking over an hour to get 800 lines.

ePaint avatar Jan 22 '24 03:01 ePaint

Yup, I'm about to cancel my subscription. Not only is Copilot Chat painfully slow, but most of the time it fails with the @workspace tag, saying it can't access the workspace (intermittently), and 80% of the time it gives me the wrong output anyway...

mariuslacatus avatar Feb 19 '24 19:02 mariuslacatus

> Yup, I'm about to cancel my subscription. Not only is Copilot Chat painfully slow, but most of the time it fails with the @workspace tag, saying it can't access the workspace (intermittently), and 80% of the time it gives me the wrong output anyway...

This is most likely a network problem! I use Copilot Chat at home and get quick responses, but when I use it at school, the response time is as slow as a turtle...

liaozhangsheng avatar Feb 26 '24 10:02 liaozhangsheng

Same here: Copilot stays "Thinking..." for a long time, sometimes even more than a minute. In some cases, canceling the current request and sending it again does the trick.

ZackStone avatar Mar 09 '24 22:03 ZackStone
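The "cancel and resend" workaround can be sketched generically as a bounded-wait retry: give each attempt a deadline, and if it is still pending, stop waiting and issue a fresh request. This is a sketch under stated assumptions, with `request_fn` standing in for whatever call performs the chat request; it is not how the extension itself is implemented.

```python
import concurrent.futures

def cancel_and_resend(request_fn, attempt_timeout=30.0, max_attempts=3):
    """Wait at most attempt_timeout seconds for request_fn() to answer;
    if it is still 'thinking', abandon the attempt and send it again."""
    for attempt in range(max_attempts):
        pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
        future = pool.submit(request_fn)
        try:
            return future.result(timeout=attempt_timeout)
        except concurrent.futures.TimeoutError:
            # Python threads cannot be killed; we simply stop waiting on
            # this one and issue a fresh request, like pressing cancel.
            pass
        finally:
            pool.shutdown(wait=False)
    raise TimeoutError(f"no answer after {max_attempts} attempts")
```

Note the abandoned worker thread keeps running in the background, which mirrors the UI behavior: canceling only stops waiting, the original request may still complete server-side.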

[Screenshot 2024-04-12 14:43:58]

It stopped here for 5 minutes.

And I didn't see any web request in Proxifier; is it really working for me?

[Screenshot 2024-04-12 14:44:24]

linonetwo avatar Apr 12 '24 06:04 linonetwo

@linonetwo Do you have any logs you can share via Cmd/Ctrl + Shift + U (opens the Output panel) -> GitHub Copilot Chat?

lramos15 avatar Apr 12 '24 14:04 lramos15

Copilot Studio, and mainly Copilot itself, is too slow. If it stays this slow, no one will use Copilot... do you guys agree with me?

pavanmanideep avatar May 05 '24 06:05 pavanmanideep

[Screenshot 2024-05-08 14:17:58]

@lramos15 It doesn't show a log when /fix gets stuck. I'm logged in with a paid account. Note that I'm using VS Code remote server, where the server I SSH into doesn't have good access to GitHub.


It completed after 2 minutes this time.

[Screenshot 2024-05-08 14:20:13]

linonetwo avatar May 08 '24 06:05 linonetwo

Interesting, @linonetwo. What do you mean by "doesn't have good access"? Could the API request just be slow because your server's connection is slow?

cc @aeschli

lramos15 avatar May 16 '24 15:05 lramos15

This issue has been closed automatically because it needs more information and has not had recent activity. See also our issue reporting guidelines.

Happy Coding!

Copilot is still extremely slow to respond on Visual Studio, and we already supplied multiple logs, so I don't know what more information you need to look into this.

Kobi-Blade avatar Aug 29 '24 17:08 Kobi-Blade