
[RED-16] Slow intellisense in VSCode

Open dutzi opened this issue 3 years ago • 25 comments

Hey 👋

I'm trying to figure out what slows our VSCode intellisense (TS-backed code auto-complete) down and may have found the main cause.

Not 100% sure whether it's RTK Query (createApi) or not, but I figured it's a decent-enough finding that I may as well share it.

Where I work we have a pretty hefty React/RTK app (not OSS 😞), and we've been dealing with slow (not unbearable, but annoying) response times from VSCode IntelliSense (it feels like >1s until the suggestions list shows up, but looking at the TS Server logs it's probably ~800ms).

I tried a few things, eventually landed on this:

If I any-fy the call to createApi, the TS Server logs report that completionInfo (which is in charge of computing the list of suggested items that show up in VSCode's autocomplete) drops from 840ms to 122ms.

Here's a video from before the change (note how long it takes from the time I hit . to when the suggestions appear):

https://user-images.githubusercontent.com/927310/221375943-1547b820-7f19-40b8-933a-0269d4983faa.mp4

Here it is when I make the following change:

export const api = createApi({

To:

export const api = (createApi as any)({

https://user-images.githubusercontent.com/927310/221375968-ad185389-96b7-4146-98da-c4e072e283ca.mp4


dutzi avatar Feb 25 '23 19:02 dutzi

To be honest, we've never tried to do any kind of perf measurements for how long our TS typings take to analyze.

I've only ever seen a couple people even try to do that.

It's something we can look at eventually, but I don't think there's anything we can immediately do.

markerikson avatar Feb 25 '23 19:02 markerikson

@Andarist if you've got any suggestions for analyzing this or improving our types perf, I'm interested :)

markerikson avatar Feb 25 '23 19:02 markerikson

I started examining this after reading this post.

I think it's a good place to start (check out the comment section, it has some interesting discussion with useful links).

Or, tl;dr:

  • TS Wiki on Performance Tracing
  • A better tool to inspect TSC's trace.json

Anyhow, I'll try helping!

dutzi avatar Feb 25 '23 19:02 dutzi

Those performance tracings are quite good - I've used them at least a couple of times to get more insight into things. I could take a look at this if you share your trace.

Andarist avatar Feb 25 '23 21:02 Andarist

Hi there & thanks for opening this issue @dutzi. I'm happy to have come across it, as it confirms my own testing.

We're using RTK Query with the OpenAPI code generator, resulting in about 7000 lines of generated endpoint definitions. I can fully reproduce your observations with VSCode IntelliSense population being significantly slow (1-3s). Changing the API type to any as described above immediately 'solves' the issue.

Unfortunately, I'm lacking the knowledge to provide helpful input here, but I'll be monitoring the issue and happy to help with triage.

joshuajung avatar Apr 20 '23 06:04 joshuajung

Maybe it's not directly connected, but I also experienced a performance degradation (type completion) in a medium-sized project (using @rtk-query/codegen-openapi).

In our case we found the culprit to be multiple calls to .enhanceEndpoints(). After refactoring the code to use it only once in the whole application, performance was back to expected levels.

bschuedzig avatar Jun 12 '23 08:06 bschuedzig

createApi seems painfully slow! A cascade starting from EndpointBuilder takes ~652ms by itself.

(percentages relative to EndpointBuilder)
EndpointBuilder: 652ms (100%)
> QueryExtraOptions: 575ms (88%)
1> QueryTypes: 552ms (84%)
2> QueryCacheLifecycleApi: 525ms (81%)
3> QueryDefinition: 425ms (65%)
4> QueryLifecycleApi: 399ms (61%)
5> RootState: 225ms (35%)
6> MutationState: 212ms (33%)
7> BaseMutationSubState: 207ms (32%)
8> MutationDefinition: 206ms (32%)
9> MutationExtraOptions: 193ms (30%)
10> MutationTypes: 184ms (28%)

11> MutationCacheLifecycleApi: 92ms (14%)
11> MutationLifecycleApi: 74ms (11%)

I'll do some experimentation to see if I can find a root cause in our usage of createApi, or if it's just that createApi is an inherently expensive operation. Unfortunately we're not OSS, but I can disclose that the createApi call does not use enhanceEndpoints (directly), but consists of

  • createApi with a reducerPath, baseQuery (fetchBaseQuery), endpoint builder (two builder.mutation calls with a single query defined in each) and nothing else. Otherwise the createApi call references just simple types (two interfaces defined without recursive steps, e.g. simply export type Dummy = { object: string; variables?: string; })

This is on latest (1.9.5) with TypeScript 5.1.6

ConcernedHobbit avatar Jul 17 '23 15:07 ConcernedHobbit

Hey, I've also noticed a very significant TS IntelliSense slowdown originating from createApi. As a temporary workaround I've been any-fying it while working on anything non-RTK related.

Here's a minimal repro https://github.com/rrebase/rtk-query-ts-perf with a few endpoints generated from an open-api schema highlighting the types perf issue. Hopefully someone with experience with the complex types of this codebase can pinpoint the improvement areas from the traces.

Referencing https://github.com/microsoft/TypeScript/issues/34801 as the investigations in MUI about similar TS perf issues might be useful. Personally, I'd always take the trade-off of faster perf over correctness if we're faced with a choice.

rrebase avatar Jul 29 '23 13:07 rrebase

We'd like to use RTK Query with the generated react hooks, but with our roughly 400 endpoints (using a custom queryFn) the performance of TypeScript is so dramatically impacted that I'm afraid it's not usable. In IntelliJ, the autocompletion on the "api" object will run for minutes without returning a suggestion list.

mpressmar avatar Aug 14 '23 15:08 mpressmar

I think I made some progress on this issue, at least for our specific setup.

In our project, we are already using .enhanceEndpoints() only once, to add caching behavior (providesTags/invalidatesTags). Still, @bschuedzig's comment got me thinking whether it wouldn't be possible to short-circuit types from the unenhanced API to the enhanced version. This does not come with any side effects for our case, as our caching enhancements do not modify the API typings in any way.

So here is what I did (only the last line changed):

// The empty API
const emptyApi = createApi({
  reducerPath: "api",
  baseQuery: baseQuery,
  tagTypes: [],
  endpoints: () => ({}),
});

// The API with injected endpoints, as generated by code generation
const generatedApi = emptyApi.injectEndpoints({
  endpoints: (build) => ({
    // (generated endpoints)
  })
});

// Our final API object, as we use it in the project
export const api = generatedApi.enhanceEndpoints({
  // (caching enhancements)
}) as any as typeof generatedApi; // <-- This part did the trick ⚡️

For our project, this improved IntelliSense performance by about 70%, which makes the difference between slow and unusable. Delay until IntelliSense for a regular Query Hook shows up is now down to around 2-3 seconds – still hurts, but no longer enough to grab a coffee. Hope this helps some of you folks as well.

joshuajung avatar Aug 16 '23 11:08 joshuajung

No idea when we'll ever have time to look into this, but slapping a link here for later reference. Tanner Linsley just pointed to some TS perf debugging resources:

  • https://twitter.com/tannerlinsley/status/1694073987255636342

And some more here:

  • https://twitter.com/mattpocockuk/status/1696541381546696735
  • https://twitter.com/aleksandrasays/status/1696543862762713278
  • https://github.com/beerose/tsc-diagnostics-diff-action

markerikson avatar Aug 22 '23 20:08 markerikson

@ConcernedHobbit : how did you generate that step percentage perf information?

markerikson avatar Aug 22 '23 20:08 markerikson

@ConcernedHobbit : how did you generate that step percentage perf information?

I used tsc with the --generateTrace flag and manually took a look at it in the Perfetto.dev web-app.

ConcernedHobbit avatar Aug 23 '23 07:08 ConcernedHobbit

Hey, I think I made some progress here.

I created a small repo that reproduces the issue https://github.com/dutzi/rtk-ts-perf.

Check out this video that demos it https://cln.sh/bHBsDGGm.

I tried playing a bit with the typings, and noticed that if I edit ./node_modules/@reduxjs/toolkit/dist/query/react/module.d.ts removing HooksWithUniqueNames in line 23:

} & HooksWithUniqueNames<Definitions>;

Change to:

}

I get instant intellisense.

I didn't have enough time to improve the utility type, but hope this helps move this forward.

dutzi avatar Sep 18 '23 08:09 dutzi

If you want to get accurate timings for IntelliSense, open the Command Palette in VSCode and choose "TypeScript: Open TS Server log".

It might ask you to enable TS logging; approve, and then copy the path to the tsserver.log file that just opened (Command Palette → "File: Copy Path of Active File").

Now start a terminal and run:

tail -f "<path_to_tsserver.log>" | grep "completionInfo: elapsed time"

dutzi avatar Sep 18 '23 08:09 dutzi

We did some perf analysis last night and confirmed that HooksWithUniqueNames seems to be the biggest time sink. We think it has to do with the way this gets handled as a distributive check, which ends up producing a union of N individual types (later converted using UnionToIntersection):

export type HooksWithUniqueNames<Definitions extends EndpointDefinitions> =
  keyof Definitions extends infer Keys
    ? Keys extends string
      ? Definitions[Keys] extends { type: DefinitionType.query }
        ? {
            [K in Keys as `use${Capitalize<K>}Query`]: UseQuery<
              Extract<Definitions[K], QueryDefinition<any, any, any, any>>
            >
          } &
            {
              [K in Keys as `useLazy${Capitalize<K>}Query`]: UseLazyQuery<
                Extract<Definitions[K], QueryDefinition<any, any, any, any>>
              >
            }
        : Definitions[Keys] extends { type: DefinitionType.mutation }
        ? {
            [K in Keys as `use${Capitalize<K>}Mutation`]: UseMutation<
              Extract<Definitions[K], MutationDefinition<any, any, any, any>>
            >
          }
        : never
      : never
    : never

Here's a flame graph of the perf from dutzi's example with 2.0-beta.2:


We've got a PR up in https://github.com/reduxjs/redux-toolkit/pull/3767 that rewrites it to do 3 mapped object types - one each for queries, lazy queries, and mutations:

export type HooksWithUniqueNames<Definitions extends EndpointDefinitions> = {
  [K in keyof Definitions as Definitions[K] extends {
    type: DefinitionType.query
  }
    ? `use${Capitalize<K & string>}Query`
    : never]: UseQuery<
    Extract<Definitions[K], QueryDefinition<any, any, any, any>>
  >
} &
  {
    [K in keyof Definitions as Definitions[K] extends {
      type: DefinitionType.query
    }
      ? `useLazy${Capitalize<K & string>}Query`
      : never]: UseLazyQuery<
      Extract<Definitions[K], QueryDefinition<any, any, any, any>>
    >
  } &
  {
    [K in keyof Definitions as Definitions[K] extends {
      type: DefinitionType.mutation
    }
      ? `use${Capitalize<K & string>}Mutation`
      : never]: UseMutation<
      Extract<Definitions[K], MutationDefinition<any, any, any, any>>
    >
  }

This appears to be a major improvement! Running a perf check against that same example, the main blocking section drops from 2600ms to 1000ms (still a long time, but 60% better!):


Could folks try out that PR and let us know how much of an improvement it feels like in practice? You can install it from the CodeSandbox CI build here:

  • https://ci.codesandbox.io/status/reduxjs/redux-toolkit/pr/3767/builds/423641

Note that the PR is against our v2.0-integration branch, so it will involve an upgrade, but I'm happy to have us backport that to 1.9.x as well.

markerikson avatar Oct 02 '23 15:10 markerikson

here's a version backported to v1.9.x: https://github.com/reduxjs/redux-toolkit/pull/3769

EskiMojo14 avatar Oct 02 '23 16:10 EskiMojo14

Just published https://github.com/reduxjs/redux-toolkit/releases/tag/v1.9.7 with those changes!

There's more work we can do to investigate this, but wanted to get that out given that it's a significant improvement.

Please let us know how it works out!

markerikson avatar Oct 04 '23 22:10 markerikson

Just published https://github.com/reduxjs/redux-toolkit/releases/tag/v1.9.7 with those changes! (...) Please let us know how it works out!

Very neat, thank you! For our code base and using a naive stopwatch test, this resulted in another ~50% lag reduction.

When I combine this with my hacky workaround described in https://github.com/reduxjs/redux-toolkit/issues/3214#issuecomment-1680462843, I'm now down to a quarter of the original lag, which is very enjoyable.

joshuajung avatar Oct 05 '23 08:10 joshuajung

Given that we did just speed up the RTKQ hooks types, I'm going to say this is sufficiently improved for 2.0. We can do more perf testing post-2.0, but in the interest of getting 2.0 wrapped up I'm going to move this out of the 2.0 milestone and not spend any further time on this until 2.0 is out the door.

markerikson avatar Oct 19 '23 13:10 markerikson

I stumbled upon this whilst researching why my IntelliSense had become unusably slow, so I thought I'd give an update: the any trick still makes a huge difference. I haven't got any metrics, but it's quite obvious from the video.

https://github.com/user-attachments/assets/f0cfcfd6-1125-4052-b516-ec23da8698f0

I'm sure polymorphBaseQuery along with the 5000 lines of auto-generated types is not helping either.

The only problem with the any trick is that everything is red now :(

"@reduxjs/toolkit": "^2.2.5",
"@rtk-query/graphql-request-base-query": "^2.3.1",

Innders avatar Aug 01 '24 07:08 Innders

@Innders have you considered splitting that generated api into multiple files?

phryneas avatar Aug 01 '24 08:08 phryneas

@Innders have you considered splitting that generated api into multiple files?

I have not, but I definitely could try! But would that make a difference to TS performance?

Innders avatar Aug 01 '24 08:08 Innders

Definitely.

phryneas avatar Aug 02 '24 07:08 phryneas

Tagging in another report of slow perf for later reference:

  • https://github.com/reduxjs/redux-toolkit/discussions/4683

markerikson avatar Oct 30 '24 16:10 markerikson

Hello! Thanks a lot for all the improvements you've made to the type performance so far. I've run into the same issue in a large proprietary RTK project.

I have a minimal reproduction for versions:

    "@reduxjs/toolkit": "^2.11.2",
    "typescript": "^5.9.3"

With very little code:

import { createApi, fakeBaseQuery } from "@reduxjs/toolkit/query/react";

export const apiClientSplitApi = createApi({
  baseQuery: fakeBaseQuery(),
  endpoints: (builder) => {
    const endpoints = {
      a: builder.query<null, void>({
        queryFn: async () => ({ data: null }),
      }),
      b: builder.query<string, void>({
        queryFn: async () => ({ data: "h" }),
      }),
    };
    return endpoints;
  },
});

Basically any sort of endpoints definition or injectEndpoints definition results in about ~300ms of check time. That doesn't sound too bad, but it scales linearly with the number of createApi calls (or unique response schemas), and it almost always ends up on the critical path for producing the RootState type, which is used all across our codebase. With just 4 of these we're already looking at over a second of waiting for types due to RTK Query alone. This is also the main issue preventing us from using rtk-query more widely across our systems.

Worse yet, even this simple example seems to trigger a silent/hidden recursiveTypeRelatedTo_DepthLimit TypeScript error. From my understanding, that means some parts of the types might not be fully checked, and it hints at some sort of runaway recursion. Because of this, I think there are probably hints we could provide to TypeScript that would make the typecheck in this case much faster.

I've attempted to dig into the types within RTK Query to try to fix, or at least better understand, the issue, but I've only got as far as realizing that removing the intersection with QueryExtraOptions here seems to make the typecheck fast again. Of course that's not a viable solution, so I'm asking if you've got any tips or areas I should explore, as I'd love to see this issue resolved.

I'm also attaching a zip file with a trace and cpuprofile you can open on https://ui.perfetto.dev/ trace.zip

akaltar avatar Dec 17 '25 12:12 akaltar

@akaltar thanks for the repro!

Couple thoughts without having looked yet:

  • Yeah, there are most likely some additional improvements we could make to our types that would make things faster
  • Dimitri Mitropoulos's new TypeSlayer tool might be a useful investigation resource here: https://www.youtube.com/watch?v=IP6EZXzXBzY
  • But also: TS 7.0 (tsgo) might essentially make this whole problem moot by being so much faster. To be clear, I'm not saying we won't try to make further improvements :) But if TS itself is much faster, the apparent slowness may mostly go away.

markerikson avatar Dec 17 '25 16:12 markerikson

Thank you for the info. For reference, TypeSlayer was the tool that prompted my investigation in the first place, and I used it while trying to fix the issue. It has some great views showing which types are being compared and which take long; I'm just lacking examples of what a typical type optimization looks like in TypeScript. We're already using the tsgo preview, and while it makes a huge difference, if we fully adopted RTK Query in our large project I believe it still wouldn't be fast enough. I'm also quite concerned about the type limits being hit, as I'm not sure exactly what the consequences are, but I've seen cases in our codebase where overly complex types caused things to be inferred as any, resulting in bugs that we missed.

akaltar avatar Dec 17 '25 17:12 akaltar

if we fully adopted RTK query in our large project

What does large mean? We are using roughly 200 generated endpoints with RTK query and it mostly holds up. This is without tsgo.

Innders avatar Dec 17 '25 20:12 Innders

What does large mean? We are using roughly 200 generated endpoints with RTK query and it mostly holds up. This is without tsgo.

We currently have ~11 injectApi calls with about ~60 endpoints total, and it's already the slowest part of type checking. I think if we would fully adopt RTKQ, we'd likely have ~90 injectApi calls with roughly 10 endpoints each totaling around 900 endpoints.

I've further narrowed down the type perf issue to this seemingly super simple operation:

import {
  DefinitionType,
  type BaseQueryFn,
  type QueryDefinition,
} from "@reduxjs/toolkit/query/react";

const a: QueryDefinition<
  void,
  BaseQueryFn<any, unknown, unknown, {}, {}>,
  never,
  null,
  "api",
  unknown
> = {
  type: DefinitionType.query,
  queryFn: async () => ({ data: null }),
};

const endpoint = a satisfies QueryDefinition<any, any, any, any, any, any>;

And I've found a way to fix the performance by explicitly specifying the variance of type parameters in just a couple of types: MutationExtraOptions, QueryLifecycleApi, QueryCacheLifecycleApi, and EndpointBuilder. While trying to create a PR, however, I noticed that this breaks types in all sorts of ways, so I'll keep digging in case I figure out a way to do it that actually works.
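For readers unfamiliar with them, here's a self-contained sketch of the TS 4.7+ variance annotations being referred to (the types are illustrative, not RTK Query's actual internals). Annotating a type parameter with in or out lets the checker compare two instantiations by the annotation alone, instead of re-expanding and structurally comparing every member:

```typescript
// A type that only consumes T is contravariant in T.
interface Consumer<in T> {
  consume: (value: T) => void
}

// A type that only produces T is covariant in T.
interface Producer<out T> {
  produce: () => T
}

// Contravariance: a Consumer of a wider type works where a
// Consumer of a narrower type is expected.
const logAnything: Consumer<unknown> = { consume: (v) => void v }
const logString: Consumer<string> = logAnything // ok

// Covariance: a Producer of a narrower type works where a
// Producer of a wider type is expected.
const makeString: Producer<string> = { produce: () => "hi" }
const makeAnything: Producer<unknown> = makeString // ok

logString.consume("hello")
console.log(makeAnything.produce())
```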

akaltar avatar Dec 18 '25 14:12 akaltar