informer's cache is empty after `await informer.start()`
Describe the bug
We're migrating from v0.2x of this library to 1.0 (currently 1.0.0-rc7).
In the previous version we could rely on `informer.list(this.options.namespace)` returning the complete list of objects after `await informer.start()`.
With the new version, the first call returns `undefined`.
We use the add, change and delete events to react to changes to this list, and when doing so we can see the cache/informer fill up from 0 to N objects.
Is this new intended behavior, or should it be considered a bug?
Client Version
1.0.0-rc7
Server Version
v1.30.5
To Reproduce
```ts
const fn = () => {
  return client.list(
    config.apiVersionLatest,
    config.kind,
    namespace,
  );
};

const informer = k8s.makeInformer(
  kubeConfig,
  `/apis/${config.apiVersionLatest}/namespaces/${namespace}/mycrd`,
  fn,
);

await informer.start();
this.informer.list(this.options.namespace); // <- returns undefined, but I would have expected a full cache
```
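For reference, a more self-contained sketch of the same setup. It swaps our CRD for core Pods so it runs against any cluster; `coreApi` and the Pods list call are illustrative stand-ins for our custom-object list function, not our exact code:

```ts
import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault();

const coreApi = kc.makeApiClient(k8s.CoreV1Api);
const namespace = 'default';

// In 1.0.0-rc the generated clients take a single options object and resolve
// directly to the list (no `{ body }` wrapper as in 0.2x).
const listFn = () => coreApi.listNamespacedPod({ namespace });

const informer = k8s.makeInformer(kc, `/api/v1/namespaces/${namespace}/pods`, listFn);

async function main(): Promise<void> {
  await informer.start();
  // With our CRD setup this returned undefined; the expectation was a fully populated cache.
  console.log(informer.list(namespace));
}

main().catch((err) => console.error(err));
```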
Expected behavior
Being able to list all objects directly after the `informer.start()` promise resolves.
Example Code
See steps to reproduce.
Environment (please complete the following information):
- OS: Linux
- NodeJS Version: v20.18.0
- Cloud runtime: GKE
I haven't been able to reproduce the issue, but maybe I'm doing something wrong. Are you able to provide a runnable reproduction?
I don't know if the example code above is exactly your code for the bug, but I notice that you are using `namespace` when you create the informer and `this.options.namespace` when you list from the informer. Is it possible that those two values are different?
The code for the informer is here:
https://github.com/kubernetes-client/javascript/blob/release-1.x/src/cache.ts#L132
And you can see that it does in fact always list and populate the cache on create.
I take it this is not an intended change, and the issue is probably in our code. I'll get back to you when I have looked into this further.
I'll reopen the issue if I can come up with a proper repro. Thanks for the quick support!
Maybe part of my confusion lies in this line:
https://github.com/kubernetes-client/javascript/blob/release-1.x/src/cache.ts#L173
`this.indexCache` is only populated for a namespace if the list actually contains items. Could this in turn lead to this line https://github.com/kubernetes-client/javascript/blob/release-1.x/src/cache.ts#L105 returning `undefined` until the client has received a non-empty response for that namespace?
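To make sure I'm reading that right, the behaviour boils down to something like this simplified sketch (not the actual cache.ts code, just the pattern I think I'm seeing):

```ts
// Simplified sketch: if the per-namespace index only gains a key once at least
// one object has been indexed, list(namespace) returns undefined until then.
class PerNamespaceIndex<T> {
  private indexCache: { [namespace: string]: T[] } = {};

  public add(namespace: string, obj: T): void {
    const existing = this.indexCache[namespace] ?? [];
    existing.push(obj);
    this.indexCache[namespace] = existing;
  }

  public list(namespace: string): T[] | undefined {
    // No entry yet => undefined, even though the informer has started.
    return this.indexCache[namespace];
  }
}

const index = new PerNamespaceIndex<{ name: string }>();
console.log(index.list('my-namespace')); // undefined
index.add('my-namespace', { name: 'obj-1' });
console.log(index.list('my-namespace')); // [ { name: 'obj-1' } ]
```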
The second part of my confusion had to do with the order in which `addOrUpdateObject` and `this.indexObj` are called here: https://github.com/kubernetes-client/javascript/blob/release-1.x/src/cache.ts#L196
We have code which can be boiled down to this example:
```ts
informer.on('add', (obj) => {
  const list = informer.list(namespace); // <- obj will not be included in this list for 1.x, but it was in 0.2
  prometheusMetric.set('myobjs_count', list.length);
});
```
If I add an `await Promise.resolve()` line at the top of my callback, I get my expected (but maybe not desired 😛) behaviour:
```ts
informer.on('add', async (obj) => {
  await Promise.resolve();
  const list = informer.list(namespace); // <- obj is included in the list
  prometheusMetric.set('myobjs_count', list.length);
});
```
Having figured out these two cases, I think we'll be able to migrate to 1.0 successfully. Great job on the request->fetch migration 😃
FWIW, the code at HEAD in the release-1.x branch hasn't actually been released to NPM yet; it was updated in #1995, so you'll want to either clone and update to HEAD or look at an older version of the code.
It is definitely the case that `add` events will happen before the full list is populated, since `add` is called for each object received from the list operation.
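If the goal is just the gauge, one way to avoid depending on how much of the cache is populated when a given event fires is to keep the count from the events themselves. A sketch, reusing the placeholder names (`informer`, `prometheusMetric`) from the example above:

```ts
// Maintain the count from the add/delete events instead of calling
// informer.list() inside the handlers.
let count = 0;

informer.on('add', () => {
  count += 1;
  prometheusMetric.set('myobjs_count', count);
});

informer.on('delete', () => {
  count -= 1;
  prometheusMetric.set('myobjs_count', count);
});
```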
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
> Please send feedback to sig-contributor-experience at kubernetes/community.
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.