Can't add middleware without passing the entire Configuration Object
**Describe the bug**
When trying to add a `PromiseMiddleware` to alter the result (post-query), I receive the following error message:

> You need to define an absolute base url for the server
**Client Version**: 1.0.0
**To Reproduce**
Run the provided TypeScript snippet below.
**Expected behavior**
When no configuration is passed to `readNamespace` at all, reading the namespace information works fine. Since I do not change the configuration of the target server or the authentication in any way, adding a middleware via the `createConfiguration` method should not change those values. The resulting config should be the same config that would be used when no explicit config is passed, with the sole addition of the middleware.
**Example Code**
```typescript
import {
    Configuration,
    CoreV1Api,
    createConfiguration,
    KubeConfig,
    PromiseMiddleware,
    PromiseMiddlewareWrapper,
} from '@kubernetes/client-node';

const func = async () => {
    const kc = new KubeConfig();
    kc.loadFromDefault();
    const coreApi = kc.makeApiClient(CoreV1Api);
    const getNamespaceInformation = (namespace: string) => {
        // Pass-through middleware; the result alteration would happen in post().
        const promiseMiddleware: PromiseMiddleware = {
            pre: (c) => Promise.resolve(c),
            post: (c) => Promise.resolve(c),
        };
        const middleware = new PromiseMiddlewareWrapper(promiseMiddleware);
        const config: Configuration = createConfiguration({
            middleware: [middleware],
        });
        return coreApi.readNamespace(
            {
                name: namespace,
            },
            config
        ); // throws
    };
    const namespaceDetails = await getNamespaceInformation('dev'); // throws
    console.log(namespaceDetails);
};
func();
```
**Environment (please complete the following information):**
- OS: Windows (WSL)
- Node.js version: v22.14.0
- Cloud runtime: minikube
Thanks for the report and the reproduction, we'll try to validate and figure out what is going wrong.
There is a known issue in v1.x around configuration being ignored inside the code generator: https://github.com/kubernetes-client/javascript/issues/2160#issuecomment-2620169494. Is it possible that this is related to that?
v1.1.0 is out: https://www.npmjs.com/package/@kubernetes/client-node/v/1.1.0

It should include fixes to make progress on this issue.
@JanPfenning does v1.1.2 fix this issue for you?
I looked into this, and the cause is:

1. The user code creates a new `Configuration` object via the call to `createConfiguration()`. The new object looks like this:

   ```
   {
     baseServer: ServerConfiguration { url: '', variableConfiguration: {} },
     httpApi: IsomorphicFetchHttpLibrary {},
     middleware: [],
     authMethods: {}
   }
   ```

2. This new configuration is passed to `coreApi.readNamespace()`.
3. In the library code, we eventually end up in `readNamespaceWithHttpInfo()` in `gen/types/ObservableAPI.js`.
4. In that method, the config to use is computed as shown below (`_options` is the object from step 1):

   ```js
   if (_options) {
       _config = {
           baseServer: _options.baseServer || this.configuration.baseServer,
           httpApi: _options.httpApi || this.configuration.httpApi,
           authMethods: _options.authMethods || this.configuration.authMethods,
           middleware: allMiddleware || this.configuration.middleware
       };
   }
   ```

5. The `baseServer` always ends up as the object from step 1, which has an empty URL, which eventually leads to the URL error seen.
I'm guessing the expected behavior is either:

1. The user provides a fully populated configuration object. This seems a bit tedious, though.
2. The config merging logic in step 4 above better handles the empty `baseServer` object (see the sketch after this list). If this is the correct behavior, then it will need to be fixed in the code generator.
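To illustrate option 2, here is a minimal sketch of what the merge could look like. The `ServerConfig`/`Config` types and the `mergeConfig` helper are stand-ins of my own, not the generated code; the real fix would live in the code generator's templates.

```typescript
// Minimal local types standing in for the generated ones; this is a
// hypothetical sketch of the proposed merge behavior, not library code.
interface ServerConfig {
    url: string;
}

interface Config {
    baseServer: ServerConfig;
    httpApi: unknown;
    authMethods: Record<string, unknown>;
    middleware: unknown[];
}

// Prefer the caller's baseServer only when it names a real URL; an empty
// url is the placeholder that createConfiguration() fills in, so fall
// back to the client's default server in that case.
function mergeConfig(defaults: Config, overrides?: Partial<Config>): Config {
    if (!overrides) {
        return defaults;
    }
    const callerServer = overrides.baseServer;
    return {
        baseServer:
            callerServer && callerServer.url !== '' ? callerServer : defaults.baseServer,
        httpApi: overrides.httpApi ?? defaults.httpApi,
        authMethods: overrides.authMethods ?? defaults.authMethods,
        middleware: overrides.middleware ?? defaults.middleware,
    };
}
```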
I would argue that option 2 is the desired behavior, honestly.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
Is there any workaround for this problem?
@feloy you can use the `Configuration.middlewareMergeStrategy` property to accomplish this by passing a partial configuration, available from v1.1.2 on.
For example:

```typescript
const config = {
    middleware: [middleware],
    middlewareMergeStrategy: 'append' as const,
};
coreApi.readNamespace(
    {
        name: namespace,
    },
    config
);
```
There have also been improvements to the merge logic to optimize this use case, which address the particular base URL issue mentioned here: https://github.com/OpenAPITools/openapi-generator/pull/20430/files#diff-389f7bcbc49f2f04d1f5f13369be5b4505cc07209b396682467b5420da5be952R96
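Putting it together with the original reproduction, a minimal end-to-end sketch of the workaround might look like this (assuming `@kubernetes/client-node` v1.1.2+, where `readNamespace` accepts a partial configuration as described above; the pass-through middleware bodies are placeholders for whatever post-query alteration you need):

```typescript
import {
    CoreV1Api,
    KubeConfig,
    PromiseMiddleware,
    PromiseMiddlewareWrapper,
} from '@kubernetes/client-node';

const main = async () => {
    const kc = new KubeConfig();
    kc.loadFromDefault();
    const coreApi = kc.makeApiClient(CoreV1Api);

    // Pass-through middleware; alter the response in post() as needed.
    const promiseMiddleware: PromiseMiddleware = {
        pre: (request) => Promise.resolve(request),
        post: (response) => Promise.resolve(response),
    };

    // Only the middleware and the merge strategy are supplied; with
    // 'append' the client keeps its default baseServer and auth methods
    // and appends this middleware to the default middleware chain.
    const namespaceDetails = await coreApi.readNamespace(
        { name: 'dev' },
        {
            middleware: [new PromiseMiddlewareWrapper(promiseMiddleware)],
            middlewareMergeStrategy: 'append' as const,
        }
    );
    console.log(namespaceDetails.metadata?.name);
};

main();
```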
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.