client-node 1.2.0 ignores kubeconfig in VS Code extension
Issue Description
After upgrading from @kubernetes/client-node 0.22.3 to 1.2.0 in a VS Code extension, all API calls are being sent without authentication headers. As a result, requests are processed as system:anonymous and fail with 403 Forbidden errors. The exact same code works correctly with version 0.22.3.
Reproduction
1. Create a minimal VS Code extension that:
   - Loads the user’s kubeconfig using `KubeConfig.loadFromDefault()`
   - Creates an API client with `kc.makeApiClient(k8s.CoreV1Api)`
   - Makes a simple call to `listNamespace()`
2. Run the extension against a cluster where the kubeconfig user has permission to list namespaces.
Behavior:

- 0.22.3: returns the list of namespaces as expected.
- 1.2.0: the server responds with:

```json
{ "message": "namespaces is forbidden: User \"system:anonymous\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" }
```
Debugging Steps Taken
- Verified the kubeconfig is loaded correctly via logs (context, cluster, user information present).
- Sanitized and printed the exported config to confirm credentials are present.
- Added debug middleware to log HTTP requests, confirming requests are sent to the correct server but without authentication headers.
- Tested both direct API client creation and the `makeApiClient()` method.
- Confirmed that rolling back to 0.22.3 with identical code resolves the issue.
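The debug middleware mentioned above can be approximated with a small wrapper around the global `fetch`, which the 1.x client uses for HTTP. This is a sketch for debugging only; `installFetchLogger` is an illustrative helper, not part of `@kubernetes/client-node`:

```typescript
// Sketch: log method, URL, and headers of every outgoing fetch request.
// installFetchLogger is a hypothetical debugging helper, not a library API.
type FetchFn = typeof globalThis.fetch;

function installFetchLogger(
  log: (line: string) => void = console.log
): () => void {
  const original: FetchFn = globalThis.fetch;
  globalThis.fetch = (async (input: RequestInfo | URL, init?: RequestInit) => {
    const url =
      typeof input === 'string' || input instanceof URL ? String(input) : input.url;
    const headers: Record<string, string> = {};
    new Headers(init?.headers).forEach((value, name) => {
      // Redact credentials so the log stays safe to share.
      headers[name] = name.toLowerCase() === 'authorization' ? '<redacted>' : value;
    });
    log(`[HTTP DEBUG] ${init?.method ?? 'GET'} ${url} ${JSON.stringify(headers)}`);
    return original(input, init);
  }) as FetchFn;
  // Return a function that restores the unpatched fetch.
  return () => {
    globalThis.fetch = original;
  };
}
```

With 1.2.0, a wrapper like this shows only an `Accept` header on requests to the API server; no `Authorization` header or client certificate is attached.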
Environment
- TypeScript: 5.8.3
- ts-loader: 9.5.2
- VS Code extension: bundled with webpack
- Node.js: v22.16.0
- tsconfig.json: targets ES6 modules with `esModuleInterop` enabled
- Webpack config:

```js
const path = require('path');
const webpack = require('webpack');

module.exports = {
  target: 'node',
  entry: './src/extension.ts',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'extension.js',
    libraryTarget: 'commonjs2',
    devtoolModuleFilenameTemplate: '../[resource-path]'
  },
  devtool: 'source-map',
  externals: {
    vscode: 'commonjs vscode',
    bufferutil: 'commonjs bufferutil',
    'utf-8-validate': 'commonjs utf-8-validate'
  },
  plugins: [new webpack.IgnorePlugin({ resourceRegExp: /^electron$/ })],
  resolve: { extensions: ['.ts', '.js', '.json'] },
  module: {
    rules: [{ test: /\.ts$/, exclude: /node_modules/, use: 'ts-loader' }]
  },
  node: { __dirname: false }
};
```
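Since the only differences from a plain Node run are VS Code and the webpack bundle, one way to test whether bundling itself is the culprit (an assumption, not a confirmed fix) is to leave the library unbundled and resolve it from `node_modules` at runtime:

```javascript
// Diagnostic sketch: mark @kubernetes/client-node as a commonjs external so
// webpack leaves the require() in place and the real package is loaded from
// node_modules at runtime. Assumes node_modules ships with the extension
// (i.e. is not excluded by .vscodeignore). Illustrative, not a confirmed fix.
module.exports = {
  // ...same config as above...
  externals: {
    vscode: 'commonjs vscode',
    bufferutil: 'commonjs bufferutil',
    'utf-8-validate': 'commonjs utf-8-validate',
    '@kubernetes/client-node': 'commonjs @kubernetes/client-node',
  },
};
```

If the externalized build attaches authentication headers correctly, that would point at the bundling step rather than the library itself.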
Question
Is there a breaking change in the authentication handling between versions 0.22.3 and 1.2.0 that requires additional configuration when used in a webpack-bundled VS Code extension? How can I ensure the client includes authentication headers in requests?
Relevant Code
```typescript
import * as vscode from 'vscode';
import * as k8s from '@kubernetes/client-node';

export function activate(context: vscode.ExtensionContext) {
  const cmd = vscode.commands.registerCommand('minimal-k8s.listNamespaces', async () => {
    // Load kubeconfig (from KUBECONFIG or ~/.kube/config)
    const kc = new k8s.KubeConfig();
    kc.loadFromDefault();

    // Create CoreV1 API client
    const api = kc.makeApiClient(k8s.CoreV1Api);

    // List namespaces
    try {
      const res = await api.listNamespace();
      const names = res.body.items.map(ns => ns.metadata?.name).filter(Boolean) as string[];
      vscode.window.showInformationMessage(`Namespaces: ${names.join(', ')}`);
    } catch (err: any) {
      vscode.window.showErrorMessage(`Failed to list namespaces: ${err.message}`);
    }
  });
  context.subscriptions.push(cmd);
}

export function deactivate() {}
```
```
=== MINIMAL K8S EXTENSION ACTIVATED ===
=== STARTING NAMESPACE LIST OPERATION ===
Time: 2025-05-25T10:47:49.709Z
--- STEP 1: Getting kubeconfig path ---
[getKubeconfigPath] Determining kubeconfig path
[getKubeconfigPath] useWsl: false
[getKubeconfigPath] configured kubeconfigPath: undefined
[getKubeconfigPath] env KUBECONFIG: undefined
[getKubeconfigPath] defaulting kubeconfig path to /root/.kube/config
[getKubeconfigPath] final kubeconfig path: /root/.kube/config
Kubeconfig path result: {
  "pathType": "host",
  "hostPath": "/root/.kube/config"
}
--- STEP 2: Loading kubeconfig ---
[getKubeconfigPath] …same as above…
[loadKubeconfig] Using kubeconfig path type: host
[loadKubeconfig] Loading from host path: /root/.kube/config
[loadKubeconfig] Kubeconfig loaded from file
[loadKubeconfig] Kubeconfig loading finished
Kubeconfig loaded successfully
Current context: <redacted-context>
Current cluster: { "server": "<redacted-url>", "skipTLSVerify": false }
Current user: <redacted-user>
Sanitized kubeconfig:
{
  "apiVersion": "v1",
  "kind": "Config",
  "clusters": [
    {
      "name": "<redacted-cluster-1>",
      "cluster": {
        "server": "<redacted-url-1>",
        "certificate-authority-data": "<redacted>"
      }
    },
    {
      "name": "<redacted-cluster-2>",
      "cluster": {
        "server": "<redacted-url-2>",
        "certificate-authority-data": "<redacted>"
      }
    }
  ],
  "users": [
    {
      "name": "<redacted-user-1>",
      "user": {
        "client-certificate-data": "<redacted>",
        "client-key-data": "<redacted>"
      }
    },
    {
      "name": "<redacted-user-2>",
      "user": {
        "client-certificate-data": "<redacted>",
        "client-key-data": "<redacted>"
      }
    }
  ],
  "contexts": [
    { "name": "<redacted-context-1>", "context": { "cluster": "<redacted-cluster-1>", "user": "<redacted-user-1>" } },
    { "name": "<redacted-context-2>", "context": { "cluster": "<redacted-cluster-2>", "user": "<redacted-user-2>" } }
  ],
  "current-context": "<redacted-context-2>"
}
--- STEP 3: Creating K8s API client ---
K8s API client created
--- API Request Configuration ---
Server URL: <redacted-url>
Skip TLS Verify: false
Current User: <redacted-user>
--- STEP 4: Making API call to list namespaces ---
API Endpoint: GET /api/v1/namespaces
[HTTP DEBUG][FETCH REQUEST] GET <redacted-url>/api/v1/namespaces
[HTTP DEBUG][FETCH HEADERS] { "Accept": "application/json, */*;q=0.8" }
=== ERROR OCCURRED ===
Error type: ApiException
Error message: HTTP-Code: 403
Body:
{
  "kind": "Status",
  "apiVersion": "v1",
  "status": "Failure",
  "message": "namespaces is forbidden: User \"system:anonymous\" cannot list resource \"namespaces\" at the cluster scope",
  "reason": "Forbidden",
  "details": { "kind": "namespaces" },
  "code": 403
}
Headers: {
  "audit-id": "15de250f-789c-4e32-bca5-98be40c7d436",
  "cache-control": "no-cache, private",
  "content-type": "application/json",
  "date": "Sun, 25 May 2025 10:47:49 GMT",
  "x-content-type-options": "nosniff",
  "x-kubernetes-pf-flowschema-uid": "93670c13-730e-48cc-bffa-f3202ef1c425",
  "x-kubernetes-pf-prioritylevel-uid": "b6efe219-bd1c-4811-94dc-105d6ae5eb30"
}
Stack trace (trimmed):
at CoreV1ApiResponseProcessor.listNamespaceWithHttpInfo …/CoreV1Api.js:15325
at processTicksAndRejections …/internal/process/task_queues:95
```
I don't think there should be any breaking changes. Do you know what the specific kubeconfig looks like?
It happens with any config; see the sanitized config in the comment above. As I said, the config is read and valid, but it is not used for the API call.
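To rule out a malformed config object, here is a quick stdlib-only check I can run over the exported config (shape as in the sanitized dump above; `usersHaveCredentials` is an illustrative helper, not a library API):

```typescript
// Sketch: verify every user entry in an exported kubeconfig object carries
// some credential (client cert, token, exec plugin, or auth provider).
interface KubeUser {
  name: string;
  user?: Record<string, unknown>;
}

function usersHaveCredentials(config: { users?: KubeUser[] }): boolean {
  const credentialKeys = [
    'client-certificate-data',
    'client-key-data',
    'token',
    'exec',
    'auth-provider',
  ];
  // Every user must expose at least one credential-bearing field.
  return (config.users ?? []).every((u) =>
    credentialKeys.some((k) => u.user?.[k] !== undefined)
  );
}
```

Against my config this returns `true` for both users, so the credentials are present in the loaded object; they are just never attached to the request.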
Hrm, it's hard to know what is going wrong here. In general, we know that the 1.x series of libraries works with kubeconfigs, so it's not a general problem, it must be something specific to your environment, or specific to VS Code.
Does it work if you run the code outside of VS Code?
If you have a reproduction of the problem outside of VS Code, I can try to reproduce and debug myself.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.