Support Client Certificate Authentication
I am following these directions to set up an externalIP controller. The logs of the extip-controller pod show the following error:
E0319 01:36:03.558680 7 reflector.go:214] github.com/Mirantis/k8s-externalipcontroller/vendor/k8s.io/client-go/1.5/tools/cache/reflector.go:109: Failed to list *v1.Service: Get https://10.3.0.1:443/api/v1/services: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-ca")
My kube-apiserver has client certificate authentication enabled via the client-ca-file flag. It appears that the extip-controller needs to support Kubernetes client certificate auth.
We are using in-cluster authentication; I will check how this method behaves when client certificates are enabled.
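For reference, in-cluster authentication relies on the serviceaccount volume that Kubernetes mounts into every pod: a bearer token plus a CA bundle (ca.crt) that must contain the CA which signed the apiserver's serving certificate (kube-ca here). Below is a minimal sketch of the equivalent request, assuming the standard mount paths and kubelet-injected environment variables; it only prints the command rather than executing it, since this sketch runs outside a cluster:

```shell
# Standard serviceaccount mount used by in-cluster authentication
# (paths are the Kubernetes defaults, not taken from this cluster).
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN_FILE="$SA_DIR/token"   # bearer token for the pod's service account
CA_FILE="$SA_DIR/ca.crt"     # CA bundle used to verify the apiserver cert

# Inside a pod the kubelet injects KUBERNETES_SERVICE_HOST/PORT; the
# fallbacks below reuse the service IP from the error log purely for
# illustration.
APISERVER="https://${KUBERNETES_SERVICE_HOST:-10.3.0.1}:${KUBERNETES_SERVICE_PORT:-443}"

# The request a client using in-cluster auth effectively makes
# (printed, not executed, since this sketch runs outside a cluster):
CMD="curl --cacert $CA_FILE -H \"Authorization: Bearer \$(cat $TOKEN_FILE)\" $APISERVER/api/v1/services"
echo "$CMD"
```

If the unknown-authority error appears even with this mechanism, the ca.crt mounted into the pod and the CA that actually signed the apiserver certificate are worth comparing.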
This document does a good job of describing how TLS is set up for my k8s cluster. TLS assets are created and provisioned to controller/worker nodes and to an admin workstation (kubectl). The apiserver, kubelet, and kubectl use these TLS assets to communicate securely with one another.
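As a self-contained illustration of that trust relationship (throwaway certificates generated on the spot, not the cluster's real assets): a certificate verifies only against the CA that signed it, and checking it against any other CA produces the same class of unknown-authority failure seen in the controller log.

```shell
tmp=$(mktemp -d)
cd "$tmp"

# CA that signs the server cert (stand-in for kube-ca).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
  -subj "/CN=kube-ca" -days 1

# Server key + CSR, signed by the CA above (stand-in for apiserver.pem).
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
  -subj "/CN=kube-apiserver"
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out server.pem -days 1

# An unrelated CA, standing in for whatever CA a client trusts by mistake.
openssl req -x509 -newkey rsa:2048 -nodes -keyout other-key.pem -out other-ca.pem \
  -subj "/CN=other-ca" -days 1

openssl verify -CAfile ca.pem server.pem               # succeeds
openssl verify -CAfile other-ca.pem server.pem || true # fails: issuer not trusted
```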
kubectl config:
admin@admin-laptop$ cat ~/.kube/config
apiVersion: v1
kind: Config
users:
- name: lab-user
  user:
    client-certificate-data: <SNIP>
    client-key-data: <SNIP>
clusters:
- name: lab-cluster
  cluster:
    certificate-authority-data: <SNIP>
    server: https://master.example.com:443
contexts:
- context:
    cluster: lab-cluster
    user: lab-user
  name: lab-context
current-context: lab-context
kube-apiserver config:
$ sudo ls -al /etc/kubernetes/ssl/
total 20
drwxr-xr-x. 2 root root 4096 Mar 19 02:30 .
drwxr-xr-x. 5 root root 4096 Mar 19 02:30 ..
-rw-r--r--. 1 root root 1679 Mar 19 02:30 apiserver-key.pem
-rw-r--r--. 1 root root 1237 Mar 19 02:30 apiserver.pem
-rw-r--r--. 1 root root 1090 Mar 19 02:30 ca.pem
core@master01 ~ $ openssl rsa -in /etc/kubernetes/ssl/apiserver-key.pem -check
WARNING: can't open config file: /etc/ssl/openssl.cnf
RSA key ok
writing RSA key
-----BEGIN RSA PRIVATE KEY-----
<SNIP>
-----END RSA PRIVATE KEY-----
core@master01 ~ $ openssl x509 -in /etc/kubernetes/ssl/ca.pem -text -noout
WARNING: can't open config file: /etc/ssl/openssl.cnf
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            8a:04:e5:8d:81:38:fa:f7
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kube-ca
        Validity
            Not Before: Mar 19 01:28:52 2017 GMT
            Not After : Aug  4 01:28:52 2044 GMT
        Subject: CN=kube-ca
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    <SNIP>
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Key Identifier:
                <SNIP>
            X509v3 Authority Key Identifier:
                keyid:<SNIP>
            X509v3 Basic Constraints:
                CA:TRUE
    Signature Algorithm: sha256WithRSAEncryption
        <SNIP>
core@master01 ~ $ openssl x509 -in /etc/kubernetes/ssl/apiserver.pem -text -noout
WARNING: can't open config file: /etc/ssl/openssl.cnf
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            a9:ae:23:99:17:de:ae:87
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: CN=kube-ca
        Validity
            Not Before: Mar 19 01:28:52 2017 GMT
            Not After : Mar 19 01:28:52 2018 GMT
        Subject: CN=kube-apiserver
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    <SNIP>
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Basic Constraints:
                CA:FALSE
            X509v3 Key Usage:
                Digital Signature, Non Repudiation, Key Encipherment
            X509v3 Subject Alternative Name:
                DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:master.example.com, IP Address:10.3.0.1, IP Address:10.30.118.167, IP Address:10.10.129.136
    Signature Algorithm: sha1WithRSAEncryption
        <SNIP>
core@master01 ~ $ sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: quay.io/coreos/hyperkube:v1.5.4_coreos.0
    command:
    - /hyperkube
    - apiserver
    - --bind-address=0.0.0.0
    - --etcd-servers=http://master.example.com:2379
    - --allow-privileged=true
    - --service-cluster-ip-range=10.3.0.0/24
    - --secure-port=443
    - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --runtime-config=extensions/v1beta1/networkpolicies=true,extensions/v1beta1/ipcontroller.ext=true
    - --anonymous-auth=false
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        port: 8080
        path: /healthz
      initialDelaySeconds: 15
      timeoutSeconds: 15
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
kubelet config:
core@node1 ~ $ ls -al /etc/kubernetes/ssl/
total 20
drwxr-xr-x. 2 root root 4096 Mar 19 18:47 .
drwxr-xr-x. 5 root root 4096 Mar 19 18:47 ..
-rw-r--r--. 1 root root 1090 Mar 19 18:47 ca.pem
-rw-r--r--. 1 root root 1679 Mar 19 18:47 worker-key.pem
-rw-r--r--. 1 root root 1245 Mar 19 18:47 worker.pem
core@node135 ~ $ sudo cat /etc/kubernetes/worker-kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
core@node1 ~ $ systemctl cat kubelet
# /etc/systemd/system/kubelet.service
[Unit]
Description=Kubelet via Hyperkube ACI
Requires=k8s-assets.target
After=k8s-assets.target

[Service]
Environment=KUBELET_VERSION=v1.5.4_coreos.0
Environment="RKT_OPTS=--uuid-file-save=/var/run/kubelet-pod.uuid \
  --volume dns,kind=host,source=/etc/resolv.conf \
  --mount volume=dns,target=/etc/resolv.conf \
  --volume var-log,kind=host,source=/var/log \
  --mount volume=var-log,target=/var/log"
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --api-servers=https://master.example.com \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
  --network-plugin=cni \
  --container-runtime=docker \
  --rkt-path=/usr/bin/rkt \
  --rkt-stage1-image=coreos.com/rkt/stage1-coreos \
  --register-node=true \
  --allow-privileged=true \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --hostname-override=node1.example.com \
  --cluster_dns=10.3.0.10 \
  --cluster_domain=cluster.local \
  --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
  --tls-cert-file=/etc/kubernetes/ssl/worker.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
core@node1 ~ $ openssl x509 -in /etc/kubernetes/ssl/ca.pem -text -noout
WARNING: can't open config file: /etc/ssl/openssl.cnf
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            <SNIP>
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kube-ca
        Validity
            Not Before: Mar 19 01:28:52 2017 GMT
            Not After : Aug  4 01:28:52 2044 GMT
        Subject: CN=kube-ca
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    <SNIP>
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Key Identifier:
                <SNIP>
            X509v3 Authority Key Identifier:
                keyid:<SNIP>
            X509v3 Basic Constraints:
                CA:TRUE
    Signature Algorithm: sha256WithRSAEncryption
        <SNIP>
core@node1 ~ $ openssl x509 -in /etc/kubernetes/ssl/worker.pem -text -noout
WARNING: can't open config file: /etc/ssl/openssl.cnf
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            <SNIP>
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: CN=kube-ca
        Validity
            Not Before: Mar 19 01:28:53 2017 GMT
            Not After : Mar 19 01:28:53 2018 GMT
        Subject: CN=kube-worker
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    <SNIP>
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Basic Constraints:
                CA:FALSE
            X509v3 Key Usage:
                Digital Signature, Non Repudiation, Key Encipherment
            X509v3 Subject Alternative Name:
                DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:node2.example.com, DNS:node2.example.com
    Signature Algorithm: sha1WithRSAEncryption
        <SNIP>
core@node1 ~ $ openssl rsa -in /etc/kubernetes/ssl/worker-key.pem -check
WARNING: can't open config file: /etc/ssl/openssl.cnf
RSA key ok
writing RSA key
-----BEGIN RSA PRIVATE KEY-----
<SNIP>
-----END RSA PRIVATE KEY-----
IMO, the extIP controller should follow the pattern kubectl uses: reading TLS assets from a config file when communicating with kube-apiserver. One possible difference is that the extIP controller would use the kube-apiserver service IP instead of an external node IP.
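To make the kubectl pattern concrete: the TLS material in a kubeconfig is just base64-encoded PEM, so a client following the same pattern only has to decode and load it. A sketch using a synthetic CA and kubeconfig generated on the spot for illustration (not the real lab config):

```shell
tmp=$(mktemp -d)
cd "$tmp"

# Throwaway CA standing in for kube-ca, purely for illustration.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
  -subj "/CN=kube-ca" -days 1

# Minimal kubeconfig embedding the CA the way kubectl stores it.
cat > kubeconfig <<EOF
apiVersion: v1
kind: Config
clusters:
- name: lab-cluster
  cluster:
    certificate-authority-data: $(base64 < ca.pem | tr -d '\n')
    server: https://master.example.com:443
EOF

# What a client reusing this pattern must do: decode the embedded data,
# then trust the result when dialing the apiserver.
grep certificate-authority-data kubeconfig | awk '{print $2}' | base64 -d > decoded-ca.pem
openssl x509 -in decoded-ca.pem -noout -subject
```

The same decoding applies to client-certificate-data and client-key-data, which is all a controller would need to present a client certificate the way kubectl does.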
Are you running kube dashboard in your cluster? I think it uses the same in-cluster auth as we do; is there any problem with it?
k8s-dash is operational:
$ kubectl get po --namespace=kube-system | grep dash
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-3543765157-wr7cc 1/1 Running 1 1d
$ kubectl get svc --namespace=kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard $SVC_IP $EXT_IP 80/TCP 2m
$ kubectl describe po kubernetes-dashboard-3543765157-wr7cc --namespace=kube-system
Name:           kubernetes-dashboard-3543765157-wr7cc
Namespace:      kube-system
Node:           node1.example.com/$NODE_IP
Start Time:     Sun, 19 Mar 2017 11:54:43 -0700
Labels:         k8s-app=kubernetes-dashboard
                pod-template-hash=3543765157
Status:         Running
IP:             $POD_IP
Controllers:    ReplicaSet/kubernetes-dashboard-3543765157
Containers:
  kubernetes-dashboard:
    Container ID:   docker://408ba11b54eb4b43106fc51eea68bd2d8184d633a5949c56c87fe74826337831
    Image:          gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0
    Image ID:       docker-pullable://gcr.io/google_containers/kubernetes-dashboard-amd64@sha256:3bccb9256e8b14ae895d40d829ea45992389af3c1767a21eefbd4b3bf723f325
    Port:           9090/TCP
    Limits:
      cpu:      100m
      memory:   50Mi
    Requests:
      cpu:      100m
      memory:   50Mi
    State:          Running
      Started:      Sun, 19 Mar 2017 14:10:08 -0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Sun, 19 Mar 2017 11:57:56 -0700
      Finished:     Sun, 19 Mar 2017 14:10:08 -0700
    Ready:          True
    Restart Count:  1
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-20zz6 (ro)
    Environment Variables:  <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  default-token-20zz6:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-20zz6
QoS Class:      Guaranteed
Tolerations:    CriticalAddonsOnly=:Exists
No events.
$ kubectl logs kubernetes-dashboard-3543765157-wr7cc --namespace=kube-system
Using HTTP port: 9090
Creating API server client for https://10.3.0.1:443
Successful initial request to the apiserver, version: v1.5.4+coreos.0
Creating in-cluster Heapster client
$ sudo cat /srv/kubernetes/manifests/kube-dashboard-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30

Does the extIP Controller create a volume mount similar to k8s-dash?
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-20zz6 (ro)
Yes, this volume is used by in-cluster authentication. What authentication/authorization methods are you using? I noticed recently that kubeadm deploys kube 1.6 with RBAC enabled, and in that case I had to grant additional permissions to the system:serviceaccounts group. But the error was different, though.
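For reference, the kind of extra grant described above on an RBAC-enabled 1.6 cluster could look roughly like the following. This is an illustrative sketch only: the role name and the exact resource/verb list are assumptions, not taken from this issue.

```yaml
# Illustrative ClusterRole/ClusterRoleBinding letting service accounts
# list/watch services and nodes, as an ipcontroller would need.
# Names and the resource/verb list are assumptions for this sketch.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: extip-controller
rules:
- apiGroups: [""]
  resources: ["services", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: extip-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: extip-controller
subjects:
- kind: Group
  name: system:serviceaccounts
  apiGroup: rbac.authorization.k8s.io
```

In practice a dedicated service account with a narrower binding would be preferable to granting the whole system:serviceaccounts group these permissions.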
Can you enable -v=10 for any of the deployed pods and paste the logs?
@danehans Is it still the issue for you?
Not sure. I moved on to kube-parrot after experiencing issues with the k8s-externalipcontroller.