
`kyma provision k3d --registry-use` does not allow for multiple clusters with the same registry


Description

Use Case: Provision two Kyma clusters that have access to the same k3d registry

Expected result

The --registry-use flag should be respected and connect the two clusters to one registry previously created via k3d registry create REGISTRY_NAME.

Actual result

kyma provision k3d kyma1 --registry-use kyma-registry
  Checking if port flags are valid
  Checking if k3d registry of previous kyma installation exists
  Checking if k3d cluster of previous kyma installation exists
  Deleted k3d registry of previous kyma installation
- k3d status verified
- Created k3d registry 'kyma-registry:5001'
- Created k3d cluster 'kyma'
k3d registry list
NAME                ROLE       CLUSTER   STATUS
k3d-kyma-registry   registry             running
kyma provision k3d kyma2 --registry-use kyma-registry
  Checking if port flags are valid
  Checking if k3d registry of previous kyma installation exists
  Checking if k3d cluster of previous kyma installation exists
? Do you want to remove the existing k3d cluster? Type [y/N]:
X Verifying k3d status
Error: User decided not to remove the existing k3d cluster

Steps to reproduce

  1. k3d registry create REGISTRY_NAME
  2. kyma provision k3d k1 --registry-use REGISTRY_NAME
  3. kyma provision k3d k2 --registry-use REGISTRY_NAME

Troubleshooting

N/A

jakobmoellerdev avatar Sep 22 '22 12:09 jakobmoellerdev

First, I describe the bug as I understand it, so that the reviewer can better understand the provided fix.

  1. The command used in the issue description has the wrong syntax. To provide a cluster name, you have to use the --name= flag: kyma provision k3d --name=kyma1 (see the sketch below).
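
A minimal, hypothetical sketch of a cobra-style command definition (not the actual kyma CLI source; all names here are assumptions) illustrating the behaviour: the cluster name comes only from the --name flag, which defaults to kyma, so a positional argument such as kyma1 is silently ignored and the cluster ends up named kyma, as seen in the issue description.

    package provision

    import "github.com/spf13/cobra"

    // NewK3dCmd sketches how a cobra-based `provision k3d` command reads the cluster
    // name exclusively from the --name flag.
    func NewK3dCmd() *cobra.Command {
        var clusterName string
        cmd := &cobra.Command{
            Use:   "k3d",
            Short: "Provisions a k3d cluster for Kyma (sketch)",
            RunE: func(cmd *cobra.Command, args []string) error {
                // A positional argument like "kyma1" lands in args and is never consulted;
                // only the value of --name (default "kyma") determines the cluster name.
                cmd.Printf("provisioning k3d cluster %q\n", clusterName)
                return nil
            },
        }
        cmd.Flags().StringVar(&clusterName, "name", "kyma", "Name of the k3d cluster")
        return cmd
    }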

Tomasz-Smelcerz-SAP avatar Oct 28 '22 10:10 Tomasz-Smelcerz-SAP

  2. Unfortunately, fixing the syntax exposes another problem. The current CLI implementation always creates a registry named k3d-<cluster-name>-registry. Take a look at the following terminal session capture (trimmed for brevity):

    Ensure nothing's there:

    $ k3d registry list
    NAME   ROLE   CLUSTER   STATUS
    
    $ k3d cluster list
    NAME   SERVERS   AGENTS   LOADBALANCER
    

    Create common registry:

    $ k3d registry create kyma-common
    INFO[0000] Creating node 'k3d-kyma-common'
    [...]
    # You can now use the registry like this (example):
    # 1. create a new cluster that uses this registry
    k3d cluster create --registry-use k3d-kyma-common:33853
    [...]
    

    List the registry:

    $ k3d registry list
    NAME              ROLE       CLUSTER   STATUS
    k3d-kyma-common   registry             running
    

    Use the registry to create a Kyma cluster. Copy the output from k3d registry create to get the registry part: --registry-use k3d-kyma-common:33853. Note the additional registry created in the verbose command output:

    $ kyma provision k3d --name=kyma1 --registry-use k3d-kyma-common:33853 --verbose
    [...]
    
    k3d registry 'kyma1-registry' does not exist
    2022-10-28T11:41:27.506+0200        INFO    step/log.go:66  Checking if k3d cluster of previous kyma installation exists
    Executed command:
     k3d registry create kyma1-registry --port 5001
    with output:
      INFO[0000] Creating node 'k3d-kyma1-registry'
    
    [...]
    Executed command:
     k3d cluster create kyma1 --kubeconfig-update-default --timeout 300s --agents 1 --image rancher/k3s:v1.24.6-k3s1 --kubeconfig-switch-context --k3s-arg --disable=traefik@server:0 --k3s-arg --kubelet-arg=containerd=/run/k3s/containerd/containerd.sock@all:\* --registry-use k3d-kyma-common:33853 --registry-use kyma1-registry:5001 --port 80:80@loadbalancer --port 443:443@loadbalancer
    with output:
    [...]
    2022-10-28T11:41:48.834+0200    INFO    step/log.go:62  Created k3d cluster 'kyma1'
    

    Note the additional registry created: k3d-kyma1-registry:5001

    $ k3d registry list
    NAME                 ROLE       CLUSTER   STATUS
    k3d-kyma-common      registry             running
    k3d-kyma1-registry   registry             running
    

Tomasz-Smelcerz-SAP avatar Oct 28 '22 10:10 Tomasz-Smelcerz-SAP

  3. It looks like, to fix the problem, we must prevent the CLI from automatically creating the default registry on port 5001. I decided to skip this step if any registries are explicitly provided by the user.
  2. The last "problem" is port conflict. The CLI automatically exposes HTTP/HTTPS ports for the cluster in the local network. The default bindings are: 80:80 and 443:443. The second cluster created must have these redefined, otherwise a conflict occurs.

Tomasz-Smelcerz-SAP avatar Oct 28 '22 10:10 Tomasz-Smelcerz-SAP

  5. The fix in the linked PR allows for the following setup, which is, I think, what @jakobmoellersap wanted:

    Create the registry:

    $ k3d registry create kyma-common
    [...]
    # You can now use the registry like this (example):
    # 1. create a new cluster that uses this registry
    k3d cluster create --registry-use k3d-kyma-common:33799
    

    Provision the first Kyma cluster:

    $ kyma provision k3d --name=kyma1 --registry-use k3d-kyma-common:33799
    [...]
    - k3d status verified
    - Created k3d cluster 'kyma1'
    

    Verify the status:

    $ k3d registry list
    NAME              ROLE       CLUSTER   STATUS
    k3d-kyma-common   registry             running
    
    $ k3d cluster list
    NAME    SERVERS   AGENTS   LOADBALANCER
    kyma1   1/1       1/1      true
    
    

    Provision the second Kyma cluster. Notice the port mappings; they are necessary to avoid collisions with the first cluster:

    $ kyma provision k3d --name=kyma2 --registry-use k3d-kyma-common:33799 --port 82:80@loadbalancer --port 552:443@loadbalancer
    [...]
    - k3d status verified
    - Created k3d cluster 'kyma2'
    
    $ k3d registry list
    NAME              ROLE       CLUSTER   STATUS
    k3d-kyma-common   registry             running
    
    $ k3d cluster list
    NAME    SERVERS   AGENTS   LOADBALANCER
    kyma1   1/1       1/1      true
    kyma2   1/1       1/1      true
    

Tomasz-Smelcerz-SAP avatar Oct 28 '22 10:10 Tomasz-Smelcerz-SAP

I think there is still a minor visualization bug, since it shows that the registry got deleted:

kyma-unstable provision k3d --name=kyma2 --registry-use k3d-kyma-registry:58105 --port 82:80@loadbalancer --port 552:443@loadbalancer
  Checking if port flags are valid
  Checking if k3d registry of previous kyma installation exists
  Checking if k3d cluster of previous kyma installation exists
  Deleted k3d registry of previous kyma installation
- k3d status verified
- Created k3d cluster 'kyma2'

Could you verify this again? @Tomasz-Smelcerz-SAP

jakobmoellerdev avatar Nov 03 '22 16:11 jakobmoellerdev

@jakobmoellersap Confirmed. I will open another PR to fix it.

Tomasz-Smelcerz-SAP avatar Nov 14 '22 07:11 Tomasz-Smelcerz-SAP

@jakobmoellersap After closer inspection, I think that in your case it was just an old kyma-unstable binary, as the jobs updating it were having some problems around Nov 02/03, when you commented. Please run your test again using the latest kyma-unstable to double-check. But I did indeed find a bug in the code: https://github.com/kyma-project/cli/issues/1443 - this is why I confirmed initially.

Tomasz-Smelcerz-SAP avatar Nov 15 '22 07:11 Tomasz-Smelcerz-SAP

Confirmed closed. Everything is good with the latest unstable 👍 Thanks for looking into it again.

jakobmoellerdev avatar Nov 18 '22 10:11 jakobmoellerdev