deploy: remake to use OCI images
**Description**
With the new operator-based lifecycle management, the current `deploy` command will stop working. The command needs to be migrated to the new approach.
The new command should be a "drop-in" replacement for the existing command; usage and flags should change as little as possible, and a bare `kyma deploy` execution with no args or flags has to result in a functioning Kyma runtime on the target cluster.
**Reasons**
Migrate the deploy functionality to the new installation model.
**Acceptance Criteria**
The main goal is to have a drop-in replacement for the current `deploy` command. The command should work at least on K3d and Gardener clusters. For that, the following goals have to be achieved:
- [x] Set up the Kyma environment (`lifecycle-manager` & `module-manager`) -> #1370
- [ ] Given a module template, deploy it to the cluster.
- [ ] Watch the Kyma resource after deploying to track module progress (see the watch sketch after this list).
- [ ] Add a client-side timeout to stop the process.
- [ ] Reattach to a running deployment if the watch timed out.
- [ ] Design a solution for giving a module list to the deploy command (same as it is now with the component list).
- [ ] Design a solution to provide CR instances for all modules to the command (as part of the module list maybe).
- [ ] Be able to generate module templates on the fly for each module, given the desired channel as a flag.
- [ ] Deploy a working kyma runtime on K3d.
- [ ] Deploy a working kyma runtime on Gardener.
- [ ] Migrate the rest of the existing flags if needed (e.g. `--domain`, `--value`, `--tls-crt`, etc.).
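
The watch, timeout, and reattach items could look roughly like the client-go sketch below. The GVR, the `status.state` field, and the `Ready` value are all assumptions, since the final API of the Kyma CR is not settled in this issue; reattaching after a timeout then simply means calling the function again against the same resource.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// Assumed GVR; the actual group/version of the Kyma CR may differ.
var kymaGVR = schema.GroupVersionResource{
	Group:    "operator.kyma-project.io",
	Version:  "v1alpha1",
	Resource: "kymas",
}

// watchKyma blocks until the Kyma CR reports state "Ready" or the client-side
// timeout expires. Calling it again after a timeout starts a fresh watch,
// which gives us the "reattach" behavior.
func watchKyma(dyn dynamic.Interface, namespace, name string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	w, err := dyn.Resource(kymaGVR).Namespace(namespace).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		return err
	}
	defer w.Stop()

	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out watching Kyma %q: %w", name, ctx.Err())
		case ev, ok := <-w.ResultChan():
			if !ok {
				return fmt.Errorf("watch channel for Kyma %q closed", name)
			}
			u, ok := ev.Object.(*unstructured.Unstructured)
			if !ok {
				continue
			}
			// "status.state" is an assumed field; per-module progress
			// reporting would hook in here.
			state, _, _ := unstructured.NestedString(u.Object, "status", "state")
			fmt.Printf("Kyma %s state: %s\n", name, state)
			if state == "Ready" {
				return nil
			}
		}
	}
}
```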
For now, the AC doesn't address the very first steps of the Kyma installation process. In the Open Source scenario, these steps are to install the necessary "generic" operators, like Kyma Operator and Manifest Operator, in the proper versions, along with the configuration values they need to work correctly, like memory/CPU limits, credentials for the artifact repositories (if not public), etc.
Considering just the Kyma Operator, it's not clear yet which version of it is proper in that context (Latest? Branch name? Released artifact?)
The operators are installed by using the `kustomize` tool to configure the deployment settings, like image name/tag, namespace, and the naming of cluster-wide objects like RBACs, etc.
It then looks like the new `deploy` command would have to start the installation with these generic steps:
For the Kyma Operator:
- Download the Kyma Operator manifests as they are defined in the project (from where?)
- Use the `kustomize` tool to generate the final manifests (for example: image name/tag).
- Apply the Kyma CRD to the cluster, if it's not already part of the generated manifests.
- Apply the manifests, along with the Kyma CRD, to the cluster.
- Observe at least the status of the Kyma Operator deployment. Fail fast (do not proceed) if the deployment fails (see the sketch after this list).
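
A rough sketch of how the kustomize and fail-fast steps could look in Go, assuming we use the kustomize API (`sigs.k8s.io/kustomize/api`) programmatically instead of shelling out. The kustomization directory, operator namespace, and deployment name are placeholders, since their sources are exactly the open questions above.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"sigs.k8s.io/kustomize/api/krusty"
	"sigs.k8s.io/kustomize/kyaml/filesys"
)

// renderOperatorManifests runs kustomize over a local copy of the operator's
// config directory and returns the final multi-document YAML. Where that
// directory comes from (release artifact? branch?) is still an open question.
func renderOperatorManifests(kustomizationDir string) ([]byte, error) {
	k := krusty.MakeKustomizer(krusty.MakeDefaultOptions())
	resMap, err := k.Run(filesys.MakeFsOnDisk(), kustomizationDir)
	if err != nil {
		return nil, fmt.Errorf("kustomize build failed: %w", err)
	}
	return resMap.AsYaml()
}

// operatorReady implements the "fail fast" check: the operator Deployment
// must report available replicas before the installation proceeds.
// Namespace and deployment name are hypothetical.
func operatorReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
	dep, err := cs.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	return dep.Status.AvailableReplicas > 0, nil
}
```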
Similar steps must be done for the Manifest Operator and any other "generic" operator that is necessary to install Kyma and cannot be installed by the Kyma Operator itself.
We can still continue with this task, provided that in the first iteration the cluster owner performs the installation of the Kyma/Manifest operators manually, and `kyma alpha deploy` only takes care of the rest of the installation process.
Once some details are clear (like the versioning, the image tags, and the URLs of the source files/manifests), we can extend the `deploy` command with the pre-steps described above, thus completing the command.
As for the "Given a module template, deploy it to the cluster" step:
- I assume it is given as a file, e.g. `kyma alpha deploy module /path/to/the/module_template.yaml`.
- If the Kyma object doesn't exist yet, it is created based on the provided module template.
- If the Kyma object exists, the new module is added to the `modules` list, if possible. What can go wrong? For example, channels may be different.
- Once the Kyma object is created/extended, the (already running) Kyma Operator takes over the installation, and the CLI just reports the progress (a sketch of the create/extend logic follows this list).
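
A sketch of the extend logic, assuming the Kyma CR keeps its modules under `spec.modules` as a list of `{name, channel}` entries; this layout is an assumption, not a confirmed API.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// addModule adds a module entry to the Kyma CR, or reports a conflict if the
// module is already listed with a different channel (the "channels may be
// different" case above). The spec.modules layout is assumed.
func addModule(kyma *unstructured.Unstructured, name, channel string) error {
	modules, _, err := unstructured.NestedSlice(kyma.Object, "spec", "modules")
	if err != nil {
		return err
	}
	for _, m := range modules {
		entry, ok := m.(map[string]interface{})
		if !ok {
			continue
		}
		if entry["name"] == name {
			if entry["channel"] != channel {
				return fmt.Errorf("module %q already configured with channel %v", name, entry["channel"])
			}
			return nil // already present, nothing to do
		}
	}
	modules = append(modules, map[string]interface{}{"name": name, "channel": channel})
	return unstructured.SetNestedSlice(kyma.Object, modules, "spec", "modules")
}
```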
As discussed, we can split that task into two major parts (which can be developed independently):
- Installation of the reconciler ecosystem (process the reconciler manifests with kustomize, apply them, check the deployment state, etc.)
- Deployment of one or more modules in a cluster (get the list of module templates, deploy one or more module templates, create the Kyma CR, etc.)
- TBC: how/where can we retrieve a list of available module templates (from control-plane maybe?)
To answer the points brought up by @Tomasz-Smelcerz-SAP:
- For now, yes. The module delivery process is not fully fleshed out yet, so for now we will develop assuming we have all the necessary inputs and manually provide them to the CLI. Finding a solution for the delivery and discoverability of modules is covered by other ACs (*Design a solution for giving a module list to the deploy command*).
- The deploy command always needs to make sure that the target cluster has the necessary setup to install a module (module-manager, etc.). If the cluster is empty, these components need to be deployed before attempting any module deployment. Eventually, upgrades of this setup will also need to be managed (see the pre-flight sketch after these answers).
- This is something to be decided together with @jakobmoellersap and @adityabhatia as part of the task at hand.
- Correct. The CLI only throws the modules over the fence and then reports progress.
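
Following up on the second answer, a minimal pre-flight check might look like the sketch below. It assumes that the presence of the Kyma CRD is a sufficient signal that the generic operators are installed; the CRD name (and its API group) is an assumption.

```go
package main

import (
	"context"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clusterPrepared checks whether the Kyma CRD is registered, as a cheap proxy
// for "the generic operators are installed". The CRD name assumes the
// operator.kyma-project.io group, which may change.
func clusterPrepared(ctx context.Context, c apiextensionsclient.Interface) (bool, error) {
	_, err := c.ApiextensionsV1().CustomResourceDefinitions().
		Get(ctx, "kymas.operator.kyma-project.io", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil // empty cluster: install lifecycle-manager & module-manager first
	}
	if err != nil {
		return false, err
	}
	return true, nil
}
```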