Bundle the new "k8s chaincode builder" with fabric-operator
fabric-builder-k8s offers a dramatic improvement in ease of use for consumers and operators working with Fabric chaincode. With the k8s builder, the (human) operator simply installs a chaincode package declaring a specific, unique Docker image, referenced by an image @ digest URL. The builder, in turn, leverages the "external chaincode" (a.k.a. Sykes lifecycle) events, managing the lifecycle of pods in Kubernetes as if they were local processes launched with fork/exec.
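For reference, the code artifact of a type=k8s chaincode package is essentially just a small JSON file pinning the image by digest. The file name and fields below follow the hyperledgendary prototype's conventions and may evolve; the image name and digest are placeholders:

```json
{
  "name": "ghcr.io/example/my-chaincode",
  "digest": "sha256:..."
}
```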
The builder, however, is not currently well integrated with the operator. To accept type=k8s chaincode packages, the chaincode builder binaries must be installed onto the file system of the peer pod, and a configuration stanza must be updated in core.yaml with the location of the binaries. In Phase I, we included the core.yaml configuration for the new builder, but did NOT install the binaries into the peer containers by default. To distribute the builder binaries, we created a custom peer Docker image from the latest peer (2.4.4) code line and overlaid the native Linux binaries onto the image.
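For context, the core.yaml stanza in question follows Fabric's standard externalBuilders format. The builder name and path below are illustrative; the path must match wherever the binaries actually land in the peer pod:

```yaml
chaincode:
  externalBuilders:
    - name: k8s-builder
      # Directory containing the builder's bin/detect, bin/build, bin/run scripts
      path: /opt/hyperledger/k8s_builder
      propagateEnvironment:
        - CORE_PEER_ID
        - KUBERNETES_SERVICE_HOST
        - KUBERNETES_SERVICE_PORT
```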
Improve the "installation" of the k8s builder, such that "it just works" when creating Fabric networks with the hyperledger/fabric-peer docker image.
There are several techniques to get the k8s-builder binaries into the peer volume. Pick one and make it "right":
- Update the k8s-builder project to generate a Docker image with ONLY the external builders.
- Launch the k8s-builder image as a sidecar to the peer, mounting binaries to the location specified in core.yaml
- Run the k8s-builder image as an init container, copying the binaries into a persistent volume share visible to the peer.
- Run a one-time "install" Job or script to copy binaries from release outputs into a PV visible to the peer. (In general we should prefer distribution via Docker images, rather than manually copying pre-built binaries, if possible.)
- Something else?
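As a sketch of the init-container option above (the image reference, volume name, and paths are assumptions, not settled decisions):

```yaml
# Fragment of a peer Deployment pod spec: an init container copies the
# builder binaries into an emptyDir that the peer then mounts at the
# path configured in core.yaml's externalBuilders stanza.
initContainers:
  - name: k8s-builder-install
    image: ghcr.io/hyperledger-labs/fabric-builder-k8s:latest  # hypothetical image ref
    command: ["sh", "-c", "cp -r /opt/k8s-builder/* /builders/"]
    volumeMounts:
      - name: k8s-builder
        mountPath: /builders
containers:
  - name: peer
    image: hyperledger/fabric-peer:2.4
    volumeMounts:
      - name: k8s-builder
        mountPath: /opt/hyperledger/k8s_builder  # must match the core.yaml builder path
volumes:
  - name: k8s-builder
    emptyDir: {}
```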
This will likely collide or overlap with the existing IBP builder (sidecar) logic, which should not be modified by the introduction of the new builder. The new builder should supplement, not replace, the existing IBP builder.
This feature can reference the hyperledgendary prototype during development, but ideally will reference the builder once it has migrated to a hyperledger lab.
@mbwhite @jt-nti @mbrandenburger
There is another way: in the new OpenFabricStack updates to the Ansible Collections, I overrode the regular peer image with the k8s builder version.
Not ideal - but it works!
I'm open to alternatives but I think a decent compromise / option here is to:
- Build a docker container with only the k8s-builder release binaries. (This can be here, or over in the builder git / CI pipeline.)
- Re-use spec.images.chaincodeLauncherImage in the current API to indicate either the existing IBP builder OR the new k8s-builder.
- Conditionally emit a sidecar container in the peer deployment. (either ibp or k8s or neither (null cclauncherimage attribute), but not both.)
- The k8s sidecar must mount the builder (run, detect, etc.) at the location as configured in core.yaml.
There will likely be some fiddling with the peer / container ENV, or some minor nuance to address, that makes this approach harder than it should be. (Plus some unit / integration tests.)
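A sketch of what the conditionally emitted sidecar could look like (container and volume names, the image reference, and the copy-then-sleep pattern are all assumptions for illustration):

```yaml
# Fragment of the peer pod spec: the k8s-builder sidecar publishes its
# detect/build/run scripts into a shared volume that the peer mounts at
# the externalBuilders path from core.yaml.
containers:
  - name: peer
    image: hyperledger/fabric-peer:2.4
    volumeMounts:
      - name: k8s-builder
        mountPath: /opt/hyperledger/k8s_builder
  # Emitted only when chaincodeLauncherImage selects the k8s builder
  - name: k8s-builder
    image: ghcr.io/hyperledger-labs/fabric-builder-k8s:latest  # hypothetical image ref
    command: ["sh", "-c", "cp -r /opt/k8s-builder/* /builders/ && sleep infinity"]
    volumeMounts:
      - name: k8s-builder
        mountPath: /builders
volumes:
  - name: k8s-builder
    emptyDir: {}
```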
Keeping both the ibp-launcher and the k8s-builder aligned in the operator under a single approach will help keep everything in order behind the scenes. It's also important for a migration path that we preserve both code routes, allowing a healthy, phased transition away from the older, legacy builders.
Say goodbye to ten minute chaincode release iterations! k8s builder is great. Can't wait to see this come online.
The chaincodeLauncherImage is already overloaded and is not a good candidate for specifying the builders.
Current thinking is:
- Build a docker container with only the k8s-builder release binaries.
- Use spec.images.builderImage as an optional field in the peer CRD. If present, copy the builders from a known path in the container into a known path in the peer pod at init time.
- Update the operations console to emit the k8s-builder image in peer specs.
This approach retains the current behavior for the "/cclauncher" path, and also allows end-users to easily override the peer with a custom chaincode builder image, should they want to extend the k8s builder or replace it with an alternative build pipeline.
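In CR terms, the proposal could look something like the following. The builderImage field name comes from the proposal above; the image reference and tag are illustrative:

```yaml
apiVersion: ibp.com/v1beta1
kind: IBPPeer
metadata:
  name: org1-peer1
spec:
  images:
    peerImage: hyperledger/fabric-peer
    peerTag: "2.4.4"
    # Optional: if set, the operator copies the builder binaries from this
    # image into the peer pod at init time (hypothetical image ref).
    builderImage: ghcr.io/hyperledger-labs/fabric-builder-k8s:latest
```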
Hello.
I notice that you removed the k8s_builder from the following file:
defaultconfig/peer/v25/core.yaml
Is there a reason? I need to rebuild a new operator image to add k8s builder support back for HLF 2.5.