Any plans to support binary releases for `controller-gen`?
First, want to say thanks for the great work on the controller-tools repo. It's generally awesome. :)
I work on a project (github.com/aws/aws-controllers-k8s) that makes heavy use of controller-gen to augment our own code generation. However, because there are no binary releases of controller-gen we've seen one issue crop up.
Since the only way to install the controller-gen tool is with go get, a contributor who doesn't have controller-tools locally and runs go get to install it invariably ends up modifying the go.mod/go.sum files in our source repository. These are only // indirect entries in the go.mod, but it's still annoying to have to ask contributors to revert the changes to their go.mod/go.sum files.
I was wondering if there are any plans to produce binary artifacts for the controller-gen tool? This would certainly make our lives a bit easier on the downstream consumer side of things.
If not binary artifacts, perhaps publishing Docker images containing pre-built controller-gen binaries?
In metal3 we ended up creating a script to install controller-gen so we could create a temporary directory, run go mod init there, then run go get. https://github.com/metal3-io/baremetal-operator/blob/master/hack/install-controller-gen.sh
We would happily consume binary releases, too, if they existed.
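For reference, the core of that temp-module technique can be sketched as a small bash function (the version pin and destination directory here are placeholders, not necessarily what the metal3 script uses):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch of the temp-module technique: run the install from a throwaway
# module so the calling project's go.mod/go.sum are never touched.
install_controller_gen() {
    local version="$1"   # e.g. v0.11.3 (placeholder pin)
    local bindir="$2"    # where the binary should land
    local tmpdir
    tmpdir=$(mktemp -d)
    (
        cd "$tmpdir"
        go mod init tmp >/dev/null 2>&1
        GOBIN="$bindir" go install "sigs.k8s.io/controller-tools/cmd/controller-gen@${version}"
    )
    rm -rf "$tmpdir"
}
```

With Go 1.16+, `go install pkg@version` alone no longer edits the surrounding go.mod, but the throwaway-module dance also covers older toolchains that only had `go get`.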
Sweet. Consider that script copied ;)
Thanks @dhellmann!
To give credit where due, I'm pretty sure that the approach in the script actually came from the commands kubebuilder put in the Makefile it generated for us. We moved it to a script as part of hacking it to ensure it always installed exactly the version we wanted, without overwriting a version the user may have already had in $GOBIN.
Bumping this. Perhaps tar all three binaries per platform so the release artifact count doesn't blow up, e.g.:
$ tar --list -f controller-tools_linux_amd64.tar.gz
controller-gen
helpgen
type-scaffold
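If such tarballs existed, a downstream script could fetch and unpack one along these lines (the URL layout and version tag here are hypothetical, since no such release artifact exists today):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical consumer of a per-platform controller-tools tarball.
fetch_controller_tools() {
    local version="$1"    # hypothetical release tag, e.g. v0.12.0
    local os_arch="$2"    # e.g. linux_amd64
    local dest="$3"
    local tarball="controller-tools_${os_arch}.tar.gz"
    # This download URL is an assumption about a release layout that does not exist yet.
    curl -sSLo "$tarball" \
        "https://github.com/kubernetes-sigs/controller-tools/releases/download/${version}/${tarball}"
    mkdir -p "$dest"
    tar -xzf "$tarball" -C "$dest" controller-gen helpgen type-scaffold
    rm -f "$tarball"
}
```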
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
Another reason for this feature: using controller-tools behind a company firewall that rate-limits/throttles downloads from GitHub and other sources. It can sometimes take an hour to get controller-tools onto a box for basic development. We started pre-packaging it into containers so we only have to download once, and we're also looking at deploying a Go proxy to cache modules. Other approaches involve creating a vendor directory in a local git repo and then installing the controller-gen binary from there. These all feel like tedious workarounds for the lack of an offline installer binary.
+1. Binary releases would save a lot of time compared to `go get`.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@jaypipes: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten
I've been attempting to install controller-gen (and conversion-gen from k8s code-generator) in a Docker container in order to standardize my organization's use of controller-gen (there are eleventy billion different versions of it used throughout the organization and placing a vetted version in a canonical devtest Docker image is the best way to reduce that variance).
Unfortunately, no matter how much I try, I simply cannot get controller-gen to work properly from inside a Docker container. I've tried all ways of "installing" controller-gen (using @dhellmann's tmpdir go mod technique, using GOBIN=/bin go install with various static binary flags). I am able to get controller-gen "installed" but invariably when executing any command other than controller-gen --version, I get the following:
$ docker run -it -v $(pwd):/test nc-aks-devtest:latest bash
root [ / ]# controller-gen
Error: load packages in root "/": err: go resolves to executable in current directory (./go): stderr:
Here's the Dockerfile I'm using:
FROM <REDACTED> as builder
WORKDIR /workspace
RUN dnf install -y \
    ca-certificates \
    tar \
    curl \
    git \
    bash \
    wget
RUN mkdir -p bin
ARG K8S_RELEASE=latest
RUN rm -f $(command -v kubectl) && \
export KUBECTL_VERSION=$(curl -L https://dl.k8s.io/release/${K8S_RELEASE}.txt) && \
wget -q https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl -O bin/kubectl && \
chmod +x bin/kubectl
ARG KIND_VERSION=v0.18.0
RUN wget -q -O bin/kind https://kind.sigs.k8s.io/dl/${KIND_VERSION}/kind-linux-amd64 && \
chmod +x bin/kind
ARG KUSTOMIZE_VERSION=v5.0.3
RUN export TARBALL="kustomize_${KUSTOMIZE_VERSION}_linux_amd64.tar.gz" && \
export RELEASE_URL="https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2F${KUSTOMIZE_VERSION}/${TARBALL}" && \
wget -q ${RELEASE_URL} && \
tar xzf ${TARBALL} -C bin && rm ${TARBALL}
ARG JQ_VERSION=jq-1.6
RUN wget -q "https://github.com/stedolan/jq/releases/download/${JQ_VERSION}/jq-linux64" -O bin/jq && \
chmod +x bin/jq
ARG YQ_VERSION=v4.33.3
RUN wget -q "https://github.com/mikefarah/yq/releases/download/${YQ_VERSION}/yq_linux_amd64" -O bin/yq && \
chmod +x bin/yq
ARG HELM_VERSION=v3.12.0
RUN export TARBALL="helm-${HELM_VERSION}-linux-amd64.tar.gz" && \
export RELEASE_URL="https://get.helm.sh/${TARBALL}" && \
wget -q ${RELEASE_URL} && \
tar xzf ${TARBALL} -C bin && \
mv bin/linux-amd64/helm bin/helm && rm -rf bin/linux-amd64 && \
rm ${TARBALL}
ARG CLUSTERCTL_VERSION=v1.3.5
RUN curl -sLo bin/clusterctl https://github.com/kubernetes-sigs/cluster-api/releases/download/${CLUSTERCTL_VERSION}/clusterctl-linux-amd64 && \
chmod +x bin/clusterctl
ENV GOPATH /go
ENV PATH /usr/local/go/bin:$GOPATH/bin:$PATH
ARG GO_VERSION=1.19.9
RUN export TARBALL="go${GO_VERSION}.linux-amd64.tar.gz" && \
wget -q "https://go.dev/dl/${TARBALL}" && \
tar xzf "${TARBALL}" -C /usr/local && \
rm "${TARBALL}"
ARG CONTROLLER_GEN_VERSION=0.11.3
RUN go install "sigs.k8s.io/controller-tools/cmd/controller-gen@v${CONTROLLER_GEN_VERSION}" && cp $(command -v controller-gen) bin/controller-gen
ARG CONVERSION_GEN_VERSION=0.23.6
RUN go install "k8s.io/code-generator/cmd/conversion-gen@v${CONVERSION_GEN_VERSION}" && cp $(command -v conversion-gen) bin/conversion-gen
ARG KFILT_VERSION="v0.0.7"
RUN go install github.com/ryane/kfilt@${KFILT_VERSION}
FROM <REDACTED> as final
COPY --from=builder /workspace/bin /usr/local/bin
# NOTE(jaypipes): gettext installs envsubst
RUN dnf install -y gettext
I've spent hours trying to get this working and am kind of at the end of my rope. Wondering if @dhellmann or anyone else may have been able to solve this dilemma? Certainly having binary releases of controller-gen would (I think) make life a whole lot easier, no?
In looking for a reason why that output shows up when running controller-gen inside a Docker container, I stumbled across https://github.com/golang/go/issues/43724. Having read through that and all its comments, I suspect that because controller-gen has a //go:generate directive that itself calls go, there is something wonky going on? Does this mean that controller-gen can never be installed as a standalone binary, since it depends on the go executable?
Note that if I don't use a multi-stage Docker build and instead remove these lines from the Docker file:
FROM <REDACTED> as final
COPY --from=builder /workspace/bin /usr/local/bin
# NOTE(jaypipes): gettext installs envsubst
RUN dnf install -y gettext
I do get a working controller-gen in the resulting Docker image. The only problem is that this causes the resulting Docker image to balloon from 290MB to 1.6GB :(
(Long-shot disclaimer)
The fact that the command is being run in / and says it finds go in the current directory makes me wonder if the issue is this line
ENV GOPATH /go
Is it possible that directory is being picked up as the go executable by the //go:generate logic?
@jaypipes did you already try GODEBUG=execerrdot=0 when you run controller-gen?
- https://github.com/search?type=code&q=GODEBUG%3Dexecerrdot
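For anyone landing here later, the suggestion amounts to wrapping the invocation like this (GODEBUG=execerrdot=0 restores the pre-Go-1.19 PATH lookup behavior described in golang/go#43724; whether it actually fixes the container setup above is untested here):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Run controller-gen with Go 1.19's ErrDot check disabled, so a stray
# "go" entry in the current directory no longer aborts the PATH lookup.
run_controller_gen() {
    GODEBUG=execerrdot=0 controller-gen "$@"
}
```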
No, I had no idea about that... I can give it a try.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
/remove-lifecycle rotten
/assign
@sbueringer: Reopened this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi, just stopping by to share an anecdote. I'm new to contributing to Kubernetes projects (currently interested in kubernetes-sigs/external-dns).
I'm spinning up a local dev environment and going through the getting-started guide: https://kubernetes-sigs.github.io/external-dns/v0.14.2/contributing/getting-started/
Step 2 is `make build`, which I run. I get an error about a missing dependency:
which: no controller-gen
So I search the regular places: pamac, brew, GitHub releases. No matches.
So I search the issue trackers. Devs have been struggling with this issue since at least 2020. There's a documentation PR from 2021 which unfortunately never got merged.
Then I find this issue where a suggested workaround https://github.com/metal3-io/baremetal-operator/blob/main/hack/install-controller-gen.sh 404s.
I'm blocked and I'm frustrated.
Granted, there is a solution in the PR at https://github.com/kubernetes-sigs/controller-tools/pull/537/files, but I'll be honest: I don't understand Go's GOPATH/GO111MODULE yet. I ran those commands and I still don't have a working controller-gen.
I wish there were a controller-gen binary.