Rebuild OBS packages only if sources are changed
I noticed that all packages in all OBS build projects suddenly got rebuilt today. It turns out this is expected: OBS automatically triggers package rebuilds if any (direct or transitive) build dependency is upgraded in the operating system used for building packages. For example, we hit this situation today:
$ osc triggerreason isv:kubernetes:core:shared:build/kubernetes-cni rpm ppc64le
https://api.opensuse.org/ isv:kubernetes:core:shared:build kubernetes-cni rpm ppc64le
meta change (at 2023-08-23 14:01:57)
changed keys:
  md5sum binutils
  md5sum libctf-nobfd0
  md5sum libctf0
This is not very useful to us because our packages ship prebuilt binaries rather than binaries built inside OBS. Moreover, it can interfere with our release process and tooling. I think we should disable this behavior and rebuild packages only when their sources are updated.
References:
- https://openbuildservice.org/help/manuals/obs-user-guide/cha.obs.build_scheduling_and_dispatching#id-1.5.10.15.5.7
- https://kubernetes.slack.com/archives/C03U7N0VCGK/p1692799540334359
/assign
/priority important-soon
Is changing the trigger to local the way to go?
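For reference, a sketch of what that change could look like (not a verified fix): per the OBS user guide linked above, the rebuild strategy is controlled by a `rebuild` attribute on the `<repository>` element in the project meta, with values `transitive` (the default, rebuild on any dependency change), `direct`, and `local` (rebuild only on source changes or explicit triggers). It could be edited with `osc meta prj -e isv:kubernetes:core:shared:build`; the repository name and arch list below are illustrative, not copied from our actual meta:

```xml
<project name="isv:kubernetes:core:shared:build">
  <!-- rebuild="local": packages are rebuilt only when their own sources
       change (or on a manual trigger), not when direct or transitive
       build dependencies are upgraded in the build OS -->
  <repository name="standard" rebuild="local">
    <arch>x86_64</arch>
    <arch>ppc64le</arch>
  </repository>
</project>
```

This would need to be applied to each affected repository in each build project.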
I'm currently not working on this /unassign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale