cmd-buildextend: add command to build extensions container
Initial skeleton for COS-1646 to deliver a container with meta.json as explained in: https://github.com/openshift/os/pull/763#issuecomment-1158978033
Now that https://github.com/coreos/coreos-assembler/pull/2919 landed - this code could start out in Go. Just an option to consider.
Is this going to need to be run on different architectures? or just on x86_64?
This is part of the effort for OpenShift 4.12; I don't see why it would be limited to x86_64. I assume this will be built for all the architectures we support on OpenShift/RHCOS.
If you rebase on https://github.com/coreos/coreos-assembler/pull/3063 it should avoid a lot of the "vendor duplication" when trying to pull in schema/ into the toplevel.
This is looking good to me so far!
If you haven't tried this yet: one problem today is that no CI covers push-container. I've been testing it manually by doing e.g.
$ cosa push-container quay.io/cgwalters/fcos
I'd do the same for your extensions container.
Thanks, I checked and it's working as expected:
[coreos-assembler]$ cosa push-container --image=extensions-container quay.io/jmarrero_rh/my-custom-fcos
skopeo copy --authfile /development/config.json --digestfile=/srv/tmp/push-container-digestfilei3hvgmq3 oci-archive:builds/412.86.202209012000-0/x86_64/extensions-container-412.86.202209012000-0-x86_64.ociarchive docker://quay.io/jmarrero_rh/my-custom-fcos
Getting image source signatures
Copying blob 0c5aff2d7eef done
Copying blob 7b7a31b480d6 done
Copying blob 7fa2ab540ed8 done
Copying blob a1ec893e47ea done
Copying config 79e8f2318b done
Writing manifest to image destination
Storing signatures
[coreos-assembler]$
While testing, I found that I am somehow dropping coreos-assembler.image-genver and delayed-meta-merge when writing the JSON. They are part of the schema, so I am not sure why yet.
➜ builds rg coreos-assembler.image-genver
412.86.202209012254-0/x86_64/meta.json
14: "coreos-assembler.image-genver": 0,
412.86.202209012240-0/x86_64/meta.json
14: "coreos-assembler.image-genver": 0,
➜ builds cd ..
➜ rhcos cd buil-bad
➜ buil-bad rg coreos-assembler.image-genver
➜ buil-bad
After Unmarshal the value is present in the object, but it is somehow dropped one line later on the Marshal:
fmt.Printf("image ver %d \n",cosaBuild.CosaImageVersion)
newBytes, err := json.MarshalIndent(cosaBuild, "", " ")
I wonder if I am hitting some kind of reference issue.
OK, first trying this out: the find call seems to traverse the entire cosa directory, which is really expensive and prints errors for me. I think what you were running into here is that because we need to cd into src/config for podman, we want the absolute path of the input file. We can get that by prepending $PWD to the filename before we do the cd. I did that and some other cleanups here:
EDIT: OK made some further changes, can you take a look?
index 5f238d5c5..209fada28 100644
--- a/cmd/build-extensions-container.go
+++ b/cmd/build-extensions-container.go
@@ -11,10 +11,16 @@ import (
"io/ioutil"
"os"
"path/filepath"
- "strings"
)
func buildExtensionContainer() error {
+ lastBuild, buildPath, err := cosa.ReadBuild("builds", "", "")
+ if err != nil {
+ return err
+ }
+ buildID := lastBuild.BuildID
+ fmt.Printf("Generating extensions container for build: %s\n", buildID)
+
arch := cosa.BuilderArch()
sh, err := cosash.NewCosaSh()
if err != nil {
@@ -23,29 +29,23 @@ func buildExtensionContainer() error {
if _, err := sh.PrepareBuild(); err != nil {
return err
}
- process := "runvm -- /usr/lib/coreos-assembler/build-extensions-oscontainer.sh " + arch + " $tmp_builddir/output.txt"
- sh.Process(process)
- tmpdir, err := sh.ProcessWithReply("echo $tmp_builddir>&3\n")
- if err != nil {
- return err
- }
- content, err := ioutil.ReadFile(filepath.Join(tmpdir, "output.txt"))
- if err != nil {
+ targetname := "extensions-container-" + buildID + "." + arch + ".ociarchive"
+ process := "runvm -- /usr/lib/coreos-assembler/build-extensions-oscontainer.sh " + arch + " $tmp_builddir/" + targetname
+ if err := sh.Process(process); err != nil {
return err
}
- ociarchive := strings.TrimSpace(string(content))
- workdir := getWorkDir(ociarchive)
- lastBuild, _, err := cosa.ReadBuild(workdir+"/builds", "latest", arch)
+ // Find the temporary directory allocated by the shell process, and put the OCI archive in its final place
+ tmpdir, err := sh.ProcessWithReply("echo $tmp_builddir>&3\n")
if err != nil {
return err
}
- buildID := lastBuild.BuildID
- renamedArchive := filepath.Join(filepath.Dir(ociarchive), "extensions-container-"+buildID+"."+arch+".ociarchive")
- err = os.Rename(ociarchive, renamedArchive)
+ targetPath := filepath.Join(buildPath, targetname)
+ err = os.Rename(filepath.Join(tmpdir, targetname), targetPath)
if err != nil {
return err
}
- file, err := os.Open(renamedArchive)
+ // Gather metadata of the OCI archive (sha256, size)
+ file, err := os.Open(targetPath)
if err != nil {
return err
}
@@ -59,9 +59,9 @@ func buildExtensionContainer() error {
return err
}
sha256 := fmt.Sprintf("%x", hash.Sum(nil))
- builddir := filepath.Join(workdir, "builds", "latest", arch)
- metapath := filepath.Join(builddir, "meta.json")
+ // Update the meta.json to include metadata for our OCI archive
+ metapath := filepath.Join(buildPath, "meta.json")
jsonFile, err := os.Open(metapath)
if err != nil {
fmt.Println(err)
@@ -78,7 +78,7 @@ func buildExtensionContainer() error {
}
cosaBuild.BuildArtifacts.ExtensionsContainer = &cosa.Artifact{
- Path: filepath.Base(renamedArchive),
+ Path: targetname,
Sha256: sha256,
SizeInBytes: float64(stat.Size()),
SkipCompression: false,
@@ -95,9 +95,3 @@ func buildExtensionContainer() error {
}
return nil
}
-
-func getWorkDir(path string) string {
- directories := strings.Split(path, "/")
- //expects path starts with /.
- return "/" + directories[1]
-}
diff --git a/src/build-extensions-oscontainer.sh b/src/build-extensions-oscontainer.sh
index 0afa76499..99b81d424 100755
--- a/src/build-extensions-oscontainer.sh
+++ b/src/build-extensions-oscontainer.sh
@@ -1,14 +1,19 @@
#!/bin/bash
#Used by cmd/build-extensions-container.go
#Find the RHCOS ociarchive.
-path="*/builds/latest/${1}/*-ostree*.ociarchive"
-ostree_ociarchive=$(find -L ~+ -path "${path}")
-cd src/config || exit
-#Start the build replacing the from line.
-podman build --from oci-archive:"$ostree_ociarchive" --network=host --build-arg COSA=true -t localhost/extensions-container -f extensions/Dockerfile .
-#Call skopeo to generate a extensions container ociarchive
-extensions_ociarchive_dir=$(dirname "$ostree_ociarchive")
-extensions_ociarchive="${extensions_ociarchive_dir}/extensions-container.ociarchive"
-skopeo copy containers-storage:localhost/extensions-container oci-archive:"$extensions_ociarchive"
+set -euo pipefail
+buildid=$1
+shift
+filename=$1
+shift
+builddir="$PWD/builds/latest/${buildid}"
+ostree_ociarchive=$(ls ${builddir}/*-ostree*.ociarchive)
+# Build the image, replacing the FROM directive with the local image we have
+(cd src/config
+ set -x
+ podman build --from oci-archive:"$ostree_ociarchive" --network=host --build-arg COSA=true -t localhost/extensions-container -f extensions/Dockerfile .
+)
+# Call skopeo to export it from the container storage to an oci-archive.
+(set -x
+ skopeo copy containers-storage:localhost/extensions-container oci-archive:"$filename" )
-output=$2; echo "$extensions_ociarchive" > "$output"
e.g. parsing the options up front; and with (set -x; podman ...) the command line gets printed, which is useful for debugging, etc.
OK I think the problem here is that the meta.json we want to read is in the "final" builddir, not the temporary one we allocate.
Actually sorry I'm not sure that was related to the delayed-meta-merge problem.
Testing I found that somehow I am dropping coreos-assembler.image-genver & delayed-meta-merge when writing the JSON. they are part of the schema so I am not sure why yet.
This is probably the use of omitempty in the schema...but is it a problem?
Well, the issue I hit was the dropping of:
coreos-assembler.image-genver
That is used when we generate a new build, in:
src/cosalib/builds.py
131: genver_key = 'coreos-assembler.image-genver'
Without that key the script crashes.
OK right; notice that here we have drift between the expected semantics in the Python and the Go code.
How about
diff --git a/src/cosalib/builds.py b/src/cosalib/builds.py
index 4a378d7ba..4e8ebe41c 100644
--- a/src/cosalib/builds.py
+++ b/src/cosalib/builds.py
@@ -136,7 +136,7 @@ class Builds: # pragma: nocover
with open(metapath) as f:
previous_buildmeta = json.load(f)
previous_commit = previous_buildmeta['ostree-commit']
- previous_image_genver = int(previous_buildmeta[genver_key])
+ previous_image_genver = int(previous_buildmeta.get(genver_key, 0))
if previous_commit == ostree_commit:
image_genver = previous_image_genver + 1
buildid = f"{version}-{image_genver}"
?
testing
Thanks so much for all your work on this!
thank you for all the help!