controller-gen panics when providing schema with nested structs + struct tags
This issue seems to be similar to issue #442, but I am not sure whether the root cause is the same.
When writing a schema that contains a nested (inline) struct, the controller-gen tool panics.
type MySpec struct {
	TestStruct struct {
		Test1 string `json:"test1,omitempty"`
	} `json:"test_struct"`
}
will lead to:
➜ memcached-operator make manifests
/Users/wzhjeki/go/src/code.rbi.tech/WZHJEKI/operatorsdktests/memcached-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x1613e48]
goroutine 1 [running]:
sigs.k8s.io/controller-tools/pkg/crd.structToSchema(0xc001a4e2d0, 0xc000e5b7e8)
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/crd/schema.go:336 +0xa8
sigs.k8s.io/controller-tools/pkg/crd.typeToSchema(0xc001a4e2d0, {0x18e91b0, 0xc000e5b7e8})
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/crd/schema.go:178 +0x145
sigs.k8s.io/controller-tools/pkg/crd.structToSchema(0xc00145a0f0, 0xc000e5b800)
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/crd/schema.go:393 +0x77a
sigs.k8s.io/controller-tools/pkg/crd.typeToSchema(0xc00145a0f0, {0x18e91b0, 0xc000e5b800})
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/crd/schema.go:178 +0x145
sigs.k8s.io/controller-tools/pkg/crd.infoToSchema(0xc00145a0f0)
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/crd/schema.go:121 +0xed
sigs.k8s.io/controller-tools/pkg/crd.(*Parser).NeedSchemaFor(0xc0005424e0, {0xc0001558e0, {0xc000ed98f0, 0x0}})
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/crd/parser.go:190 +0x2ee
sigs.k8s.io/controller-tools/pkg/crd.(*schemaContext).requestSchema(0xc0004ca0f0, {0x0, 0x17e62f6}, {0xc000ed98f0, 0x0})
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/crd/schema.go:104 +0xd5
sigs.k8s.io/controller-tools/pkg/crd.localNamedToSchema(0xc001a4e1b0, 0xc001250020)
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/crd/schema.go:234 +0x216
sigs.k8s.io/controller-tools/pkg/crd.typeToSchema(0xc001a4e1b0, {0x18e8e80, 0xc001250020})
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/crd/schema.go:168 +0x7c
sigs.k8s.io/controller-tools/pkg/crd.structToSchema(0xc00145aae8, 0xc000e5b950)
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/crd/schema.go:393 +0x77a
sigs.k8s.io/controller-tools/pkg/crd.typeToSchema(0xc00145aae8, {0x18e91b0, 0xc000e5b950})
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/crd/schema.go:178 +0x145
sigs.k8s.io/controller-tools/pkg/crd.infoToSchema(0xc000216ae8)
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/crd/schema.go:121 +0xed
sigs.k8s.io/controller-tools/pkg/crd.(*Parser).NeedSchemaFor(0xc0005424e0, {0xc0001558e0, {0xc000ed9970, 0x9}})
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/crd/parser.go:190 +0x2ee
sigs.k8s.io/controller-tools/pkg/crd.(*Parser).NeedFlattenedSchemaFor(0xc0005424e0, {0xc0001558e0, {0xc000ed9970, 0x17e4833}})
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/crd/parser.go:202 +0xb7
sigs.k8s.io/controller-tools/pkg/crd.(*Parser).NeedCRDFor(0xc0005424e0, {{0xc00032c02d, 0x0}, {0xc000ed9970, 0x0}}, 0x0)
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/crd/spec.go:85 +0x56e
sigs.k8s.io/controller-tools/pkg/crd.Generator.Generate({0x0, 0x0, {0x0, 0x0, 0x0}, 0x0}, 0xc0000af5e0)
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/crd/gen.go:118 +0x2da
sigs.k8s.io/controller-tools/pkg/genall.(*Runtime).Run(0xc0004ac900)
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/genall/genall.go:213 +0x246
main.main.func1(0xc0002c4500, {0xc000145f40, 0x5, 0x5})
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/cmd/controller-gen/main.go:176 +0x6a
github.com/spf13/cobra.(*Command).execute(0xc0002c4500, {0xc000132130, 0x5, 0x5})
/Users/wzhjeki/go/pkg/mod/github.com/spf13/[email protected]/command.go:856 +0x60e
github.com/spf13/cobra.(*Command).ExecuteC(0xc0002c4500)
/Users/wzhjeki/go/pkg/mod/github.com/spf13/[email protected]/command.go:974 +0x3bc
github.com/spf13/cobra.(*Command).Execute(...)
/Users/wzhjeki/go/pkg/mod/github.com/spf13/[email protected]/command.go:902
main.main()
/Users/wzhjeki/go/pkg/mod/sigs.k8s.io/[email protected]/cmd/controller-gen/main.go:200 +0x2c6
make: *** [manifests] Error 2
Specifically, the panic is triggered when the json:"test_struct" tag is added. Leaving it out leads to a proper error instead: encountered struct field "TestStruct" without JSON tag in type "MySpec".
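For reference, this is the variant without the JSON tag on the nested field, which produces the error above rather than the panic:
type MySpec struct {
	TestStruct struct {
		Test1 string `json:"test1,omitempty"`
	}
}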
Workaround
A workaround is possible by declaring the nested struct as an explicit, named type:
type TestStruct struct {
	Test1 string `json:"test1,omitempty"`
}

type MemcachedSpec struct {
	TestStruct TestStruct `json:"test_struct"`
}
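Declaring nested objects as their own named types also matches how Kubernetes API types are typically structured, so the workaround is arguably the more idiomatic form anyway.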
My environment
operator-sdk version: "v1.16.0", commit: "560044140c4f3d88677e4ef2872931f5bb97f255", kubernetes version: "v1.21", go version: "go1.17.5", GOOS: "darwin", GOARCH: "amd64"
controller-gen version: sigs.k8s.io/controller-tools/cmd/[email protected]. I could reproduce the same error with v0.8.0.
I have the same problem with nested structures. Version:
operator-sdk version: "v1.18.0", commit: "c9c61b6921b29d731e64cd3cc33d268215fb3b25", kubernetes version: "v1.21", go version: "go1.17.6", GOOS: "darwin", GOARCH: "arm64"```
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Still occurs in v0.10.0.
/reopen
@alexg-axis: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
Still occurs in v0.10.0.
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Still occurs in v0.11.1 also
/reopen
@errordeveloper: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
Still occurs in v0.11.1 also
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Still occurs in v0.15.0
/reopen
@c3-clement: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
Still occurs in v0.15.0
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Seems at least reasonable to return a proper error instead of a panic.
Whether we should support it in general probably depends on whether it aligns with Kubernetes API conventions.
/reopen /lifecycle frozen
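For illustration only, a minimal sketch of that suggestion, assuming the panic comes from dereferencing missing per-field metadata for the inline struct; the names below (fieldInfo, lookupField, schemaForField) are placeholders and not the actual controller-tools code:
package main

import "fmt"

// fieldInfo is a stand-in for whatever per-field metadata the generator
// looks up; for inline (anonymous) struct fields this lookup can come back nil.
type fieldInfo struct {
	JSONName string
}

// lookupField simulates the missing metadata that currently leads to the
// nil pointer dereference.
func lookupField(name string) *fieldInfo {
	return nil
}

// schemaForField returns a descriptive error instead of panicking when the
// metadata for a field cannot be found.
func schemaForField(name string) error {
	info := lookupField(name)
	if info == nil {
		return fmt.Errorf("unsupported inline struct field %q: no field metadata found", name)
	}
	fmt.Println("would emit schema for property", info.JSONName)
	return nil
}

func main() {
	if err := schemaForField("TestStruct"); err != nil {
		fmt.Println("error:", err)
	}
}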
@sbueringer: Reopened this issue.
In response to this:
Seems at least reasonable to return a proper error instead of a panic.
Whether we should support it in general probably depends on whether it aligns with Kubernetes API conventions.
/reopen /lifecycle frozen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.