
[WIP] feat: Add Support for CustomResourceDefinitions

Open portellaa opened this issue 4 years ago • 7 comments

⚠️ this is still a work in progress ⚠️

The basic idea is to transform GroupVersionKind into a struct, instead of an enum representing all the possible cases (a rough sketch follows the list below). This could also be achieved with a .custom case, for example, although that doesn't seem quite "correct", to me at least. We could still keep the same logic in the enum, but instead of having a huge enum with all the cases, we can rely on KubernetesAPIResource or KubernetesResource. This is roughly how the Go project does it: they don't have a representation of the models in the GroupVersionKind in apimachinery (the project holding this logic). In fact, they use 3 projects for the basic interaction:

  • client - handles the connection and interaction with the API server
  • apimachinery - holds these simple representations and helper parts for the basic interaction with the cluster
  • models - their own models
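
To make the struct idea concrete, here is a rough sketch (the field names are illustrative, not a finished design):

struct GroupVersionKind: Hashable {
  let group: String    // e.g. "apps", or "example.com" for a CRD
  let version: String  // e.g. "v1"
  let kind: String     // e.g. "Deployment", or any custom kind

  // The "group/version" string as it appears in manifests.
  var apiVersion: String {
    group.isEmpty ? version : "\(group)/\(version)"
  }
}

Since it's a plain struct, nothing ties it to the generated models; any resource on the cluster can be described by one.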

Bear in mind, I'm still testing this; it works on the fetch but still fails in the decode part when I use a type-erased type instead of a particular type declared by me. I will post some examples of how I'm using it; I have to send the PR to the client too.

Btw, what do you think of moving those parts (the ones not attached to the models and the representation) to another project (they call it apimachinery in the Go project), or into the client itself, for a "first iteration"?

portellaa avatar Sep 27 '21 23:09 portellaa

@portellaa Hey there 👋

Well you weren't kidding when you said that you wanted to do it 😬 This is great stuff.

But one thing FYI, so you won't invest lots of time into prototyping: I believe you've noticed this comment in most of the model files:

///
/// Generated by Swiftkube:ModelGen
/// Kubernetes v1.20.9
///

That's because all of those files were actually generated by the SwiftkubeModelGen project. So let's bounce some ideas around and decide on a direction, and then just template all of this. You can imagine how tedious it would otherwise be to maintain all the K8s models.

The basic idea is to transform GroupVersionKind into a struct, instead of an enum representing all the possible cases. …

I see where you're going with this. And I really can't say what I think about it until I delve into the details. In the previous PR I mentioned that I've been prototyping and playing with a CRD implementation. Here is my idea:

  • Provide an explicit extension point for a custom resource, i.e. a protocol or a base class, for example CustomResource
  • Also provide extension points for the CRD Spec and List objects
  • These most likely have to be generic:
protocol CustomResourceSpec: KubernetesResource {}

protocol CustomResource: KubernetesAPIResource {

  associatedtype Spec: CustomResourceSpec

  // protocol requirements must be `var` with explicit accessors
  var apiVersion: String { get }
  var kind: String { get }
  var metadata: meta.v1.ObjectMeta? { get set }
  var spec: Spec? { get set }
}

Then you could define your own CustomResource like this:

struct FooSpec: CustomResourceSpec {
  var bar: String
}

struct Foo: CustomResource {
  var apiVersion: String = "example.v1"
  var kind: String = "Foo"
  var metadata: meta.v1.ObjectMeta?
  var spec: FooSpec?
}

The problem is in the List definition. Due to several constraints in the generics type system, it is sadly really complicated to provide a one-impl-fits-all. So maybe we could leave this to the library user, i.e. you would have to bring your own List impl like so:

struct FooResourceList: KubernetesResourceList {
  var apiVersion: String = "example.v1"
  var kind: String = "FooList"
  var metadata: meta.v1.ListMeta?
  var items: [Foo]
}

One would have to repeat the `apiVersion` and `kind`.

An advantage of this approach is less complexity in the client. This is also where your suggestion of GroupVersionKind.custom(...) would fit.
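
For illustration, a minimal sketch of what that .custom escape hatch could look like (the named cases are just stand-ins for the generated ones):

enum GroupVersionKind {
  // stand-ins for the existing generated cases
  case deployment
  case pod
  // escape hatch for resources the generator doesn't know about
  case custom(group: String, version: String, kind: String)
}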

The client API could look like this:

client.forCustomResources(ofType: Foo.self).list(in: .default).wait().forEach { foo in 
  print(foo.spec?.bar)
}

What do you think? I hope you have some pro/contra arguments for both ideas, because I'm not sure about mine either 😸

I will post some examples of how I'm using it; I have to send the PR to the client too.

You can just post them here first; we can discuss them in one PR and then go from there, with some bike-shedding included of course 😜

Btw, what do you think of moving those parts (the ones not attached to the models and the representation) to another project (they call it apimachinery in the Go project), or into the client itself, for a "first iteration"?

I'm not sure yet whether splitting the project any further right now would bring a direct benefit. Both the model and the client are currently relatively small and manageable. But it's something to keep an eye on going forward.

Cheers

iabudiab avatar Sep 28 '21 20:09 iabudiab

Expanding on this even further, one could use the existing protocols to define the behaviour of the CustomResource. So, for example, the Foo resource could be declared to be namespaced and to have a status, like this:

struct Foo: CustomResource, NamespacedResource, StatusHavingResource {
  var apiVersion: String = "example.v1"
  var kind: String = "Foo"
  var metadata: meta.v1.ObjectMeta?
  var spec: FooSpec?
  var status: FooStatus?
}

Which is, I don't know, cool but also weird? Maybe it's better to provide two separate protocols, NamespacedCustomResource and ClusterScopedCustomResource, that bring in the relevant marker protocols implicitly:

protocol NamespacedCustomResource: KubernetesAPIResource, NamespacedResource, 
   ReadableResource, ListableResource,
   CreatableResource, ReplaceableResource,
   DeletableResource, CollectionDeletableResource {

  associatedtype Spec: CustomResourceSpec

  var apiVersion: String { get }
  var kind: String { get }
  var metadata: meta.v1.ObjectMeta? { get set }
  var spec: Spec? { get set }
}

I'll try to isolate this idea from my prototypes and push it into separate branches in model and client, with some example usages, and I would really appreciate your input. Maybe tomorrow 😄 cause it's late now 🛌

iabudiab avatar Sep 28 '21 22:09 iabudiab

Hi @iabudiab 🤗

Thank you for the support.

Btw, perhaps you could create a discussion board where we can discuss this? 🤔 Well, not sure it's worth it.

⚠️ A few disclaimers ⚠️ The approaches I'm discussing here I have already tested and have been using in production; we use a lot of CRDs, so supporting them was a first-priority necessity. Also, I started writing this before going to sleep and continued during meetings in my work day, so bear with me 🧸 😂 As I already said, we have a lot of CustomResourceDefinitions and controllers written in Go, so I'm drawing on that "experience", and my ideas are somewhat based on it.

Well you weren't kidding when you said that you wanted to do it 😬 This is great stuff.

If it's to be done, let's do it 💪 Well, I have extra motivation 😂 we are using this in production, and as a former iOS developer I love Swift. Not just because of that: besides being the best scripting language I work with (besides Swift, I work with Go and Python during my work day), I've always loved backend stuff. Before moving to iOS I was a backend developer working with C#, PHP and JS, so when Perfect, Vapor and Kitura appeared, I decided to move back to backend and help push Swift.

Sorry for the extended introduction 🙈 Back to the motivation: I've been using Swift in production for 3 years, give or take, and in my current project we have a bunch of CustomResourceDefinitions built in Go, both the declarations and the controllers, but I would like a way to interact with them from our API, which runs Swift and Vapor... So, that's my extra motivation, extra because I do think this can make a difference on the Swift side, since nowadays everyone uses Kubernetes.

But one thing FYI, so you won't invest lots of time into prototyping: I believe you've noticed this comment in most of the model files:

///
/// Generated by Swiftkube:ModelGen
/// Kubernetes v1.20.9
///

That's because all of those files were actually generated by the SwiftkubeModelGen project. So let's bounce some ideas around and decide on a direction, and then just template all of this. You can imagine how tedious it would otherwise be to maintain all the K8s models.

Yes yes, I know, this is just to discuss and share ideas.

The basic idea is to transform GroupVersionKind into a struct, instead of an enum representing all the possible cases. …

I see where you're going with this. And I really can't say what I think about it until I delve into the details. In the previous PR I mentioned that I've been prototyping and playing with a CRD implementation. Here is my idea:

Please allow me to make a quick intervention here (spoiler alert, I love enums 😂), but I believe we should address GroupVersionKind and its definition independently from the models, so we can support any kind of resource on the cluster. Don't get me wrong, I do love enums, but I think in this case they take away some agility. Let's imagine a CLI tool client fetching some custom resource, for example Istio's EnvoyFilters: we run kubectl get envoyfilter -A, and if the resource doesn't exist it fails, but the failure comes from the API server response, not from a client that has no support for it, right? Well, at least the go-client does it that way, and I actually think it makes sense. Yes, they don't have the power of enums like Swift does 💪 😂 but in this case I really don't think it helps, even with a .custom case.

And again, using the Go SDK as an example, it could be the Resource that defines the GroupVersionKind and not the other way around. Looking at a CustomResource defined by Kyverno or Istio, for example, it's the resource itself that defines its GroupVersionKind, not a big enum holding all the cases.
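
To sketch that inversion (the derivation below is illustrative, not existing Swiftkube API, and assumes the struct-based GroupVersionKind from the top of the thread):

extension KubernetesAPIResource {
  // Derive the GroupVersionKind from the resource's own apiVersion/kind
  // fields instead of looking it up in a central enum.
  var groupVersionKind: GroupVersionKind {
    let parts = apiVersion.split(separator: "/", maxSplits: 1)
    let (group, version) = parts.count == 2
      ? (String(parts[0]), String(parts[1]))
      : ("", apiVersion)  // core-group resources carry a bare version, e.g. "v1"
    return GroupVersionKind(group: group, version: version, kind: kind)
  }
}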

  • Provide an explicit extension point for a custom resource, i.e. a protocol or a base class, for example CustomResource
  • Also provide extension points for the CRD Spec and List objects
  • These most likely have to be generic:
protocol CustomResourceSpec: KubernetesResource {}

protocol CustomResource: KubernetesAPIResource {

  associatedtype Spec: CustomResourceSpec

  var apiVersion: String { get }
  var kind: String { get }
  var metadata: meta.v1.ObjectMeta? { get set }
  var spec: Spec? { get set }
}

Then you could define your own CustomResource like this:

struct FooSpec: CustomResourceSpec {
  var bar: String
}

struct Foo: CustomResource {
  var apiVersion: String = "example.v1"
  var kind: String = "Foo"
  var metadata: meta.v1.ObjectMeta?
  var spec: FooSpec?
}

The problem is in the List definition. Due to several constraints in the generics type system, it is sadly really complicated to provide a one-impl-fits-all. So maybe we could leave this to the library user, i.e. you would have to bring your own List impl like so:

struct FooResourceList: KubernetesResourceList {
  var apiVersion: String = "example.v1"
  var kind: String = "FooList"
  var metadata: meta.v1.ListMeta?
  var items: [Foo]
}

One would have to repeat the `apiVersion` and `kind`.

I think that wouldn't be necessary; I think it's enough to use the protocols you already have. I don't see the need to explicitly represent that something is a CustomResource, because it's a KubernetesAPIResource and that is enough. At least I have this working in my cluster: I used just KubernetesAPIResource, KubernetesResource and all the others to fetch my user Profiles, and it works. Of course, I have some changes in the client that I haven't pushed yet, and even in the model. From my point of view, these new protocols would just introduce complexity that I don't think is necessary for now.

I think we could introduce an "unstructured" type, so we "support" anything beyond the standard Kubernetes API. Let's imagine we want to mimic the kubectl CLI tool, just like you did with https://github.com/swiftkube/examples/tree/main/swiftkubectl; we need to support any kind of object that is on the cluster, and if I do kubectl get potatoes -A I want to see the potatoes, even though the client doesn't have a Potato model.
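
Something in the spirit of the Go client's unstructured.Unstructured, roughly sketched (JSONValue is a hypothetical stand-in for an AnyCodable-style type that would still have to be written):

struct UnstructuredResource: KubernetesAPIResource {
  // only the identifying fields are strongly typed
  var apiVersion: String
  var kind: String
  var metadata: meta.v1.ObjectMeta?
  // everything else stays schema-less
  var content: [String: JSONValue]
}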

An advantage of this approach is less complexity in the client. This is also where your suggestion of GroupVersionKind.custom(...) would fit.

That was the first approach I developed, and I didn't like it, because you have to adapt everything to CRDs and treat them differently, which I don't think they are, at least from a consuming perspective.

The client API could look like this:

client.forCustomResources(ofType: Foo.self).list(in: .default).wait().forEach { foo in 
  print(foo.spec?.bar)
}

Well, I would use (and I do use) the same API that you already have, for example:

self.kubeClient.namespaceScoped(for: Type.self)
      .list(in: .allNamespaces)
      .map { ... }

For example, this is a method in my API:

func profiles() -> EventLoopFuture<[Profile]> {
  self.kubeClient.clusterScoped(for: Kubernetes.Profile.self)
    .list()  // fetch the list first, then map the K8s items to the API model
    .map { $0.items.map(Profile.init(from:)) }
}

My Profile looks like this:

extension Kubernetes {
  struct Profile: KubernetesAPIResource, NamespacedResource, MetadataHavingResource, ListableResource {
    typealias List = ProfileList

    var apiVersion: String = "kubeflow.org/v1"
    var kind: String = "Profile"

    var metadata: meta.v1.ObjectMeta?
    var spec: Profile.Spec
  }

  ...
}

For example, if a developer wants to decode a Pod using their own model, they can with any other official client, so I don't think we should control that in this client; we just facilitate and handle all the decoding and encoding.
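
To illustrate (MiniPod is hypothetical, not part of Swiftkube): a stripped-down Pod model a user could bring instead of the generated core.v1.Pod, decoding only the fields they care about:

struct MiniPod: KubernetesAPIResource, NamespacedResource {
  var apiVersion: String = "v1"
  var kind: String = "Pod"
  var metadata: meta.v1.ObjectMeta?
}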

So, I don't see the necessity or the advantage of having a forCustomResource API method.

I'm focusing on supporting the client functionality first, so that we mimic kubectl for any kind of component 😃

If you want, I can push a PR to the client with my changes, and even a new commit here with the "missing parts".

Cheers 🍻

portellaa avatar Sep 29 '21 14:09 portellaa

Expanding on this even further, one could use the existing protocols to define the behaviour of the CustomResource. So, for example, the Foo resource could be declared to be namespaced and to have a status, like this:

struct Foo: CustomResource, NamespacedResource, StatusHavingResource {
  var apiVersion: String = "example.v1"
  var kind: String = "Foo"
  var metadata: meta.v1.ObjectMeta?
  var spec: FooSpec?
  var status: FooStatus?
}

Which is, I don't know, cool but also weird? Maybe it's better to provide two separate protocols, NamespacedCustomResource and ClusterScopedCustomResource, that bring in the relevant marker protocols implicitly:

protocol NamespacedCustomResource: KubernetesAPIResource, NamespacedResource, 
   ReadableResource, ListableResource,
   CreatableResource, ReplaceableResource,
   DeletableResource, CollectionDeletableResource {

  associatedtype Spec: CustomResourceSpec

  var apiVersion: String { get }
  var kind: String { get }
  var metadata: meta.v1.ObjectMeta? { get set }
  var spec: Spec? { get set }
}

I'll try to isolate this idea from my prototypes and push it into separate branches in model and client, with some example usages, and I would really appreciate your input. Maybe tomorrow 😄 cause it's late now 🛌

I really think you don't need new protocols for this; CustomResources are just Resources 😄

portellaa avatar Sep 29 '21 14:09 portellaa

@portellaa Hey, I've set up the discussion boards per your suggestion and summarised the ideas 👉 there 😉

iabudiab avatar Sep 30 '21 20:09 iabudiab

@portellaa Hey, I've set up the discussion boards per your suggestion and summarised the ideas 👉 there 😉

hi @iabudiab

sorry, it has been crazy days. Hopefully I will get more time to pick this up soon; I'm really sorry for the delay 🙇 Hopefully tomorrow I will take the day off and grab this 🙏

portellaa avatar Oct 07 '21 14:10 portellaa

@portellaa There is no need to apologise whatsoever. I know exactly what you're talking about. Between work, home, family and the pandemic there is hardly enough time 🤪 And don't take a day off just for my sake 😉 I too wanted to address some other points in the discussion, but I've been postponing as well and hope to find some time soon 😅 S'All Good Man 👍

iabudiab avatar Oct 07 '21 23:10 iabudiab

Support for this landed in v0.5.x and was improved in v0.7.x.

iabudiab avatar Feb 10 '23 22:02 iabudiab