
Kubernetes Service as Sink only works with Port 80

Open embano1 opened this issue 4 years ago • 23 comments

Describe the bug
When using v1/Service (kind: Service) as a sink, i.e. a plain Kubernetes Service instead of a KService, all eventing components using that sink will only work if the Service is exposed on port 80 (the targetPort can be different).

According to @n3wscott this is due to the specific Kubernetes Service resolver defaulting to port 80, ignoring the specified port on that Service instance.

Expected behavior
Use the port defined on the Kubernetes Service instead of assuming 80.

To Reproduce
Create a Broker and a Trigger sending events to a sink defined as a plain Kubernetes Service. The Kubernetes Service should be exposed on a port != 80, e.g. 8080. Again, the targetPort (pod) does not matter.
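For illustration, a minimal reproduction might look like the following sketch (resource names and the default Broker are placeholders; the Service is exposed on 8080 instead of 80):

```yaml
# A plain Kubernetes Service exposed on port 8080 (not 80).
apiVersion: v1
kind: Service
metadata:
  name: event-display          # placeholder name
spec:
  selector:
    app: event-display
  ports:
    - port: 8080               # Service port != 80 triggers the bug
      targetPort: 8080         # pod port; its value does not matter here
---
# A Trigger using the plain Service (not a Knative Service) as its subscriber.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: event-display-trigger  # placeholder name
spec:
  broker: default
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: event-display
```

With this setup, delivery fails because the resolved subscriber URI points at port 80 rather than 8080.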

Knative release version
Latest and master (all?).

Additional context
From a discussion with @n3wscott:

refs resolve to port 80, you could use non-:80 if you use an abs url

It would be nice to get the correct behavior out of the box, as I wasted an hour troubleshooting why a Trigger would not deliver to my Kubernetes Service exposed on 8080 :).
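For reference, the workaround mentioned in the quote above (an absolute URL instead of a ref) might look roughly like this sketch (service name and namespace are placeholders):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: event-display-trigger   # placeholder name
spec:
  broker: default
  subscriber:
    # An absolute URI with an explicit port bypasses the ref resolver's :80 default.
    uri: http://event-display.default.svc.cluster.local:8080
```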

embano1 avatar Mar 25 '21 08:03 embano1

@embano1 I'm trying to understand how it should work: what happens if the Service specifies multiple ports? Which one should be picked?

slinkydeveloper avatar Mar 25 '21 08:03 slinkydeveloper

The rules on which port to pick are interesting. Assuming the svc is always on :80 also seems wrong. Let's collect several examples of services that have interesting config.

n3wscott avatar Mar 25 '21 17:03 n3wscott

How about letting users specify the port themselves in sink.ref?
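To make the proposal concrete, such an API could look roughly like the sketch below; note that a port field on the ref does not exist today and is purely hypothetical:

```yaml
spec:
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: event-display   # placeholder name
      # Hypothetical field, not part of the current Destination/KReference API:
      port: 8080
```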

zhaojizhuang avatar Mar 26 '21 17:03 zhaojizhuang

@slinkydeveloper we could establish a convention to use named ports, e.g. "http", in such cases.

@zhaojizhuang we have all the details so it should be possible without additional configuration (complexity).

embano1 avatar Mar 26 '21 19:03 embano1

@embano1 is there any chance you'd be interested in putting a PR together for this?

lberk avatar Mar 29 '21 16:03 lberk

@lberk sure, definitely happy to create a PR. Since time is always limited, from your code experience do you think this is going to be XXL-sized or smaller? I've been mostly on the eventing side, so I might need some time to go through the code.

embano1 avatar Mar 30 '21 20:03 embano1

This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.

github-actions[bot] avatar Jun 29 '21 01:06 github-actions[bot]

Is this still of interest? Would using named ports be an implementation the team is fine with?

embano1 avatar Jun 29 '21 06:06 embano1

How about letting users specify the port themselves in sink.ref?

It would not be a great choice, because the source of truth for which port to use should be the duck type status, not the ref.
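For context, addressable resources carry the resolved endpoint in their status (the Addressable duck type), which is why the ref itself is not meant to carry port information. Roughly, for example on a Broker:

```yaml
# Addressable duck type: the resolver reads the URL from the resource's status.
status:
  address:
    url: http://broker-ingress.knative-eventing.svc.cluster.local/default/default
```

A plain v1.Service has no such status, which is why the resolver special-cases it and currently assumes port 80.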

n3wscott avatar Jul 01 '21 22:07 n3wscott

This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.

github-actions[bot] avatar Sep 30 '21 01:09 github-actions[bot]

/remove-lifecycle stale
/assign

tzununbekov avatar Oct 21 '21 14:10 tzununbekov

I've played a bit with different port-picking rules and got the feeling that the best approach here is to simply pick the first port from the list: easy to remember, understand, and document. All other solutions will eventually just bring more confusion without any significant benefit. Wdyt, @n3wscott, @embano1?

tzununbekov avatar Oct 22 '21 15:10 tzununbekov

IMHO these scenarios can happen on K8s svc:

  1. One port
  2. Two or more ports

(1) is an easy one. (2) could be decided based on a port-naming convention (required for multi-port Services IMHO), e.g. pick "http"; if that doesn't exist, fall back to the current behavior (80)?
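As a sketch of that named-port convention, a multi-port Service where the resolver would pick the port named "http" might look like this (names are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: event-display        # placeholder name
spec:
  selector:
    app: event-display
  ports:
    - name: http             # the convention would pick this port for the sink
      port: 8080
      targetPort: 8080
    - name: metrics          # ignored by the sink resolver
      port: 9090
      targetPort: 9090
```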

Btw: do we need to take different Service types into account, e.g. only ClusterIP, or is this already handled by the current serving logic?

embano1 avatar Oct 22 '21 17:10 embano1

My concern is that the name-based approach imposes requirements on other people's infrastructure that won't fit many realistic cases, such as:

  • I have both http and https ports exposed in the Service and I want the sink to be resolved to the https port (or not). Hardcoding http priority won't allow me this.
  • I have my http service exposed under the nginx port name. Some other system also has a convention/configuration to use the nginx named port as a target.

I was thinking about logic that would suit these cases and came to the conclusion that the simplest approach will work best here: let the user arrange the ports so that the first one acts as the sink. But I'm OK with adding a hardcoded http name priority if the team thinks that it will work.

if that doesn't exist, fall back to the current behavior (80)

I'd say that in this case we should default to the first TCP port in the Service. Falling back to 80 even when it does not exist in the Service won't do much good.

do we need to take different Service types into account, e.g. only ClusterIP, or is this already handled by the current serving logic?

I think the current logic works fine with all types of Services.

tzununbekov avatar Oct 23 '21 07:10 tzununbekov

The first-port-occurrence option does not sound right to me:

  • Port order is not expected to play a role (methinks).
  • It is not explicit and is hard for a newcomer to figure out intuitively.
  • It is not backwards compatible with how it works today.

For these reasons I would suggest opening a new issue to discuss modifying the Destination duck type.

While that is being worked on, the current PR could:

  • keep the current behavior by default (use port 80 if that port is exposed).
  • if port 80 is not exposed and there is only one port, use that one. (In fact I don't like this one very much; HTTPS might require transport configuration that we are not configuring here, but using the only exposed port could help for 8080, 8000, 9090, what have you.)
  • use an annotation on Services, eventing.knative.dev/destination.port; when present, we make sure that the port is exposed on the Service and use it.

WDYT?
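For illustration, a Service carrying the proposed annotation might look like the sketch below; the annotation is a proposal from this thread, not an implemented API, and the names are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: event-display                             # placeholder name
  annotations:
    # Proposed (not implemented): the port the sink resolver should use.
    eventing.knative.dev/destination.port: "8080"
spec:
  selector:
    app: event-display
  ports:
    - port: 8080
      targetPort: 8080
    - port: 9090
      targetPort: 9090
```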

odacremolbap avatar Oct 24 '21 22:10 odacremolbap

The first-port-occurrence option does not sound right to me:

  • Port order is not expected to play a role (methinks).
  • It is not explicit and is hard for a newcomer to figure out intuitively.
  • It is not backwards compatible with how it works today.

The lack of explicitness is the same thing I don't like about hardcoding http port-name priority. I think that port names, as well as their order, should not affect routing logic unless it is explicitly stated somewhere.

For these reasons I would suggest opening a new issue to discuss modifying the Destination duck type.

Are there any other kinds of destinations where we can have multiple ports, or ports that we cannot configure for our needs? If it's solely about v1.Service, I would hold off on updating such a common duck type.

While that is being worked on, the current PR could:

  • keep the current behavior by default (use port 80 if that port is exposed).
  • if port 80 is not exposed and there is only one port, use that one. (In fact I don't like this one very much; HTTPS might require transport configuration that we are not configuring here, but using the only exposed port could help for 8080, 8000, 9090, what have you.)
  • use an annotation on Services, eventing.knative.dev/destination.port; when present, we make sure that the port is exposed on the Service and use it.

I actually like the idea of using a Service port annotation. What would the annotation point to: the port number to use as a target, or the name of a port from the list? Having said that, I'd go with the following logic:

  1. use the port explicitly defined in the Service annotation,
  2. else, check if port 80 is exposed and, if it is, use it for backward compatibility,
  3. if none of the above is true, use the first port from the list.

tzununbekov avatar Oct 25 '21 07:10 tzununbekov

I actually wanted to propose the same ideas (don't depend on order, use an annotation) shared by @odacremolbap :) But I also agree that changing the duck type might not be feasible (or needed?).

Annotations are usually a good middle-ground for flexibility and prototyping new APIs. So +1 on this.

Having said that, I'd go with the following logic:

  1. use the port explicitly defined in the Service annotation,
  2. else, check if port 80 is exposed and, if it is, use it for backward compatibility,
  3. if none of the above is true, use the first port from the list.

I like that order!

embano1 avatar Oct 25 '21 07:10 embano1

I would leave option 3 out: if there are multiple ports, none of them being 80, and the user didn't declare which one to use, just fail resolution instead of providing a non-explicit safety net that might lead to hard-to-track issues.

odacremolbap avatar Oct 26 '21 08:10 odacremolbap

FWIW +1 for adding the annotation on the service.

Currently the addressable resolver treats Service as a special case. Thanks to https://github.com/knative/eventing/issues/5593, it does not have to be done that way. Adding something like this to config-kreference-mapping should do the trick:

v1.Service: https://{{ .Name }}.{{ .Namespace }}.{{ .Domain }}/{{ .Path }}{{ .If {index .Annotations "eventing.knative.dev/destination.port"} {index .Annotations "eventing.knative.dev/destination.port"} {.End} }}

.Path, .Domain and .Annotations are not supported yet, but that's easy to add.
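As a sketch, the proposed entry embedded in the config-kreference-mapping ConfigMap might look roughly like this; the key and the URI template are taken verbatim from the snippet above, and, as noted, .Path, .Domain and .Annotations support would still need to be added:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-kreference-mapping
  namespace: knative-eventing
data:
  # Proposed mapping (sketch): resolve plain v1 Services via a URI template that
  # can read the destination port from an annotation.
  v1.Service: |
    https://{{ .Name }}.{{ .Namespace }}.{{ .Domain }}/{{ .Path }}{{ .If {index .Annotations "eventing.knative.dev/destination.port"} {index .Annotations "eventing.knative.dev/destination.port"} {.End} }}
```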

lionelvillard avatar Oct 26 '21 13:10 lionelvillard

This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.

github-actions[bot] avatar Jan 25 '22 01:01 github-actions[bot]

/remove-lifecycle stale

tzununbekov avatar Jan 25 '22 05:01 tzununbekov

This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.

github-actions[bot] avatar Apr 26 '22 01:04 github-actions[bot]

/remove-lifecycle stale
/triage accepted

pierDipi avatar Apr 26 '22 07:04 pierDipi