
Exposing vespa-internal service via LB

Open fabianotex opened this issue 1 year ago • 2 comments

I'm implementing the Vespa multinode-HA example in an on-prem Kubernetes cluster. Is there a way to properly expose vespa-internal using a LB? My data science team would prefer to access http://myserversxyz:19071 directly instead of having to run "kubectl port-forward pod/vespa-configserver-0 19071" every time they deploy their application package.

The Vespa multinode-HA example configures a headless service with clusterIP: None. I've tried changing that to a LoadBalancer, but my LB does not seem to like it: it keeps marking the service up and down at the LB level. The vespa-configserver pods themselves look fine.
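For reference, one common pattern is to leave the headless service untouched (the StatefulSet needs it for stable per-pod DNS) and add a second Service of type LoadBalancer next to it. A minimal sketch; the service name and the selector label below are placeholders, not names from the multinode-HA example, so match them to the labels your configserver pods actually carry:

```yaml
# Sketch only: a second Service alongside the existing headless one,
# exposing the config server deploy port 19071 through the LB.
apiVersion: v1
kind: Service
metadata:
  name: vespa-configserver-lb   # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: vespa-configserver     # placeholder label; must match your pods
  ports:
    - name: configserver
      port: 19071
      targetPort: 19071
```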

Deploying config/configmap.yml, config/headless.yml (modified to use a LB) and config/configserver.yml works well: I can curl http://myservicename:19071/state/v1/health without any problem. The problem starts after deploying config/admin.yml. The LB shows the service nodes/pool going down and up over and over, so when I query http://myservicename:19071/state/v1/health, it sometimes responds with 200 and sometimes with connection refused.
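A quick way to make the flapping visible from the command line is to poll the health endpoint in a loop; "myservicename" is the placeholder hostname from the report above:

```shell
# Sketch only: print one HTTP status per second for 30 seconds.
# curl's %{http_code} prints 000 when the connection is refused,
# so alternating 200/000 lines confirm the flapping.
for i in $(seq 1 30); do
  curl -s -o /dev/null -w '%{http_code}\n' --max-time 2 \
      http://myservicename:19071/state/v1/health
  sleep 1
done
```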

Any idea?

Thanks, FT

fabianotex avatar Aug 23 '24 15:08 fabianotex

Hi, and sorry for the slow response! You are describing a situation where the configserver pods are fine until you deploy the rest of the pods (or at least the admin pod). As these are different pods, it looks like a resource shortage problem?

Can you try reproducing by running the steps in https://github.com/vespa-engine/sample-apps/tree/master/examples/operations/multinode-HA/gke ?

kkraune avatar Sep 26 '24 07:09 kkraune

The logs on the configserver pods should indicate if there are problems causing them to go up and down.
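To check those logs and look for restarts, something like the following should work; the pod name is taken from the report above, so adjust it to your deployment:

```shell
# Sketch only: inspect a config server pod for restarts and errors.

# Restart count and recent events (OOMKilled, failed probes, scheduling
# problems, ...) show up in the describe output.
kubectl describe pod vespa-configserver-0

# Tail the current container's logs, and also fetch the previous
# container instance's logs if the pod was restarted.
kubectl logs vespa-configserver-0 --tail=200
kubectl logs vespa-configserver-0 --previous
```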

kkraune avatar Sep 26 '24 07:09 kkraune