Seed and Master MLA customization documentation uses an outdated format
In the current documentation for customizing the seed cluster MLA stack
- https://github.com/kubermatic/docs/blob/d9c713a3dd5e9cac10d0ad8f7f3944206f3f3747/content/kubermatic/v2.20/tutorials_howtos/monitoring_logging_alerting/master_seed/customization/_index.en.md
the "Customer-Cluster" Prometheus (this is called "User Cluster" everywhere else) is configured through a `kubermatic.clusterNamespacePrometheus` key that is supposedly present in the seed's values YAML.
Unfortunately, that key was only available and documented up to v2.14,
- https://www.google.com/search?q=%22clusterNamespacePrometheus%22+kubermatic
but it is still mentioned in the documentation. As the keys are only offered through
- https://github.com/kubermatic/kubermatic-installer/blob/release/v2.14/values.seed.example.yaml
- https://github.com/kubermatic/kubermatic-installer/blob/release/v2.14/values.example.yaml
which appear very similar to those used in
- https://github.com/kubermatic/kubermatic/blob/master/charts/seed.example.yaml
is it possible to reuse that key, `clusterNamespacePrometheus`, in the `Seed` spec to configure the user cluster namespace Prometheuses?
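For reference, this is roughly what the old seed values snippet from the v2.14-era docs looked like; the sub-keys below (`scrapingConfigs`, `disableDefaultScrapingConfigs`, `disableDefaultRules`) are reproduced from memory of that documentation and the scrape job itself is made up, so treat this purely as a sketch of the old format:
```yaml
# Old-style Helm values for the kubermatic chart (up to v2.14).
# Sub-key names are my recollection of the old docs; the scrape job is hypothetical.
kubermatic:
  clusterNamespacePrometheus:
    disableDefaultRules: false
    disableDefaultScrapingConfigs: false
    scrapingConfigs:
      - job_name: postgres-operator
        kubernetes_sd_configs:
          - role: pod
```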
While the old installer repository is full of references to it:
- https://github.com/kubermatic/kubermatic-installer/search?q=clusterNamespacePrometheus
- https://github.com/kubermatic/kubermatic/search?q=clusterNamespacePrometheus
the new one no longer knows it, and I am unsure where to put custom scraping configs for the user cluster MLA stack, and how to make sure that they propagate to the User Cluster MLA instances, in particular Prometheus.
It appears this is nowadays done through the `monitoring` key in the `KubermaticConfiguration` manifest, but it remains undocumented:
- https://github.com/kubermatic/kubermatic/blob/c95ebc2c20fe4d9ef4a7c3f31d2c0758b152f0bb/charts/kubermatic-operator/crd/k8c.io/kubermatic.k8c.io_kubermaticconfigurations.yaml#L488-L525
- https://github.com/kubermatic/kubermatic/blob/56acaa8c339c5ef26324663ff93e7821fdb738a2/docs/zz_generated.kubermaticConfiguration.yaml#L425-L446
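Based on the CRD lines linked above, a `KubermaticConfiguration` carrying custom scrape configs would presumably look something like the sketch below; I have not verified the exact field names (`userCluster.monitoring.customScrapingConfigs` etc.) against a running setup, and the scrape job is made up:
```yaml
apiVersion: kubermatic.k8c.io/v1
kind: KubermaticConfiguration
metadata:
  name: kubermatic
  namespace: kubermatic
spec:
  userCluster:
    monitoring:
      # keep the defaults and append our own scrape config (field names unverified)
      disableDefaultRules: false
      disableDefaultScrapingConfigs: false
      customScrapingConfigs: |
        - job_name: postgres-operator   # hypothetical extra job
          kubernetes_sd_configs:
            - role: pod
```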
It appears this is partly related to
- https://github.com/kubermatic/kubermatic/pull/8923
- https://github.com/kubermatic/kubermatic/blob/master/docs/proposals/prometheus-operator.md
Another way to bring in scrape configs, as the kube-state-metrics and node-exporter plugins do, seems to be to provide `prometheus-scraping-*` ConfigMaps in the `mla-system` namespace, for example:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-scraping-postgres-operator
  namespace: mla-system
data:
  custom-scrape-configs.yaml: |
    ...
```
where we can add the same kind of YAML-formatted scrape config string as in the manifest.
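For illustration, assuming the ConfigMap value is interpreted as a plain list of Prometheus scrape jobs (the same shape as in the manifest above), the elided content could look like this hypothetical job (name, target and port are invented):
```yaml
# hypothetical content of custom-scrape-configs.yaml
- job_name: postgres-operator
  scrape_interval: 30s
  static_configs:
    - targets:
        - postgres-operator.default.svc.cluster.local:8080
```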
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kubermatic-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. /close
/reopen /remove-lifecycle rotten
@embik: Reopened this issue.
In response to this:
/reopen /remove-lifecycle rotten
/remove-lifecycle stale
@wurbanski can you maybe take a look at this if you have time and see what we need to fix and if it's still true?
@embik this is the first time I see this 👀 will take a look