Need Clarification on Protobuf, Kafka and Prometheus interaction
Hello,
Apologies in advance, I was unable to access the Slack linked in the bug-report menu. I am trying to connect Kafka to a Prometheus instance and I am confused about how index mapping works with Protobuf.
From the guide:
SINK_PROM_METRIC_NAME_PROTO_INDEX_MAPPING
The mapping of fields and the corresponding proto index which will be set as the metric name on Cortex. This is a JSON field.
Example value: {"2":"tip_amount","1":"feedback_ratings"} — the proto field value with index 2 will be stored as a metric named tip_amount in Cortex, and so on.
Type: required
Would "tip_amount" be a record header? Can Firehose handle Kafka messages with variable-length record lists?
Thank you, Liam
Hi @liamhoganIBM,
The Prometheus sink on Firehose is not intended to connect Kafka to a Prometheus instance directly, since Prometheus is pull-based. Instead, the sink pushes events from Kafka to a time-series database, e.g. Cortex. The events from Kafka are parsed into the Prometheus exposition format,
example:
metric_name{label_name1="label_value1", label_name2="label_value2"} metric_value
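As a rough sketch of the idea (simplified — the real sink also handles labels and timestamps, and these values are made up), the index-to-name mapping applied to a decoded proto message yields one exposition line per mapped field:

```python
import json

# Assumed config value, as in the Firehose guide example.
mapping = json.loads('{"2":"tip_amount","1":"feedback_ratings"}')

# Hypothetical decoded proto message: field index -> field value.
decoded_fields = {1: 4.5, 2: 2.75}

# Render each mapped field as a minimal exposition line: "<metric_name> <value>".
lines = [f"{mapping[str(idx)]} {value}"
         for idx, value in sorted(decoded_fields.items())
         if str(idx) in mapping]
print("\n".join(lines))
# feedback_ratings 4.5
# tip_amount 2.75
```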
From the example config in the guideline section: if SINK_PROM_METRIC_NAME_PROTO_INDEX_MAPPING is set to {"2":"tip_amount","1":"feedback_ratings"}, two metric names will be stored in the TSDB:
tip_amount <tip_amount_value>
feedback_ratings <feedback_ratings_value>
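To answer the record-header question: the indexes in the mapping are protobuf field numbers from the message schema the sink consumes, not Kafka record headers. A hypothetical schema (the message and field names here are illustrative, not from the Firehose docs) would look like:

```protobuf
syntax = "proto3";

// Hypothetical message consumed from Kafka; the numbers after '='
// are the proto field indexes referenced by the mapping config.
message BookingEvent {
  float feedback_ratings = 1;  // index "1" -> metric feedback_ratings
  float tip_amount = 2;        // index "2" -> metric tip_amount
}
```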
Closing because of inactivity.