Snoby

Results: 26 comments of Snoby

@Dileep-Dora - did you come to a resolution on this?

I am seeing the same thing:

```
foo@es-data1:/etc/systemd/system$ docker logs -f es_exporter
level=info ts=2021-05-04T20:19:57.393101301Z caller=clusterinfo.go:200 msg="triggering initial cluster info call"
level=info ts=2021-05-04T20:19:57.393208779Z caller=clusterinfo.go:169 msg="providing consumers with updated cluster info label"
```
...

I would also mention that you need to export `no_proxy='*'` as found on this [webpage](https://www.whatan00b.com/posts/debugging-a-segfault-from-ansible/). Otherwise you will get messages like `ERROR! A worker was found in a dead state`....
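A minimal sketch of that workaround: set `no_proxy` in the shell before launching ansible so the forked worker processes bypass the proxy (the variable name and value come from the linked post; the surrounding echo is just illustration).

```shell
# Disable proxying for all hosts before running ansible; without this,
# forked worker processes can segfault and ansible reports
# "ERROR! A worker was found in a dead state".
export no_proxy='*'
echo "no_proxy is set to: $no_proxy"
```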

I tried this functionality, and my kubernetes deployment looks like this:

```
Args:
  --producer=kubernetes
  --kubernetes-format={{.Namespace}}-{{.Name}}c.tropo.com
  --consumer=aws
  --kubernetes-filter external-dns.alpha.kubernetes.io/controller=mate
  --aws-record-group-id=mate-managed
```

However, the container immediately goes into a crash back-off...

I just double-checked; the Args look EXACTLY like they do in my original post.

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mate
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
```
...

AH HA! That's what it was. I was hoping that this could help my rate-limiting problem, but it doesn't seem to. I continually get logs that show mate is querying...

I'm running into the same issue.

That would be great. The hashrate of each thread is not necessary. In Hive OS, a process calls a wrapper-layer bash script that needs to return data in JSON...
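A minimal sketch of such a wrapper, assuming it only needs the aggregate hashrate rather than per-thread figures (the field names `total_hs` and `hs` and the sample numbers are assumptions for illustration, not the documented Hive OS schema):

```shell
#!/usr/bin/env bash
# Hedged sketch: aggregate per-thread hashrates (H/s) into one JSON
# document for a monitoring layer. Field names are illustrative only.
hs=(1050 1043 1061)   # example per-thread hashrates; a real wrapper
                      # would parse these from the miner's API/log
total=0
for h in "${hs[@]}"; do
  total=$((total + h))
done
# Emit a single JSON object with the total and the raw per-thread list.
printf '{"total_hs": %d, "hs": [%s]}\n' "$total" "$(IFS=,; echo "${hs[*]}")"
```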

@xmrig This tends to happen on a vardiff update (the snippet below is from the pool stratum):

```
{
  "time": "2023-01-04 17:37:18.2125",
  "level": "INFO",
  "logger": "bitoreum_pool",
  "message": "[0HMNEBNQH8I9E] VarDiff...
```