Prasad Desala
> @PrasadDesala I will need more information than just volume status as there could be numerous reasons as to why the Port shows as 0. One of them could be...
> @PrasadDesala So is the expectation that the peer list just displays the list of peers (UUID & Name) and nothing else? Either: 1) Yes, so that we have...
Attaching glusterd2 dump, glusterd2 logs and glusterfs process state dump. [kube3-glusterd2.log.gz](https://github.com/gluster/glusterd2/files/2732902/kube3-glusterd2.log.gz) [kube2-glusterd2.log.gz](https://github.com/gluster/glusterd2/files/2732903/kube2-glusterd2.log.gz) [kube1-glusterd2.log.gz](https://github.com/gluster/glusterd2/files/2732904/kube1-glusterd2.log.gz) [glusterdump.1150.dump.1546865584.gz](https://github.com/gluster/glusterd2/files/2732905/glusterdump.1150.dump.1546865584.gz) [statedump_kube-1.txt](https://github.com/gluster/glusterd2/files/2732911/statedump_kube-1.txt)
> @PrasadDesala I am assuming you meant glustershd is consuming high memory? Also did you enable brick multiplexing in the setup? I think it is glustershd but I am not...
This issue is still seen on the latest nightly build. The glustershd process memory increased from 8616 to 6.2g. PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND...
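To make the growth easier to track between nightly builds, a small watcher along these lines could log glustershd's resident memory over time. This is only a sketch (Go, reading /proc/&lt;pid&gt;/status); the program name and the 30-second interval are arbitrary choices, not anything shipped with glusterd2.

```go
// Minimal sketch: poll a process's VmRSS from /proc/<pid>/status so that
// memory growth like the glustershd increase above can be logged over time.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// vmRSSkB returns the VmRSS value (in kB) from /proc/<pid>/status.
func vmRSSkB(pid int) (int64, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/status", pid))
	if err != nil {
		return 0, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "VmRSS:") {
			fields := strings.Fields(line) // e.g. ["VmRSS:", "8616", "kB"]
			return strconv.ParseInt(fields[1], 10, 64)
		}
	}
	return 0, fmt.Errorf("VmRSS not found for pid %d", pid)
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: rsswatch <pid>")
		os.Exit(1)
	}
	pid, err := strconv.Atoi(os.Args[1]) // PID of glustershd on this node
	if err != nil {
		fmt.Fprintln(os.Stderr, "invalid pid:", os.Args[1])
		os.Exit(1)
	}
	for {
		kb, err := vmRSSkB(pid)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Printf("%s VmRSS=%d kB\n", time.Now().Format(time.RFC3339), kb)
		time.Sleep(30 * time.Second) // arbitrary polling interval
	}
}
```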
@atinmu This issue is closed and I don't have the permissions to reopen it. If you have access, can you please reopen it?
One such output:

[root@gluster-kube1-0 /]# glustercli volume status pvc-facc0dcc-1d70-11e9-9b03-5254006fcc4e
Volume : pvc-facc0dcc-1d70-11e9-9b03-5254006fcc4e
+--------------------------------------+-------------------------------+-----------------------------------------------------------------------------------------+--------+-------+-------+
| BRICK ID | HOST | PATH | ONLINE | PORT | PID |
+--------------------------------------+-------------------------------+-----------------------------------------------------------------------------------------+--------+-------+-------+
| cec910f3-850c-449a-9937-d9d14f3253b5...
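Since many PVC volumes are affected, a quick filter like the sketch below could scan that table and print only the bricks whose PORT shows as 0. It assumes the column order seen above (BRICK ID, HOST, PATH, ONLINE, PORT, PID) and is not part of glustercli.

```go
// Minimal sketch: read `glustercli volume status <volname>` output on stdin
// and flag rows whose PORT column is 0. Assumes the pipe-delimited table
// layout shown above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		line := scanner.Text()
		if !strings.HasPrefix(line, "|") {
			continue // skip "Volume :" lines and +---+ separators
		}
		cols := strings.Split(line, "|")
		if len(cols) < 8 || strings.TrimSpace(cols[1]) == "BRICK ID" {
			continue // skip the header row and anything unexpected
		}
		if strings.TrimSpace(cols[5]) == "0" {
			fmt.Printf("brick %s on host %s reports port 0\n",
				strings.TrimSpace(cols[1]), strings.TrimSpace(cols[2]))
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

Something like `glustercli volume status <volname> | ./portcheck` (binary name hypothetical) would then list only the problem bricks.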
[kube3_glusterd2.log](https://github.com/gluster/glusterd2/files/2710365/kube3_glusterd2.log) [kube2_glusterd2.log](https://github.com/gluster/glusterd2/files/2710366/kube2_glusterd2.log) [kube1_glusterd2.log](https://github.com/gluster/glusterd2/files/2710367/kube1_glusterd2.log)
> @aravindavk This might be possible because of https://github.com/gluster/glusterd2/blob/master/glusterd2/commands/peers/addpeer.go#L58, this node might be having multiple addresses. > > @PrasadDesala Can you print the output of `glustercli peer status` before adding...
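As a rough illustration of the multiple-addresses hypothesis (this is not glusterd2's actual code from addpeer.go), the sketch below lists every address a node can be identified by; if the peer entry was registered under one of them and later matched against another, the kind of mismatch suspected above could result.

```go
// Illustration only: enumerate the hostname and interface addresses this node
// answers to. A peer recorded under one identity but looked up under another
// is the mismatch hinted at around addpeer.go#L58.
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	hostname, _ := os.Hostname()
	fmt.Println("hostname:", hostname)

	addrs, err := net.InterfaceAddrs()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, a := range addrs {
		// Each entry is typically a *net.IPNet; print only the IP part.
		if ipnet, ok := a.(*net.IPNet); ok {
			fmt.Println("address:", ipnet.IP.String())
		}
	}
}
```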
@rishubhjain As discussed, I have reproduced this issue again. Once we end up in this situation, we will not be able to remove that stale peer. Peer remove is failing...
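For inspecting the stale entry directly, one option is to query glusterd2's ReST API instead of going through glustercli. The sketch below assumes the default client port 24007 and the /v1/peers route; adjust both if the deployment differs.

```go
// Minimal sketch: dump the raw peer list from glusterd2's ReST API so the
// stale peer's recorded UUID and addresses can be inspected. The port (24007)
// and route (/v1/peers) are assumptions; check the glusterd2 configuration.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:24007/v1/peers")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // raw JSON peer list, including the stale entry
}
```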