Timofey Kirillov
```
11:16:50 │ │ php-ml-msql        1/1    0    1
11:16:50 │ │ │   POD            READY  RESTARTS  STATUS    ---
11:16:50 │ │ └── ml-msql-0      0/1    0         CreateContainerConf    Waiting for: ready 0->1
11:16:50...
```
Colorize pod logs or change the log-block appearance to highlight the logging block and separate it from the status-progress block and other info.
Simply deleting the namespace and then waiting until a GET request returns "not exists" does not work. Maybe there is an option for the GET request to show terminating resources. https://github.com/werf/kubedog/blob/master/pkg/trackers/elimination/elimination.go#L139
Add a resource pods watching mode into the multitracker (https://github.com/flant/kubedog/blob/master/pkg/trackers/rollout/multitrack/multitrack.go#L68) to show only a single pod's logs when multiple replicas have been specified.
This is a trace-level message, so it should be suppressed in kubedog. Related to an older issue: https://github.com/flant/kubedog/issues/134.
Kubedog shows logs until one of: ControllerIsDone | PodIsDone (default) | EndOfDeployProcess. For now it is always PodIsDone; add more modes.
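The proposed modes could be sketched as a setting plus a single decision function. `StopLogsMode` and `shouldStopLogs` are hypothetical names, not existing kubedog API; the real trackers carry richer state:

```go
package main

import "fmt"

// StopLogsMode selects when to stop streaming a pod's logs.
type StopLogsMode int

const (
	PodIsDone          StopLogsMode = iota // default: stop when the pod terminates
	ControllerIsDone                       // stop when the owning controller reports done
	EndOfDeployProcess                     // keep streaming until the whole deploy ends
)

// shouldStopLogs decides whether to stop streaming given the current state flags.
func shouldStopLogs(mode StopLogsMode, podDone, controllerDone, deployDone bool) bool {
	switch mode {
	case ControllerIsDone:
		return controllerDone
	case EndOfDeployProcess:
		return deployDone
	default: // PodIsDone
		return podDone
	}
}

func main() {
	fmt.Println(shouldStopLogs(PodIsDone, true, false, false))         // true
	fmt.Println(shouldStopLogs(ControllerIsDone, true, false, false))  // false
	fmt.Println(shouldStopLogs(EndOfDeployProcess, true, true, false)) // false
}
```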
The final status table should resemble the status-progress table and the helm final table, with all resources and their pods listed. Do not print child resources (too verbose).
When a Pod's status changes, for example, from error to successful, the status-progress table should not keep printing the error message.
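One way to implement this is to clear the stored error once the pod recovers, so later table renders no longer see it. A minimal sketch with hypothetical names (`podStatus`, `update`); the real multitracker keeps more fields:

```go
package main

import "fmt"

// podStatus tracks the last reported state of a pod for the status-progress table.
type podStatus struct {
	Ready   bool
	LastErr string
}

// update records a new observation. Once the pod becomes ready,
// the previous error is cleared so the table stops printing it.
func (s *podStatus) update(ready bool, errMsg string) {
	s.Ready = ready
	if ready {
		s.LastErr = "" // recovered: drop the stale error message
	} else if errMsg != "" {
		s.LastErr = errMsg
	}
}

func main() {
	var s podStatus
	s.update(false, "CreateContainerConfigError")
	fmt.Println(s.LastErr) // CreateContainerConfigError
	s.update(true, "")
	fmt.Println(s.LastErr == "") // true
}
```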
When a Pod is in the ImagePullBackoff state and the pull retry period is too long to be awaited right now, during the deploy process, werf should drop this old Pod...
Long output line length:
```
$ kubectl -n myns logs -f migrate-job-pod | grep BADLINE | wc -c
1855361
```
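A guard against such multi-megabyte lines could truncate each log line before display. This is a sketch only; kubedog has no such option today, and `maxLogLine`/`truncateLine` are assumed names:

```go
package main

import "fmt"

// maxLogLine is an assumed per-line cap for displayed log output.
const maxLogLine = 4096

// truncateLine cuts a log line to max bytes and appends a marker
// reporting how many bytes were dropped.
func truncateLine(line string, max int) string {
	if len(line) <= max {
		return line
	}
	return fmt.Sprintf("%s ...[truncated %d bytes]", line[:max], len(line)-max)
}

func main() {
	fmt.Println(truncateLine("short line", maxLogLine))
	fmt.Println(truncateLine("aaaaaaaaaa", 4)) // aaaa ...[truncated 6 bytes]
}
```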