
Installation issues: errors during installation

Open googs1025 opened this issue 2 years ago • 1 comment

I installed following the steps in the official documentation: https://deepflow.io/docs/zh/ce-install/all-in-one/

VM: 4C/8G, CentOS 7.x; Kubernetes version: v1.22.x

When I inspect the resources with kubectl, everything appears to be running, but I noticed that some pods keep restarting. Which configuration did I fail to set correctly?


[root@VM-0-16-centos ~]# kubectl get all -ndeepflow
NAME                                    READY   STATUS    RESTARTS      AGE
pod/deepflow-agent-dxmd5                1/1     Running   0             3h14m
pod/deepflow-app-856b99fbf9-ldprq       1/1     Running   3 (21m ago)   3h14m
pod/deepflow-clickhouse-0               1/1     Running   3 (22m ago)   3h14m
pod/deepflow-grafana-5c9c9b4485-t67pn   1/1     Running   2 (49m ago)   3h14m
pod/deepflow-mysql-566bf78d95-lrtg7     1/1     Running   3 (22m ago)   3h14m
pod/deepflow-server-6695c6b5c5-4rftb    1/1     Running   6 (22m ago)   3h14m

NAME                                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                                                                           AGE
service/deepflow-agent                 ClusterIP   10.101.66.144    <none>        80/TCP                                                                                                                            3h14m
service/deepflow-app                   ClusterIP   10.99.215.133    <none>        20418/TCP                                                                                                                         3h14m
service/deepflow-clickhouse            ClusterIP   10.109.19.243    <none>        8123/TCP,9000/TCP,9009/TCP                                                                                                        3h14m
service/deepflow-clickhouse-headless   ClusterIP   None             <none>        8123/TCP,9000/TCP,9009/TCP                                                                                                        3h14m
service/deepflow-grafana               NodePort    10.107.105.78    <none>        80:30848/TCP                                                                                                                      3h14m
service/deepflow-mysql                 ClusterIP   10.102.177.111   <none>        30130/TCP                                                                                                                         3h14m
service/deepflow-server                NodePort    10.98.104.124    <none>        20416:31698/TCP,20419:30048/TCP,20417:30417/TCP,20035:30077/TCP,30035:30035/TCP,20135:32620/TCP,20033:30584/TCP,30033:30033/TCP   3h14m

NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/deepflow-agent   1         1         1       1            1           <none>          3h14m

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/deepflow-app       1/1     1            1           3h14m
deployment.apps/deepflow-grafana   1/1     1            1           3h14m
deployment.apps/deepflow-mysql     1/1     1            1           3h14m
deployment.apps/deepflow-server    1/1     1            1           3h14m

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/deepflow-app-856b99fbf9       1         1         1       3h14m
replicaset.apps/deepflow-grafana-5c9c9b4485   1         1         1       3h14m
replicaset.apps/deepflow-mysql-566bf78d95     1         1         1       3h14m
replicaset.apps/deepflow-server-6695c6b5c5    1         1         1       3h14m
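The RESTARTS column above shows several pods (server, clickhouse, mysql, app) restarting around the same time. A common way to find the cause is to check the previous container's exit state and logs; a minimal sketch, using pod names from the output above:

```shell
# Exit reason of the last restart (OOMKilled, Error, ...) recorded in pod status
kubectl -n deepflow describe pod deepflow-server-6695c6b5c5-4rftb | grep -A 5 "Last State"

# Logs of the previous (crashed) container instance
kubectl -n deepflow logs -p deepflow-server-6695c6b5c5-4rftb

# Recent cluster events, which often show kills, probe failures, evictions
kubectl -n deepflow get events --sort-by=.lastTimestamp | tail -n 20
```

On a 4C/8G node, `OOMKilled` in Last State would suggest the all-in-one stack is hitting the memory limit.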

Checking the logs shows:


[root@VM-0-16-centos ~]# kubectl logs pod/deepflow-grafana-5c9c9b4485-t67pn -ndeepflow
logger=settings t=2023-12-29T19:59:14.800829006+08:00 level=info msg="Starting Grafana" version=10.1.5 commit=849c612fcb branch=HEAD compiled=2023-10-12T00:34:00+08:00
logger=settings t=2023-12-29T19:59:14.804448296+08:00 level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
logger=settings t=2023-12-29T19:59:14.804471302+08:00 level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
logger=settings t=2023-12-29T19:59:14.804477281+08:00 level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana/"
logger=settings t=2023-12-29T19:59:14.804484126+08:00 level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
logger=settings t=2023-12-29T19:59:14.804490634+08:00 level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
logger=settings t=2023-12-29T19:59:14.804496839+08:00 level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
logger=settings t=2023-12-29T19:59:14.804503023+08:00 level=info msg="Config overridden from command line" arg="default.log.mode=console"
logger=settings t=2023-12-29T19:59:14.804509218+08:00 level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana/"
logger=settings t=2023-12-29T19:59:14.804515567+08:00 level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
logger=settings t=2023-12-29T19:59:14.804520834+08:00 level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
logger=settings t=2023-12-29T19:59:14.804526445+08:00 level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
logger=settings t=2023-12-29T19:59:14.804532471+08:00 level=info msg="Config overridden from Environment variable" var="GF_SECURITY_ADMIN_USER=admin"
logger=settings t=2023-12-29T19:59:14.804538545+08:00 level=info msg="Config overridden from Environment variable" var="GF_SECURITY_ADMIN_PASSWORD=*********"
logger=settings t=2023-12-29T19:59:14.804544315+08:00 level=info msg=Target target=[all]
logger=settings t=2023-12-29T19:59:14.804561891+08:00 level=info msg="Path Home" path=/usr/share/grafana
logger=settings t=2023-12-29T19:59:14.804568005+08:00 level=info msg="Path Data" path=/var/lib/grafana/
logger=settings t=2023-12-29T19:59:14.804573635+08:00 level=info msg="Path Logs" path=/var/log/grafana
logger=settings t=2023-12-29T19:59:14.804579662+08:00 level=info msg="Path Plugins" path=/var/lib/grafana/plugins
logger=settings t=2023-12-29T19:59:14.804586551+08:00 level=info msg="Path Provisioning" path=/etc/grafana/provisioning
logger=settings t=2023-12-29T19:59:14.804593282+08:00 level=info msg="App mode production"
logger=sqlstore t=2023-12-29T19:59:14.805044326+08:00 level=info msg="Connecting to DB" dbtype=mysql
logger=migrator t=2023-12-29T19:59:14.862059212+08:00 level=info msg="Starting DB migrations"
logger=migrator t=2023-12-29T19:59:14.87213731+08:00 level=info msg="migrations completed" performed=0 skipped=494 duration=1.00313ms
logger=secrets t=2023-12-29T19:59:14.937499549+08:00 level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
logger=local.finder t=2023-12-29T19:59:15.173777963+08:00 level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
logger=plugin.signature.validator t=2023-12-29T19:59:15.323577424+08:00 level=warn msg="Permitting unsigned plugin. This is not recommended" pluginID=deepflowio-deepflow-datasource
logger=plugin.signature.validator t=2023-12-29T19:59:15.323620287+08:00 level=warn msg="Permitting unsigned plugin. This is not recommended" pluginID=deepflowio-topo-panel
logger=plugin.signature.validator t=2023-12-29T19:59:15.323642266+08:00 level=warn msg="Permitting unsigned plugin. This is not recommended" pluginID=deepflowio-tracing-panel
logger=plugin.loader t=2023-12-29T19:59:15.492479168+08:00 level=info msg="Plugin registered" pluginID=deepflowio-deepflow-datasource
logger=plugin.loader t=2023-12-29T19:59:15.526958081+08:00 level=info msg="Plugin registered" pluginID=deepflowio-topo-panel
logger=plugin.loader t=2023-12-29T19:59:15.5269926+08:00 level=info msg="Plugin registered" pluginID=grafana-clickhouse-datasource
logger=plugin.grafana-clickhouse-datasource t=2023-12-29T19:59:15.546114915+08:00 level=info msg=Profiler enabled=false
logger=plugin.loader t=2023-12-29T19:59:15.547535752+08:00 level=info msg="Plugin registered" pluginID=deepflowio-tracing-panel
logger=query_data t=2023-12-29T19:59:15.566562425+08:00 level=info msg="Query Service initialization"
logger=live.push_http t=2023-12-29T19:59:15.578033972+08:00 level=info msg="Live Push Gateway initialization"
logger=infra.usagestats.collector t=2023-12-29T19:59:22.63709268+08:00 level=info msg="registering usage stat providers" usageStatsProvidersLen=2
logger=modules t=2023-12-29T19:59:22.637971294+08:00 level=info msg=initialising module=secret-migrator
logger=modules t=2023-12-29T19:59:22.639539322+08:00 level=info msg=initialising module=http-server
logger=http.server t=2023-12-29T19:59:22.650874191+08:00 level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
logger=modules t=2023-12-29T19:59:22.676557847+08:00 level=info msg=initialising module=provisioning
logger=datasources t=2023-12-29T19:59:22.752700803+08:00 level=warn msg="Invalid datasource uid. The use of invalid uids is deprecated and this operation will fail in a future version of Grafana. A valid uid is a combination of a-z, A-Z, 0-9 (alphanumeric), - (dash) and _ (underscore) characters, maximum length 40" uid="DeepFlow ClickHouse" name="DeepFlow ClickHouse"
logger=datasources t=2023-12-29T19:59:22.791638483+08:00 level=warn msg="Invalid datasource uid. The use of invalid uids is deprecated and this operation will fail in a future version of Grafana. A valid uid is a combination of a-z, A-Z, 0-9 (alphanumeric), - (dash) and _ (underscore) characters, maximum length 40" uid="DeepFlow MySQL" name="DeepFlow MySQL"
logger=provisioning.alerting t=2023-12-29T19:59:22.899285801+08:00 level=info msg="starting to provision alerting"
logger=provisioning.alerting t=2023-12-29T19:59:22.899322218+08:00 level=info msg="finished to provision alerting"
logger=modules t=2023-12-29T19:59:22.899442359+08:00 level=info msg=initialising module=background-services
logger=modules t=2023-12-29T19:59:22.899675048+08:00 level=info msg="All modules healthy" modules="[secret-migrator provisioning http-server background-services]"
logger=ngalert.state.manager t=2023-12-29T19:59:22.902040909+08:00 level=info msg="Warming state cache for startup"
logger=grafanaStorageLogger t=2023-12-29T19:59:22.905251909+08:00 level=info msg="storage starting"
logger=ngalert.state.manager t=2023-12-29T19:59:22.942025995+08:00 level=info msg="State cache has been initialized" states=0 duration=39.98102ms
logger=ngalert.scheduler t=2023-12-29T19:59:22.942069895+08:00 level=info msg="Starting scheduler" tickInterval=10s
logger=ngalert.multiorg.alertmanager t=2023-12-29T19:59:22.943406532+08:00 level=info msg="Starting MultiOrg Alertmanager"
logger=ticker t=2023-12-29T19:59:22.946919704+08:00 level=info msg=starting first_tick=2023-12-29T19:59:30+08:00
logger=plugins.update.checker t=2023-12-29T19:59:24.332716854+08:00 level=info msg="Update check succeeded" duration=1.432427054s
logger=grafana.update.checker t=2023-12-29T19:59:26.119635112+08:00 level=info msg="Update check succeeded" duration=3.215692772s
logger=infra.usagestats t=2023-12-29T20:00:17.001641071+08:00 level=info msg="Usage stats are ready to report"
logger=cleanup t=2023-12-29T20:09:22.998295132+08:00 level=info msg="Completed cleanup jobs" duration=66.265295ms
logger=plugins.update.checker t=2023-12-29T20:09:25.036788762+08:00 level=info msg="Update check succeeded" duration=695.515079ms
logger=grafana.update.checker t=2023-12-29T20:09:56.132975125+08:00 level=error msg="Update check failed" error="failed to get latest.json repo from github.com: Get \"https://raw.githubusercontent.com/grafana/grafana/main/latest.json\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" duration=30.012381371s
logger=cleanup t=2023-12-29T20:19:22.992510773+08:00 level=info msg="Completed cleanup jobs" duration=74.281844ms
logger=plugins.update.checker t=2023-12-29T20:19:25.228954441+08:00 level=info msg="Update check succeeded" duration=892.610532ms
logger=grafana.update.checker t=2023-12-29T20:19:56.29193774+08:00 level=error msg="Update check failed" error="failed to get latest.json repo from github.com: Get \"https://raw.githubusercontent.com/grafana/grafana/main/latest.json\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" duration=30.171627535s
[mysql] 2023/12/29 20:22:28 packets.go:122: closing bad idle connection: unexpected read from socket
[mysql] 2023/12/29 20:26:05 connection.go:173: bad connection
logger=provisioning.dashboard type=file name=deepflow-system t=2023-12-29T20:26:11.469756584+08:00 level=error msg="failed to search for dashboards" error="rolling back transaction due to error failed: driver: bad connection: driver: bad connection"
logger=provisioning.dashboard type=file name=deepflow-system t=2023-12-29T20:26:14.600238628+08:00 level=error msg="failed to search for dashboards" error="dial tcp: lookup deepflow-mysql on 10.96.0.10:53: no such host"
logger=provisioning.dashboard type=file name=deepflow-templates t=2023-12-29T20:26:14.600304042+08:00 level=error msg="failed to search for dashboards" error="dial tcp: lookup deepflow-mysql on 10.96.0.10:53: no such host"
logger=provisioning.dashboard type=file name=deepflow-templates t=2023-12-29T20:26:14.600404186+08:00 level=error msg="failed to search for dashboards" error="dial tcp: lookup deepflow-mysql on 10.96.0.10:53: no such host"
logger=provisioning.dashboard type=file name=deepflow-system t=2023-12-29T20:26:14.600465083+08:00 level=error msg="failed to search for dashboards" error="dial tcp: lookup deepflow-mysql on 10.96.0.10:53: no such host"
logger=ngalert.multiorg.alertmanager t=2023-12-29T20:26:14.873174524+08:00 level=error msg="error while synchronizing Alertmanager orgs" error="dial tcp: lookup deepflow-mysql on 10.96.0.10:53: no such host"
logger=ngalert.sender.router t=2023-12-29T20:26:14.873223865+08:00 level=error msg="Unable to sync admin configuration" error="dial tcp: lookup deepflow-mysql on 10.96.0.10:53: no such host"
logger=ngalert.scheduler t=2023-12-29T20:26:15.414236504+08:00 level=error msg="Failed to update alert rules" error="failed to get alert rules: failed to fetch alert rules: dial tcp: lookup deepflow-mysql on 10.96.0.10:53: no such host"
[mysql] 2023/12/29 20:26:52 packets.go:122: closing bad idle connection: EOF
[mysql] 2023/12/29 20:26:52 packets.go:122: closing bad idle connection: EOF
logger=provisioning.dashboard type=file name=deepflow-templates t=2023-12-29T20:26:57.670639012+08:00 level=error msg="failed to search for dashboards" error="dial tcp: lookup deepflow-mysql on 10.96.0.10:53: read udp 10.244.0.15:33803->10.96.0.10:53: read: connection refused"
logger=provisioning.dashboard type=file name=deepflow-system t=2023-12-29T20:26:57.670637648+08:00 level=error msg="failed to search for dashboards" error="dial tcp: lookup deepflow-mysql on 10.96.0.10:53: read udp 10.244.0.15:33803->10.96.0.10:53: read: connection refused"
logger=provisioning.dashboard type=file name=deepflow-system t=2023-12-29T20:26:57.670765975+08:00 level=error msg="failed to search for dashboards" error="dial tcp: lookup deepflow-mysql on 10.96.0.10:53: read udp 10.244.0.15:33803->10.96.0.10:53: read: connection refused"
logger=provisioning.dashboard type=file name=deepflow-templates t=2023-12-29T20:26:57.670779687+08:00 level=error msg="failed to search for dashboards" error="dial tcp: lookup deepflow-mysql on 10.96.0.10:53: read udp 10.244.0.15:33803->10.96.0.10:53: read: connection refused"
logger=ngalert.scheduler t=2023-12-29T20:26:57.670783371+08:00 level=error msg="Failed to update alert rules" error="failed to get alert rules: failed to fetch alert rules: dial tcp: lookup deepflow-mysql on 10.96.0.10:53: read udp 10.244.0.15:33803->10.96.0.10:53: read: connection refused"
logger=ngalert.scheduler t=2023-12-29T20:27:05.034102532+08:00 level=error msg="Failed to update alert rules" error="failed to get alert rules: failed to fetch alert rules: dial tcp: lookup deepflow-mysql on 10.96.0.10:53: no such host"
logger=ngalert.scheduler t=2023-12-29T20:27:10.010732474+08:00 level=error msg="Failed to update alert rules" error="failed to get alert rules: failed to fetch alert rules: dial tcp: lookup deepflow-mysql on 10.96.0.10:53: no such host"
logger=ngalert.sender.router t=2023-12-29T20:27:15.006801128+08:00 level=error msg="Unable to sync admin configuration" error="dial tcp 10.102.177.111:30130: connect: connection refused"
logger=ngalert.multiorg.alertmanager t=2023-12-29T20:27:15.009357225+08:00 level=error msg="error while synchronizing Alertmanager orgs" error="dial tcp 10.102.177.111:30130: connect: connection refused"
logger=ngalert.scheduler t=2023-12-29T20:27:20.037738845+08:00 level=error msg="Failed to update alert rules" error="failed to get alert rules: failed to fetch alert rules: dial tcp 10.102.177.111:30130: connect: connection refused"
logger=cleanup t=2023-12-29T20:29:22.98010699+08:00 level=info msg="Completed cleanup jobs" duration=68.363956ms
logger=plugins.update.checker t=2023-12-29T20:29:24.92384489+08:00 level=info msg="Update check succeeded" duration=590.856433ms
logger=grafana.update.checker t=2023-12-29T20:29:26.868279527+08:00 level=info msg="Update check succeeded" duration=748.080706ms
logger=infra.usagestats t=2023-12-29T20:30:17.169570045+08:00 level=info msg="Usage stats are ready to report"
logger=cleanup t=2023-12-29T20:39:22.965007462+08:00 level=info msg="Completed cleanup jobs" duration=47.593452ms
logger=plugins.update.checker t=2023-12-29T20:39:24.968737715+08:00 level=info msg="Update check succeeded" duration=635.748752ms
logger=grafana.update.checker t=2023-12-29T20:39:29.615927455+08:00 level=info msg="Update check succeeded" duration=3.495740104s
logger=cleanup t=2023-12-29T20:49:22.968642847+08:00 level=info msg="Completed cleanup jobs" duration=54.028383ms
logger=plugins.update.checker t=2023-12-29T20:49:25.021341706+08:00 level=info msg="Update check succeeded" duration=688.260075ms
logger=grafana.update.checker t=2023-12-29T20:49:26.781139223+08:00 level=info msg="Update check succeeded" duration=661.238056ms
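The repeated `lookup deepflow-mysql on 10.96.0.10:53: no such host` and `read udp ... connection refused` errors above indicate that in-cluster DNS (CoreDNS at 10.96.0.10) was temporarily unavailable, which can itself be a symptom of CoreDNS restarting on a resource-constrained node. A sketch of how one might verify this (the busybox image and test-pod name are assumptions, not from the thread):

```shell
# Is CoreDNS healthy, or is it also restarting?
kubectl -n kube-system get pods -l k8s-app=kube-dns

# Can a throwaway pod in the deepflow namespace resolve the service name?
kubectl -n deepflow run dns-test --rm -it --restart=Never \
  --image=busybox:1.36 -- nslookup deepflow-mysql
```

If CoreDNS shows restarts with similar timestamps, the Grafana/MySQL errors are downstream effects rather than the root cause.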

[root@VM-0-16-centos ~]# kubectl logs pod/deepflow-server-6695c6b5c5-4rftb -ndeepflow
...
LAYOUT(FLAT())
2023-12-29 20:50:32.427 [INFO] [tagrecorder] dictionary.go:288 SHOW TABLES FROM flow_tag LIKE '%!v(MISSING)iew'
2023-12-29 20:50:32.440 [INFO] [tagrecorder] dictionary.go:323 app_label_live_view
2023-12-29 20:50:32.444 [INFO] [tagrecorder] dictionary.go:323 target_label_live_view
2023-12-29 20:50:32.446 [INFO] [tagrecorder] dictionary.go:365 refresh live view app_label_live_view in (10.244.0.12: 9000)
2023-12-29 20:50:32.448 [INFO] [tagrecorder] dictionary.go:374 refresh live view target_label_live_view in (10.244.0.12: 9000)
2023-12-29 20:50:37.819 [INFO] [trisolaris/vtap] process_info.go:1067 start generate gpid data from timed
2023-12-29 20:50:37.823 [INFO] [trisolaris/vtap] process_info.go:1069 end generate gpid data from timed
2023-12-29 20:50:38.597 [INFO] [trisolaris/synchronize] tsdb.go:64 ctrl_ip is 127.0.0.1, (platform data version 1703852851 -> 0), (acl version 3703852851 -> 0), (groups version 1703852851 -> 0), NAME:server-pod-names-watcher
2023-12-29 20:50:39.849 [GIN] 10.244.0.1 GET /v1/health/ 200 2.794769ms
2023-12-29 20:50:46.603 [INFO] [trisolaris/synchronize] tsdb.go:64 ctrl_ip is 127.0.0.1, (platform data version 1703852851 -> 0), (acl version 3703852851 -> 0), (groups version 1703852851 -> 0), NAME:server-pod-names-watcher
2023-12-29 20:50:46.819 [INFO] [trisolaris/vtap] process_info.go:1067 start generate gpid data from timed
2023-12-29 20:50:46.822 [INFO] [trisolaris/vtap] process_info.go:1069 end generate gpid data from timed
2023-12-29 20:50:49.014 [INFO] [trisolaris/synchronize] tsdb.go:64 ctrl_ip is 10.0.0.16, (platform data version 1703852851 -> 1703852851), (acl version 3703852851 -> 0), (groups version 1703852851 -> 0), NAME:ingester
2023-12-29 20:50:49.014 [INFO] [trisolaris/synchronize] tsdb.go:75 ctrl_ip:10.0.0.16, cpu_num:4, memory_size:8101208064, arch:x86_64, os:alpine 3.18.4, kernel_version:3.10.0, pcap_data_mount_path:<nil>
2023-12-29 20:50:49.017 [INFO] [grpc] grpc_platformdata.go:1097 Update rpc groups version 0 -> 1703852851
2023-12-29 20:50:49.846 [GIN] 10.244.0.1 GET /v1/health/ 200 69.97µs
2023-12-29 20:50:49.846 [GIN] 10.244.0.1 GET /v1/health/ 200 69.538µs
2023-12-29 20:50:49.885 [INFO] [trisolaris/synchronize] tsdb.go:64 ctrl_ip is 127.0.0.1, (platform data version 1703852851 -> 0), (acl version 3703852851 -> 0), (groups version 1703852851 -> 0), NAME:resource-info-watcher
2023-12-29 20:50:49.886 [INFO] [event.decoder] grpc_resource_info.go:163 Event update rpc platformdata version 0 -> 1703852851
2023-12-29 20:50:54.606 [INFO] [trisolaris/synchronize] tsdb.go:64 ctrl_ip is 127.0.0.1, (platform data version 1703852851 -> 0), (acl version 3703852851 -> 0), (groups version 1703852851 -> 0), NAME:server-pod-names-watcher
2023-12-29 20:50:55.820 [INFO] [trisolaris/vtap] process_info.go:1067 start generate gpid data from timed
2023-12-29 20:50:55.823 [INFO] [trisolaris/vtap] process_info.go:1069 end generate gpid data from timed
2023-12-29 20:50:56.802 [WARN] [trisolaris/synchronize] vtap.go:282 vtap (ctrl_ip: 10.0.0.16, ctrl_mac: 52:54:00:84:3e:8c, host_ips: [10.0.0.16 172.17.0.1 10.244.0.0 10.244.0.1 10.16.0.1], kubernetes_cluster_id: d-CrMIwhBTri, kubernetes_force_watch: false, group_id: ) not found in cache. NAME:deepflow-agent-ce  REVISION:v6.4.3 9294-8e168feb8335c5938959516ba62b34ea0143c237  BOOT_TIME:1703842774
2023-12-29 20:50:56.806 [INFO] [trisolaris/synchronize] vtap.go:544 open cluster(d-CrMIwhBTri) kubernetes_api_enabled VTap(ctrl_ip: 10.0.0.16, ctrl_mac: 52:54:00:84:3e:8c, kubernetes_force_watch: false)
2023-12-29 20:50:56.806 [INFO] [trisolaris/vtap] vtap.go:1138 start vtap register

2023/12/29 20:50:56 /home/runnerx/actions-runner/_work/deepflow/deepflow/server/controller/trisolaris/dbmgr/dbmgr.go:262 record not found
[4.483ms] [rows:0] SELECT * FROM `vtap` WHERE `ctrl_ip` = '10.0.0.16' AND `ctrl_mac` = '52:54:00:84:3e:8c' ORDER BY `vtap`.`id` LIMIT 1
2023-12-29 20:50:56.820 [INFO] [trisolaris/vtap] vtap_discovery.go:878 register vtap: {tapMode:0 vTapGroupID: defaultVTapGroup:0329a3f0-a62e-11ee-9c87-629b4b29180f vTapAutoRegister:true agentUniqueIdentifier:1 VTapLKData:{ctrlIP:10.0.0.16 ctrlMac:52:54:00:84:3e:8c hostIPs:[10.0.0.16 172.17.0.1 10.244.0.0 10.244.0.1 10.16.0.1 10.0.0.16] host:deepflow-agent-dxmd5 region:ffffffff-ffff-ffff-ffff-ffffffffffff}}

2023/12/29 20:50:56 /home/runnerx/actions-runner/_work/deepflow/deepflow/server/controller/trisolaris/dbmgr/dbmgr.go:84 record not found
[0.721ms] [rows:0] SELECT * FROM `host_device` WHERE `ip` IN ('10.0.0.16','172.17.0.1','10.244.0.0','10.244.0.1','10.16.0.1','10.0.0.16') AND `host_device`.`deleted_at` IS NULL ORDER BY `host_device`.`id` LIMIT 1
2023-12-29 20:50:56.827 [ERRO] [trisolaris/vtap] vtap_discovery.go:190 failed to register agent(10.0.0.16-52:54:00:84:3e:8c) by querying DB table host_device(ip in ([10.0.0.16 172.17.0.1 10.244.0.0 10.244.0.1 10.16.0.1 10.0.0.16])) without finding data, err: record not found

2023/12/29 20:50:56 /home/runnerx/actions-runner/_work/deepflow/deepflow/server/controller/trisolaris/dbmgr/dbmgr.go:161 record not found
[0.512ms] [rows:0] SELECT * FROM `host_device` WHERE `name` = 'deepflow-agent-dxmd5' AND `host_device`.`deleted_at` IS NULL ORDER BY `host_device`.`id` LIMIT 1
2023-12-29 20:50:56.827 [ERRO] [trisolaris/vtap] vtap_discovery.go:194 failed to register agent(10.0.0.16-52:54:00:84:3e:8c) by querying DB table host_device(name in (deepflow-agent-dxmd5)) without finding data, err: record not found
2023-12-29 20:50:56.828 [ERRO] [trisolaris/vtap] vtap_discovery.go:258 failed to register agent(10.0.0.16-52:54:00:84:3e:8c) by querying DB table pod_node(ip in ([10.0.0.16 172.17.0.1 10.244.0.0 10.244.0.1 10.16.0.1 10.0.0.16]) or name in (deepflow-agent-dxmd5)) without finding data
2023-12-29 20:50:56.829 [ERRO] [trisolaris/vtap] vtap_discovery.go:466 failed to register agent(10.0.0.16-52:54:00:84:3e:8c) by querying DB table vinterface_ip(ip in ([10.0.0.16 172.17.0.1 10.244.0.0 10.244.0.1 10.16.0.1 10.0.0.16])) without finding data
2023-12-29 20:50:56.830 [ERRO] [trisolaris/vtap] vtap_discovery.go:484 failed to register agent(10.0.0.16-52:54:00:84:3e:8c) by querying DB table ip_resource(ip in ([10.0.0.16 172.17.0.1 10.244.0.0 10.244.0.1 10.16.0.1 10.0.0.16])) without finding data
2023-12-29 20:50:56.831 [ERRO] [trisolaris/vtap] vtap_discovery.go:466 failed to register agent(10.0.0.16-52:54:00:84:3e:8c) by querying DB table vinterface_ip(ip in ([10.0.0.16 172.17.0.1 10.244.0.0 10.244.0.1 10.16.0.1 10.0.0.16])) without finding data
2023-12-29 20:50:56.831 [ERRO] [trisolaris/vtap] vtap_discovery.go:484 failed to register agent(10.0.0.16-52:54:00:84:3e:8c) by querying DB table ip_resource(ip in ([10.0.0.16 172.17.0.1 10.244.0.0 10.244.0.1 10.16.0.1 10.0.0.16])) without finding data
2023-12-29 20:50:56.831 [INFO] [trisolaris/vtap] vtap.go:1145 end vtap register
2023-12-29 20:50:56.977 [WARN] [genesis] grpc_server.go:217 kubernetes api sync received message with vtap_id 0 from 10.0.0.16
2023-12-29 20:50:56.982 [INFO] [genesis] grpc_server.go:276 kubernetes api sync received version 1703842825 from ip 10.0.0.16 no vtap_id
2023-12-29 20:50:56.986 [WARN] [genesis] grpc_server.go:217 kubernetes api sync received message with vtap_id 0 from 10.0.0.16
2023-12-29 20:50:56.986 [INFO] [genesis] grpc_server.go:276 kubernetes api sync received version 1703842825 from ip 10.0.0.16 no vtap_id
2023-12-29 20:50:59.846 [GIN] 10.244.0.1 GET /v1/health/ 200 86.023µs
2023-12-29 20:51:01.684 [INFO] [cloud] kubernetes_gather_task.go:104 kubernetes gather (k8s-d-CrMIwhBTri) assemble data starting
2023-12-29 20:51:01.721 [WARN] [genesis] genesis.go:590 prometheus data not found cluster id (d-CrMIwhBTri)
2023-12-29 20:51:01.727 [ERRO] [cloud.kubernetes_gather] pod_node.go:137 node tidy node ip Errornetaddr.ParseIPPrefix("42.193.17.123:6443/32"): ParseIP("42.193.17.123:6443"): unexpected character (at ":6443")
2023-12-29 20:51:01.729 [INFO] [cloud] kubernetes_gather_task.go:116 kubernetes gather (k8s-d-CrMIwhBTri) assemble data complete
[root@VM-0-16-centos ~]#

googs1025 avatar Dec 29 '23 12:12 googs1025

Hi, which pods exactly are restarting? Could you use `kubectl logs -p` to check the logs of the restarted pods, or provide a contact so we can discuss directly?

From the server logs we can only see that some agents are having trouble registering; the cause of the restarts is not visible there.

1473371932 avatar Jan 15 '24 01:01 1473371932