SSL handshake timeout with agent [was: Error when open tables]
Hello! I'm trying to open the list of tables in the Maintenance tab, and after a while I get an error. There are many tables in the schema. A separate monitoring instance was created for this server, with no other servers in it. How can this error be corrected? Is it possible to increase the table polling timeout?
Can you share the agent logs?
There are no errors in the log; below is a fragment of the log after trying to open the list of tables. In the web interface, we get the same error as in the screenshot in my first message.
2023-08-14 09:59:58 +06 temboardagent[92554] INFO: services: Starting web.
2023-08-14 09:59:58 +06 temboardagent[92574] INFO: services: Starting scheduler.
2023-08-14 09:59:59 +06 temboardagent[92605] INFO: monitoring: Starting monitoring collector.
2023-08-14 09:59:59 +06 temboardagent[92605] INFO: monitoring: Gathering host information.
2023-08-14 09:59:59 +06 temboardagent[92604] INFO: dashboard: Running dashboard collector.
2023-08-14 09:59:59 +06 temboardagent[92605] INFO: monitoring: Load the probes to run.
2023-08-14 09:59:59 +06 temboardagent[92605] INFO: monitoring: Running probes at 2023-08-14T03:59:59.552460+00:00.
2023-08-14 09:59:59 +06 temboardagent[92605] INFO: monitoring: Running instance probe sessions.
2023-08-14 09:59:59 +06 temboardagent[92605] INFO: monitoring: Running instance probe xacts.
2023-08-14 09:59:59 +06 temboardagent[92605] INFO: monitoring: Running instance probe locks.
2023-08-14 09:59:59 +06 temboardagent[92605] INFO: monitoring: Running instance probe blocks.
2023-08-14 09:59:59 +06 temboardagent[92605] INFO: monitoring: Running instance probe bgwriter.
2023-08-14 09:59:59 +06 temboardagent[92605] INFO: monitoring: Running instance probe db_size.
2023-08-14 09:59:59 +06 temboardagent[92605] INFO: monitoring: Running instance probe tblspc_size.
2023-08-14 09:59:59 +06 temboardagent[92605] INFO: monitoring: Running host probe filesystems_size.
2023-08-14 09:59:59 +06 temboardagent[92605] INFO: monitoring: Running host probe cpu.
2023-08-14 09:59:59 +06 temboardagent[92605] INFO: monitoring: Running host probe process.
2023-08-14 09:59:59 +06 temboardagent[92605] INFO: monitoring: Running host probe memory.
2023-08-14 09:59:59 +06 temboardagent[92605] INFO: monitoring: Running host probe loadavg.
2023-08-14 09:59:59 +06 temboardagent[92605] INFO: monitoring: Running instance probe wal_files.
2023-08-14 09:59:59 +06 temboardagent[92605] INFO: monitoring: Running instance probe replication_lag.
2023-08-14 10:00:00 +06 temboardagent[92605] INFO: monitoring: Running instance probe temp_files_size_delta.
2023-08-14 10:00:00 +06 temboardagent[92605] INFO: monitoring: Running instance probe replication_connection.
2023-08-14 10:00:00 +06 temboardagent[92605] INFO: monitoring: Running database probe heap_bloat.
2023-08-14 10:00:01 +06 temboardagent[92605] INFO: monitoring: Running database probe btree_bloat.
2023-08-14 10:00:01 +06 temboardagent[92604] INFO: dashboard: Running dashboard collector.
2023-08-14 10:00:01 +06 temboardagent[92605] INFO: monitoring: Finished probes run.
2023-08-14 10:00:01 +06 temboardagent[92605] INFO: monitoring: Add data to metrics table.
2023-08-14 10:00:01 +06 temboardagent[92605] INFO: monitoring: Collect done.
2023-08-14 10:00:03 +06 temboardagent[92604] INFO: dashboard: Running dashboard collector.
2023-08-14 10:00:06 +06 temboardagent[92604] INFO: dashboard: Running dashboard collector.
Can you curl the agent from the UI environment?
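For example, something like the following can verify basic connectivity from the UI host (a sketch: the hostname is a placeholder, and port 2345 and the /discover path are assumptions based on temBoard agent defaults; adjust to your environment):

```shell
# Diagnostic sketch: check that the agent answers from the UI host.
# AGENT_HOST is a placeholder; port 2345 and /discover are assumed
# temBoard agent defaults -- adjust to your setup.
AGENT_HOST=your-agent-host
AGENT_PORT=2345

# -k skips certificate verification (agents often use self-signed certs);
# --max-time bounds the request so a hang shows up as a timeout instead
# of an indefinite wait.
curl -k --max-time 10 "https://${AGENT_HOST}:${AGENT_PORT}/discover"
```

If this times out or fails while plain monitoring works, that points at a network or timeout issue between the UI and the agent rather than at the agent itself.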
Hi,
I'm having the same issue with a schema containing a large number of tables/indexes.
It seems to be related to the SELECT in the attachment, which runs for more than 30 seconds (the temBoard agent times out before the end of the query).
The EXPLAIN shows a join consuming 99% of the query time:
-> Nested Loop Left Join (cost=5.90..1000.95 rows=406 width=326) (actual time=111.336..72358.904 rows=34886 loops=1)
     Join Filter: ((s_2.schemaname = ns.nspname) AND (s_2.tablename = tbl.relname) AND (s_2.attname = att.attname))
     Rows Removed by Join Filter: 588475079
     Buffers: shared hit=140705
Increasing work_mem to avoid a disk sort didn't solve the problem.
Tested on Postgres 12 and 15; same result.
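For anyone wanting to reproduce the timing outside the agent, something like this replays the query with a larger per-session work_mem and captures the plan with timing and buffer statistics (a sketch: the connection parameters, the work_mem value, and the query body are placeholders; the real query is in the attachment):

```shell
# Sketch: rerun the slow catalog query by hand with work_mem raised
# for this session only (no server restart or config change needed).
# Connection settings and the query are placeholders.
psql -h localhost -U postgres -d postgres <<'SQL'
SET work_mem = '256MB';  -- session-level override
EXPLAIN (ANALYZE, BUFFERS)
SELECT 1;                -- replace with the query from the attachment
SQL
```

Comparing the resulting plan before and after the work_mem change shows whether the time is going into the sort at all, or, as the fragment above suggests, into the Nested Loop Left Join that filters out 588 million rows.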
Regards, Sylvain
Attachments: select temboard.txt, select temboard explain plan.txt