Performance degradation in query interpretation
While testing an upgrade to 22.3 we noticed that some queries (especially longer ones with several joins) have degraded notably across the latest ClickHouse releases, and they seem to keep getting slower with each release.
Reviewing the logs and perf output, it seems ClickHouse is now spending far more time interpreting the query and building the plan than executing the plan itself (execution is in fact faster).
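A quick way to see where the time goes (a sketch, assuming the query log is enabled with log_queries = 1; the LIKE filter is just one way to locate the reproducer query below):

```sql
-- With empty tables, execution reads nothing, so nearly all of
-- query_duration_ms is spent parsing/analyzing/planning the query.
SELECT
    event_time,
    query_duration_ms,
    read_rows
FROM system.query_log
WHERE type = 'QueryFinish'
  AND query LIKE '%db_interpretation_01%'
ORDER BY event_time DESC
LIMIT 10;
```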
Here is a reproducer based on a real query. Setup:
DROP DATABASE IF EXISTS db_interpretation_01 SYNC;
DROP DATABASE IF EXISTS db_interpretation_02 SYNC;
CREATE DATABASE IF NOT EXISTS db_interpretation_01;
CREATE TABLE IF NOT EXISTS db_interpretation_01.table_01
(
`idColumnU64` UInt64,
`dateColumn` DateTime,
`aggCount` AggregateFunction(count),
`aggArgMaxFloat32_1` AggregateFunction(argMax, Float32, DateTime),
`aggArgMaxString` AggregateFunction(argMax, String, DateTime),
`aggArgMaxFloat32_2` AggregateFunction(argMax, Float32, DateTime),
`nDateTime_02_date` SimpleAggregateFunction(max, DateTime),
`nDateTime_02_date_292929292` SimpleAggregateFunction(max, DateTime),
`agg_topk_uint32` AggregateFunction(topKWeighted(2), UInt32, UInt32),
`agg_argmax_string_datetime_01` AggregateFunction(argMax, String, DateTime),
`agg_argmax_string_u8_01` AggregateFunction(argMax, UInt8, DateTime),
`agg_argmax_string_datetime_02` AggregateFunction(argMax, String, DateTime),
`agg_argmax_string_datetime_03` AggregateFunction(argMax, String, DateTime),
`agg_argmax_string_datetime_04` AggregateFunction(argMax, String, DateTime),
`agg_argmax_string_datetime_05` AggregateFunction(argMax, String, DateTime),
`agg_argmax_string_datetime_06` AggregateFunction(argMax, String, DateTime),
`agg_argmax_string_datetime_07` AggregateFunction(argMax, String, DateTime),
`agg_argmax_string_u8_02` AggregateFunction(argMax, UInt8, DateTime),
`agg_argmax_string_u8_03` AggregateFunction(argMax, UInt8, DateTime),
`agg_argmax_string_u8_04` AggregateFunction(argMax, UInt8, DateTime),
`agg_count_01` AggregateFunction(count),
`agg_count_02` AggregateFunction(count),
`agg_count_03` AggregateFunction(count),
`agg_count_04` AggregateFunction(count),
`agg_count_05` AggregateFunction(count),
`agg_count_06` AggregateFunction(count),
`agg_count_07` AggregateFunction(count),
`agg_count_08` AggregateFunction(count),
`agg_count_09` AggregateFunction(count),
`agg_count_10` AggregateFunction(count),
`agg_count_11` AggregateFunction(count),
`agg_count_12` AggregateFunction(count),
`agg_count_13` AggregateFunction(count),
`agg_count_14` AggregateFunction(count),
`agg_count_15` AggregateFunction(count),
`agg_count_16` AggregateFunction(count),
`agg_argmax_string_datetime_08` AggregateFunction(argMax, String, DateTime),
`agg_argmax_f32_datetime_01` AggregateFunction(argMax, Float32, DateTime),
`agg_argmax_string_datetime_09` AggregateFunction(argMax, String, DateTime),
`agg_argmax_f32_datetime_02` AggregateFunction(argMax, Float32, DateTime),
`agg_argmax_date_datetime_01` AggregateFunction(argMax, Date, DateTime),
`agg_argmax_date_datetime_02` AggregateFunction(argMax, Date, DateTime),
`agg_argmax_u8_other_01` AggregateFunction(argMax, UInt8, DateTime),
`agg_argmax_u8_other_02` AggregateFunction(argMax, UInt8, DateTime),
`agg_argmax_u8_other_03` AggregateFunction(argMax, UInt8, DateTime),
`agg_argmax_u8_other_04` AggregateFunction(argMax, UInt8, DateTime),
`agg_argmax_string_datetime_10` AggregateFunction(argMax, String, DateTime),
`agg_argmax_u8_other_05` AggregateFunction(argMax, UInt8, DateTime),
`agg_argmax_u8_other_06` AggregateFunction(argMax, UInt8, DateTime),
`agg_argmax_u8_other_07` AggregateFunction(argMax, UInt8, DateTime),
`agg_argmax_string_datetime_11` AggregateFunction(argMax, String, DateTime),
`other_max_datetime_01` SimpleAggregateFunction(max, DateTime),
`other_max_datetime_02` SimpleAggregateFunction(max, DateTime),
`nDateTime_03_date` SimpleAggregateFunction(max, DateTime),
`nDateTime_03_shown_date` SimpleAggregateFunction(max, DateTime),
`nDateTime_04_date` SimpleAggregateFunction(max, DateTime),
`nDateTime_04_shown_date` SimpleAggregateFunction(max, DateTime),
`aggCount_3` AggregateFunction(count),
`uniq_date_agg` AggregateFunction(uniq, Date),
`aggCount_4` AggregateFunction(count),
`agg_argmax_u128_datetime_01` AggregateFunction(argMax, UInt128, DateTime),
`topk_u128_01` AggregateFunction(topKWeighted(5), UInt128, UInt32),
`agg_argmax_string_datetime_12` AggregateFunction(argMax, String, DateTime),
`agg_argmax_string_datetime_13` AggregateFunction(argMax, String, DateTime),
`agg_argmax_string_datetime_14` AggregateFunction(argMax, String, DateTime),
`agg_argmax_string_datetime_15` AggregateFunction(argMax, String, DateTime),
`agg_argmax_u32_datetime_01` AggregateFunction(argMax, UInt32, DateTime),
`agg_argmax_string_datetime_16` AggregateFunction(argMax, String, DateTime),
`agg_argmax_string_u8_100` AggregateFunction(argMax, String, UInt8),
`agg_argmax_string_datetime_18` AggregateFunction(argMax, String, DateTime),
`other_max_datetime_05` SimpleAggregateFunction(max, DateTime),
`topk_Datetime_u32_u32` AggregateFunction(topKWeighted(5), UInt32, UInt32),
`agg_argmax_string_datetime_17` AggregateFunction(argMax, String, DateTime),
`other_max_datetime_09` SimpleAggregateFunction(max, DateTime),
`agg_count_17` AggregateFunction(count),
`agg_count_18` AggregateFunction(count),
`agg_count_19` AggregateFunction(count),
`agg_count_20` AggregateFunction(count)
)
ENGINE = AggregatingMergeTree()
PARTITION BY toYYYYMM(dateColumn)
ORDER BY idColumnU64
TTL dateColumn + toIntervalMonth(6)
SETTINGS index_granularity = 1024, ttl_only_drop_parts = 1, min_rows_for_wide_part = 1000000000;
CREATE TABLE IF NOT EXISTS db_interpretation_01.table_02
(
`idColumnU64` UInt64,
`dateColumn` DateTime,
`agg_uniq_u128_01` AggregateFunction(uniq, UInt128),
`agg_uniq_u128_02` AggregateFunction(uniq, UInt128),
`aggCount` AggregateFunction(count),
`agg_uniq_u128_03` AggregateFunction(uniq, UInt128),
`agg_uniq_u128_04` AggregateFunction(uniq, UInt128),
`aggCount_3` AggregateFunction(count),
`aggCount_4` AggregateFunction(count),
`agg_topk_01` AggregateFunction(topKWeighted(2), UInt128, UInt64)
)
ENGINE = AggregatingMergeTree()
PARTITION BY toYYYYMM(dateColumn)
ORDER BY idColumnU64
TTL dateColumn + toIntervalMonth(6)
SETTINGS index_granularity = 1024, ttl_only_drop_parts = 1;
CREATE TABLE IF NOT EXISTS db_interpretation_01.table_03
(
`idColumnU64` UInt64,
`dateColumn` Date,
`aggCount` AggregateFunction(count),
`aggCount_2` AggregateFunction(count),
`aggCount_2_shown` AggregateFunction(count),
`minDate` SimpleAggregateFunction(min, Date),
`maxDate` SimpleAggregateFunction(max, Date),
`maxInt16` SimpleAggregateFunction(max, Int16),
`minUInt16` SimpleAggregateFunction(min, UInt16),
`minUInt16_2` SimpleAggregateFunction(min, UInt16),
`aggCount_3` AggregateFunction(count),
`aggCount_4` AggregateFunction(count)
)
ENGINE = AggregatingMergeTree()
PARTITION BY toYYYYMM(dateColumn)
ORDER BY (idColumnU64, dateColumn)
TTL dateColumn + toIntervalDay(30)
SETTINGS index_granularity = 1024, ttl_only_drop_parts = 1;
CREATE TABLE IF NOT EXISTS db_interpretation_01.table_04
(
`idColumnU64` UInt64,
`dateColumn` DateTime,
`u128_id_02` UInt128,
`ls_01` LowCardinality(String),
`agg_count_01` AggregateFunction(count),
`agg_count_02` AggregateFunction(count),
`agg_smax_datetime_01` SimpleAggregateFunction(max, DateTime),
`agg_smax_datetime_02` SimpleAggregateFunction(max, DateTime),
`agg_count_03` AggregateFunction(count),
`agg_count_04` AggregateFunction(count)
)
ENGINE = AggregatingMergeTree()
PARTITION BY toYYYYMM(dateColumn)
ORDER BY (idColumnU64, u128_id_02, ls_01)
TTL dateColumn + toIntervalMonth(6)
SETTINGS index_granularity = 1024, ttl_only_drop_parts = 1;
CREATE TABLE IF NOT EXISTS db_interpretation_01.table_05
(
`idColumnU64` UInt64,
`dateColumn` Date,
`agg_uniq_u128_01` AggregateFunction(uniq, UInt128),
`agg_uniq_u128_02` AggregateFunction(uniq, UInt128),
`agg_uniq_u128_03` AggregateFunction(uniq, UInt128),
`agg_uniq_u128_04` AggregateFunction(uniq, UInt128),
`aggCount_3` AggregateFunction(count),
`aggCount_4` AggregateFunction(count),
`aggCount` AggregateFunction(count)
)
ENGINE = AggregatingMergeTree()
PARTITION BY toYYYYMM(dateColumn)
ORDER BY (idColumnU64, dateColumn)
TTL dateColumn + toIntervalDay(30)
SETTINGS index_granularity = 1024, ttl_only_drop_parts = 1;
CREATE TABLE IF NOT EXISTS db_interpretation_01.table_06
(
`idColumnU64` UInt64,
`dateColumn` DateTime,
`aggCount_3` AggregateFunction(count),
`aggCount` AggregateFunction(count),
`sagg_max_date` SimpleAggregateFunction(max, DateTime)
)
ENGINE = AggregatingMergeTree()
PARTITION BY toYYYYMM(dateColumn)
ORDER BY idColumnU64
TTL dateColumn + toIntervalMonth(6)
SETTINGS index_granularity = 1024, ttl_only_drop_parts = 1;
CREATE DATABASE IF NOT EXISTS db_interpretation_02;
CREATE TABLE IF NOT EXISTS db_interpretation_02.table_07
(
`idU128` UInt128,
`idU128_2` UInt128,
`idU128_3` UInt128,
`nI16` Nullable(Int16) DEFAULT CAST(NULL, 'Nullable(Int16)'),
`idColumnI64` Nullable(Int64) DEFAULT CAST(NULL, 'Nullable(Int64)'),
`nStr` Nullable(String) DEFAULT CAST(NULL, 'Nullable(String)'),
`nStr_2` Nullable(String) DEFAULT CAST(NULL, 'Nullable(String)'),
`nI16_02` Nullable(Int16) DEFAULT CAST(NULL, 'Nullable(Int16)'),
`nStr_3` Nullable(String) DEFAULT CAST(NULL, 'Nullable(String)'),
`nStr_4` Nullable(String) DEFAULT CAST(NULL, 'Nullable(String)'),
`nStr_5` Nullable(String) DEFAULT CAST(NULL, 'Nullable(String)'),
`nI8_01` Nullable(Int8) DEFAULT CAST(NULL, 'Nullable(Int8)'),
`nI8_02` Nullable(Int8) DEFAULT CAST(NULL, 'Nullable(Int8)'),
`nI8_03` Nullable(Int8) DEFAULT CAST(NULL, 'Nullable(Int8)'),
`nI8_04` Nullable(Int8) DEFAULT CAST(NULL, 'Nullable(Int8)'),
`nI8_05` Nullable(Int8) DEFAULT CAST(NULL, 'Nullable(Int8)'),
`nI8_06` Nullable(Int8) DEFAULT CAST(NULL, 'Nullable(Int8)'),
`nDate_01` Nullable(Date) DEFAULT CAST(NULL, 'Nullable(Date)'),
`nStr_6` Nullable(String) DEFAULT CAST(NULL, 'Nullable(String)'),
`nStr_7` Nullable(String) DEFAULT CAST(NULL, 'Nullable(String)'),
`nStr_8` Nullable(String) DEFAULT CAST(NULL, 'Nullable(String)'),
`nStr_9` Nullable(String) DEFAULT CAST(NULL, 'Nullable(String)'),
`nStr_10` Nullable(String) DEFAULT CAST(NULL, 'Nullable(String)'),
`nStr_11` Nullable(String) DEFAULT CAST(NULL, 'Nullable(String)'),
`nI8_07` Nullable(UInt8) DEFAULT CAST(NULL, 'Nullable(UInt8)'),
`nI8_08` Nullable(UInt8) DEFAULT CAST(NULL, 'Nullable(UInt8)'),
`Str_01` String,
`nI32_01` Nullable(Int32) DEFAULT CAST(NULL, 'Nullable(Int32)'),
`nI8_19` Nullable(Int8) DEFAULT CAST(NULL, 'Nullable(Int8)'),
`nI8_09` Nullable(Int8) DEFAULT CAST(NULL, 'Nullable(Int8)'),
`nI8_10` Nullable(Int8) DEFAULT CAST(NULL, 'Nullable(Int8)'),
`nI8_11` Nullable(Int8) DEFAULT CAST(NULL, 'Nullable(Int8)'),
`nI8_12` Nullable(Int8) DEFAULT CAST(NULL, 'Nullable(Int8)'),
`nI8_13` Nullable(Int8) DEFAULT CAST(NULL, 'Nullable(Int8)'),
`nI8_14` Nullable(Int8) DEFAULT CAST(NULL, 'Nullable(Int8)'),
`nStr_12` Nullable(String) DEFAULT CAST(NULL, 'Nullable(String)'),
`nStr_13` Nullable(String) DEFAULT CAST(NULL, 'Nullable(String)'),
`nI8_15` Nullable(Int8) DEFAULT CAST(NULL, 'Nullable(Int8)'),
`nI8_16` Nullable(Int8) DEFAULT CAST(NULL, 'Nullable(Int8)'),
`nDateTime_01` Nullable(DateTime) DEFAULT CAST(NULL, 'Nullable(DateTime)'),
`nDateTime_02` Nullable(DateTime) DEFAULT CAST(NULL, 'Nullable(DateTime)'),
`nDateTime_03` Nullable(DateTime) DEFAULT CAST(NULL, 'Nullable(DateTime)'),
`nDateTime_04` Nullable(DateTime) DEFAULT CAST(NULL, 'Nullable(DateTime)'),
`nDateTime_05` Nullable(DateTime) DEFAULT CAST(NULL, 'Nullable(DateTime)'),
`nDateTime_06` Nullable(DateTime) DEFAULT CAST(NULL, 'Nullable(DateTime)'),
`nDateTime_07` Nullable(DateTime) DEFAULT CAST(NULL, 'Nullable(DateTime)'),
`nDateTime_08` Nullable(DateTime) DEFAULT CAST(NULL, 'Nullable(DateTime)'),
`nDateTime_09` Nullable(DateTime) DEFAULT CAST(NULL, 'Nullable(DateTime)'),
`nDateTime_10` Nullable(DateTime) DEFAULT CAST(NULL, 'Nullable(DateTime)'),
`nDateTime_11` Nullable(DateTime) DEFAULT CAST(NULL, 'Nullable(DateTime)'),
`nDateTime_12` Nullable(DateTime) DEFAULT CAST(NULL, 'Nullable(DateTime)'),
`nF64_01` Nullable(Float64) DEFAULT CAST(NULL, 'Nullable(Float64)'),
`nStr_14` Nullable(String) DEFAULT CAST(NULL, 'Nullable(String)'),
`nDate_02` Nullable(Date) DEFAULT CAST(NULL, 'Nullable(Date)'),
`nDateTime_13` Nullable(DateTime) DEFAULT CAST(NULL, 'Nullable(DateTime)'),
`nF64_02` Nullable(Float64) DEFAULT CAST(NULL, 'Nullable(Float64)'),
`nF64_03` Nullable(Float64) DEFAULT CAST(NULL, 'Nullable(Float64)'),
`nF64_04` Nullable(Float64) DEFAULT CAST(NULL, 'Nullable(Float64)'),
`nF64_05` Nullable(Float64) DEFAULT CAST(NULL, 'Nullable(Float64)'),
`nI8_18` Nullable(Int8) DEFAULT CAST(NULL, 'Nullable(Int8)'),
`nI8_17` Nullable(Int8) DEFAULT CAST(NULL, 'Nullable(Int8)')
)
ENGINE = Join(ANY, LEFT, idU128);
CREATE TABLE IF NOT EXISTS db_interpretation_01.table_08
(
`idColumnU64` UInt64,
`dateColumn` Date,
`aggCount_3` AggregateFunction(count),
`aggCount_4` AggregateFunction(count)
)
ENGINE = AggregatingMergeTree()
PARTITION BY toYYYYMM(dateColumn)
ORDER BY (idColumnU64, dateColumn)
TTL dateColumn + toIntervalDay(30)
SETTINGS index_granularity = 1024, ttl_only_drop_parts = 1;
No data is loaded (it's not necessary to reproduce the slowdown).
Benchmark:
clickhouse-benchmark -i 100 --query "SELECT *
FROM
(
SELECT
cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2'), toUInt128(toUInt128('1015029'))) AS idColumnU64,
*
FROM
(
SELECT
if(max(nDateTime_02_date_292929292) > '2020-10-31 00:00:00', max(nDateTime_02_date_292929292), NULL) AS o1,
if(max(other_max_datetime_05) > '2020-10-31 00:00:00', max(other_max_datetime_05), NULL) AS o2,
if(max(nDateTime_03_date) > '2020-10-31 00:00:00', max(nDateTime_03_date), NULL) AS o3,
if(max(nDateTime_04_date) > '2020-10-31 00:00:00', max(nDateTime_04_date), NULL) AS o4,
if(max(nDateTime_02_date) > '2020-10-31 00:00:00', max(nDateTime_02_date), NULL) AS o5,
if(max(other_max_datetime_01) > '2020-10-31 00:00:00', max(other_max_datetime_01), NULL) AS o6,
if(max(other_max_datetime_02) > '2020-10-31 00:00:00', max(other_max_datetime_02), NULL) AS o7,
argMaxMerge(agg_argmax_string_datetime_13) AS o8,
argMaxMerge(agg_argmax_string_datetime_05) AS o9,
argMaxMerge(agg_argmax_string_datetime_06) AS o10,
argMaxMerge(agg_argmax_string_datetime_02) AS o11,
argMaxMerge(agg_argmax_string_datetime_04) AS o12,
argMaxMerge(agg_argmax_string_datetime_15) AS o13,
argMaxMerge(agg_argmax_string_datetime_01) AS o14,
argMaxMerge(agg_argmax_string_u8_01) AS o15,
argMaxMerge(agg_argmax_f32_datetime_02) AS o16,
if(argMaxMerge(agg_argmax_string_datetime_09) != '', argMaxMerge(agg_argmax_string_datetime_09), NULL) AS o17,
if(argMaxMerge(agg_argmax_date_datetime_01) > '2020-10-31', argMaxMerge(agg_argmax_date_datetime_01), NULL) AS o18,
if(argMaxMerge(agg_argmax_date_datetime_02) > '2020-10-31', argMaxMerge(agg_argmax_date_datetime_02), NULL) AS o19,
argMaxMerge(agg_argmax_u8_other_02) AS o20,
argMaxMerge(agg_argmax_u8_other_03) AS o21,
argMaxMerge(agg_argmax_u8_other_04) AS o22,
argMaxMerge(agg_argmax_u8_other_01) AS o23,
argMaxMerge(agg_argmax_string_datetime_10) AS o24,
argMaxMerge(agg_argmax_string_datetime_11) AS o25,
countMerge(aggCount_3) AS o26,
countMerge(aggCount_4) AS o27
FROM db_interpretation_01.table_01 AS c
PREWHERE cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2'), toUInt128(toUInt128('1015029'))) = c.idColumnU64
) AS s01,
(
WITH (
SELECT coalesce(if((topKWeightedMerge(2)(agg_topk_01)[1]) != toUInt128(toUInt128('1015029')), topKWeightedMerge(2)(agg_topk_01)[1], topKWeightedMerge(2)(agg_topk_01)[2]), 0)
FROM db_interpretation_01.table_02 AS c
PREWHERE cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2')) = c.idColumnU64
) AS other_idU128
SELECT
if(max(other_max_datetime_05) > '2020-10-31 00:00:00', max(other_max_datetime_05), NULL) AS o28,
if(max(other_max_datetime_01) > '2020-10-31 00:00:00', max(other_max_datetime_01), NULL) AS o29,
if(max(nDateTime_02_date) > '2020-10-31 00:00:00', max(nDateTime_02_date), NULL) AS o30,
other_idU128
FROM db_interpretation_01.table_01 AS c
PREWHERE cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2'), toUInt128(other_idU128)) = c.idColumnU64
) AS s02,
(
SELECT
minIf(minDate, dateColumn > (now() - toIntervalDay(7))) AS o31,
maxIf(maxDate, dateColumn > (now() - toIntervalDay(7))) AS o32,
maxIf(maxInt16, dateColumn > (now() - toIntervalDay(28))) AS o33,
countMergeIf(aggCount_3, dateColumn > (now() - toIntervalHour(24))) AS o34,
countMergeIf(aggCount_3, dateColumn > (now() - toIntervalDay(14))) AS o35,
countMergeIf(aggCount_3, dateColumn > (now() - toIntervalDay(28))) AS o36,
countMergeIf(aggCount_4, dateColumn > (now() - toIntervalHour(24))) AS o37,
countMergeIf(aggCount_4, dateColumn > (now() - toIntervalDay(7))) AS o38,
countMergeIf(aggCount_4, dateColumn > (now() - toIntervalDay(28))) AS o27_month,
countMergeIf(aggCount_2_shown, dateColumn > (now() - toIntervalDay(14))) AS o40
FROM db_interpretation_01.table_03 AS c
PREWHERE cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2'), toUInt128(toUInt128('1015029'))) = c.idColumnU64
) AS s03,
(
SELECT
countMerge(agg_count_03) AS o41,
countMerge(agg_count_04) AS o42
FROM db_interpretation_01.table_04 AS c
PREWHERE (cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2'), toUInt128(toUInt128('1015029'))) = c.idColumnU64) AND (ls_01 = 'exit')
) AS s04,
(
SELECT
countMerge(aggCount_3) AS o43,
countMerge(aggCount_4) AS o44,
countMerge(aggCount) AS o45
FROM db_interpretation_01.table_02 AS c
PREWHERE cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2')) = c.idColumnU64
) AS s05,
(
SELECT
countMergeIf(aggCount_3, dateColumn > (now() - toIntervalDay(14))) AS o46,
uniqMergeIf(agg_uniq_u128_03, dateColumn > (now() - toIntervalHour(24))) AS o47,
uniqMergeIf(agg_uniq_u128_03, dateColumn > (now() - toIntervalDay(14))) AS o48,
countMergeIf(aggCount_4, dateColumn > (now() - toIntervalDay(14))) AS o49,
countMergeIf(aggCount_4, dateColumn > (now() - toIntervalDay(28))) AS o50
FROM db_interpretation_01.table_05 AS c
PREWHERE cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2')) = c.idColumnU64
) AS s06,
(
SELECT countMerge(aggCount_3) AS o51
FROM db_interpretation_01.table_06 AS c
PREWHERE cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2'), toUInt128(joinGet(db_interpretation_02.table_07, 'idColumnI64', toUInt128('1015029')))) = c.idColumnU64
) AS s07,
(
SELECT
countMergeIf(aggCount_3, dateColumn > (now() - toIntervalDay(28))) AS s52,
countMergeIf(aggCount_4, dateColumn > (now() - toIntervalDay(28))) AS s53
FROM db_interpretation_01.table_08 AS c
PREWHERE cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2'), toUInt128(joinGet(db_interpretation_02.table_07, 'idColumnI64', toUInt128('1015029')))) = c.idColumnU64
) AS s08
) AS final_s01
FORMAT JSONEachRow;"
Results per version:
- 21.8.15.7:
localhost:9000, queries 100, QPS: 18.219, RPS: 0.000, MiB/s: 0.000, result RPS: 18.219, result MiB/s: 0.006.
- 21.9.5.16:
localhost:9000, queries 100, QPS: 18.076, RPS: 0.000, MiB/s: 0.000, result RPS: 18.076, result MiB/s: 0.006.
- 21.10.4.26:
localhost:9000, queries 100, QPS: 17.302, RPS: 0.000, MiB/s: 0.000, result RPS: 17.302, result MiB/s: 0.006.
- 21.11.5.33:
localhost:9000, queries 100, QPS: 17.228, RPS: 0.000, MiB/s: 0.000, result RPS: 17.228, result MiB/s: 0.006.
- 21.12.4.1:
localhost:9000, queries 100, QPS: 17.523, RPS: 0.000, MiB/s: 0.000, result RPS: 17.523, result MiB/s: 0.006.
- 22.1.3.7:
localhost:9000, queries 100, QPS: 17.347, RPS: 0.000, MiB/s: 0.000, result RPS: 17.347, result MiB/s: 0.006.
- 22.2.2.1:
localhost:9000, queries 100, QPS: 16.836, RPS: 0.000, MiB/s: 0.000, result RPS: 16.836, result MiB/s: 0.006.
- 22.3.10.22:
localhost:9000, queries 100, QPS: 9.757, RPS: 0.000, MiB/s: 0.000, result RPS: 9.757, result MiB/s: 0.003.
- 22.3.10.22 without projections (https://github.com/ClickHouse/ClickHouse/issues/39905):
localhost:9000, queries 100, QPS: 10.463, RPS: 0.000, MiB/s: 0.000, result RPS: 10.463, result MiB/s: 0.004.
- 22.4.3.3:
localhost:9000, queries 100, QPS: 9.844, RPS: 0.000, MiB/s: 0.000, result RPS: 9.844, result MiB/s: 0.003.
- 22.5.1.2079:
localhost:9000, queries 100, QPS: 9.527, RPS: 0.000, MiB/s: 0.000, result RPS: 9.527, result MiB/s: 0.003.
- 22.6.4.35:
localhost:9000, queries 100, QPS: 8.975, RPS: 0.000, MiB/s: 0.000, result RPS: 8.975, result MiB/s: 0.003.
- 22.7.2.15:
localhost:9000, queries 100, QPS: 9.136, RPS: 0.000, MiB/s: 0.000, result RPS: 9.136, result MiB/s: 0.003.
In a year of releases we've gone from ~18.2 QPS to ~9.1 QPS, which is a nice and round 100% worse :smile:
I see several releases where the query slowed down noticeably, but the most important drop happened in 22.2 -> 22.3, so I'm going to try to bisect that and analyze what's going on. In any case, seeing other similar issues that have appeared recently (https://github.com/ClickHouse/ClickHouse/issues/39905, https://github.com/ClickHouse/ClickHouse/issues/35490), it might be worth adding some complex queries over little or no data to the performance tests, to avoid making them much slower when adding new steps to the analyze/interpret phase.
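To make that concrete: ClickHouse performance tests are XML files under tests/performance, and since no data is needed to reproduce this, such a test could be as simple as the following sketch (the file name, table, and query here are placeholders, not the actual reproducer):

```xml
<!-- Hypothetical tests/performance/query_interpretation.xml -->
<test>
    <create_query>CREATE TABLE empty_t (id UInt64, d DateTime) ENGINE = MergeTree ORDER BY id</create_query>
    <!-- A multi-join query over empty tables: all the time measured is
         spent in the analyze/interpret phase, not in execution. -->
    <query>SELECT * FROM (SELECT count() FROM empty_t) s1, (SELECT count() FROM empty_t) s2, (SELECT count() FROM empty_t) s3</query>
    <drop_query>DROP TABLE IF EXISTS empty_t</drop_query>
</test>
```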
This should be fixed by https://github.com/ClickHouse/ClickHouse/pull/31796.
Looking at the worst performance drop (22.2 -> 22.3), it was introduced in:
fe4534d4646e51e40ac04fec83e4a321c4ce19ea is the first bad commit
commit fe4534d4646e51e40ac04fec83e4a321c4ce19ea
Author: Amos Bird <[email protected]>
Date: Mon Mar 7 21:51:34 2022 +0800
Get rid of duplicate query planing.
src/Interpreters/InterpreterSelectQuery.cpp | 20 ++++++++------------
src/Interpreters/InterpreterSelectWithUnionQuery.cpp | 4 +++-
src/Interpreters/JoinedTables.cpp | 4 +++-
3 files changed, 14 insertions(+), 14 deletions(-)
PR: https://github.com/ClickHouse/ClickHouse/pull/35094
Commit prior to this one:
Queries executed: 100.
localhost:9000, queries 100, QPS: 15.031, RPS: 0.000, MiB/s: 0.000, result RPS: 15.031, result MiB/s: 0.005.
0.000% 0.064 sec.
10.000% 0.064 sec.
20.000% 0.064 sec.
30.000% 0.065 sec.
40.000% 0.065 sec.
50.000% 0.065 sec.
60.000% 0.065 sec.
70.000% 0.066 sec.
80.000% 0.069 sec.
90.000% 0.071 sec.
95.000% 0.072 sec.
99.000% 0.077 sec.
99.900% 0.084 sec.
99.990% 0.084 sec.
With this commit:
Queries executed: 100.
localhost:9000, queries 100, QPS: 9.419, RPS: 0.000, MiB/s: 0.000, result RPS: 9.419, result MiB/s: 0.003.
0.000% 0.100 sec.
10.000% 0.101 sec.
20.000% 0.102 sec.
30.000% 0.103 sec.
40.000% 0.103 sec.
50.000% 0.104 sec.
60.000% 0.106 sec.
70.000% 0.106 sec.
80.000% 0.107 sec.
90.000% 0.107 sec.
95.000% 0.109 sec.
99.000% 0.166 sec.
99.900% 0.196 sec.
99.990% 0.196 sec.
This should be fixed by https://github.com/ClickHouse/ClickHouse/pull/31796.
For reference, I tested the branch and got similar performance to master:
master (5ebf230dbbffb9b087047b31b03a9c5228eb2ae2):
localhost:9000, queries 100, QPS: 8.149, RPS: 0.000, MiB/s: 0.000, result RPS: 8.149, result MiB/s: 0.003.
Your branch (21bf5980ed8f3929277bf16842ad6ed4cd338570):
localhost:9000, queries 100, QPS: 8.306, RPS: 0.000, MiB/s: 0.000, result RPS: 8.306, result MiB/s: 0.003.
@Algunenano this branch introduces the new analyzer and planner; it is not enabled by default, you need to specify use_analyzer = 1. Although joins are still in progress, they are almost finished. It does not make a lot of sense to tune the current infrastructure right now.
this branch introduces the new analyzer and planner; it is not enabled by default, you need to specify use_analyzer = 1.
I see, thanks for the pointers. I'll keep an eye on it and test it as soon as it's ready.
It does not make a lot of sense to tune current infrastructure right now.
If it's going to be available for production use soon then yes, I guess it doesn't make sense, and I could give a hand testing and making sure it's ready. The problem I have right now is that we can't upgrade some clusters because of this specific performance degradation (lots of queries doing cross joins of one-row subqueries). It's in my best interest to have this addressed so I can stop supporting (internally) old releases.
@Algunenano we plan to finish it as soon as possible. I already tested it, and the planning time decreases significantly for non-trivial queries.
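For reference, enabling the analyzer from the branch is a session setting (name as given in the comments above; it is experimental and may be renamed in later releases):

```sql
-- Run the benchmark query with the new analyzer/planner enabled:
SET use_analyzer = 1;
-- ... then execute the reproducer query above.
```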
@Algunenano I checked your use case with and without analyzer.
(Same clickhouse-benchmark -i 100 command as in the reproducer above.)
Queries executed: 100.
localhost:9000, queries 100, QPS: 6.620, RPS: 0.000, MiB/s: 0.000, result RPS: 6.620, result MiB/s: 0.002.
0.000% 0.149 sec.
10.000% 0.150 sec.
20.000% 0.150 sec.
30.000% 0.150 sec.
40.000% 0.150 sec.
50.000% 0.151 sec.
60.000% 0.151 sec.
70.000% 0.151 sec.
80.000% 0.152 sec.
90.000% 0.153 sec.
95.000% 0.154 sec.
99.000% 0.161 sec.
99.900% 0.162 sec.
99.990% 0.162 sec.
With analyzer:
clickhouse-benchmark -i 100 --query "SELECT *
FROM
(
SELECT
cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2'), toUInt128(toUInt128('1015029'))) AS idColumnU64,
*
FROM
(
SELECT
if(max(nDateTime_02_date_292929292) > '2020-10-31 00:00:00', max(nDateTime_02_date_292929292), NULL) AS o1,
if(max(other_max_datetime_05) > '2020-10-31 00:00:00', max(other_max_datetime_05), NULL) AS o2,
if(max(nDateTime_03_date) > '2020-10-31 00:00:00', max(nDateTime_03_date), NULL) AS o3,
if(max(nDateTime_04_date) > '2020-10-31 00:00:00', max(nDateTime_04_date), NULL) AS o4,
if(max(nDateTime_02_date) > '2020-10-31 00:00:00', max(nDateTime_02_date), NULL) AS o5,
if(max(other_max_datetime_01) > '2020-10-31 00:00:00', max(other_max_datetime_01), NULL) AS o6,
if(max(other_max_datetime_02) > '2020-10-31 00:00:00', max(other_max_datetime_02), NULL) AS o7,
argMaxMerge(agg_argmax_string_datetime_13) AS o8,
argMaxMerge(agg_argmax_string_datetime_05) AS o9,
argMaxMerge(agg_argmax_string_datetime_06) AS o10,
argMaxMerge(agg_argmax_string_datetime_02) AS o11,
argMaxMerge(agg_argmax_string_datetime_04) AS o12,
argMaxMerge(agg_argmax_string_datetime_15) AS o13,
argMaxMerge(agg_argmax_string_datetime_01) AS o14,
argMaxMerge(agg_argmax_string_u8_01) AS o15,
argMaxMerge(agg_argmax_f32_datetime_02) AS o16,
if(argMaxMerge(agg_argmax_string_datetime_09) != '', argMaxMerge(agg_argmax_string_datetime_09), NULL) AS o17,
if(argMaxMerge(agg_argmax_date_datetime_01) > '2020-10-31', argMaxMerge(agg_argmax_date_datetime_01), NULL) AS o18,
if(argMaxMerge(agg_argmax_date_datetime_02) > '2020-10-31', argMaxMerge(agg_argmax_date_datetime_02), NULL) AS o19,
argMaxMerge(agg_argmax_u8_other_02) AS o20,
argMaxMerge(agg_argmax_u8_other_03) AS o21,
argMaxMerge(agg_argmax_u8_other_04) AS o22,
argMaxMerge(agg_argmax_u8_other_01) AS o23,
argMaxMerge(agg_argmax_string_datetime_10) AS o24,
argMaxMerge(agg_argmax_string_datetime_11) AS o25,
countMerge(aggCount_3) AS o26,
countMerge(aggCount_4) AS o27
FROM db_interpretation_01.table_01 AS c
PREWHERE cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2'), toUInt128(toUInt128('1015029'))) = c.idColumnU64
) AS s01,
(
WITH (
SELECT coalesce(if((topKWeightedMerge(2)(agg_topk_01)[1]) != toUInt128(toUInt128('1015029')), topKWeightedMerge(2)(agg_topk_01)[1], topKWeightedMerge(2)(agg_topk_01)[2]), 0)
FROM db_interpretation_01.table_02 AS c
PREWHERE cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2')) = c.idColumnU64
) AS other_idU128
SELECT
if(max(other_max_datetime_05) > '2020-10-31 00:00:00', max(other_max_datetime_05), NULL) AS o28,
if(max(other_max_datetime_01) > '2020-10-31 00:00:00', max(other_max_datetime_01), NULL) AS o29,
if(max(nDateTime_02_date) > '2020-10-31 00:00:00', max(nDateTime_02_date), NULL) AS o30,
other_idU128
FROM db_interpretation_01.table_01 AS c
PREWHERE cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2'), toUInt128(other_idU128)) = c.idColumnU64
) AS s02,
(
SELECT
minIf(minDate, dateColumn > (now() - toIntervalDay(7))) AS o31,
maxIf(maxDate, dateColumn > (now() - toIntervalDay(7))) AS o32,
maxIf(maxInt16, dateColumn > (now() - toIntervalDay(28))) AS o33,
countMergeIf(aggCount_3, dateColumn > (now() - toIntervalHour(24))) AS o34,
countMergeIf(aggCount_3, dateColumn > (now() - toIntervalDay(14))) AS o35,
countMergeIf(aggCount_3, dateColumn > (now() - toIntervalDay(28))) AS o36,
countMergeIf(aggCount_4, dateColumn > (now() - toIntervalHour(24))) AS o37,
countMergeIf(aggCount_4, dateColumn > (now() - toIntervalDay(7))) AS o38,
countMergeIf(aggCount_4, dateColumn > (now() - toIntervalDay(28))) AS o27_month,
countMergeIf(aggCount_2_shown, dateColumn > (now() - toIntervalDay(14))) AS o40
FROM db_interpretation_01.table_03 AS c
PREWHERE cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2'), toUInt128(toUInt128('1015029'))) = c.idColumnU64
) AS s03,
(
SELECT
countMerge(agg_count_03) AS o41,
countMerge(agg_count_04) AS o42
FROM db_interpretation_01.table_04 AS c
PREWHERE (cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2'), toUInt128(toUInt128('1015029'))) = c.idColumnU64) AND (ls_01 = 'exit')
) AS s04,
(
SELECT
countMerge(aggCount_3) AS o43,
countMerge(aggCount_4) AS o44,
countMerge(aggCount) AS o45
FROM db_interpretation_01.table_02 AS c
PREWHERE cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2')) = c.idColumnU64
) AS s05,
(
SELECT
countMergeIf(aggCount_3, dateColumn > (now() - toIntervalDay(14))) AS o46,
uniqMergeIf(agg_uniq_u128_03, dateColumn > (now() - toIntervalHour(24))) AS o47,
uniqMergeIf(agg_uniq_u128_03, dateColumn > (now() - toIntervalDay(14))) AS o48,
countMergeIf(aggCount_4, dateColumn > (now() - toIntervalDay(14))) AS o49,
countMergeIf(aggCount_4, dateColumn > (now() - toIntervalDay(28))) AS o50
FROM db_interpretation_01.table_05 AS c
PREWHERE cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2')) = c.idColumnU64
) AS s06,
(
SELECT countMerge(aggCount_3) AS o51
FROM db_interpretation_01.table_06 AS c
PREWHERE cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2'), toUInt128(joinGet(db_interpretation_02.table_07, 'idColumnI64', toUInt128('1015029')))) = c.idColumnU64
) AS s07,
(
SELECT
countMergeIf(aggCount_3, dateColumn > (now() - toIntervalDay(28))) AS s52,
countMergeIf(aggCount_4, dateColumn > (now() - toIntervalDay(28))) AS s53
FROM db_interpretation_01.table_08 AS c
PREWHERE cityHash64('0321352416546546546546546546546', lower('BU'), lower('random2'), toUInt128(joinGet(db_interpretation_02.table_07, 'idColumnI64', toUInt128('1015029')))) = c.idColumnU64
) AS s08
) AS final_s01
FORMAT JSONEachRow SETTINGS use_analyzer = 1;"
Queries executed: 100.
localhost:9000, queries 100, QPS: 38.364, RPS: 0.000, MiB/s: 0.000, result RPS: 38.364, result MiB/s: 0.013.
0.000% 0.023 sec.
10.000% 0.023 sec.
20.000% 0.023 sec.
30.000% 0.023 sec.
40.000% 0.024 sec.
50.000% 0.024 sec.
60.000% 0.024 sec.
70.000% 0.027 sec.
80.000% 0.029 sec.
90.000% 0.033 sec.
95.000% 0.035 sec.
99.000% 0.036 sec.
99.900% 0.038 sec.
99.990% 0.038 sec.
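For reference, a quick sanity check of the speedup using the numbers reported above. This is plain arithmetic over the two runs' median latencies and QPS figures (it can be run with `clickhouse-local`; no tables needed):

```sql
-- Ratios derived from the two benchmark runs above
SELECT
    round(0.151 / 0.024, 1) AS median_latency_ratio, -- old p50 / new p50, ~6.3x
    round(38.364 / 6.620, 1) AS qps_ratio;           -- new QPS / old QPS, ~5.8x
```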
Around 6x. What needs to be said additionally: we have not spent any time optimizing the low-level details of the Analyzer and Planner (I suppose its performance could still be improved several times over), and I did not expect the new Analyzer and Planner to improve performance much on your use case, because there are not that many JOINs. The main performance target of the new Analyzer/Planner infrastructure is queries with a lot of JOINs. CC: @alexey-milovidov
> I did not expect the new Analyzer and Planner to improve performance much on your use case, because there are not that many JOINs.
I don't know if you are joking or not, but 6x is a superb improvement for this use case. With data, the query itself is only ~15% slower (that is, the old parse and interpretation steps take 85-90% of the total time), so I expect this 6x to hold almost unchanged with data. You can't imagine how painful this specific use case currently is; to the point of blocking the upgrade of some machines just to support it. I'm super grateful for the effort :raised_hands:
As soon as you feel the PR is ready for scrutiny, I intend to both review it and stress test it. In my ideal scenario the new analyzer would be production ready and on by default ASAP; then we can think about improving it further :D In any case, let me know if there is anything I can do to help get this ready.
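In case it helps others reproduce the interpretation-vs-execution split: a rough way to compare the two modes after running both benchmarks is to look at end-to-end durations recorded in `system.query_log`. This is only a sketch; it assumes query logging is enabled, the `Settings` column is a Map (as in recent ClickHouse versions), and it reuses the `use_analyzer` setting name and the `final_s01` alias from the queries above as a filter:

```sql
-- Sketch: median end-to-end duration per mode, from the server's query log
SELECT
    Settings['use_analyzer'] AS analyzer,   -- empty when the setting was not changed
    count() AS runs,
    median(query_duration_ms) AS median_ms
FROM system.query_log
WHERE type = 'QueryFinish'
  AND query LIKE '%final_s01%'
GROUP BY analyzer
ORDER BY analyzer;
```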
@Algunenano The pull request is ready for review! We are planning to merge it disabled by default, introduce special CI test runs with it enabled, and once the tests pass, enable it by default.
This is fixed with the new analyzer. Nevertheless, I want to keep this open until I can have a look at the perf calls and see whether there are any easy wins that can be added to the current infrastructure based on this test.