Pasha Nemirovsky
@tomatolog
```
root@manticore-01:~# searchd -v
Manticore 6.0.4 1a3a4ea82@230314 (columnar 2.0.4 5a49bd7@230306) (secondary 2.0.4 5a49bd7@230306)
Copyright (c) 2001-2016, Andrew Aksyonoff
Copyright (c) 2008-2016, Sphinx Technologies Inc (http://sphinxsearch.com)
Copyright (c) 2017-2023, Manticore...
```
@tomatolog can you please advise what the best-practice recommendation is in terms of index size?
@tomatolog thx, so version 6.2.0 won't require reindexing all data after an upgrade, correct?
@tomatolog @sanikolaev we upgraded the version, and it crashed after 1 hour of uptime. See the details below:
```
[Thu Sep 7 11:53:48.547 2023] [28923] WARNING: '10.0.82.101:9312': query timed...
```
I'll increase the memory of the Manticore machines from 64 GB to 128 GB to test the behavior, but it is clearly visible that something is very wrong here in terms of...
settings
```
# Ansible managed
# https://github.com/manticoresoftware/manticoresearch/blob/master/manual/Server_settings/Searchd.md#node_address
common {
    # https://manual.manticoresearch.com/Server_settings/Common#lemmatizer_base
    lemmatizer_base = /usr/share/manticore/nlp/
    # https://manual.manticoresearch.com/Server_settings/Common#progressive_merge
    # progressive_merge =
    # https://manual.manticoresearch.com/Server_settings/Common#json_autoconv_keynames
    # json_autoconv_keynames =
    # https://manual.manticoresearch.com/Server_settings/Common#json_autoconv_numbers
    # json_autoconv_numbers = 0...
```
@tomatolog the same table now consumes 2-3x more memory compared to the previous version (we monitor all metrics over time), see below:
The graph above shows the ram_bytes counter from the index stats endpoint. In general, we monitor the following counters for each index within the cluster: Output from one...
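For reference, a minimal sketch of how we extract the memory counters per index. It assumes the status arrives as (Variable_name, Value) rows, as returned by Manticore's `SHOW TABLE <name> STATUS`; the helper name `collect_ram_counters` and the exact set of counters picked here are illustrative, not the full list we track:

```python
def collect_ram_counters(status_rows, wanted=("ram_bytes", "disk_bytes")):
    """Pick the memory-related counters from a table's STATUS rows.

    status_rows: iterable of (Variable_name, Value) string pairs,
    as returned by SHOW TABLE <name> STATUS over the SQL protocol.
    """
    stats = {name: value for name, value in status_rows}
    # Values come back as strings; convert the counters we chart to ints.
    return {key: int(stats[key]) for key in wanted if key in stats}


# Example rows as they might come back for one table (values are made up):
rows = [
    ("indexed_documents", "1000000"),
    ("ram_bytes", "2147483648"),
    ("disk_bytes", "8589934592"),
]
print(collect_ram_counters(rows))
# {'ram_bytes': 2147483648, 'disk_bytes': 8589934592}
```

Comparing these per-table numbers before and after an upgrade is what made the 2-3x growth visible.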