Swoole Table memory reserve and usage problem.
$table = new Swoole\Table(1024);
$table->column('name', Swoole\Table::TYPE_STRING, 64);
$table->column('id', Swoole\Table::TYPE_INT, 4); // int size: 1, 2, 4, or 8 bytes
$table->column('num', Swoole\Table::TYPE_FLOAT);
$table->create();
As columns are added, memory for the full initial size (1024 rows) is allocated automatically at application startup.
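To put a rough number on that allocation, here is a back-of-the-envelope estimate. estimateTableBytes() is a hypothetical helper written for this comment, and the 64-byte per-row overhead is an assumption; Swoole's real allocation is larger (it reserves extra space for key storage and hash conflicts), so treat the result as a lower bound.

```php
<?php
// Rough lower-bound estimate of what Swoole\Table reserves at create() time.
// $rowOverhead (key storage, bookkeeping per row) is an assumed figure,
// not the exact value from the Swoole source.
function estimateTableBytes(int $rows, array $columnSizes, int $rowOverhead = 64): int
{
    $rowBytes = array_sum($columnSizes) + $rowOverhead; // one row's payload + overhead
    return $rows * $rowBytes;                           // all rows are reserved up front
}

// The table above: 64-byte string + 4-byte int + 8-byte float, 1024 rows.
$bytes = estimateTableBytes(1024, [64, 4, 8]);
echo round($bytes / 1024) . " KiB\n"; // prints "140 KiB"
```

The point is that the cost scales with the declared row count, not with how many rows are actually in use.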
For example, for 10k clients, when we use this together with other values to create 10k session IDs, the application gets an error because it tries to allocate too large a memory block at startup.
In fact, the table is empty; it will hold at most about 1,000 session entries at any given moment, yet while PHP is running it still hits out-of-memory errors.
The fact that Swoole auto-reserves this memory for the table up front means it eats into the physical memory available to other Swoole scripts, even though the space is not being used.
Given the table's high memory usage, it would be nice to have some kind of compression on the Swoole side, or an option to back the table with a disk file.
For example, for 10k clients, when we use this together with other values to create 10k session IDs, the application gets an error because it tries to allocate too large a memory block at startup.
Are you hosting Swoole for another party? The current Swoole deployment model is mainly meant for self-hosting.
The fact that Swoole auto-reserves this memory for the table up front means it eats into the physical memory available to other Swoole scripts, even though the space is not being used.
Given the table's high memory usage, it would be nice to have some kind of compression on the Swoole side, or an option to back the table with a disk file.
Auto-reserving is perfectly fine for me (in fact, that's what I want). One benefit of Swoole Table is its raw speed; would compression and/or disk storage slow it down significantly?
But if we decide to go this route, maybe gate it behind an option? (e.g. $table = new Swoole\Table(table_size: 1024, table_compression: true, compression_algo: ...);) so that current users are not affected.
Are you hosting Swoole for another party? The current Swoole deployment model is mainly meant for self-hosting.
No, I don't, but Swoole needs an independent dedicated server because of its structure; in restricted VPS/VDS environments with 1-2 GB of RAM, this memory usage keeps it from unleashing its potential.
Auto-reserving is perfectly fine for me (in fact, that's what I want). One benefit of Swoole Table is its raw speed; would compression and/or disk storage slow it down significantly?
I don't think it would slow things down on an SSD or NVMe drive; an occasional 2-3 KB disk read/write per request doesn't really matter. The Linux file system already caches disk access in RAM anyway.
But if we decide to go this route, maybe gate it behind an option? (e.g. $table = new Swoole\Table(table_size: 1024, table_compression: true, compression_algo: ...);) so that current users are not affected.
Yes, it could be like that. But why do we have to specify a length for each type at all? By 2021, that sort of limitation should have been left behind.
The problem is that Swoole starts up by allocating physical memory for space that goes unused, and the application crashes at startup if that much free memory isn't available.
That is why we have to pass the total item count (1024) to the table. Why can't we manage this count dynamically? We can't resize the table at runtime without losing its data.
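Since a table's size is fixed at create() time, the only "resize" available today is creating a bigger table and copying every row across before swapping references. The sketch below assumes nothing beyond that idea: migrateRows() is a hypothetical helper, not part of the Swoole API; it only requires the source to be iterable as key => row and the destination to expose set($key, $row), which matches how Swoole\Table behaves.

```php
<?php
// Hypothetical resize-by-copy workaround for a fixed-size table.
// Works on any key => row iterable whose destination has set($key, $row).
function migrateRows(iterable $source, object $dest): int
{
    $copied = 0;
    foreach ($source as $key => $row) {
        $dest->set($key, $row); // re-insert the row under the same key
        $copied++;
    }
    return $copied;
}

// Usage sketch (requires ext-swoole):
//   $bigger = new Swoole\Table(4096);
//   $bigger->column('name', Swoole\Table::TYPE_STRING, 64);
//   $bigger->create();
//   migrateRows($table, $bigger);
//   $table = $bigger; // old table is freed once no references remain
```

Caveat: in a multi-process server the new table's shared memory is only visible to workers forked after it was created, which is exactly the limitation this thread is complaining about; in practice the copy has to happen before the server starts.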
With 1 GB of RAM and 1 CPU pushing 40k client connections, this table memory allocation problem is what holds Swoole back.
Actually, I don't want to use tables at all. Why can't we read and write variables such as global arrays from within functions? That is a separate problem, and a huge one.
No, I don't, but Swoole needs an independent dedicated server because of its structure; in restricted VPS/VDS environments with 1-2 GB of RAM, this memory usage keeps it from unleashing its potential.
@okoca55 why are you using Swoole? Judging from your question, it seems you either don't need it or don't understand what Swoole actually is.
I don't think it would slow things down on an SSD or NVMe drive; an occasional 2-3 KB disk read/write per request doesn't really matter.
It seems you don't have much idea about computer architecture or what actually happens when you access the file system; there is a huge difference. You can read and write more than 2 million requests per second with Swoole Table!
The Linux file system already caches disk access in RAM anyway.
Yes, but only in certain scenarios, none of which resembles what you expect from Swoole. Sorry, but you really need a deeper understanding of computer hardware and OS architecture.
That is why we have to pass the total item count (1024) to the table.
Because Swoole is designed to be dependable, high-performance, and stateful. A Swoole\Table lives in shared memory that has to be mapped before the server forks its worker processes, so its full size must be known up front. Think it through: if you ran your application without allocating that memory and it then needed more at runtime, what would happen to your memory pointers? Your application would crash.
Why can't we manage this count dynamically? We can't resize the table at runtime without losing its data.
In that case, 98% of PHP programmers would get stuck on runtime memory leaks with no clear error.
Why can't we read and write variables such as global arrays from within functions? That is a separate problem, and a huge one.
Most of these questions come from insufficient technical knowledge about how operating systems behave, so they are very difficult to answer without that background. In fact, using global arrays or variables is dangerous in a stateful application!
I think this is a great book for understanding what's going on, and I'm sure you will find all your answers in it: Mastering Swoole PHP.