Connection leak in Supavisor connections
Bug report
- [N] I confirm this is a bug with Supabase, not with my own application. I'm not aware of the exact RCA, so I can't tell for certain. I've tried both SingleConnectionDataSource and PGSimpleDataSource in JDBC, but the issue still persists.
- [Y] I confirm I have searched the Docs, GitHub Discussions, and Discord.
Describe the bug
When connecting to Supavisor in transaction pooler mode via JDBC (SingleConnectionDataSource or PGSimpleDataSource), the Supavisor connections are not released after calling connection.close(). I kept suppressClose set to false to avoid any connection proxy. After a while, the Supavisor connection count in Grafana reaches 200 and the application starts throwing the exception below:
FATAL: Max client connections reached
On top of this, even after the application server is stopped, the Supavisor connection count remains the same. So even if there were a bug in my application code, the connection count should drop to zero once the server is killed.
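For reference, the close-every-connection pattern described above can be sketched as follows. This is illustrative only: FakeConnection is a stand-in for java.sql.Connection so the snippet compiles and runs without a live database or the PostgreSQL driver. With try-with-resources, close() is guaranteed to run even when the body throws, which is why a client written this way should not leak on its side:

```java
// Illustrative sketch: FakeConnection stands in for java.sql.Connection so this
// runs offline; real code would obtain connections from PGSimpleDataSource.
class FakeConnection implements AutoCloseable {
    static int openCount = 0;             // tracks connections not yet closed

    FakeConnection() { openCount++; }     // "opening" a connection

    @Override
    public void close() { openCount--; }  // released on close(), leak-free
}

public class LeakCheck {
    public static void main(String[] args) {
        // Open and close 200 connections; try-with-resources guarantees close()
        // is invoked on every iteration, even if the body throws an exception.
        for (int i = 0; i < 200; i++) {
            try (FakeConnection c = new FakeConnection()) {
                // ... run a query via c ...
            }
        }
        System.out.println("still open: " + FakeConnection.openCount); // prints "still open: 0"
    }
}
```

If the analogous count reaches zero on the client side while the Grafana client-connection graph keeps growing, the leak is on the pooler side, which matches the behaviour reported above.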
Expected behavior
The Supavisor connection should be released when the connection is closed by application code.
Screenshots
System information
- OS: Linux (Although not applicable)
- Browser (if applies) Not Applicable
- Version of supabase-js: hosted on supabase
- Version of Node.js: Not being used
- Version of Java: 17
Additional context
References: SingleConnectionDataSource - https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/jdbc/datasource/SingleConnectionDataSource.html PGSimpleDataSource- https://jdbc.postgresql.org/documentation/publicapi/org/postgresql/ds/PGSimpleDataSource.html
Is it self hosted Supavisor instance? If so, which version of Supavisor do you run?
No, it is not self hosted. Using the Supabase hosted Supavisor only.
@LakhveerChahal have you reported this to the Supabase Helpdesk? That would help us, as it would give us information about your project so we could access the logs and metrics for your organisation.
Hi @hauleth We have added a support ticket with the following ID: 20158080681
@hauleth Do we have any update on this issue? We are facing this issue on Project Id: xylvebryvatsekgukvlr
Adding a Screenshot of the "Number of Client Connections VS Time" graph for your reference.
I have restarted your pool, so it should be now possible to reconnect. I will investigate it further, but I will move conversation to the support ticket.
Ok, we have noticed the problem and I am working on a resolution. I am reopening. cc @chasers
We have been seeing this too.
21180299410 21188011144 20717203232
The gut reaction is that the client is leaking connections, but I'm fairly confident we aren't. As of today, my staging pooler in ap-southeast-2 can't receive any connections at all (including from pgAdmin). 10s connection timeout every time. I tried rebooting the database and project but no luck.
I tried adding the ipv4 add-on and connecting directly but I'm currently getting SSL errors there that I need to investigate.
We are aware of the issue and are planning a fix deployment. For now, the workaround is to edit the pool size in the configuration, which forces a pool restart and mitigates the issue. I will post here when the fix is deployed.
Thanks I will keep that in mind when it pops up
> We are aware of the issue and we are planning fix deployment
out of curiosity, what is the root cause?
@abc3 the connection shutdowns and errors weren't properly handled, which meant there was a message left unhandled: it was logged, but the timeout wasn't restarted, so the process just sat there idling without a proper shutdown. The fix was to properly handle TCP errors in the ClientHandler.
I am having the same issue and I don't know how to solve it with JDBC.
I am also having the same issue. Can someone please suggest how I can resolve it?
@hauleth @abc3 @chasers

```yaml
supavisor:
  container_name: supabase-pooler
  image: supabase/supavisor:1.1.56
```
Please check the image I have attached. Can you please help me resolve this issue?
In my case supabase_admin has a lot of connections, which is causing the max connections limit to be reached:

```sql
select *
from pg_stat_activity
where usename = 'supabase_admin';
```
Result: (screenshot attached)
@hauleth Any update on the above issue?
We are currently working on releasing an updated version that should solve this issue. It should be out in a few days.
Running the same query against pg_stat_activity:

```sql
select *
from pg_stat_activity
where usename = 'supabase_admin';
```

Result (condensed): dozens of supabase_admin connections, most with application_name supavisor_meta, plus supabase_mt_realtime, cluster_node_supavisor@73342e01069e, pg_cron scheduler, pg_net 0.14.0, TimescaleDB Background Worker Launcher, an authenticator connection from PostgREST 12.2.0, several rows with no application_name, and supabase_storage_admin / supabase_auth_admin sessions.
@hauleth Please notify here once this issue has been fixed
@RishabhPachori, just to clarify, are you experiencing this issue in a self-hosted environment or in a specific Supabase region?
Yes, I am facing this issue in a self-hosted environment.
@RishabhPachori, you're using an outdated version of Supavisor (1.1.56), and the current version is 2.5.1. Try updating, as it includes many important fixes that may resolve this issue. Just be careful, since v1 and v2 differ significantly
I have tried that already. I updated all images to the latest Supabase versions, but the issue is not resolved for me.
@RishabhPachori, could you describe the exact steps to reproduce the issue? Ideally, create a minimal app that triggers it
You can follow this Supabase Docker guide to self-host Supabase on your local system:
https://supabase.com/docs/guides/self-hosting/docker
Follow the same steps on your local system. Once Supabase is up, open the Supabase dashboard URL and go to the database roles section, where you can see roughly 50+ connections for the supabase_admin role.
@hauleth @abc3 @chasers can you please suggest a workaround I can use to resolve this issue for now?
@RishabhPachori, the application_name property supavisor_meta indicates that these connections are used by Supavisor as a metadata store, where it keeps data about tenants.
By default, the metadata connection pool size is set to 25. To reduce this value, you can set the DB_POOL_SIZE environment variable in your supabase-project/docker-compose.yml file.
For example, add the following line after the POOLER_POOL_MODE setting:

```yaml
DB_POOL_SIZE: 5
```

This will limit the number of connections Supavisor uses for its internal metadata pool.
Alternatively, you can define the pool size via your .env file. In your docker-compose.yml, use the following line:

```yaml
DB_POOL_SIZE: ${POOLER_DB_POOL_SIZE}
```

Then, in your .env file, add:

```
POOLER_DB_POOL_SIZE=5
```
Is this a workaround to resolve the above issue? @abc3
What is this one then? POOLER_DEFAULT_POOL_SIZE: 20
https://github.com/supabase/supabase/blob/354c9b6955dc78f7f45dd3babac43eb7bae27722/docker/docker-compose.yml#L518
@RishabhPachori, let me explain further.
Supavisor is a multitenant solution, which means it needs to store data about different tenants, databases, users, and more. For this purpose, it uses a metadata store, which is a PostgreSQL database Supavisor uses internally. This metadata store has its own connection pool.
In self-hosted setups, for simplicity, Supavisor uses the same PostgreSQL instance as the client database to store metadata.
The POOLER_DEFAULT_POOL_SIZE setting controls the size of the client connection pool. This setting limits how many connections can be made to Supavisor from external PostgreSQL clients.
Hope this makes things clearer.
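Putting the two settings together, a minimal docker-compose sketch (values and exact placement are illustrative; adjust to your own compose file):

```yaml
supavisor:
  environment:
    # Client-facing pool: limits how many external PostgreSQL clients
    # can hold connections through the pooler.
    POOLER_DEFAULT_POOL_SIZE: 20
    # Internal pool: connections Supavisor keeps to its own metadata
    # store (defaults to 25 if unset).
    DB_POOL_SIZE: 5
```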
Thanks @abc3, I have set DB_POOL_SIZE: 5 and POOLER_DEFAULT_POOL_SIZE: 20.
Now the connection count drops to 36. Is that normal in a self-hosted Supabase setup? Currently supabase_admin has 33 connections in total.
After that I ran this command. You can see that there are many idle connections. Is that okay?

```sql
select usename, application_name, state
from pg_stat_activity
where state = 'idle';
```
| usename | application_name | state |
|---|---|---|
| authenticator | PostgREST 12.2.12 (cd3cf9e) | idle |
| authenticator | PostgREST 12.2.12 (cd3cf9e) | idle |
| supabase_storage_admin | | idle |
| supabase_admin | supavisor_meta | idle |
| supabase_admin | supavisor_meta | idle |
| supabase_admin | supavisor_meta | idle |
| supabase_admin | supavisor_meta | idle |
| supabase_admin | cluster_node_supavisor@f63b2489e6a1 | idle |
| supabase_admin | cluster_node_supavisor@f63b2489e6a1 | idle |
| supabase_admin | supavisor_meta | idle |
| supabase_admin | supabase_mt_realtime | idle |
| supabase_admin | supabase_mt_realtime | idle |
| supabase_admin | supabase_mt_realtime | idle |
| supabase_admin | supabase_mt_realtime | idle |
| supabase_admin | supabase_mt_realtime | idle |
| supabase_admin | | idle |
| supabase_admin | | idle |
| supabase_admin | | idle |
| supabase_admin | | idle |
| supabase_admin | | idle |
| supabase_admin | | idle |
| supabase_admin | | idle |
| supabase_admin | | idle |
| supabase_admin | | idle |
| supabase_admin | | idle |
| supabase_admin | | idle |
| supabase_admin | | idle |
| supabase_admin | | idle |
@RishabhPachori, LGTM. The connections without an application_name are likely related to the analytics service. But since I'm no longer part of the Supabase team, I can only help you with Supavisor.