[DATABRICKS ERROR]: "FetchError: socket hang up"
Hi, we have encountered a socket hang-up error while performing a longevity test. We are using Node version 20.11.1 and @databricks/sql package version 1.7.1.
Kindly find the error and screenshot below:
" error occurred while resolving the DB query!, FetchError: request to https://adb-4989540552326160.0.azuredatabricks.net/sql/1.0/warehouses/d332f02ce1613e30 failed, reason: socket hang up"
Please help us resolve the issue, and let us know if more information is needed.
Hi @kumarav16! I will ask a few questions that crossed my mind right away:
- Do you see this error with the latest version of the library?
- Do you use a proxy?
- At which stage (opening session, running query, getting the result, etc.) is this happening?
- How large is your result set?
But it would be perfect if you could provide a minimal reproducible example, because this error looks somewhat generic.
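For reference, something as small as the following would do as a repro. This is just a sketch of the usual DBSQLClient flow with placeholder host/path/token, not your actual code:

```js
const { DBSQLClient } = require('@databricks/sql');

const client = new DBSQLClient();

async function main() {
  // Placeholder connection details – substitute your workspace values.
  await client.connect({
    host: 'adb-xxxxxxxxxxxxxxxx.azuredatabricks.net',
    path: '/sql/1.0/warehouses/xxxxxxxxxxxxxxxx',
    token: process.env.DATABRICKS_TOKEN,
  });

  const session = await client.openSession();
  const operation = await session.executeStatement('SELECT 1');
  const rows = await operation.fetchAll();

  await operation.close();
  await session.close();
  await client.close();

  console.log(rows);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```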
Hi @kravets-levko, kindly find the answers to the above questions below:
- Do you see this error with the latest version of the library? We have not yet tried the latest version (1.8.4); we are getting the error with version 1.7.1.
- Do you use a proxy? No proxy is used.
- At which stage (opening session, running query, getting the result, etc.) is this happening? While running a query.
- How large is your result set? Max 600-700 KB.
Is this issue resolved? I am also getting the same issue. It works fine locally, but when I deploy my code to Lambda, I get this error.
Hi @kravets-levko, hope you are doing well! Could you please suggest a solution for the above issue? We are still facing it while running the longevity test.
We sporadically encounter this issue (driver version 1.10.0), with errors like
FetchError: Invalid response body while trying to fetch https://<redacted>.cloud.databricks.com/sql/1.0/warehouses/<id>: Premature close
(Unfortunately, I cannot provide repro steps, so I'm not sure how useful this comment is for debugging; I just wanted to note that other people are experiencing the issue.)
We have the same issue. In our case the amount of data being fetched is huge, greater than 100 MB. This error is unhandled by the SDK, so we end up with an unhandled promise rejection.
@databricks/sql version - 1.10.0
Node version - 22.17.0
FetchError: request to https://dbstorages<redacted>.blob.core.windows.net/jobs/<redacted>/sql/2025-07-01/15/results_2025-07-01T15:20:51Z_b973fc56-4b47-4a87-b51f-f3b2e39d9774?sig=<redacted>=2025-07-01T15%3A35%3A58Z&sv=2019-02-02&spr=https&sp=r&sr=b failed,
reason: socket hang up
at ClientRequest.<anonymous> (/usr/app/node_modules/node-fetch/lib/index.js:1501:11)
at ClientRequest.emit (node:events:518:28)
at emitErrorEvent (node:_http_client:104:11)
at TLSSocket.socketOnEnd (node:_http_client:542:5)
at TLSSocket.emit (node:events:530:35)
at endReadableNT (node:internal/streams/readable:1698:12)
at process.processTicksAndRejections (node:internal/process/task_queues:90:21)
{ type: 'system', errno: 'ECONNRESET', code: 'ECONNRESET' }
Databricks query history:
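Until the driver handles this internally, we are catching it ourselves. Rough sketch only, not a fix for the underlying reset; `fetchAllWithRetry` is our own helper, not part of the SDK, and the retry count is arbitrary:

```js
// Retry fetchAll() on connection resets and make sure the rejection is
// always caught instead of escaping as an unhandled promise rejection.
// `operation` is whatever session.executeStatement() returned.
async function fetchAllWithRetry(operation, retries = 3) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await operation.fetchAll();
    } catch (err) {
      const retriable = err.code === 'ECONNRESET';
      if (retriable && attempt < retries) {
        console.warn(`fetchAll attempt ${attempt} failed: ${err.message}, retrying`);
        continue;
      }
      throw err; // rethrow so the caller can log/handle it instead of crashing
    }
  }
}

// Last-resort safety net so a rejection that escapes application code is
// logged rather than taking the process down.
process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection:', reason);
});
```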