Connector logging error messages
We are seeing the following logging output at the very end when our jobs succeed.
```
2018-12-18 19:39:26,559 [ERROR ] could not find io module state (interpreter shutdown?) (network.py:661)
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/snowflake/connector/network.py", line 781, in _request_exec
    auth=SnowflakeAuth(token),
  File "/usr/local/lib/python3.6/site-packages/botocore/vendored/requests/sessions.py", line 451, in request
    prep = self.prepare_request(req)
  File "/usr/local/lib/python3.6/site-packages/botocore/vendored/requests/sessions.py", line 382, in prepare_request
    hooks=merge_hooks(request.hooks, self.hooks),
  File "/usr/local/lib/python3.6/site-packages/botocore/vendored/requests/models.py", line 307, in prepare
    self.prepare_body(data, files, json)
  File "/usr/local/lib/python3.6/site-packages/botocore/vendored/requests/models.py", line 436, in prepare_body
    length = super_len(data)
  File "/usr/local/lib/python3.6/site-packages/botocore/vendored/requests/utils.py", line 59, in super_len
    fileno = o.fileno()
RuntimeError: could not find io module state (interpreter shutdown?)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/snowflake/connector/network.py", line 648, in _request_exec_wrapper
    **kwargs)
  File "/usr/local/lib/python3.6/site-packages/snowflake/connector/network.py", line 871, in _request_exec
    raise RetryRequest(err)
snowflake.connector.network.RetryRequest: could not find io module state (interpreter shutdown?)
```
Our logs are full of this message, even though our ETLs run fine. Are we doing something wrong? Should we explicitly close the connection at the end of a program?
Our environment is the latest Python 3.6 with:
```
'beautifulsoup4==4.5.3',
'click==6.7',
'dicttoxml==1.7.4',
'facebook_business==3.1.1',
'google-api-python-client==1.6.1',
'google-auth==1.5.1',
'IP2Location==8.0.0',
'jenkinsapi==0.3.3',
'Jinja2==2.9.4',
'joblib==0.11',
'Pandas==0.19.2',
'PyMySQL==0.7.9',
'PyYaml==3.12',
'pyasn1==0.4.1',
'pyasn1-modules==0.2.1',
'redis==2.10.5',
'retrying==1.3.3',
'simplejson==3.10.0',
'slackclient==1.0.5',
'snowflake-connector-python==1.6.10',
'snowflake-sqlalchemy==1.0.2',
'SQLAlchemy==1.0.17',
'sqlparse==0.2.3',
'tabulate==0.7.7',
'tzlocal==1.3',
'urllib3==1.23',
'zeep==0.24.0'
```
@ivankixeye do you close the connection? If it is not closed explicitly, the destructor will attempt to close the connection by sending a message to the server. However, if a dependent module has already been destructed, the Python Connector fails with this error.
@smtakeda I do not. I will look into closing the connection explicitly.
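For readers skimming the thread: the advice above is to make cleanup deterministic instead of leaving it to `__del__`. A minimal sketch of the pattern, using a stand-in class (since the real `snowflake.connector.connect` needs live credentials):

```python
class FakeConnection:
    """Stand-in for the connector's connection object (hypothetical)."""
    def __init__(self):
        self.closed = False

    def close(self):
        # With the real connector, close() sends the session-teardown
        # request while the interpreter (and the io module) is still alive.
        self.closed = True

def run_job(conn):
    pass  # queries would go here

conn = FakeConnection()
try:
    run_job(conn)
finally:
    conn.close()  # explicit close; the destructor never has to do it
```

With the real connector, `contextlib.closing(conn)` gives the same guarantee with less boilerplate.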
I don't understand why this ends up in the logs at ERROR level while the process exits successfully. No error surfaces anywhere: nothing is raised and no non-zero exit code shows up. Why not raise an exception, or do something other than just logging this?
Hey, I just want to add that we are also seeing this issue with the Python adaptor.
We first saw this while transitioning from Python 2 to Python 3 (we saw no issues under Python 2). Initially, when this error occurred the adaptor seemed to get into an endless retry loop, so the program simply hung after completing. With further investigation we have managed to isolate the issue a bit:

- The issue is only present when we set up logging with the Python logging package: if we remove the `logging.basicConfig([...])` line from our code, the hang goes away (although we cannot be sure the error stopped occurring, since we no longer see the logging).
- After the comment above, we noticed we were not explicitly closing the connection but relying on destructors, so we added an explicit close of the connector (although I do not think this should be required; destructors exist for a reason and should be safe to rely on). This stopped the hanging but did not stop the error message from appearing.
- If we disable telemetry within the connector, the error message does not occur, so it seems to be related to telemetry somehow, but there does not appear to be a supported means of disabling it.

This error is problematic for us because we use this connector as part of production systems, and we cannot tolerate error messages being emitted in our production environment, even if they do not impact functionality. We would greatly appreciate this being investigated. I am happy to give more details if they would be helpful.
I resolved this issue by muting all snowflake.connector logging output, storing it in a buffer, and only outputting its contents when an unhandled exception bubbles up. This keeps our logs clean and easier to read.
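The buffer-and-flush approach described above can be built from the stdlib's `logging.handlers.MemoryHandler`. This is a sketch of one way to wire it up, not the exact code from that setup:

```python
import logging
import logging.handlers

# Hold snowflake.connector records in memory; only flush them through to
# the real handler when something at CRITICAL (or worse) comes through.
buffered = logging.handlers.MemoryHandler(
    capacity=10_000,                 # forced flush if this many records pile up
    flushLevel=logging.CRITICAL,     # severity that triggers a flush
    target=logging.StreamHandler(),  # where flushed records actually go
)

sf_logger = logging.getLogger("snowflake.connector")
sf_logger.propagate = False          # keep records out of the root logger
sf_logger.addHandler(buffered)
```

Flushing on an *unhandled exception*, as described above, would additionally need a `sys.excepthook` that calls `buffered.flush()` before re-raising.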
I am closing the connection and still getting this error as well. Trying to get rid of this is driving me insane.
This is a very temporary band-aid, but just to get things running you can also do:

```python
logging.getLogger("snowflake.connector.cursor").disabled = True
logging.getLogger("snowflake.connector.connection").disabled = True
```

However, @ivankixeye's solution is definitely preferable.
Hi, this worked for me:

```python
logging.getLogger("snowflake.connector.network").disabled = True
```

I would say the issue comes from the network module, not from cursor or connection.
Could the driver detect Python process termination and disable logging to mitigate this? Suppressing logging entirely would certainly help, but it would make debugging harder...
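Since Python 3.5 the stdlib exposes `sys.is_finalizing()`, so such a guard is at least possible. This is a hypothetical sketch of what the check could look like, not actual connector code:

```python
import logging
import sys

logger = logging.getLogger("snowflake.connector.network")

def log_retry_error(message: str) -> None:
    # During interpreter shutdown the io module state may already be gone,
    # which is exactly what produces the spurious ERROR in this thread.
    # Skip logging once finalization has begun.
    if sys.is_finalizing():
        return
    logger.error(message)
```

During normal execution nothing changes; the early return only kicks in on the teardown path.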
This appears to be an issue with client telemetry trying to send data while the connection is tearing down. The error no longer appears when client telemetry is disabled by setting this line to False: https://github.com/snowflakedb/snowflake-connector-python/blob/5ad17f6491c1bfe87facd8b367fd51b519f2fd03/telemetry.py#L54
The backtrace I saw after adding some logging looked like this:

```
  File "Python/pvenv_3.6/lib64/python3.6/site-packages/snowflake/connector/connection.py", line 194, in __del__
    self.close(retry=False)
  File "Python/pvenv_3.6/lib64/python3.6/site-packages/snowflake/connector/connection.py", line 502, in close
    self._telemetry.close()
  File "Python/pvenv_3.6/lib64/python3.6/site-packages/snowflake/connector/telemetry.py", line 114, in close
    self.send_batch()
  File "Python/pvenv_3.6/lib64/python3.6/site-packages/snowflake/connector/telemetry.py", line 96, in send_batch
    method='post', client=None, timeout=5)
  File "Python/pvenv_3.6/lib64/python3.6/site-packages/snowflake/connector/network.py", line 304, in request
    timeout=timeout, _include_retry_params=_include_retry_params)
  File "Python/pvenv_3.6/lib64/python3.6/site-packages/snowflake/connector/network.py", line 533, in _post_request
    _include_retry_params=_include_retry_params)
  File "Python/pvenv_3.6/lib64/python3.6/site-packages/snowflake/connector/network.py", line 612, in fetch
    **kwargs)
  File "Python/pvenv_3.6/lib64/python3.6/site-packages/snowflake/connector/network.py", line 652, in _request_exec_wrapper
    **kwargs)
  File "Python/pvenv_3.6/lib64/python3.6/site-packages/snowflake/connector/network.py", line 795, in _request_exec
    logger.error(traceback.format_stack())
```
How to mute all snowflake.connector logging output?
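One stdlib-only way to do that is to raise the threshold on the whole `snowflake.connector` namespace; child loggers such as `snowflake.connector.network` inherit the effective level:

```python
import logging

# Anything below this level from snowflake.connector.* is dropped.
logging.getLogger("snowflake.connector").setLevel(logging.CRITICAL + 1)
```

Unlike setting `.disabled = True` on individual module loggers, this covers every module under the namespace; the trade-off is that it also hides genuine errors, which is why the buffer-then-flush approach earlier in the thread is gentler.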
Here's an MWE; I encountered this problem from a completely different code base. I suppose the problem is related to garbage collection order?
```python
#!/usr/bin/env python3

class Foo:
    def __init__(self):
        self._open = open

    def __del__(self):
        with self._open('/tmp/doge_poop', 'w') as f:
            f.write('corn kernel')

foo = Foo()
```
We started seeing this when using SQLAlchemy engine.connect() as a context manager (snowflake-connector-python version 2.7.7):
```python
import os
from sqlalchemy import create_engine
from sqlalchemy.engine.url import URL

engine = create_engine(
    URL(<connection parameters>),
)

with engine.connect() as connection:
    result = connection.execute(<query>).fetchall()
```
The error went away when changing the code so that we handle the closing of the connection manually.
I was having this same issue.
```python
from snowflake.connector import connect as snowflake_connect

with snowflake_connect(**self.credentials) as conn:
    cursor = conn.cursor().execute(query)
    return cursor.fetch_pandas_all()
```

Handling it like this solved the issue.
I was getting the mentioned error log when doing something like:
```python
def execute_query(query: str):
    with snowflake_connect(**self.credentials) as conn:
        result = conn.cursor().execute(query)
    return result  # the cursor escapes the connection context here

result = execute_query("SELECT 1")
result.fetch_pandas_all()  # connection is already closed at this point
```
So basically, handling everything inside the context of the connection solved the issue for me
To clean up and re-prioritize bugs and feature requests we are closing all issues older than 6 months as of March 1, 2023. If there are any issues or feature requests that you would like us to address, please re-create them. For urgent issues, opening a support case with this link Snowflake Community is the fastest way to get a response