UserWarning: resource_tracker
Hello,
Upon uploading anything, I reliably receive the following warning, which prevents b2 from exiting cleanly:
/opt/pkg/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
Is this likely to be an issue with b2 or with resource_tracker? Is there a clear way to avoid this?
Does it also happen if the --noProgress option is used?
It does not! Using --noProgress exits cleanly.
I've read a hint somewhere that it's caused by a floating-point math error somewhere in tqdm.
Is there something specific to your uploads, such as the object size being zero bytes?
It happens with all uploads I'm afraid, large or small (or zero bytes).
Is your clock behind or something? Can you please use ntpdate or something similar to make sure that's not what is causing the problem?
I'm on macOS so time is set automatically via time.apple.com.
Generally, use --noProgress in scripts - the progress reporting function is not a free meal, so you'll even get slightly improved performance if you disable it. As for the bug itself, it's in tqdm. Figuring out why it only fires for you and how to work around it is, I'm sorry to say, not a top priority for me personally. If someone would look into it and provide a PR, we'd be happy to merge it, but as it is now, I can only suggest maybe changing the tqdm version (maybe they've fixed it already?) or working on it with the tqdm developers.
Thanks, I upgraded tqdm to their latest 4.62.3 and I get the same warning and still no clean exit. It's not an impediment to my life since it just requires a ^C.
It might be because I'm using ksh instead of 🤮 bash.
Oh. Can you find the strength in yourself to test it with bash so that we can confirm it's ksh-induced, or maybe it's not and it's specific to something else in your environment?
Lol. I will dig deep and report back.
💪
This hits me too, using the macOS default z-shell. --noProgress hides the error, but also makes it really hard to know how close large (multi-gig) files are to completion 😉
I have the same problem and --noProgress hides the error.
I'm having the same problem.
% /opt/homebrew/Cellar/python@3.11/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d '
I'm also seeing the Cyberduck and MountainDuck FTP programs fail to upload files after the last byte is uploaded, and I'm wondering if this error is the root cause. Do they use b2-tools under the hood?
The issue you observe is just a warning, not an error, but you seem to have another issue, which is most likely caused by crossing a quota.
It seems that MountainDuck is written in PowerShell and Cyberduck in Java, so they don't share code with the CLI.
This is caused by a bug in Python multiprocessing - see https://bugs.python.org/issue46391
Unfortunately, I'm not sure there's any way to suppress the warning.
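For the curious, here is a minimal sketch of the mechanism (an illustration only, not b2's or tqdm's actual code, and assuming a POSIX system such as macOS or Linux): multiprocessing locks are backed by named semaphores that get registered with the resource_tracker helper process, and if the main process exits before the lock is finalized, the tracker reports the semaphore as leaked at shutdown.

```python
import multiprocessing
import os

# A multiprocessing.Lock() is backed by a named POSIX semaphore, which is
# registered with the resource_tracker helper process when it is created.
lock = multiprocessing.Lock()

# Exiting without running normal cleanup means the semaphore is never
# unregistered, so resource_tracker warns at shutdown:
#   UserWarning: resource_tracker: There appear to be 1 leaked semaphore
#   objects to clean up at shutdown
os._exit(0)
```

Since tqdm keeps a multiprocessing lock for its writes, the progress bar is presumably where the leaked semaphore comes from, which would explain why --noProgress makes the warning go away.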
It was fixed in Python.