Mining refactor with new features
This is a rather large PR that I fully expect to be reworked, but I wanted to share it because I think it adds some useful features. I've tried to break the commits into logical steps so we can apply only a subset if needed.
- raise error if vanity characters not in bech32 possible characters
- move `mine_vanity_key` into `pow.py` to avoid circular imports when adding other features
- refactor the functions that take "guesses" out of the mining functions so that we can use them to test performance
- add methods to check hashrate and estimate mining times for both types of private key mining and event mining
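As a rough sketch of what the charset check and hashrate helpers might look like (the names `check_vanity_chars`, `get_hashrate`, and the stand-in `guess_key` below are illustrative, not the PR's actual code):

```python
import secrets
import time

# the 32 characters bech32 can encode; 'b', 'i', 'o', '1' can never appear
BECH32_CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"

def check_vanity_chars(pattern: str) -> None:
    """Raise if the vanity pattern contains characters bech32 cannot encode."""
    bad = set(pattern) - set(BECH32_CHARSET)
    if bad:
        raise ValueError(f"character(s) {sorted(bad)!r} cannot appear in an npub")

def guess_key() -> str:
    """Stand-in guess function: generate one random 32-byte key as hex."""
    return secrets.token_hex(32)

def get_hashrate(guess_fn, duration: float = 1.0) -> float:
    """Call guess_fn in a loop for ~duration seconds; return guesses per second."""
    n = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        guess_fn()
        n += 1
    return n / (time.perf_counter() - start)
```

The idea is just to time how many calls to a guess function complete per second, which is also how the numbers later in this thread were produced.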
@armstrys can you (or anyone else) drop in some rough performance calcs here? i.e. "hashes" (guesses) per second.
I had been working on my own vanity key brute forcing library in python, but completely abandoned it once I saw rana's performance.
My M1 MacBook Pro was doing something like 1 million guesses every two minutes in Python (roughly 8k/sec). By running multiple instances I could hit maybe 6x that.
Rana is hitting 340k per second!
I think it probably makes more sense to try to write python bindings for rana's Rust code and PR into rana whatever mods that might require.
If you pull this branch you should be able to run `get_hashrate(_guess_key)` for hex or `get_hashrate(_guess_vanity_key)` for npub (assuming my code is right). I think this implementation is consistent with the fastest I've been able to generate keys in Python on my machine. It wouldn't surprise me if rana is much faster. I probably won't have time to benchmark on my 2015 Intel MacBook today, but I can try sometime next week.
I suspect you’re right that investigating python bindings to another solution might make more sense here! I hadn’t thought of that.
```python
In [24]: pow.get_hashrate(pow._guess_key)
Out[24]: 24796.10208550446

In [25]: pow.get_hashrate(pow._guess_key)
Out[25]: 24901.973319194134

In [26]: pow.get_hashrate(pow._guess_vanity_key)
Out[26]: 10890.909980241477

In [27]: pow.get_hashrate(pow._guess_vanity_key)
Out[27]: 10914.554774844557
```
So for vanity gen it's about 31x slower than rana (340k vs ~10.9k guesses/sec) -- but that's with rana using all 8 cores. If we manually run 8 instances of Python and assume no performance loss per process (more or less true based on my earlier testing with my own Python vanity gen), that puts rana somewhere around 4x faster.
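For reference, the back-of-envelope arithmetic behind those estimates, using the rounded figures from this thread (the exact ratio depends on which run you compare against):

```python
rana_hps = 340_000    # rana's reported hashrate, using all 8 cores
python_hps = 10_900   # rough single-process Python vanity hashrate from above

ratio = rana_hps / python_hps   # rana vs one Python process, ~31x
per_core = ratio / 8            # assuming 8 Python processes scale linearly, ~3.9x
```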