
Adaptive chunk sizes

Open 0xVavaldi opened this issue 6 years ago • 2 comments

This is a feature suggestion posted based on a discussion between s3inlc and Thor.

Fact: Hashcat benchmarking is bad

When running large hash lists, certain hash types, or when there is an issue on the client side, the benchmark can produce very different results. In this test case we had a 250k vBulletin list (mode 2611) from hashes.org that ran at subpar speeds:

- 9 MH/s on 4x 1080 Ti
- 12 MH/s on 1x 1080 Ti

The chunk size was about 3k out of the total wordlist attack keyspace of 1,464,244,267, and each chunk completed in approximately 50 to 80 seconds. This is far below the 600-second target.

The goal of this feature is to adjust the benchmark based on the target and have it adapt the chunk size, resulting in greater speeds and fewer chunks without reduced functionality or performance.

The proposed formula for this (by s3inlc) is: `new chunk size = (600 s / time needed) * old chunk size`
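For example, taking the test case above (old chunk size of about 3,000 and a mid-range completion time of 75 s), the formula gives 600 / 75 * 3,000 = 24,000 for the next chunk, an eightfold increase.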

The goal of the formula is to adjust the chunk size UP while the time needed for the last chunk is less than the ideal chunk duration (600 s). This allows for higher utilization in case the benchmark turned out too low, and it can equally be used to reduce utilization / chunk time if the benchmark turned out too high.
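As a rough illustration, here is a minimal Python sketch of the proposed adaptation rule, not an actual Hashtopolis implementation. The function name, the guard against a zero measurement, and the clamp to a minimum size of 1 are illustrative assumptions; only the core formula comes from the proposal above.

```python
TARGET_CHUNK_TIME = 600  # seconds, the ideal chunk duration named in the issue


def adapt_chunk_size(old_chunk_size: int, time_needed: float) -> int:
    """Scale the next chunk so it should take about TARGET_CHUNK_TIME seconds.

    If the last chunk finished faster than the target, the ratio is > 1 and
    the chunk grows; if it ran long, the ratio is < 1 and the chunk shrinks.
    """
    if time_needed <= 0:
        # No usable measurement yet; keep the current size (assumed fallback).
        return old_chunk_size
    ratio = TARGET_CHUNK_TIME / time_needed
    return max(1, round(ratio * old_chunk_size))


# A chunk of 3,000 that finished in ~75 s is scaled up to 24,000;
# a chunk of 24,000 that ran long at 900 s is scaled back down to 16,000.
print(adapt_chunk_size(3000, 75))    # -> 24000
print(adapt_chunk_size(24000, 900))  # -> 16000
```

The same rule handles both directions: the ratio of target time to measured time grows chunks that finish early and shrinks chunks that overrun.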

0xVavaldi · Mar 12 '19 18:03

The proposed formula for this (by s3inlc) is perfect. This would be a good feature; I'm doing it manually at the moment.

H4xl0r · Feb 24 '20 21:02

@s3inlc Thoughts on the implementation?

0xVavaldi · Oct 19 '21 19:10