
Benchmark cache

Open jessevz opened this issue 2 years ago • 7 comments

Made a cache for the benchmarks so that benchmark results can be reused and the benchmarking process can be skipped when a benchmark has already been cached. Key components:

  • Added a benchmark model. The benchmark model has a field for each dependency of the benchmark output: the benchmark type (speed or runtime), the cracker binary, the hardware that is used, the attack parameters, and the hash mode.
  • Agents are split up into hardware groups, so that a benchmark can be linked to a hardware group rather than a single agent.
  • Benchmarks are invalidated by a time-to-live (TTL).
  • Added a frontend in /benchmark.php to show what has been cached and to manually remove cached values.
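The design in the list above can be sketched roughly as follows. This is an illustrative Python sketch, not the actual PHP implementation, and the field names (benchmark_type, cracker_binary, hardware_group, etc.) are assumptions based on the dependencies listed, not the real model's column names:

```python
import time
from dataclasses import dataclass, field

@dataclass(frozen=True)
class BenchmarkKey:
    """Everything the benchmark output depends on, per the list above.
    Field names are illustrative, not the actual model's columns."""
    benchmark_type: str      # "speed" or "runtime"
    cracker_binary: str
    hardware_group: str      # agents with identical hardware share a group
    attack_parameters: str
    hash_mode: int

@dataclass
class BenchmarkCache:
    ttl_seconds: float
    _entries: dict = field(default_factory=dict)

    def put(self, key: BenchmarkKey, result: float) -> None:
        self._entries[key] = (result, time.monotonic())

    def get(self, key: BenchmarkKey):
        """Return the cached result, or None if absent or expired."""
        hit = self._entries.get(key)
        if hit is None:
            return None
        result, stored_at = hit
        if time.monotonic() - stored_at > self.ttl_seconds:
            del self._entries[key]  # invalidated by the time-to-live
            return None
        return result
```

A lookup only hits when every dependency matches, so any change to the cracker binary, hardware group, attack parameters, or hash mode forces a fresh benchmark.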

jessevz avatar Jul 25 '23 10:07 jessevz

Thanks for the PR :) There are a few additional things I noticed:

  • The code style was not consistently applied; there are quite a few functions with the curly brackets on new lines after the function definitions.
  • The new feature should also be documented in doc/changelog.md
  • Probably the TTL should be configurable, as different users may want benchmark results cached for different lengths of time (ideally, setting it to 0 would disable caching entirely).
  • I tried to test the feature, but it seems that the upgrade process is somehow broken; during initialization it fails with the following error:
hashtopolis-backend     | Start initialization process...
hashtopolis-backend     | PHP Fatal error:  Uncaught PDOException: SQLSTATE[42000]: Syntax error or access violation: 1075 Incorrect table definition; there can be only one auto column and it must be defined as a key in /var/www/html/install/updates/update_v0.14.x_v0.14.0.php:10
hashtopolis-backend     | Stack trace:
hashtopolis-backend     | #0 /var/www/html/install/updates/update_v0.14.x_v0.14.0.php(10): PDO->query()
hashtopolis-backend     | #1 /var/www/html/install/updates/update.php(53): include('...')
hashtopolis-backend     | #2 /var/www/html/inc/load.php(162): include('...')
hashtopolis-backend     | #3 {main}
hashtopolis-backend     |   thrown in /var/www/html/install/updates/update_v0.14.x_v0.14.0.php on line 10

and when accessing the benchmark page, the tables that are commented out in the update script are missing.
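For reference, MySQL error 1075 means an AUTO_INCREMENT column was declared without being defined as a key. A hypothetical sketch of the failing and corrected DDL (table and column names invented for illustration, not taken from the update script):

```sql
-- Fails with error 1075: the auto column is not defined as a key
CREATE TABLE Benchmark (
  benchmarkId INT UNSIGNED NOT NULL AUTO_INCREMENT,
  value BIGINT NOT NULL
);

-- Works: the AUTO_INCREMENT column is the primary key
CREATE TABLE Benchmark (
  benchmarkId INT UNSIGNED NOT NULL AUTO_INCREMENT,
  value BIGINT NOT NULL,
  PRIMARY KEY (benchmarkId)
);
```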

s3inlc avatar Aug 07 '23 14:08 s3inlc

@s3inlc thanks for the review! I have changed my update script and I think it works properly now. But there is still something that I do not understand: when I delete my Docker volumes to make Hashtopolis run my install script, it will also run my update script afterwards, which will fail since it will try to create the same tables a second time. What should I change so that the update script doesn't get run after an install?

jessevz avatar Aug 11 '23 11:08 jessevz

Thanks for the changes and sorry for the waiting time. I tested it in more detail now; there are a few things left which should be fixed, which I'll put into the review. Regarding your question about the update script: essentially it's a double check. Normally we do the check with the key (as you implemented now), and inside that we check again whether the table already exists. This then covers all cases of setups from a release, from pulled master, etc.
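The inner half of that double check can be expressed directly in the DDL; a sketch (the actual guard in Hashtopolis may instead be a PHP-side existence check, and the table name here is illustrative):

```sql
-- Outer guard: the update key decides whether this update script runs.
-- Inner guard: create only if missing, so a fresh install that already
-- created the table is not broken when the update script runs afterwards.
CREATE TABLE IF NOT EXISTS Benchmark (
  benchmarkId INT UNSIGNED NOT NULL AUTO_INCREMENT,
  value BIGINT NOT NULL,
  PRIMARY KEY (benchmarkId)
);
```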

When I tested the benchmark caching, one small point came up: the cached benchmark is independent from the agent-specific parameters. This can have a large influence, though, e.g. if someone uses a machine with multiple GPUs but runs multiple agents, each one using one of the GPUs (via -d). Maybe that is something to consider: there may be agent-specific parameters which should be taken into account as part of the benchmark cache key?
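One way to sketch that concern: a device selection such as -d changes what hardware a benchmark actually measures, so it would need to become part of the cache key. An illustrative Python sketch, with parameter names assumed rather than taken from the codebase:

```python
def benchmark_cache_key(hardware_group: str, attack_params: str,
                        agent_cmd_params: str = "") -> tuple:
    """Build a cache key. Two agents in the same hardware group that pass
    different device selections (e.g. '-d 1' vs '-d 2') must not share a
    cached benchmark, since each one only uses part of the hardware."""
    return (hardware_group, attack_params, agent_cmd_params)
```

With the agent-specific parameters in the key, two single-GPU agents on the same multi-GPU host no longer reuse each other's results, while agents with identical parameters still share the cache.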

s3inlc avatar Aug 24 '23 12:08 s3inlc

Thanks for quickly updating all the requested parts. It seems that there are now only some issues with the tests remaining. One of the agent tests probably fails because of the change with the devices: https://github.com/hashtopolis/server/actions/runs/5974589110/job/16211674722?pr=955#step:6:96 and https://github.com/hashtopolis/server/actions/runs/5974589110/job/16211674722?pr=955#step:6:102

And the other one is most likely caused by a missing use statement in the BenchmarkUtils class: https://github.com/hashtopolis/server/actions/runs/5974589110/job/16211674722?pr=955#step:6:121

s3inlc avatar Aug 25 '23 12:08 s3inlc

I think everything works now; if there are still any improvements that need to be made, please let me know.

What's good to know is that the way the attack parameters are parsed is still a bit POC-ish. For example, I currently use a static map that recognizes a few short-form parameters and turns them into their long form, to make sure the attack parameters are in one single format. Ideally we would have a dynamic map where the cracker binary provides all the possible short and long parameter forms, but I thought that would require so many adjustments to the code that it probably has to be a separate story.
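The static map described above could look roughly like this. This is a sketch in Python rather than the project's PHP, and the entries are just a few hashcat-style flags chosen for illustration, not the actual map from the PR:

```python
# Illustrative short-to-long map for a few hashcat-style flags; the real
# implementation would ideally obtain this mapping from the cracker binary.
SHORT_TO_LONG = {
    "-a": "--attack-mode",
    "-m": "--hash-type",
    "-r": "--rules-file",
}

def normalize_attack_params(params: str) -> str:
    """Rewrite known short flags to their long form so that equivalent
    attack commands produce one single canonical format for cache lookups."""
    return " ".join(SHORT_TO_LONG.get(tok, tok) for tok in params.split())
```

Without this normalization, "-m 100" and "--hash-type 100" would produce different cache keys and the second request would trigger a needless benchmark.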

jessevz avatar Aug 26 '23 11:08 jessevz

Thanks for all the updates and for addressing all the requested changes. So far it looks good to me. I will have a final look with zyronix regarding the PR, and if we don't see anything else, it should be fine to merge.

Regarding your statement about the way you implemented the parsing of the parameters: I see that the perfect approach, parsing all possible short/long parameter combinations, would be much more complicated, especially if any other cracker comes into play. For the moment I think your solution already covers most of the cases. We may just miss some cases where it does not use the cached value, but that should not affect functionality; a benchmark may simply be requested where it would not be needed.

s3inlc avatar Aug 26 '23 11:08 s3inlc

@jessevz There are some merge conflicts left before we can merge this one. Could you take a look?

zyronix avatar Oct 03 '23 07:10 zyronix