
Request for new features: stage-specific parameters for explore and fp; distinct training parameters for finetune and iter-initial-model

Vibsteamer opened this issue · 1 comment

REQUEST 1:

Expect the following parameters to have further structure to support exploration-stage-specific assignment:

  1. explore/convergence (all parameters within)
  2. explore/max_numb_iter
  3. explore/fatal_at_max
  4. fp/task_max

e.g.,

- in stage_0:

  ```json
  "explore": {
      "type": "lmp",
      "config": { "command": "lmp -var restart 0", "impl": "pytorch" },
      "convergence": {
          "type": "adaptive-lower",
          "conv_tolerance": 0.005,
          "rate_candi_f": 0.15,
          "level_f_hi": 5.0,
          "n_checked_steps": 3
      },
      "max_numb_iter": 2,
      "fatal_at_max": false,
      ...
  "fp": {
      "task_max": 4000,
      ...
  ```

- in stage_3:

  ```json
  "explore": {
      "type": "lmp",
      "config": { "command": "lmp -var restart 0", "impl": "pytorch" },
      "convergence": {
          "type": "adaptive-lower",
          "conv_tolerance": 0.005,
          "numb_candi_f": 4000,
          "level_f_hi": 5.0,
          "n_checked_steps": 3
      },
      "max_numb_iter": 20,
      "fatal_at_max": true,
      ...
  "fp": {
      "task_max": 4000,
      ...
  ```
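To make the requested structure concrete, here is one possible layout, given only as a sketch: the `stage_configs` and `fp_task_max` keys are hypothetical and are not current dpgen2 syntax; the idea is simply to collect the per-stage values from the fragments above into a list aligned with the entries of explore/stages (only two entries shown for brevity).

```json
"explore": {
    "type": "lmp",
    "config": { "command": "lmp -var restart 0", "impl": "pytorch" },
    "_comment": "stage_configs is a hypothetical key, not current dpgen2 syntax: one entry per stage in explore/stages",
    "stage_configs": [
        {
            "convergence": { "type": "adaptive-lower", "conv_tolerance": 0.005, "rate_candi_f": 0.15, "level_f_hi": 5.0, "n_checked_steps": 3 },
            "max_numb_iter": 2,
            "fatal_at_max": false,
            "fp_task_max": 4000
        },
        {
            "convergence": { "type": "adaptive-lower", "conv_tolerance": 0.005, "numb_candi_f": 4000, "level_f_hi": 5.0, "n_checked_steps": 3 },
            "max_numb_iter": 20,
            "fatal_at_max": true,
            "fp_task_max": 4000
        }
    ]
}
```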

REQUEST 2:

Expect support for different ending_pref_e/f/v for the initial finetune from multi-task pre-trained models and for the successive init_model training from the finetuned initial model.

Currently train/config supports only start parameters but no end parameters, e.g. only "init_model_start_pref_e" but no "init_model_end_pref_e". Instead, the end prefs are inherited from the limit_prefs of the single training script defined in train/config/templated_script.

Maybe two template scripts need to be supported via train/config/templated_script, or new init_model_end_pref_e/f/v parameters added to train/config (see the sketch below).
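For reference, in the current behavior the end values come from the limit_pref_* keys of the loss section in the single DeePMD-kit template script, roughly like this (values here are only illustrative):

```json
"loss": {
    "type": "ener",
    "_comment": "illustrative values; limit_pref_* currently act as the end prefs for all trainings",
    "start_pref_e": 0.02,
    "limit_pref_e": 1.0,
    "start_pref_f": 1000,
    "limit_pref_f": 1.0,
    "start_pref_v": 0.0,
    "limit_pref_v": 0.0
}
```

And a minimal sketch of the second option: the init_model_end_pref_e/f/v keys are the proposed additions and do not exist in the current train/config, while the init_model_start_pref_* keys are the existing ones mentioned above; values are placeholders.

```json
"train": {
    "config": {
        "init_model_policy": "yes",
        "init_model_start_pref_e": 0.02,
        "init_model_start_pref_f": 1000,
        "init_model_start_pref_v": 0.0,
        "_comment": "the keys below are proposed, not in the current dpgen2 train/config",
        "init_model_end_pref_e": 1.0,
        "init_model_end_pref_f": 1.0,
        "init_model_end_pref_v": 0.0
    }
}
```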

Scenario giving rise to REQUEST 1

In practice, for DP-GEN2 runs initiated from pre-trained models, multiple successive exploration stages are used to enhance the exploration efficiency over a complex sample space.

The sample space consists of derivatives of (1) many severely different initial configurations, (2) both trivial dynamics images and significant low-probability instances, and (3) successors of low-probability instances, which are also trivial but nevertheless severely different from their initial/parent configurations.

- (1) suffers from species bias after pre-training (and finetuning), which leads to over-sampling of full trajectories of specific far-from-pretraining configurations;
- (2) is our central target;
- (3) suffers from conformer bias after pre-training (and finetuning), which leads to over-sampling of these trivial successor configurations.

Thus stage_0 and stage_1 are used to debias (1) and (3) by randomly selecting candidates from a broader model_devi range; no final exploration convergence is expected for these two stages. stage_2 is the one actually meant to converge for (2), and its related parameters would differ from those of the debiasing stages.

Scenario giving rise to REQUEST 2

Tests showed different parameter preferences for the trainings in the two stages.

Vibsteamer · Apr 26 '24 07:04

BTW, due to some compatibility limitations, I'm using this branch https://github.com/zjgemi/dpgen2/tree/deepmd-pytorch from, and thanks to, @zjgemi.

Vibsteamer · Apr 26 '24 07:04