deepks-kit
The band-gap error does not decrease as the iterations proceed
- The problem of the convergence rate dropping to 0 is mitigated by making the orbital_factor smaller.
- However, as the iterations proceed, the band-gap error does not keep decreasing.
- Alternatively, the band-gap error first decreases and then increases as the iterations progress.
For example, with the parameter settings force_factor: 1, stress_factor: 0.1, orbital_factor: 0.01, log.data contains the following:
```
iter.init
iter.00
iter.01
iter.02
iter.03
iter.04
```

(The per-iteration error columns of log.data were not preserved in this copy; only the iteration labels remain.)
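To make the trend easier to judge, a small helper can plot the band-gap error against the iteration index. This is a minimal sketch, assuming log.data is a whitespace-separated table with a header row; the column name `err_bandgap` is a placeholder and must be renamed to match the actual header in your file.

```python
# Minimal sketch: plot band-gap error vs. iteration from log.data.
# Assumes a whitespace-separated table with a header row; the column
# name "err_bandgap" is a placeholder -- rename it to match your file.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("log.data", sep=r"\s+", comment="#")
err = df["err_bandgap"]  # hypothetical column name

plt.plot(range(len(err)), err, marker="o")
plt.xlabel("iteration")
plt.ylabel("band-gap error")
plt.yscale("log")  # errors often span orders of magnitude
plt.savefig("bandgap_error.png", dpi=150)
```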
Questions:
- Are there any suggestions for parameter tuning to address this phenomenon? Previously, reducing the number of training epochs bought us a few more rounds of iteration, but the SCF calculation still eventually fails to converge at some iteration, and by then the error decreases only slightly. (A monitoring sketch is given after these questions.)
- More generally, I am confused about how SCF convergence interacts with the training iterations and why the band-gap error stops decreasing, and I would appreciate any answers or suggestions.
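One pragmatic safeguard while tuning is to track the band-gap error across iterations outside the driver and flag the point where it stops improving. The sketch below is a hypothetical monitoring helper, not a deepks-kit feature: once the error has risen for a few consecutive iterations, it suggests intervening (e.g. lowering orbital_factor or keeping the best model so far).

```python
# Hypothetical monitoring helper (not part of deepks-kit): given the
# band-gap error of each finished iteration, decide whether to keep
# iterating or to intervene once the error has stopped improving for
# `patience` iterations.
def should_stop(errors, patience=2):
    """errors: band-gap error per iteration, oldest first."""
    best = min(errors)
    best_iter = errors.index(best)
    # Number of iterations since the best error was observed.
    return len(errors) - 1 - best_iter >= patience

history = []
for it, err in enumerate([0.05, 0.03, 0.02, 0.025, 0.04]):  # illustrative values
    history.append(err)
    if should_stop(history):
        print(f"band-gap error not improving since iter "
              f"{history.index(min(history))}; consider lowering "
              f"orbital_factor or stopping at iter {it}")
        break
```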