
The error of bandgap does not decrease as the iteration proceeds

Open yycx1111 opened this issue 7 months ago • 0 comments

  1. The problem of the SCF convergence rate dropping to 0 is mitigated by reducing the orbital_factor.
  2. However, as the iterations proceed, the bandgap error does not continue to decrease.
  3. Or the bandgap error first decreases and then increases as the iterations progress. For example, with the parameter setting force_factor: 1, stress_factor: 0.1, orbital_factor: 0.01, the log.data is as follows:

[screenshots of log.data for iter.init, iter.00, iter.01, iter.02, iter.03, and iter.04]
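For context, the loss weights quoted above would be set in the training section of the deepks-kit configuration roughly as below. This is a sketch based only on the names mentioned in this issue; the exact key names and nesting in your deepks-kit version's params.yaml may differ.

```yaml
# Sketch only: key names/placement are assumptions, not a verified schema.
train:
  force_factor: 1       # weight of the force term in the training loss
  stress_factor: 0.1    # weight of the stress term
  orbital_factor: 0.01  # weight of the orbital (bandgap) term, reduced per item 1
```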

Questions:

  1. Any suggestions for parameter tuning to address this phenomenon? Previously, by reducing the number of training epochs, we could run a few more rounds of iterations, but the SCF calculation would still fail to converge in some later round, and the error would only decrease slightly.
  2. We are quite confused about how SCF convergence behaves across training iterations and why the bandgap error stops decreasing, and we would appreciate any answers and suggestions.
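To make the "decreases then increases" pattern in question 2 concrete, here is a small standalone sketch for tracking the per-iteration bandgap error and deciding when further DeePKS iterations stop helping. The error values are illustrative placeholders, not taken from the log.data above, and `should_stop` is a generic early-stopping heuristic, not a deepks-kit feature.

```python
# Hypothetical per-iteration mean absolute bandgap errors (iter.init .. iter.04).
# These numbers are made up for illustration only.
errors = [0.152, 0.087, 0.061, 0.058, 0.064, 0.071]

# Find the iteration at which the error bottoms out.
best_iter = min(range(len(errors)), key=errors.__getitem__)
print(f"error bottoms out at iteration index {best_iter}: {errors[best_iter]:.3f}")

def should_stop(history, patience=2):
    """Stop once the error has failed to improve for `patience` iterations."""
    if len(history) <= patience:
        return False
    best = min(history[:-patience])
    return all(e >= best for e in history[-patience:])

print(should_stop(errors))  # True: the last two errors exceed the earlier minimum
```

With a pattern like the one reported, such a check would suggest keeping the model from the best iteration rather than the last one.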

yycx1111 · Jun 25 '25 16:06