Test: Add unit tests and integration tests for new features of DeePKS.
Linked Issue
Fix #6107
Unit Tests and/or Case Tests for my changes
- Add an integration test for the v_delta(k) label in multi-k cases in DeePKS.
- Add unit tests for the orbital and orbpre calculations in DeePKS.
- Add a unit test for the vdpre calculation in DeePKS.
What's changed?
- Add check functions used by the DeePKS unit tests.
- Add support for checking complex-valued results in the DeePKS unit tests (see the sketch below).
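For illustration only, a minimal sketch of what such a complex-number check could look like; the helper name, tolerance, and reference-file layout here are assumptions, not the actual implementation added in this PR:

```cpp
#include <cmath>
#include <complex>
#include <fstream>
#include <string>
#include <vector>

// Hypothetical helper: compare computed complex values against a reference file
// that stores "real imag" pairs, one per line. Names and tolerance are assumed.
bool check_complex_values(const std::vector<std::complex<double>>& computed,
                          const std::string& ref_file,
                          const double tol = 1e-8)
{
    std::ifstream ifs(ref_file);
    if (!ifs) { return false; }
    for (const auto& value : computed)
    {
        double re = 0.0, im = 0.0;
        if (!(ifs >> re >> im)) { return false; } // reference file too short
        if (std::abs(value - std::complex<double>(re, im)) > tol) { return false; }
    }
    return true;
}
```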
I found that the reference files for vdpre are extremely large and difficult to reduce effectively. I am not sure whether to add them or keep the original situation (no checks for vdpre). @mohanchen
I changed the multi-k UT to use SZ orbitals for H2O, which significantly reduces the size of the reference files. The k-point mesh is also set to 5 3 1 (avoiding 2 2 2, which may not catch faults caused by sign problems); a sketch of such a KPT file is given below.
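For reference, an ABACUS KPT file with an automatically generated 5 3 1 mesh would look roughly like the following; whether the actual test case uses Gamma-centered or Monkhorst-Pack centering and zero shifts is an assumption:

```text
K_POINTS
0
Gamma
5 3 1 0 0 0
```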
The UT results differ on different machines, which suggests a hidden bug that has not been found yet. Not ready for merge.
I found that the source of the difference is whether the code is compiled with the Intel or GNU toolchain. The difference appears at the call to torch::autograd::grad() in the cal_edelta_gedm() function. The value differences in the test samples can exceed 1e-4 and thus cannot be ignored. I am not sure whether this is caused by the underlying implementation of PyTorch. Currently, I am upgrading libtorch to see whether these differences between environments disappear.
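For context, a minimal standalone sketch of the kind of gradient call involved; the tensor names, shapes, and the toy "energy" expression are placeholders, not the actual code in cal_edelta_gedm():

```cpp
#include <torch/torch.h>
#include <iostream>

int main()
{
    // Placeholder descriptor tensor; the real code builds this from DeePKS quantities.
    torch::Tensor descriptor = torch::rand({4, 9}, torch::requires_grad());

    // Placeholder scalar output; in DeePKS this would be the correction energy
    // predicted by the trained model.
    torch::Tensor e_delta = (descriptor * descriptor).sum();

    // Gradient of the scalar output w.r.t. the descriptor. Small numerical
    // differences at this step (Intel vs. GNU builds) would propagate into
    // the gedm reference values.
    auto gedm = torch::autograd::grad({e_delta}, {descriptor});

    std::cout << gedm[0] << std::endl;
    return 0;
}
```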
@mohanchen I have tried different versions of libtorch and tested with both the Intel and GNU compilers. The results show that:
- For GNU, the results are identical across libtorch versions.
- For Intel, the results differ from GNU, but are consistent among versions < 2.5.0. For libtorch 2.5.0 (the highest version ABACUS currently supports), the results differ from both the GNU results and the Intel results of the other versions.
Currently, the reference results in this PR are from the GNU build.
I have now accelerated the testing process and temporarily disabled the check for gedm. Ready for merge. @mohanchen