CI tests fail to run after latest update
I updated to the latest project template and tried to commit some new code to my repository, but now my CI tests are failing. Even before this attempt, the .yml files in .github/workflows were corrupted with template updates that did not get cleaned up correctly. I fixed these by hand until pre-commit ran correctly. Even so, the CI tests are now failing with: errors reported here

These errors do not make sense to me and don't seem to pertain to the code updates I made, unless my corrections to the munged files were somehow wrong; they seem to pertain to configuration issues with the project templates. I will note that the project template update tried to revert back to Python 3.11, which I did not want, so I kept the Python version at 3.12. Any help or guidance to resolve this issue would be much appreciated.
As an example, the update removed the following line from `pyproject.toml`:

```toml
addopts = "--import-mode=importlib"
```

and replaced it with

```toml
addopts = "--doctest-modules --doctest-glob=*.rst"
```
The latter line caused pytest to fail when I tried to run it. When I restored the original line, pytest worked again.
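For reference, both `addopts` lines live under pytest's `[tool.pytest.ini_options]` table in `pyproject.toml`. A minimal sketch (the comments are my reading of why the template's version can break, not a confirmed diagnosis for this repo):

```toml
[tool.pytest.ini_options]
# importlib mode imports test modules without modifying sys.path
addopts = "--import-mode=importlib"
# The template's replacement collects doctests from every module and
# *.rst file; this imports all modules at collection time, which can
# fail on modules with import-time side effects or optional dependencies:
# addopts = "--doctest-modules --doctest-glob=*.rst"
```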
I got past one of the unit test errors by pinning the version of numpy in pyproject.toml and requirements.txt, but now the CI test is failing because it is not including the packages that I have listed in requirements.txt.
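For reference, a pin like that has to agree in both places; the bound below is purely illustrative:

```toml
[project]
# illustrative pin; match whatever bound your code actually needs
dependencies = [
    "numpy<2.0",
]
```

with the matching (illustrative) line `numpy<2.0` in requirements.txt.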
Hi @evevkovacs I'm looking over the errors in the workflows that you linked above, and my hunch is that in the testing-and-coverage.yml workflow, specifically in the test-lowest-versions section, we aren't picking up the requirements.txt file to use when trying to identify the lowest possible versions for testing.
Looking at your repo, I can see that you're not using a custom configuration for the Python project template. This means that by default the template will include testing against the lowest versions of dependencies (RTD link).
I believe the fastest way to get your tests working, and my general best-practices recommendation, would be to move the dependencies defined in the requirements.txt file into the dependencies list in pyproject.toml, alongside numpy.
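Concretely, that migration might look like the following; the package names besides numpy are placeholders for whatever your requirements.txt actually lists:

```toml
[project]
dependencies = [
    "numpy>=1.24",  # keep your pinned/bounded numpy here
    "scipy",        # placeholder entries moved over from requirements.txt
    "matplotlib",
]
```

Once everything lives in pyproject.toml, tools that only read the project metadata (including the lowest-versions test) should see the full dependency set.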
Our followup actions will be:
- Ensure that test-lowest-version is not enabled by default when a user hasn't selected a custom configuration https://github.com/lincc-frameworks/python-project-template/issues/527
- Update the code that implements the test-lowest-version to look for a `requirements.txt` and use it along with the `pyproject.toml` file if available. https://github.com/lincc-frameworks/python-project-template/issues/528
Thank you. I will go ahead and try your suggestion and let you know. Also, how do I turn off test-lowest-version myself?
I tried adding requirements to pyproject.toml. It still failed. See errors. I think the quickest way to fix this would be to turn off the test-lowest-version option. How do I do this? Can I just remove the test-lowest-version section from the testing-and-coverage.yml?
The asv test is also still failing. The issue seems to be a missing library, libmambapy. This can be fixed locally with `conda install -c conda-forge conda-libmamba-solver libmambapy`, but it needs to happen inside the workflow, and I'm not sure how to do that.
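If the workflow sets up conda via a setup-miniconda style action, the missing library could be installed with an extra run step. This is only a sketch; the actual step names and layout depend on your asv workflow file:

```yaml
# hypothetical excerpt from the asv workflow
    steps:
      - uses: conda-incubator/setup-miniconda@v3
        with:
          python-version: "3.12"
      - name: Install libmamba solver
        shell: bash -el {0}  # login shell so the conda env is activated
        run: conda install -c conda-forge conda-libmamba-solver libmambapy
```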
Thanks for your help.
It looks like you still have some merge conflict markers in the benchmarks/asv.conf.json file, and ASV is crashing when trying to parse the JSON.
https://github.com/ArgonneCPAC/diffaux/blob/39553dcf603c492cefc4bb06d098447917c28718/benchmarks/asv.conf.json#L40
OK, so removing the test-lowest-version section from the testing-and-coverage.yml worked! Now I just need to get the asv-pr test to work again.
My apologies for the late reply, but yes, as you've discovered, you can just remove the test-lowest-version section from the testing-and-coverage.yml file. Now that you've removed it, it should stay out of your way going forward.
@jeremykubica Thank you, I fixed those merge diffs and it's working. How did you know to look at the asv.conf.json file? The error message was not very helpful.
I don't have super generalizable advice. I saw that the error was occurring during a json load (that was in the logs you pointed to) and knew that ASV loaded the benchmark configuration from a json file.
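For anyone hitting the same thing: leftover conflict markers make the file invalid JSON, so a quick sanity check is to run the file through a JSON parser. A minimal sketch with hypothetical file contents (not the actual asv.conf.json):

```python
import json

# Hypothetical snippet resembling a config file left with unresolved
# merge conflict markers after a template update.
broken = """{
    "project": "diffaux",
<<<<<<< HEAD
    "pythons": ["3.12"]
=======
    "pythons": ["3.11"]
>>>>>>> template-update
}"""

try:
    json.loads(broken)
    print("JSON is valid")
except json.JSONDecodeError as err:
    # The parser fails at the first conflict marker it encounters
    print(f"Invalid JSON at line {err.lineno}: {err.msg}")
```

From the repo root, `python -m json.tool benchmarks/asv.conf.json` performs the same check on the real file and reports the first offending line.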