Skip observables contained in particle groups
This is not a fix for https://github.com/MDAnalysis/mdanalysis/issues/4598, but it enables reading files that contain data in

```
observables
 \-- <group1>
 |    \-- <observable2>
 |         \-- step: Integer[variable]
 |         \-- time: Float[variable]
 |         \-- value: <type>[variable][D][D]
```
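A minimal sketch of the skip logic this PR describes, using a nested dict to stand in for the HDF5 hierarchy (the names and helper below are illustrative, not the actual MDAnalysis implementation): observables whose `step`/`time`/`value` datasets live directly under `observables/<name>` are kept, while anything nested one level deeper, such as `observables/<group1>/<observable2>`, is skipped.

```python
# Nested dict standing in for the /observables HDF5 group (illustrative only).
observables = {
    "temperature": {"step": [0, 1], "time": [0.0, 0.5], "value": [300.0, 301.0]},
    "group1": {  # particle-group subgroup: its observables are skipped
        "observable2": {"step": [0, 1], "time": [0.0, 0.5], "value": [[1.0], [2.0]]},
    },
}

def top_level_observables(obs):
    """Return only observables stored directly under /observables."""
    kept = {}
    for name, node in obs.items():
        # a readable observable exposes step/time/value directly
        if isinstance(node, dict) and {"step", "time", "value"} <= node.keys():
            kept[name] = node
        # otherwise it is a subgroup (e.g. a particle group) -> skip it
    return kept

print(sorted(top_level_observables(observables)))  # -> ['temperature']
```

The real reader walks `h5py` groups rather than dicts, but the traversal decision is the same: only treat a node as an observable if its datasets sit directly beneath it.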
Changes made in this Pull Request:
- Skip observables contained in particle groups when reading H5MD files
PR Checklist
- [x] Tests?
- [ ] Docs?
- [ ] CHANGELOG updated?
- [ ] Issue raised/referenced?
Developer's Certificate of Origin
- [x] I certify that this contribution is covered by the LGPLv2.1+ license as defined in our LICENSE and adheres to the Developer Certificate of Origin.
📚 Documentation preview 📚: https://mdanalysis--4615.org.readthedocs.build/en/4615/
This file hasn't been touched in 9 months, and I don't think there are any open PRs against it, so there would be a chance to format the entire file using ruff, which I accidentally did here: https://github.com/MDAnalysis/mdanalysis/commit/faafe8b671bebd954c62a1027afe7dfaae7e9bd3 - is this of interest?
Hello @PythonFZ! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:
There are currently no PEP 8 issues detected in this Pull Request. Cheers! :beers:
Comment last updated at 2024-07-15 13:51:03 UTC
Linter Bot Results:
Hi @PythonFZ! Thanks for making this PR. We linted your code and found the following:
Some issues were found with the formatting of your code.
| Code Location | Outcome |
|---|---|
| main package | ⚠️ Possible failure |
| testsuite | ⚠️ Possible failure |
Please have a look at the darker-main-code and darker-test-code steps here for more details: https://github.com/MDAnalysis/mdanalysis/actions/runs/9940672284/job/27457930405
Please note: The black linter is purely informational, you can safely ignore these outcomes if there are no flake8 failures!
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 93.59%. Comparing base (cfda8b7) to head (66b9ecb). Report is 29 commits behind head on develop.
Additional details and impacted files
```diff
@@           Coverage Diff            @@
##           develop    #4615   +/-   ##
===========================================
- Coverage    93.61%   93.59%   -0.02%
===========================================
  Files          171      183      +12
  Lines        21243    22316    +1073
  Branches      3934     3936       +2
===========================================
+ Hits         19886    20886    +1000
- Misses         898      971      +73
  Partials       459      459
```
:umbrella: View full report in Codecov by Sentry.
@edisj and @ljwoods2 could you have a quick look at this PR and comment?
@hmacdope would you be able to shepherd this PR? If not switch it to me.
@hmacdope What is the expected way to auto-format the code? I'm used to running either black or ruff format, but if I run either of those on the code base it will change most of the files. I haven't found anything at https://userguide.mdanalysis.org/stable/contributing.html.
Maybe there are some pre-commit hooks I haven't found?
Only format lines that you touch, otherwise the diff gets too noisy. There's the darker linter/formatter action that tells you if you should fix any of your lines.
@PythonFZ let's go with .h5md, as I think the missing extension may be causing some test failures. Other than that this is very close.
I renamed the files and ensured everything is properly formatted. I hope everything is good to go?
There seems to be some failing test on the azure runner. Can you reproduce it locally @PythonFZ ?
All tests are passing for me locally. From what I see in the logs, it is the following test (correct me if I am wrong):
/home/vsts/work/1/s/testsuite/MDAnalysisTests/parallelism/test_multiprocessing.py
All tests from that file run locally for me. Also, these tests should not have been affected by this PR and the tests I changed also all run locally. The same seems to be true for the GH hosted runners. I'm a bit lost here on how to fix this, because I don't know the differences between the Azure runner and the GH runners.
This failing test is flaky and is not related to your work. It will not block the PR. Oliver
Let me kick CI. IIRC there were some failing tests on mdanalysis-CI (Azure_Tests Linux-Python310-64bit-full-wheel).
@orbeckst are you happy with changes here?
@PythonFZ if you can have a quick look at my comments then we should be able to merge this PR soon.
@orbeckst I've added a file that tests for AttributeError. I can add another file testing against KeyError, but this is an unlikely case; the code accounts for it regardless. Let me know if you prefer the additional file or are good with this.
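A minimal sketch of the kind of check such a test file performs, assuming the reader surfaces an `AttributeError` when an observable node is missing its `value` dataset (the class and function names below are hypothetical stand-ins, not the actual MDAnalysis API):

```python
class ObservableGroup:
    """Stand-in for a well-formed H5MD observable node."""
    def __init__(self, value):
        self.value = value

class BrokenGroup:
    """Stand-in for a malformed node that lacks 'value'."""

def read_value(node):
    # accessing the attribute raises AttributeError on malformed nodes
    return node.value

assert read_value(ObservableGroup([1.0, 2.0])) == [1.0, 2.0]
try:
    read_value(BrokenGroup())
except AttributeError:
    print("AttributeError raised for malformed observable group")
```

In the actual testsuite this would be written with `pytest.raises(AttributeError)` around the reader call on a fixture file that omits the dataset.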
I'm not sure how to account for

> Add this limitation to the documentation.

If you are referring to

> We should issue a warning that observables of the form observables/group1/observable1 are being ignored if present.
This should no longer be the case, and all valid H5MD datasets (without fixed time steps) should work.
lgtm, thanks for adding the test @PythonFZ !
Thank you for your contribution @PythonFZ ! 🎉