Keep slow notebooks up-to-date
We have a “pre-executed” folder with some slow notebooks. It would be great to have a way to test whether they still work. A few options to consider:
- Alert maintainers about notebooks that haven’t been committed in a long time.
- Re-run these notebooks nightly as part of smoke tests.
- Do the same but move them to a separate folder.
This extension may be helpful: https://jupyter-contrib-nbextensions.readthedocs.io/en/latest/nbextensions/execute_time/readme.html
Could RTD be configured to use a locally hosted runner to compile notebooks?
Should we have a nag email/slack message sent on a regular cadence to rerun the notebooks?
Should we devote time to figuring out how to set up a "locally" hosted runner on USDF/HPC systems?
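One concrete shape for the nightly re-run option above is a scheduled GitHub Actions workflow. The sketch below is an assumption-laden starting point: it assumes the notebooks live in a `pre-executed/` directory and that `jupyter nbconvert --execute` can run them on a hosted runner; a self-hosted USDF/HPC runner would swap in via the `runs-on` label.

```yaml
name: nightly-notebook-smoke-test
on:
  schedule:
    - cron: "0 7 * * *"   # nightly; the hour is arbitrary
  workflow_dispatch: {}    # allow manual re-runs
jobs:
  execute:
    runs-on: ubuntu-latest  # swap for a self-hosted USDF/HPC label if one is set up
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install jupyter nbconvert
      - name: Re-execute pre-executed notebooks
        run: |
          for nb in pre-executed/*.ipynb; do
            jupyter nbconvert --to notebook --execute --inplace "$nb"
          done
```

A failure here would show up like any other nightly CI failure, which may already cover the "nag" use case.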
@drewoldag

> Could RTD be configured to use a locally hosted runner to compile notebooks?

I don't know, but maybe failing on GitHub CI would be enough.

> Should we have a nag email/slack message sent on a regular cadence to rerun the notebooks?

We can do the same as for nightly CI.

> Should we devote time to figuring out how to set up a "locally" hosted runner on USDF/HPC systems?

This would be nice. I think we can use Middle Earth or the Pittsburgh Supercomputing Center for that.