plasma-python
PPPL deep learning disruption prediction package
- [ ] Establish actually useful regression and unit tests
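One possible shape for a first "actually useful" test, as a hedged sketch: a pytest regression test that pins down the statistics of a preprocessing step. The function `preprocess_shot` below is a hypothetical placeholder, not the actual plasma-python API.

```python
# Hypothetical regression-test sketch; `preprocess_shot` stands in for the
# package's real per-shot preprocessing and is not the actual API.
import numpy as np
import pytest


def preprocess_shot(raw):
    """Placeholder: z-score normalization of each signal channel over time."""
    if raw.size == 0:
        raise ValueError("empty shot")
    return (raw - raw.mean(axis=0)) / (raw.std(axis=0) + 1e-8)


def test_preprocess_shot_is_normalized():
    rng = np.random.default_rng(0)
    raw = rng.normal(loc=3.0, scale=2.0, size=(100, 4))
    out = preprocess_shot(raw)
    # Regression check: these statistics should stay stable across refactors.
    np.testing.assert_allclose(out.mean(axis=0), 0.0, atol=1e-6)
    np.testing.assert_allclose(out.std(axis=0), 1.0, atol=1e-3)


def test_preprocess_shot_rejects_empty_input():
    with pytest.raises(ValueError):
        preprocess_shot(np.empty((0, 4)))
```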
Presently, at the end of every epoch, the trained weights are reloaded via a call to `Keras.Models.load_weights()` three separate times in order to evaluate the accuracy on the shots in...
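A hedged sketch of how the triple reload could be collapsed into a single `load_weights()` call per epoch, with the in-memory model reused for every evaluation split; `build_model` and the split layout are hypothetical placeholders, not the existing FRNN interfaces.

```python
from tensorflow import keras

def evaluate_all_splits(build_model, weights_path, splits):
    """splits: dict mapping split name -> (x, y) arrays.

    Assumes the model is compiled with an accuracy metric.
    """
    model = build_model()             # hypothetical factory for the FRNN model
    model.load_weights(weights_path)  # single reload, reused for every split
    results = {}
    for name, (x, y) in splits.items():
        loss, acc = model.evaluate(x, y, verbose=0)
        results[name] = {"loss": loss, "accuracy": acc}
    return results
```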
Related to #58, #52, and #51. We should add a continually-updated record of the `examples/second`, `seconds/batch`, and other statistics discussed in #51 to a new file `docs/Benchmarking.md` (or `ComputationalEfficiency.md`,...
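A hedged sketch of one way those statistics could be collected automatically, via a tf.keras callback that appends one row per epoch to the proposed benchmarking file; the batch-size handling and output path are assumptions, not existing code. The file is assumed to already contain a markdown table header such as `| epoch | sec/batch | examples/sec |`.

```python
# Sketch of a throughput-recording callback; not part of the current code.
import time
from tensorflow import keras

class ThroughputLogger(keras.callbacks.Callback):
    def __init__(self, batch_size, log_path="docs/Benchmarking.md"):
        super().__init__()
        self.batch_size = batch_size
        self.log_path = log_path

    def on_epoch_begin(self, epoch, logs=None):
        self.batch_times = []

    def on_train_batch_begin(self, batch, logs=None):
        self._t0 = time.time()

    def on_train_batch_end(self, batch, logs=None):
        self.batch_times.append(time.time() - self._t0)

    def on_epoch_end(self, epoch, logs=None):
        # Mean seconds per batch and derived examples/second for this epoch.
        sec_per_batch = sum(self.batch_times) / len(self.batch_times)
        examples_per_sec = self.batch_size / sec_per_batch
        with open(self.log_path, "a") as f:
            f.write(f"| {epoch} | {sec_per_batch:.4f} | {examples_per_sec:.1f} |\n")
```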
Details reproduced from email correspondence in November 2019. There are slight discrepancies between the output of `guaranteed_preprocessed.py` from the current version of the code and the figures from Kates-Harbeck *et...
Following the discussion in the FRNN group meeting in San Diego on Wednesday 2019-12-04, we need to start systematically saving the best trained models for: 1. Collaboration (no need for multiple users...
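As a starting point, a hedged sketch using tf.keras's `ModelCheckpoint` to keep only the best weights per run; the directory layout, filename convention, and monitored metric below are assumptions, not decisions that have been made.

```python
# Sketch: keep only the best epoch's weights for each named run.
import os
from tensorflow import keras

def best_model_checkpoint(run_name, monitor="val_loss", out_dir="saved_models"):
    os.makedirs(out_dir, exist_ok=True)
    return keras.callbacks.ModelCheckpoint(
        filepath=os.path.join(out_dir, f"{run_name}_best.weights.h5"),
        monitor=monitor,
        save_best_only=True,     # overwrite only when the monitored metric improves
        save_weights_only=True,  # weights are enough to reproduce evaluation
    )

# Usage sketch: model.fit(..., callbacks=[best_model_checkpoint("d3d_0d_v100x4")])
```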
Related to #52.
A tricky, undocumented step when switching to training on `jet_data_0D` is that the user must comment out the line containing `'etemp_profile': etemp_profile, 'edens_profile': edens_profile,` in the definition of the dictionary that...
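A hedged sketch of how that manual comment-out could be replaced by a dataset-conditional construction of the dictionary; `build_signal_dict` and its arguments are illustrative names, not the actual ones in the code.

```python
# Illustrative only: make the profile signals conditional on the dataset
# instead of requiring a manual edit when switching to jet_data_0D.
def build_signal_dict(dataset, etemp_profile, edens_profile, scalar_signals):
    signals = dict(scalar_signals)
    # The 0D JET dataset has no profile measurements, so only include them
    # for datasets that actually provide them.
    if dataset != 'jet_data_0D':
        signals['etemp_profile'] = etemp_profile
        signals['edens_profile'] = edens_profile
    return signals
```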
A straightforward alternative "single number" evaluation metric for ML algorithms; it should make comparisons to, e.g., Rea *et al.* (2019) and Churchill (2019) a bit easier. More generally, we could output various...
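For concreteness, a hedged sketch of computing two such single-number summaries (ROC AUC and F1) over shot-level alarm scores with scikit-learn; the variable names are placeholders, and nothing like this is currently wired into FRNN's evaluation path.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

def summarize_predictions(y_true, y_score, threshold=0.5):
    """y_true: 1 for disruptive shots, 0 otherwise; y_score: model alarm score."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    return {
        "roc_auc": roc_auc_score(y_true, y_score),  # threshold-independent
        "f1": f1_score(y_true, y_pred),             # at the chosen threshold
    }
```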
Example of the current per-step (iteration) diagnostic output provided by FRNN around epoch 22 of the D3D 0D model (run on 4 V100 GPUs of Traverse):
```
[0] step: 0...
```
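For reference, a hypothetical sketch of how a per-step line in that format could be emitted with rank, step, loss, and throughput; the exact fields FRNN prints are only partially visible in the truncated excerpt above.

```python
# Hypothetical per-step diagnostic line; field names are illustrative.
import time

def log_step(rank, step, loss, batch_size, t_start):
    elapsed = time.time() - t_start
    examples_per_sec = batch_size / elapsed if elapsed > 0 else float("inf")
    print(f"[{rank}] step: {step} loss: {loss:.4f} "
          f"examples/sec: {examples_per_sec:.1f}")
```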
Mostly repeating private email and in-person communication on this topic for reference notes and posterity. FRNN performance on V100s on the two IBM AC922 systems, OLCF Summit and Princeton's Traverse...