plasma-python
PPPL deep learning disruption prediction package
Currently, FRNN reports the following metrics related to computational speed/efficiency during training:

**Per step, per epoch:**
- `Examples/sec`
- `sec/batch`
- % of batch time spent in calculation vs. synchronization...
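As an illustration only (none of these names come from FRNN), per-batch metrics of this kind could be computed as in the following sketch:

```python
def report_batch_metrics(batch_size, calc_time, sync_time):
    """Illustrative per-batch throughput report (hypothetical, not FRNN's code)."""
    batch_time = calc_time + sync_time          # total wall time for the batch
    examples_per_sec = batch_size / batch_time  # Examples/sec
    calc_pct = 100.0 * calc_time / batch_time   # % of batch spent in calculation
    sync_pct = 100.0 * sync_time / batch_time   # % of batch spent in synchronization
    print("Examples/sec: {:.1f} | sec/batch: {:.3f} | calc: {:.1f}% | sync: {:.1f}%"
          .format(examples_per_sec, batch_time, calc_pct, sync_pct))

# Hypothetical example values: 0.8 s of computation, 0.2 s of gradient synchronization
report_batch_metrics(batch_size=256, calc_time=0.8, sync_time=0.2)
```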
Currently, if the testing and training ({train} ∪ {validate}) sets are drawn from the same source shot list, then the ratio `conf['model']['train_frac']` is used to randomly divide the source shots without...
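For illustration, a `train_frac`-style random division of a shot list could look like the sketch below (`split_shots` is a hypothetical helper, not the routine the package actually uses):

```python
import numpy as np

def split_shots(shots, train_frac, seed=0):
    """Randomly divide a source shot list into training and testing subsets
    according to train_frac (hypothetical sketch, not FRNN's actual routine)."""
    rng = np.random.RandomState(seed)
    shots = list(shots)
    rng.shuffle(shots)                             # shuffle shot order in place
    n_train = int(round(train_frac * len(shots)))  # number of shots for training
    return shots[:n_train], shots[n_train:]

# Example: with train_frac = 0.75, 6 of 8 shot IDs land in the training set
train_shots, test_shots = split_shots(range(8), train_frac=0.75)
```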
Related #42 (remove Theano).
https://github.com/PPPLDeepLearning/plasma-python/blob/b13dbed1883730d971dfe87fc1bc44e368840083/data/gadata.py#L77 In the following code: `if numpy.ndim(self.ydata) == 2: self.ydata = numpy.transpose(self.ydata)` it would be cleaner to replace `numpy.transpose()` with the shorthand `.T`: `if numpy.ndim(self.ydata) == 2: self.ydata = self.ydata.T`...
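A quick stand-alone check (not part of the repository) that the two forms give identical results for a 2-D array:

```python
import numpy as np

ydata = np.arange(6).reshape(2, 3)
# .T is simply shorthand for numpy.transpose() on a 2-D array
assert np.array_equal(np.transpose(ydata), ydata.T)
```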