Visualizations and storage for experiments
This is a big one, which requires setting up a database back-end (already done) and a (web) front-end:
- [x] Dump serialized config files to JSON database.
- [x] Represent the performance in a way that general metrics can be shown.
- [x] project
- [x] name
- [x] training_set (i.a.)
  - [x] testing_set (i.a.) -- this and the above probably need to be abstracted from loaders
- [x] string representation of the used features
- [x] string NAME of the classifier used
- [x] ~~POS / NEG f1-scores (could be put in a graph)~~ micro f-1
- [ ] Able to overview and compare experiments visually.
- [x] Flat Performance bar.
- [x] Plotting performance on data proportions.
- [ ] Summary of experiment configurations.
- [x] Confusion matrices.
- [ ] Aggregate performances in one report.
- [ ] t-SNE?
- [ ] Insight into feature importances.
- [x] LIME evaluation.
  - [ ] Coefficient representations.
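The "dump serialized config files to JSON" and "general metrics" items above can be sketched roughly as follows. All field names and values here are hypothetical (not omesa's actual schema); the micro F-1 score and confusion matrix come from scikit-learn:

```python
import json
from sklearn.metrics import f1_score, confusion_matrix

# Hypothetical experiment output: gold labels and predictions.
y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]

# One serializable record per experiment, mirroring the fields listed above.
record = {
    "project": "omesa-demo",            # hypothetical project name
    "name": "run-001",
    "features": "char 3-grams",         # string representation of the features
    "classifier": "LinearSVC",          # string name of the classifier
    "micro_f1": f1_score(y_true, y_pred, average="micro"),
    "confusion_matrix": confusion_matrix(y_true, y_pred).tolist(),
}

serialized = json.dumps(record)  # ready to store in a JSON database
```

Keeping everything JSON-serializable (note the `.tolist()` on the confusion matrix) is what makes the record trivially dumpable to any JSON-backed store.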
First rough work on the front-end was done in https://github.com/cmry/omesa/commit/705ce59006b38eb94e098cc452ecad80feaf792e. For now, the database first needs some representation that makes the relevant calls a bit easier. Some of these high-level representations should be:
- project
- name
- training_set (i.a.)
- testing_set (i.a.) -- this and the above probably need to be abstracted from loaders
- string representation of the used features
- string NAME of the classifier used
- POS / NEG f1-scores (could be put in a graph)
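A minimal, stdlib-only sketch of what "making relevant calls easier" could look like over such records (the field names follow the list above; the helper and values are purely illustrative, not omesa's API):

```python
import json

# Illustrative store: one JSON string per experiment record.
records = [
    json.dumps({"project": "p1", "name": "exp-a", "train": "train.csv",
                "test": "test.csv", "features": "char 3-grams",
                "classifier": "LinearSVC", "f1": {"pos": 0.81, "neg": 0.74}}),
    json.dumps({"project": "p1", "name": "exp-b", "train": "train.csv",
                "test": "test.csv", "features": "word 1-grams",
                "classifier": "MultinomialNB", "f1": {"pos": 0.77, "neg": 0.70}}),
]

def by_classifier(store, clf):
    """Decode records and keep those whose classifier matches clf."""
    return [r for r in map(json.loads, store) if r["classifier"] == clf]

hits = by_classifier(records, "LinearSVC")
```

With flat, consistently named fields like these, front-end views (comparison tables, per-classifier plots) reduce to simple filters over the store.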
LIME evaluation (from this paper) has been added since https://github.com/cmry/omesa/commit/3da48276999e1b401d52889a774c2437ea15aec9. Currently it works for binary classification only!
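The core idea behind LIME can be sketched in a few lines: perturb one document by dropping words, query the model on each perturbation, and fit a linear surrogate whose weights approximate per-word importance. This is a toy illustration of the technique, not the `lime` library's API or omesa's implementation; the "classifier" below is a hard-coded stand-in:

```python
import numpy as np

# Toy binary "classifier": P(positive) is high when "good" is present.
# A stand-in for a real trained model's predict_proba.
def predict_proba(tokens):
    return 0.9 if "good" in tokens else 0.1

rng = np.random.default_rng(0)
doc = ["a", "good", "movie"]

# Perturb: randomly mask words, record each binary mask and model output.
masks, probs = [], []
for _ in range(200):
    mask = rng.integers(0, 2, size=len(doc))
    masks.append(mask)
    probs.append(predict_proba([w for w, m in zip(doc, mask) if m]))

# Fit a linear surrogate by least squares on the masks (plus a bias column);
# its weights approximate each word's importance for this one prediction.
X = np.hstack([np.array(masks, dtype=float), np.ones((len(masks), 1))])
w, *_ = np.linalg.lstsq(X, np.array(probs), rcond=None)
importance = dict(zip(doc, w[: len(doc)]))
```

The real method additionally weights perturbations by their proximity to the original instance and uses a sparse linear model, but the surrogate-fitting step above is the essence of why it yields per-feature insight.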