QONNX ingestion for Vivado and Quartus
Description
This is the main development to ingest QONNX in hls4ml. It's a bit of a large PR, and will probably take a while to review and update as needed. It has been tested on the QONNX model zoo models, which are also exercised in the pytests.
It includes #562, #583, and #525 because they were largely developed here, and are generally needed for ONNX parsing. Ideally, those would go in first. (I will go through those PRs to make sure they have everything needed to be merged.)
For more information, see https://indico.cern.ch/event/1184299/contributions/4975803/attachments/2484362/4265432/QONNX%20Ingestion.pdf
Type of change
- [x] Bug fix (non-breaking change that fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [x] A new research paper code implementation
Tests
`test_qonnx.py` is the main source of tests; it mainly runs over the QONNX model zoo models (see the sketch after this list).
- [x] Add tests that do synthesis
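For reference, a rough sketch of the shape of these tests (the model names and the use of `config_from_onnx_model` are illustrative rather than the exact test code):

```python
import pytest
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.util.cleanup import cleanup_model

import hls4ml


# Illustrative model files; the real test pulls models from the QONNX model zoo.
@pytest.mark.parametrize('model_file', ['tfc_w2a2.onnx', 'cnv_w2a2.onnx'])
def test_qonnx_ingestion(model_file, tmp_path):
    # Load the QONNX model and run the standard qonnx cleanup
    # (shape inference, constant folding, etc.)
    model = cleanup_model(ModelWrapper(model_file))

    # Convert to an hls4ml ModelGraph and compile the C simulation library
    config = hls4ml.utils.config_from_onnx_model(model)
    hls_model = hls4ml.converters.convert_from_onnx_model(
        model, output_dir=str(tmp_path), hls_config=config
    )
    hls_model.compile()
```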
Checklist
- [x] I have read the guidelines for contributing.
- [x] I have commented my code, particularly in hard-to-understand areas (but could do better)
- [ ] I have made corresponding changes to the documentation.
- [x] My changes generate no new warnings.
- [x] I have added tests that prove my fix is effective or that my feature works.
Thanks for this, it's a lot of work. I'll take a look at why the pytests fail; it's the test job generation step that fails, not the pytests themselves.
I think the main thing this development would benefit from is more refactoring of the new optimizer passes into flows. Everything involved in removing Quant nodes from the graph should be in an isolated QONNX converter flow. Further, it looks like after conversion there are some new attributes in the Model that are specific to the QONNX conversion, whereas we should aim for consistency in the IR irrespective of the source model flavour.
For example, the `propagate_dense_precision` and `propagate_conv_precision` passes are obviously really nice - and could be expanded to other layers - but they look for attributes `quant_precision`, `weight_precision`, and `bias_precision`, which would only be there for QONNX models and not for QKeras models (which already add `weight_quantizer` and `bias_quantizer`). So the part of the IR that relates to quantization needs to be made consistent between the two (which may imply changes to both the QONNX and QKeras conversion).
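To make the mismatch concrete, here is an illustrative helper (hypothetical, not part of the PR) that dumps the two disjoint attribute sets a Dense node may carry depending on the frontend:

```python
def describe_quantization(node):
    """Print the quantization-related attributes on a converted Dense node."""
    # Set by the QONNX frontend in this PR:
    for attr in ('quant_precision', 'weight_precision', 'bias_precision'):
        print(attr, '->', node.get_attr(attr))
    # Set by the existing QKeras frontend:
    for attr in ('weight_quantizer', 'bias_quantizer'):
        print(attr, '->', node.get_attr(attr))
```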
And then indeed we wait for #525, #562, and #583, since in principle this PR shouldn't touch the backends.
I saw that the new QONNX test fails due to missing qonnx in the test image environment. I've updated the test image and pushed a commit to pick it up. I also switched the imports from finn to qonnx, since those features were moved.
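For example (the exact set of moved modules may differ):

```python
# Before (these features lived in the finn package):
#   from finn.core.modelwrapper import ModelWrapper
#   from finn.core.datatype import DataType

# After (the same code now lives in the qonnx package):
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.core.datatype import DataType
```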
> For example, the `propagate_dense_precision` and `propagate_conv_precision` passes are obviously really nice - and could be expanded to other layers - but they look for attributes `quant_precision`, `weight_precision`, and `bias_precision`, which would only be there for QONNX models and not for QKeras models (which already add `weight_quantizer` and `bias_quantizer`). So the part of the IR that relates to quantization needs to be made consistent between the two (which may imply changes to both the QONNX and QKeras conversion).
I initially made the optimizers propagate all precisions, but this proved to be bad if you left the default `ap_fixed<16,6>`, since you wound up with huge accumulators. I wanted to restrict the propagation to cases where you explicitly specify the precision of the inputs. The way QKeras is currently parsed, I don't think you can tell when the input is specially configured, so I used the special attributes to control when to apply the optimizer. There may be a better way to indicate when to activate this optimizer.
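A minimal sketch of that gating, assuming the `OptimizerPass` API (the attribute handling and accumulator arithmetic are simplified illustrations, not the PR's actual logic):

```python
import math

from hls4ml.model.optimizer import OptimizerPass
from hls4ml.model.types import FixedPrecisionType


class PropagateDensePrecision(OptimizerPass):
    """Sketch: size a Dense accumulator from explicitly given precisions."""

    def match(self, node):
        # Fire only when the input precision was set explicitly (e.g. by a
        # Quant node in a QONNX graph), never on the ap_fixed<16,6> default.
        return node.class_name == 'Dense' and node.get_attr('quant_precision') is not None

    def transform(self, model, node):
        inp = node.get_attr('quant_precision')   # input precision (FixedPrecisionType)
        wgt = node.get_attr('weight_precision')  # weight precision

        # Simplified sizing: product width plus log2(n_in) growth bits for
        # the summation; bias and signedness details are omitted here.
        growth = max(1, math.ceil(math.log2(node.get_attr('n_in'))))
        acc = FixedPrecisionType(width=inp.width + wgt.width + growth,
                                 integer=inp.integer + wgt.integer + growth)
        node.get_attr('accum_t').precision = acc  # assumes accum_t holds a NamedType
        return False  # graph structure unchanged
```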
> I initially made the optimizers propagate all precisions, but this proved to be bad if you left the default `ap_fixed<16,6>`, since you wound up with huge accumulators. I wanted to restrict the propagation to cases where you explicitly specify the precision of the inputs. The way QKeras is currently parsed, I don't think you can tell when the input is specially configured, so I used the special attributes to control when to apply the optimizer. There may be a better way to indicate when to activate this optimizer.
Yeah, I agree that it's bad for 'PTQ' models, so it should be somehow protected. I had been thinking about this independently of QONNX (since the problem is independent of it), and in that sense it could be another standalone flow (call it type propagation or something like that). That way, controlling whether to run these passes can be left to the user, since they can already control which flows to run from the config. And if the flow is run at all, I think it's an okay policy to override any types in the config - so we wouldn't necessarily need to know whether they were configured or extracted from the model.
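Something like the following, assuming the flows API introduced in #562 (the flow name is a placeholder, and `PropagateDensePrecision` refers to the sketch above; a Conv variant would be analogous):

```python
from hls4ml.model.flow import register_flow
from hls4ml.model.optimizer import register_pass

# Register the propagation pass under a name the flow can reference...
register_pass('propagate_dense_precision', PropagateDensePrecision)

# ...and group it into a standalone flow that users can opt in or out of,
# independent of the source frontend.
register_flow('type_propagation', ['propagate_dense_precision'])
```

A user could then run or skip it explicitly, e.g. via something like `hls_model.apply_flow('type_propagation')`.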
This pull request was presented at an hls4ml meeting: https://indico.cern.ch/event/1184299/contributions/4975803/attachments/2484362/4265432/QONNX%20Ingestion.pdf
Concerning the downward propagation of precisions, my plan was that once the QKeras parsing was improved, you would enable it there, too; this was meant to be an optimization that works in all cases of QAT. But maybe it is fine to have it as a standalone flow that is enabled by default for QKeras and QONNX parsing, and in other cases if you prefer. Controlling it that way has the advantage that you can enable or disable it at will, even in cases that are not QKeras or QONNX, but the disadvantage that you have to enable or disable it for the whole graph. The current method (with the planned QKeras extension) enables it per node, provided the quantizations are set by QAT, but doesn't allow enabling it when doing PTQ. I could be convinced either way.
This is replaced by #832.