Christoph Berganski
> you define the op with the format `(Identifier, Python, C++, RTL)`, do you have a use for the "RTL" definition or is this just a preparation for the future...
This now relies on cherry-picked commits from https://github.com/Xilinx/finn/pull/901 and https://github.com/Xilinx/finn/pull/1030.
Hm, while I can successfully run the mobilenet and the cybersecurity end2end tests, the bnn-pynq tests fail immediately at the export step; this does not seem to be my fault (at...
Ok, then I will skip the end2end and finn-examples tests for now and focus on our transformer model. My dummy model has already built successfully, but I would like to test...
There are floating-point scalar multiplications preceding the inputs to the Concat operator, which turns the inputs into floating-point as well. Currently, StreamingConcat does not support floating-point inputs, see...
Hm, conceptually, you would now need something like the `MoveIdenticalOpPastJoinOp` transformation (it lives in `finn.transformation.streamline.reorder`), but it seems to be made for two-input join nodes, while you have "arbitrarily" many...
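For illustration, here is a minimal sketch (not the actual FINN transformation; the helper name is hypothetical) of the check a generalized version would have to do over arbitrarily many Concat inputs. Moving the Mul past the join node is valid because `Concat(a*x, a*y, ...) == a * Concat(x, y, ...)` as long as all branches share the same scalar `a`:

```python
# Hypothetical sketch: detect whether every input of an N-input join node
# (e.g. Concat) is produced by a Mul with the same scalar initializer.
import numpy as np
from qonnx.core.modelwrapper import ModelWrapper


def shared_scalar_mul_before_join(model: ModelWrapper, join_node):
    """Return the scalar shared by the Mul nodes feeding all inputs of
    join_node, or None if the pattern does not match."""
    scalars = []
    for tensor_name in join_node.input:
        producer = model.find_producer(tensor_name)
        if producer is None or producer.op_type != "Mul":
            return None
        # look for a scalar initializer on either Mul input
        consts = [
            np.asarray(model.get_initializer(i))
            for i in producer.input
            if model.get_initializer(i) is not None
            and np.asarray(model.get_initializer(i)).size == 1
        ]
        if not consts:
            return None
        scalars.append(consts[0].flatten()[0])
    # the Mul can only be moved past the join node if all scalars match:
    # Concat(a*x, a*y, ...) == a * Concat(x, y, ...)
    if all(np.isclose(s, scalars[0]) for s in scalars):
        return float(scalars[0])
    return None
```

An actual transformation would additionally have to remove the per-branch Mul nodes and insert a single Mul behind the join node, but the detection part above is where the two-input assumption currently gets in the way.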
Another problematic transformation is **`MoveScalarAddPastMatMul`**: it moves the scalar addition past the MatMul by transforming the added scalar via a dot product with the MatMul weights. Of course this does not work for the dynamic two-input...
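To document why (just a minimal numpy sketch of the algebra, not the transformation itself): moving the scalar Add past the MatMul turns it into a bias that has to be precomputed from the weights, which is only possible when the weights are a static initializer. For a dynamic two-input MatMul the weights are a runtime tensor, so no such constant bias exists:

```python
# (x + a) @ W == x @ W + a * column_sums(W)
# assumed shapes: x is (1, k), W is (k, n), a is a scalar
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))
W = rng.normal(size=(8, 4))
a = 0.5

before = (x + a) @ W               # scalar Add in front of the MatMul
after = x @ W + a * W.sum(axis=0)  # equivalent bias Add behind the MatMul

assert np.allclose(before, after)
```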
_Re-post of insights I gained while looking at the related issue #892, to have this documented here as well:_ It seems to me that currently all occurrences of the **`FoldTransposeIntoQuantInit`**...
As we are gradually moving towards more realistic and complete models using the [Brevitas quantized multi-head attention](https://github.com/Xilinx/brevitas/blob/master/src/brevitas/nn/quant_mha.py), we are seeing even more issues:
- **`FoldQuantWeights`** seems to propagate shapes backwards,...
Hm, the issue regarding the inverse of the scale after the `FoldQuantWeights` transformation seems to be due to some asymmetry in handling the two inputs to the Add node: The...
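For reference, my understanding of where the inverse scale comes from (just a numpy sketch of the algebra as I read it, not the actual transformation code): folding the Quant node means the Add should operate on the integer tensor, so the other Add input has to be divided by the scale and the scale is re-applied afterwards. If only one of the two Add inputs is handled this way, swapping the inputs breaks the pattern:

```python
# s * q + b == s * (q + b / s)
# s: quantizer scale, q: integer (quantized) tensor, b: the other Add input
import numpy as np

s = 0.125
q = np.array([3.0, -4.0, 7.0])
b = np.array([0.5, 0.25, -1.0])

assert np.allclose(s * q + b, s * (q + b / s))
```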