
IODMA not generated

Open Logitek27 opened this issue 4 years ago • 2 comments

Hello, I'm having some trouble using FINN with my own network. I already followed the tutorial scripts, which worked fine. My network is not composed of fancy layers, only conv, batchnorm, ReLU and fully connected layers. I'm using FINN v0.5b and get the following error on the synthesize step:

    Running step: step_synthesize_bitfile [14/15]
    Traceback (most recent call last):
      File "/workspace/finn/src/finn/builder/build_dataflow.py", line 128, in build_dataflow_cfg
        model = transform_step(model, cfg)
      File "/workspace/finn/src/finn/builder/build_dataflow_steps.py", line 486, in step_synthesize_bitfile
        ZynqBuild(cfg.board, cfg.synth_clk_period_ns, cfg.enable_hw_debug)
      File "/workspace/finn-base/src/finn/core/modelwrapper.py", line 140, in transform
        transformed_model
      File "/workspace/finn/src/finn/transformation/fpgadataflow/make_zynq_proj.py", line 326, in apply
        MakeZYNQProject(self.platform, enable_debug=self.enable_debug)
      File "/workspace/finn-base/src/finn/core/modelwrapper.py", line 140, in transform
        transformed_model
      File "/workspace/finn/src/finn/transformation/fpgadataflow/make_zynq_proj.py", line 168, in apply
        ), "Must have 1 AXI lite interface on IODMA nodes"
    AssertionError: Must have 1 AXI lite interface on IODMA nodes

    > /workspace/finn/src/finn/transformation/fpgadataflow/make_zynq_proj.py(168)apply()
    -> ), "Must have 1 AXI lite interface on IODMA nodes"

It seems that an AXI lite interface should be generated for the IODMA nodes, but it isn't, and I don't know why. Can someone help me with this? I have put a simplified model that reproduces the error, together with the script I used, here: https://github.com/Logitek27/FINN_IODMA_ISSUE . I just used the "build_custom" command with the FINN docker.

Thanks for your help

Logitek27 avatar Mar 29 '21 07:03 Logitek27

Hi, your model is not directly compatible with the series of transformations applied by the default build_dataflow_cfg flow. For example, your model contains depthwise-conv layers, which require the InferVVAU() transformation to map the respective MatMul nodes to "Vector_Vector_Activate_Batch" HLS cores. This transformation is not present in the default build steps (see https://github.com/Xilinx/finn/blob/dev/src/finn/builder/build_dataflow_steps.py), so the MatMul nodes remain in the graph after step 3 (convert_to_hls). This breaks the rest of the build flow because the next step cannot connect all nodes within one dataflow partition, which is required for IODMA insertion. As you can see in the intermediate .onnx model of step 4, it breaks the graph into separate dataflow partitions where it encounters non-HLS nodes.

To fix this, you can experiment with a custom series of build steps. I suggest taking a look at our MobileNetV1 example, which also uses a few custom steps to deal with the dw conv layers: https://github.com/Xilinx/finn-examples/blob/main/build/mobilenet-v1/build.py https://github.com/Xilinx/finn-examples/blob/main/build/mobilenet-v1/custom_steps.py
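To illustrate the pattern: the builder simply applies a list of step functions to the model in order, so a custom step can wrap or replace a default one. The sketch below uses plain-Python stand-ins (no FINN install needed) rather than the real FINN APIs; the function names mirror FINN's `step_convert_to_hls` and the `InferVVAU()` transformation, but are hypothetical placeholders. In a real build you would import the default steps from `finn.builder.build_dataflow_steps` and pass your custom list via the build config.

```python
# Toy illustration of FINN's build-step pattern. The "model" here is a
# plain dict standing in for a ModelWrapper; the step names mirror FINN's
# but the bodies are stand-ins, not the real implementations.

def step_tidy_up(model, cfg):
    model["steps_run"].append("tidy_up")
    return model

def step_convert_to_hls(model, cfg):
    # Stand-in for the default HLS-conversion step.
    model["steps_run"].append("convert_to_hls")
    return model

def step_convert_to_hls_with_vvau(model, cfg):
    # Custom replacement step: run the extra transformation first
    # (in real FINN: model = model.transform(to_hls.InferVVAU())),
    # then fall through to the default conversion step.
    model["steps_run"].append("InferVVAU")
    return step_convert_to_hls(model, cfg)

def build_dataflow(model, cfg, steps):
    # Mirrors how build_dataflow_cfg applies each step function in order.
    for step in steps:
        model = step(model, cfg)
    return model

custom_steps = [step_tidy_up, step_convert_to_hls_with_vvau]
model = build_dataflow({"steps_run": []}, cfg=None, steps=custom_steps)
print(model["steps_run"])  # ['tidy_up', 'InferVVAU', 'convert_to_hls']
```

The key point is that the custom step runs the depthwise-conv mapping before the default conversion, so no un-converted MatMul nodes are left to split the dataflow partition.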

I tried these exact steps on your model, but they don't seem to do the trick without some modification. Simply adding InferVVAU() to the default step 3 handles the dw MatMul nodes, but the graph still breaks in 3 places:

  1. MaxPoolNHWC followed by FMPadding_Batch
  2. MaxPoolNHWC followed by FMPadding_Batch
  3. Series of Transpose->MaxPool->Mul->Flatten

fpjentzsch avatar Apr 02 '21 09:04 fpjentzsch

Hi, thank you for your response, it's very clear. I've tried adding this transformation and other modifications to solve my problem, without success yet, but things look better with the InferVVAU() transformation. I'll let you know if I manage to solve the problem!

Logitek27 avatar Apr 12 '21 08:04 Logitek27

I am closing this issue due to inactivity. Please feel free to reopen or create a new issue if the problem persists!

auphelia avatar Feb 14 '23 13:02 auphelia