
Deploying a 2D Convolutional Network from Keras

Open mengna0707 opened this issue 4 years ago • 5 comments

I looked through a previous issue about reordering the weights of a neural model implemented in Keras and converting it to the inference framework implemented by CMSIS-NN (https://github.com/ARM-software/CMSIS_5/issues/541).

I ran a test for different layers using the reordering method mentioned above. 1) For the fully connected layer: in Keras, the weight shape is (36, 2), where 36 is the flattened output of the previous convolution layer, whose original shape is (3, 3, 4). My transformation for the input and weights of this layer is shown below, and in the neural network implemented by CMSIS-NN I got the correct output!

transposed_input = input.transpose(2,1,0)                          # reorder the input
transposed_wts = np.reshape(weight, (3,3,4,2)).transpose(3,2,1,0)  # reorder the weights
transposed_wts = np.reshape(transposed_wts, (2,36))
transposed_bias = np.transpose(bias)                               # reorder the bias
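As a sanity check, the fully connected reordering above can be verified end-to-end in NumPy: with random data in the shapes from the post, the reordered weights applied to the reordered input reproduce the original Flatten + Dense output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes from the post: conv output (3,3,4) flattened to 36, Dense kernel (36,2)
x = rng.standard_normal((3, 3, 4))   # HWC activation from the previous conv layer
W = rng.standard_normal((36, 2))     # Keras Dense kernel: (in, out)
b = rng.standard_normal(2)

# Reference: what Keras computes after Flatten + Dense
y_ref = x.flatten() @ W + b

# The reordering from the post
x_r = x.transpose(2, 1, 0)                         # reorder the input
W_r = W.reshape(3, 3, 4, 2).transpose(3, 2, 1, 0)  # reorder the weights
W_r = W_r.reshape(2, 36)
y_cmsis = W_r @ x_r.flatten() + b                  # FC as a (out, in) matrix-vector product

assert np.allclose(y_ref, y_cmsis)
```

This works because reversing all four weight axes and reversing all three input axes apply the same permutation to the contracted dimension, so the dot product is unchanged.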

2) For the convolutional layer: in Keras, the input's shape is (1, 3, 3, 2) and the weight's shape is (3, 3, 2, 4). I reordered the data in the following way, but the result is wrong!

transposed_input = input.transpose(2,1,0)   # reorder the input
transposed_wts = weight.transpose(3,0,1,2)  # reorder the weights
transposed_bias = np.transpose(bias)        # reorder the bias
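One concrete problem in the snippet above: the conv input is four-dimensional (1, 3, 3, 2), so a three-axis permutation is rejected by NumPy outright. If the tensor is already NHWC, dropping the batch dimension (an assumption here; felix-johnny's reply below makes the same guess) is all that is needed:

```python
import numpy as np

x = np.zeros((1, 3, 3, 2))  # NHWC conv input from the post

# A three-axis permutation on a four-axis array raises a ValueError:
raised = False
try:
    x.transpose(2, 1, 0)
except ValueError:
    raised = True
assert raised

# If the layout is already NHWC, the batch dimension can simply be dropped
# and the activation data passed through unchanged:
x_cmsis = x[0]              # (3, 3, 2): HWC, as CMSIS-NN expects
assert x_cmsis.shape == (3, 3, 2)
```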

Is there something wrong with the way I reordered the inputs and weights for these layers? Any advice would be appreciated!

mengna0707 avatar Jan 08 '22 04:01 mengna0707

Maybe this is related to the fact that per-layer quantization is used for fully connected layers, whereas for convolutions Keras uses per-channel quantization while the legacy API uses per-layer quantization? https://github.com/mansnils/CMSIS_5/tree/develop/CMSIS/NN#legacy-vs-tfl-micro-compliant-apis
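To illustrate the distinction mansnils points at, here is a small sketch (with made-up data, not the poster's model) of symmetric int8 quantization done per-layer versus per-output-channel. When one channel has a much larger range than the others, a single per-layer scale wastes precision on the small-range channels:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal((4, 3, 3, 2))  # conv weights, out_ch first (illustrative)
w[0] *= 10.0                           # one channel with a much larger range

# Per-layer: a single scale for the whole tensor (legacy-API style)
scale_layer = np.abs(w).max() / 127.0

# Per-channel: one scale per output channel (TFL-micro style)
scale_chan = np.abs(w).reshape(4, -1).max(axis=1) / 127.0

q_layer = np.round(w / scale_layer).astype(np.int8)
q_chan = np.round(w / scale_chan[:, None, None, None]).astype(np.int8)

# On the small-range channels, the per-layer scale loses far more precision:
err_layer = np.abs(w[1:] - q_layer[1:] * scale_layer).max()
err_chan = np.abs(w[1:] - q_chan[1:] * scale_chan[1:, None, None, None]).max()
assert err_chan < err_layer
```

If the weights were quantized per-channel by the converter but dequantized with a single per-layer scale at inference time, every channel whose range differs from the global maximum would be distorted, which is consistent with the conv layer failing while the fully connected layer works.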

mansnils avatar Jan 10 '22 08:01 mansnils

@mengna0707 I would suspect the reordering for the convolution too. You'll have to check whether the format from Keras actually requires reordering; I am guessing here. If the Keras input shape of (1,3,3,2) is NHWC, then it doesn't require reordering for CMSIS-NN. That would make the weight shape HxWxIN_CHxOUT_CH, which then has to be reshaped to {in channel, x kernel, y kernel, out channel}.

https://developer.arm.com/documentation/102591/0000/Compare-the-ML-framework-and-CMSIS-NN-data-layouts has an example of reordering. What is in the tutorial is all the support that exists now, since the legacy APIs are no longer supported. Hope that helps.
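A minimal sketch of that kind of reordering, assuming the Keras conv kernel layout is (H, W, in_ch, out_ch) (HWIO) and the target is an out-channel-first OHWI layout as consumed by the TFL-micro-compliant s8 kernels. The single-pixel dot product below checks that a flattened OHWI row really contracts correctly against an HWC input patch:

```python
import numpy as np

rng = np.random.default_rng(2)
w_hwio = rng.standard_normal((3, 3, 2, 4))  # Keras layout: (H, W, in, out)
x = rng.standard_normal((3, 3, 2))          # one HWC input patch

# HWIO -> OHWI
w_ohwi = w_hwio.transpose(3, 0, 1, 2)       # (out, H, W, in)
assert w_ohwi.shape == (4, 3, 3, 2)

# One output pixel of a "valid" convolution, computed in both layouts:
y_keras = np.einsum("hwi,hwio->o", x, w_hwio)
y_cmsis = w_ohwi.reshape(4, -1) @ x.flatten()  # OHWI rows dot an HWC patch
assert np.allclose(y_keras, y_cmsis)
```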

felix-johnny avatar Jan 10 '22 09:01 felix-johnny

Hi @mansnils, I didn't use TFLite's API for simulated quantization in my training. I used the legacy API to test, so I used per-layer quantization in the way described in this tutorial (https://github.com/ARM-software/ML-KWS-for-MCU/blob/8151349b110f4d1c194c085fcc5b3535bdf7ce4a/Deployment/Quant_guide.md).

mengna0707 avatar Jan 11 '22 02:01 mengna0707

@felix-johnny, I converted the input in various ways, including storing it directly or reshaping it before storage, but they didn't work. As for the weight conversion, I tried the way you mentioned. Even with the following method, the result is not correct.

transposed_wts = weight.transpose(2,1,0,3)   # reorder the weights
# transposed_wts = weight.transpose(2,0,1,3) # another way I tried
transposed_bias = np.transpose(bias)
transposed_input = np.transpose(quant_input) # reorder the input
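Assuming the Keras kernel layout is (H, W, in_ch, out_ch) = (3, 3, 2, 4), it may help to note that both permutations tried above leave the input channel axis first, so neither produces an out-channel-first layout:

```python
import numpy as np

w = np.zeros((3, 3, 2, 4))  # assumed Keras HWIO kernel from the post

assert w.transpose(2, 1, 0, 3).shape == (2, 3, 3, 4)  # (in, W, H, out)
assert w.transpose(2, 0, 1, 3).shape == (2, 3, 3, 4)  # (in, H, W, out)

# An out-channel-first (OHWI) layout would instead come from:
assert w.transpose(3, 0, 1, 2).shape == (4, 3, 3, 2)  # (out, H, W, in)
```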

Oh god, the API that I used to test is arm_convolve_HWC_q7_basic.c. If this reordering tutorial (https://developer.arm.com/documentation/102591/0000/Compare-the-ML-framework-and-CMSIS-NN-data-layouts) doesn't apply to the legacy API, then I need to retest with arm_convolve_s8.c. Thank you for your advice. I'm going to look at the program code again.

mengna0707 avatar Jan 11 '22 03:01 mengna0707

@mengna0707 have you been able to fix your problem? I am experiencing the same issue as you.

MiguelCosta94 avatar Apr 23 '22 18:04 MiguelCosta94