[BUG] Example of a smaller net architecture that eats more CPU than its bigger brother
While searching for better architectures for NAM, I found a really interesting example. Environment: Windows 10, Reaper 6.70, NAM plugin VST3 v0.7.11, AMD Ryzen 5 3600, RTX 4070 Ti. Project sample rate 48 kHz; NAM files trained at 192 kHz.
I am working on an architecture based on WaveNet with 5 layers and 4 channels (an alternative to "Feather" with 1.5x better loss values). Then I tried to make it less CPU-hungry by decreasing the channel count from 4 to 3. The resulting neural net is smaller (fewer weights), but it eats more CPU.
Here are screenshots of how much CPU each one eats (see FX CPU on the right side):
Here is a comparison of the two architectures; the only difference is the size of each layer, 3 channels instead of 4:
3-channel ("lite") version:

```json
[
  {"input_size": 1, "condition_size": 1, "head_size": 3, "channels": 3, "kernel_size": 6, "dilations": [1, 5, 25, 125], "activation": "Tanh"},
  {"input_size": 3, "condition_size": 1, "head_size": 3, "channels": 3, "kernel_size": 6, "dilations": [1, 5, 25, 125], "activation": "Tanh"},
  {"input_size": 3, "condition_size": 1, "head_size": 3, "channels": 3, "kernel_size": 6, "dilations": [1, 5, 25, 125], "activation": "Tanh"},
  {"input_size": 3, "condition_size": 1, "head_size": 2, "channels": 3, "kernel_size": 6, "dilations": [1, 5, 25, 125], "activation": "Tanh"},
  {"input_size": 3, "condition_size": 1, "head_size": 1, "channels": 2, "kernel_size": 6, "dilations": [625, 1, 5, 25, 125, 625], "activation": "Tanh"}
],
"head": null, "head_scale": 0.36
```

vs. the 4-channel version:

```json
[
  {"input_size": 1, "condition_size": 1, "head_size": 4, "channels": 4, "kernel_size": 6, "dilations": [1, 5, 25, 125], "activation": "Tanh"},
  {"input_size": 4, "condition_size": 1, "head_size": 4, "channels": 4, "kernel_size": 6, "dilations": [1, 5, 25, 125], "activation": "Tanh"},
  {"input_size": 4, "condition_size": 1, "head_size": 4, "channels": 4, "kernel_size": 6, "dilations": [1, 5, 25, 125], "activation": "Tanh"},
  {"input_size": 4, "condition_size": 1, "head_size": 2, "channels": 4, "kernel_size": 6, "dilations": [1, 5, 25, 125], "activation": "Tanh"},
  {"input_size": 4, "condition_size": 1, "head_size": 1, "channels": 2, "kernel_size": 6, "dilations": [625, 1, 5, 25, 125, 625], "activation": "Tanh"}
],
"head": null, "head_scale": 0.36
```
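To back up the "fewer weights" claim, here is a back-of-the-envelope weight count derived from the configs above. This is a rough sketch only: it assumes each dilated conv costs `in_channels * out_channels * kernel_size` weights per dilation step and ignores NAM's gating, mixer, and bias terms, so the absolute numbers are not exact, but the relative comparison holds.

```shell
#!/bin/sh
# Rough dilated-conv weight estimate per block:
#   in_ch * out_ch * kernel_size * num_dilation_steps
# Block shapes taken from the configs above (kernel_size 6; four
# dilation steps in the first four blocks, six in the last one).
k=6
lite=$(( (1*3)*k*4 + (3*3)*k*4 + (3*3)*k*4 + (3*3)*k*4 + (3*2)*k*6 ))
full=$(( (1*4)*k*4 + (4*4)*k*4 + (4*4)*k*4 + (4*4)*k*4 + (4*2)*k*6 ))
echo "3-channel (lite) rough weight count: $lite"   # 936
echo "4-channel        rough weight count: $full"   # 1536
```

So the 3-channel net really does carry noticeably fewer weights than the 4-channel one, which makes the higher CPU usage all the more surprising.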
Here are the models (they are for 192 kHz; I will make the same ones at 48 kHz to check whether the results are the same). The main info is in the filenames: cpu utilize NAM examples.zip
Maybe this example will help find some small CPU-utilization bug in the NAM plugin.
The same happens with the 48 kHz models; the lite version eats more CPU:
Interesting. Can you use benchmodel.cpp to run some tests factoring out the plugin code and see if it still is observed there?
There's a fair amount of juice to squeeze in optimizing the DSP code, but that may help as a first step. I somewhat expect that this is an Issue for NeuralAmpModelerCore, and not this repo, but this will help me be more sure.
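For reference, running the benchmark outside the plugin goes roughly like this (pseudocode-style steps, not exact commands; the binary name and path are assumptions based on the `tools/` layout of NeuralAmpModelerCore, so check that repo's README and CMakeLists for the exact names):

```
# 1. Clone NeuralAmpModelerCore with its submodules.
# 2. Configure and build in Release mode, e.g.:
#      cmake -B build -DCMAKE_BUILD_TYPE=Release
#      cmake --build build --config Release
# 3. Run the built benchmark tool against a model file:
#      ./build/tools/benchmodel path/to/your_model.nam
# 4. Compare the reported times for the 3-channel and 4-channel models.
```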
I'm not experienced enough. As I understand it, this C++ code measures the CPU time in ms of running a NAM model on some data samples, but I don't know how to run this benchmark on my .nam files =) I want to run these tests; maybe you can help and guide me in a nutshell: what exactly do I need to do to run the benchmark?