Multiple Caps Layers
Hi,
We are very impressed by your article and the improvements you implemented. In the article you wrote: "However, we adopt only two layers of capsules due to the relative simplicity of the dataset investigated". We are trying to use your capsule implementation on our problem. Our input is 40×40×40 and a more complicated image than MNIST, so we want to use more capsule layers. Should we simply duplicate the PrimaryCaps layer before the FCCaps layer?
Thanks,
Yaniv.
Hi @yanivgilad,
Thank you. We appreciate it 😊
No; to create a second layer of capsules, you have to place another "FCCaps" layer, setting the number of capsules and their dimension.
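For instance, stacking could look like the sketch below. Note that `ToyCapsLayer` is a hypothetical stand-in (a learned linear map over all input capsules followed by squash), not the FCCaps layer from this repo, which also performs self-attention routing; it is only meant to show how two capsule layers chain together, each with its own number of capsules and capsule dimension.

```python
import tensorflow as tf

def squash(s, eps=1e-7):
    # Squashing non-linearity: keeps the direction, maps the norm into [0, 1)
    n = tf.norm(s, axis=-1, keepdims=True)
    return (n ** 2 / (1.0 + n ** 2)) * s / (n + eps)

class ToyCapsLayer(tf.keras.layers.Layer):
    # Hypothetical stand-in for a fully-connected capsule layer: every output
    # capsule is a learned linear combination of all input capsules, squashed.
    # The real FCCaps additionally uses self-attention routing.
    def __init__(self, n_caps, dim_caps, **kwargs):
        super().__init__(**kwargs)
        self.n_caps, self.dim_caps = n_caps, dim_caps

    def build(self, input_shape):
        in_caps, in_dim = int(input_shape[-2]), int(input_shape[-1])
        self.W = self.add_weight(
            name="W",
            shape=(in_caps, in_dim, self.n_caps * self.dim_caps),
            initializer="glorot_uniform",
        )

    def call(self, x):
        # x: (batch, in_caps, in_dim) -> (batch, n_caps, dim_caps)
        u = tf.einsum("bij,ijk->bk", x, self.W)
        u = tf.reshape(u, (-1, self.n_caps, self.dim_caps))
        return squash(u)

# Stacking two capsule layers: just chain them, picking the number of
# capsules and their dimension per layer.
x = tf.random.normal((4, 32, 8))      # (batch, primary capsules, capsule dim)
mid = ToyCapsLayer(10, 12)(x)         # intermediate capsule layer
out = ToyCapsLayer(2, 16)(mid)        # final capsule layer
print(out.shape)                      # (4, 2, 16)
```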
Hi @EscVM,
Thanks for the response; we are still not sure how to proceed.
Our input is a series of signals, which could look like this:

BTW, do you think it's better if we apply a wavelet transform so it is converted to this?

Our Network looks like this:
x = tf.keras.layers.Conv2D(32, 5, activation='relu', padding='valid', kernel_initializer='he_normal')(inputs)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='valid', kernel_initializer='he_normal')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='valid', kernel_initializer='he_normal')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Conv2D(128, 3, 2, activation='relu', padding='valid', kernel_initializer='he_normal')(x)  # stride 2
x = tf.keras.layers.BatchNormalization()(x)
x = PrimaryCaps(128, 15, 16, 8)(x)
digit_caps = FCCaps(2, 16)(x)  # 2 output capsules of dimension 16
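As a quick sanity check of our dimensions (activations and BatchNormalization dropped below, since they don't change shapes), the conv stack maps a 40×40×40 input to a 15×15×128 feature map, which (if we read the PrimaryCaps signature correctly) is why 15 appears as its second argument:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(40, 40, 40))
x = tf.keras.layers.Conv2D(32, 5, padding='valid')(inputs)  # 40 -> 36
x = tf.keras.layers.Conv2D(64, 3, padding='valid')(x)       # 36 -> 34
x = tf.keras.layers.Conv2D(64, 3, padding='valid')(x)       # 34 -> 32
x = tf.keras.layers.Conv2D(128, 3, 2, padding='valid')(x)   # (32 - 3)//2 + 1 = 15
print(x.shape)  # (None, 15, 15, 128)
```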
We have 40 signals, so our input is 40×40×40. How should we add another FCCaps layer?