
Multiple Caps Layers

Open yanivgilad opened this issue 4 years ago • 2 comments

Hi

We are very impressed by your article and the improvements you implemented. In the article you wrote: "However, we adopt only two layers of capsules due to the relative simplicity of the dataset investigated". We are trying to use your capsule implementation for our problem. Our input is 40×40×40 and a more complicated image than MNIST, so we want to use more capsule layers. Should we simply duplicate the PrimaryCaps layer before the FCCaps layer?

Thanks

Yaniv.

yanivgilad avatar Jan 11 '22 08:01 yanivgilad

Hi @yanivgilad,

Thank you. We appreciate it 😊

No, to create a second layer of capsules you have to stack another "FCCaps" layer, setting the number of capsules and their dimension.
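As a rough illustration of what "stacking another FCCaps layer" means shape-wise, here is a minimal NumPy sketch. Note this is not the actual Efficient-CapsNet layer (which uses self-attention routing); it uses a trivial uniform routing just to show how an intermediate capsule layer of a chosen count and dimension sits between the primary capsules and the output capsules. All names (`fc_caps`, `W1`, `W2`) and the capsule counts/dimensions are illustrative assumptions.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squash non-linearity: short vectors shrink toward 0, long ones toward unit length."""
    norm = np.linalg.norm(s, axis=axis, keepdims=True)
    return (norm**2 / (1.0 + norm**2)) * (s / (norm + eps))

def fc_caps(x, W):
    """Toy fully connected capsule layer (uniform routing, not the paper's attention routing).
    x: (n_in, d_in), W: (n_out, n_in, d_in, d_out) -> output (n_out, d_out)."""
    # u_hat[j, i] = prediction of output capsule j made by input capsule i
    u_hat = np.einsum('id,jide->jie', x, W)
    # uniform routing: average the predictions, then squash
    return squash(u_hat.mean(axis=1))

rng = np.random.default_rng(0)
primary = rng.normal(size=(16, 8))           # 16 primary capsules of dimension 8
W1 = rng.normal(size=(10, 16, 8, 12)) * 0.1  # intermediate layer: 10 capsules, dim 12
W2 = rng.normal(size=(2, 10, 12, 16)) * 0.1  # output layer: 2 capsules, dim 16
caps1 = fc_caps(primary, W1)                 # (10, 12)
digit_caps = fc_caps(caps1, W2)              # (2, 16)
```

In Keras terms this corresponds to chaining two `FCCaps(n_caps, dim_caps)` calls after `PrimaryCaps`, choosing the intermediate layer's capsule count and dimension yourself.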

EscVM avatar Jan 11 '22 18:01 EscVM

Hi @EscVM, thanks for the response. We are still not sure how to proceed. Our input is a series of signals, which could look like this: [image]

BTW, do you think it's better if we apply a wavelet transform so it is converted to this? [image]

Our Network looks like this:

x = tf.keras.layers.Conv2D(32, 5, activation='relu', padding='valid', kernel_initializer='he_normal')(inputs)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='valid', kernel_initializer='he_normal')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Conv2D(64, 3, activation='relu', padding='valid', kernel_initializer='he_normal')(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Conv2D(128, 3, 2, activation='relu', padding='valid', kernel_initializer='he_normal')(x)  # strides=2
x = tf.keras.layers.BatchNormalization()(x)
x = PrimaryCaps(128, 15, 16, 8)(x)
digit_caps = FCCaps(2, 16)(x)

We have 40 signals, so our input is 40×40×40. How should we add another FCCaps layer?

yanivgilad avatar Jan 18 '22 12:01 yanivgilad