
Mistake in the main PointConvSA layer

Open TejaswiKC opened this issue 3 years ago • 0 comments

Hi,

Thank you for implementing PointConv as tf layers. When I used them in one of my applications, I found an error in the implementation of the PointConvSA layer in `pointconv/layers.py`.

More specifically, the np_conv layer created in the build function should have a non-trivial kernel of size [1, mlp[-1]] while the stride remains [1, 1]. This follows both from the architecture in Figure 5 of the reference paper and from the reference code at https://github.com/DylanWusee/pointconv/blob/master/PointConv.py. The current implementation in this repository instead reduces the number of filters by a large amount, which keeps the model from training well; in my application, the loss did not decrease at all.
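To see why the kernel shape matters, here is a minimal sketch of the shape arithmetic (the tensor shapes below are illustrative assumptions, as is the premise that the repository's Conv2d hard-codes a 1x1 kernel, which the fix below suggests):

```python
import tensorflow as tf

# Illustrative shapes: batch of 8, 64 sampled points, a feature axis of
# size 128 (standing in for mlp[-1]), 16 input channels.
x = tf.random.normal([8, 64, 128, 16])

# A [1, 1] kernel convolves each position independently and leaves the
# feature axis untouched:
w_1x1 = tf.random.normal([1, 1, 16, 32])
print(tf.nn.conv2d(x, w_1x1, strides=[1, 1], padding='VALID').shape)
# (8, 64, 128, 32)

# A [1, 128] kernel spans the full feature axis and collapses it to 1,
# matching the reference PointConv implementation:
w_full = tf.random.normal([1, 128, 16, 32])
print(tf.nn.conv2d(x, w_full, strides=[1, 1], padding='VALID').shape)
# (8, 64, 1, 32)
```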

I have resolved this by customizing the Conv2d layer in utils.py as follows:

```python
import tensorflow as tf
from tensorflow.keras.layers import Layer, BatchNormalization


class Conv2d(Layer):

    def __init__(self, filters, strides=[1, 1],
                 activation=tf.nn.relu, padding='VALID',
                 initializer='glorot_normal', bn=False,
                 kernel=[1, 1]):
        super(Conv2d, self).__init__()

        self.filters = filters
        self.strides = strides
        self.activation = activation
        self.padding = padding
        self.initializer = initializer
        self.bn = bn
        self.kernel = kernel  # new argument: the kernel size is now configurable

    def build(self, input_shape):
        # Weight shape is [kernel_h, kernel_w, in_channels, filters], so a
        # kernel of [1, mlp[-1]] can span the whole feature axis.
        self.w = self.add_weight(
            shape=(self.kernel[0], self.kernel[1], input_shape[-1], self.filters),
            initializer=self.initializer,
            trainable=True,
            name='pnet_conv'
        )
        if self.bn:
            self.bn_layer = BatchNormalization()

        super(Conv2d, self).build(input_shape)

    def call(self, inputs, training=True):
        points = tf.nn.conv2d(inputs, filters=self.w,
                              strides=self.strides, padding=self.padding)
        if self.bn:
            points = self.bn_layer(points, training=training)
        if self.activation:
            points = self.activation(points)

        return points
```

This can then be called when building PointConvSA as `self.np_conv = utils.Conv2d(self.mlp[-1], strides=[1, 1], activation=self.activation, bn=self.bn, kernel=[1, self.mlp[-1]])`.
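As a quick sanity check (a hedged sketch, assuming the Conv2d class above is in scope; the batch size, point count, and channel sizes are illustrative, not values from the repository):

```python
import tensorflow as tf

# Hypothetical shapes: batch of 4, 32 sampled points, a feature axis of
# size 128 (= mlp[-1]) from the preceding MLP, 16 input channels.
layer = Conv2d(filters=128, strides=[1, 1], kernel=[1, 128], bn=True)
dummy = tf.random.normal([4, 32, 128, 16])
out = layer(dummy, training=False)
print(out.shape)  # (4, 32, 1, 128): the [1, 128] kernel collapses axis 2
```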

Hope this is helpful if someone comes across a similar issue.

TejaswiKC · Jun 24 '22