There is an error when running
The following line of code throws an error when run:
x = tf.reshape(x, [N, G, C // G, H, W])
TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [None, 32, 2, None, None]. Consider casting elements to a supported type.
Could you provide the command you tried to run and your TensorFlow version?
The code is a bit long, so I will just provide the main part:

conv1 = tf.nn.conv2d(x, filter=[3, 3, 1, 64], strides=[1, 1, 1, 1], padding='SAME')
conv1_bn = group_norm(conv1)
My TensorFlow version is 1.4 and my Python is 3.6. When I run python main.py, the error is as follows:
x = tf.reshape(x, [N, G, C // G, H, W])
TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [None, 32, 2, None, None]. Consider casting elements to a supported type.
Are you using the norm code I implemented in ops.py or the snippet provided in the paper? I do not think I clearly understand your code, including where you define x and what group_norm is.
I use the norm code; only the name of the function differs from yours.

x = tf.placeholder(tf.float32, [None, None, None, 1], name='x_input')
I cannot debug with this piece of information. My guess is that TF is complaining about using None to specify a tensor shape. Maybe you can try replacing the line

x = tf.reshape(x, [N, G, C // G, H, W])

with

x = tf.reshape(x, tf.TensorShape([N, G, C // G, H, W]))

I can try to help if more information is provided.
Thanks @shaohua0116, I made it work by feeding x a fixed shape. However, the test result was bad when I replaced BN with GN. My batch size is 4.
Can you explain what feeding x a fixed shape means? Do you mean before inputting x into GN?
I also ran into this problem. It seems to be a limitation of TensorFlow rather than a bug: the shape list passed to tf.reshape cannot contain None, but -1 can stand in for a single unknown dimension. You can use

x = tf.reshape(x, [-1, G, C // G, H, W])

It works!
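Note that -1 can absorb only one unknown dimension; with the placeholder above, H and W are also None at graph-construction time. A fuller workaround (a minimal sketch following the paper's NCHW pseudocode, not the exact ops.py code) pulls the dynamic sizes from tf.shape:

import tensorflow as tf

def group_norm(x, G=32, eps=1e-5, scope='group_norm'):
    # Sketch for NCHW input whose batch/spatial dims are unknown at graph time.
    with tf.variable_scope(scope):
        C = x.get_shape().as_list()[1]            # the channel count must be static
        shape = tf.shape(x)                        # dynamic N, H, W
        H, W = shape[2], shape[3]
        x = tf.reshape(x, [-1, G, C // G, H, W])   # -1 absorbs the batch dim
        mean, var = tf.nn.moments(x, [2, 3, 4], keep_dims=True)
        x = (x - mean) / tf.sqrt(var + eps)
        gamma = tf.get_variable('gamma', [1, C, 1, 1],
                                initializer=tf.constant_initializer(1.0))
        beta = tf.get_variable('beta', [1, C, 1, 1],
                               initializer=tf.constant_initializer(0.0))
        return tf.reshape(x, [-1, C, H, W]) * gamma + beta

Because tf.shape is evaluated at run time, this handles any batch and spatial size.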
I have a grayscale 3D image: say N samples with dims H, W, D and C = 1. What would be the proper way to normalize it using your group norm routine?
This is the way I did it:

def group_norm(x, G=32, eps=1e-5, scope='group_norm'):
    with tf.variable_scope(scope):
        N, D, H, W, C = x.get_shape().as_list()
        G = min(G, C)  # cap the group count at the channel count
        # split the channels into G groups of C // G each
        x = tf.reshape(x, [-1, D, H, W, G, C // G])
        # normalize over D, H, W and the within-group channels
        mean, var = tf.nn.moments(x, [1, 2, 3, 5], keep_dims=True)
        x = (x - mean) / tf.sqrt(var + eps)
        # with tf.variable_scope("group_norm", reuse=tf.AUTO_REUSE):
        gamma = tf.get_variable('gamma', [1, 1, 1, 1, C],
                                initializer=tf.constant_initializer(1.0))
        beta = tf.get_variable('beta', [1, 1, 1, 1, C],
                               initializer=tf.constant_initializer(0.0))
        x = tf.reshape(x, [-1, D, H, W, C]) * gamma + beta
    return x
But if I use group norm multiple times, I get a dimension error for beta and gamma, since C differs across layers while the variables live in one shared scope. I still cannot solve the issue by setting reuse to False or None, or by running a local variable initializer. The only stupid workaround I found was creating multiple copies of the function, each with different names for beta and gamma.
Can you suggest a smarter way? Did you run into this issue of the already-existing gamma and beta variables not being shared?
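Since gamma and beta are created under the fixed scope 'group_norm', a second call with a different C collides with the variables from the first. One fix (a sketch assuming TF 1.x variable scoping, not necessarily the author's approach) is to let TensorFlow auto-uniquify the scope via default_name:

import tensorflow as tf

def group_norm(x, G=32, eps=1e-5, scope=None):
    # With scope=None, default_name auto-uniquifies the scope
    # (group_norm, group_norm_1, ...), so every call creates its
    # own gamma/beta sized for that layer's channel count C.
    with tf.variable_scope(scope, default_name='group_norm'):
        N, D, H, W, C = x.get_shape().as_list()
        G = min(G, C)
        x = tf.reshape(x, [-1, D, H, W, G, C // G])
        mean, var = tf.nn.moments(x, [1, 2, 3, 5], keep_dims=True)
        x = (x - mean) / tf.sqrt(var + eps)
        gamma = tf.get_variable('gamma', [1, 1, 1, 1, C],
                                initializer=tf.constant_initializer(1.0))
        beta = tf.get_variable('beta', [1, 1, 1, 1, C],
                               initializer=tf.constant_initializer(0.0))
        x = tf.reshape(x, [-1, D, H, W, C]) * gamma + beta
    return x

Passing an explicit scope per layer (e.g. group_norm(conv1, scope='gn1')) works the same way.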
I ran into the same error. My current solution is to reshape x into [-1, C, D, H*W] and multiply by gamma of shape [1, C, 1, 1], and the same for beta. Have you found any trick to address this issue?
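For reference, a rough sketch of that workaround (the NDHWC layout and the transposes are my assumption; adjust to your own layout):

import tensorflow as tf

def apply_gamma_beta(x, gamma, beta):
    # Sketch: apply gamma/beta of shape [1, C, 1, 1] to an NDHWC tensor
    # whose batch and spatial dims are unknown at graph-construction time.
    C = x.get_shape().as_list()[-1]          # only C must be static
    s = tf.shape(x)                           # dynamic N, D, H, W
    D, H, W = s[1], s[2], s[3]
    x = tf.transpose(x, [0, 4, 1, 2, 3])      # NDHWC -> NCDHW
    x = tf.reshape(x, [-1, C, D, H * W])      # flatten the spatial dims
    x = x * gamma + beta                      # [1, C, 1, 1] broadcasts cleanly
    x = tf.reshape(x, [-1, C, D, H, W])
    return tf.transpose(x, [0, 2, 3, 4, 1])   # back to NDHWC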