How to visualize the output of the generator
Related code to save the outputs is below:

```python
g_objects = sess.run(net_g_test, feed_dict={z_vector: z_sample})
if not os.path.exists(train_sample_directory):
    os.makedirs(train_sample_directory)
g_objects.dump(train_sample_directory + '/biasfree_' + str(epoch))
```
So how can I visualize the output file 'biasfree_200'?
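For reference, `ndarray.dump()` writes a pickle of the array, so the file can at least be read back with `numpy.load`. A minimal sketch (the array shape here is a small stand-in, not the model's actual output shape):

```python
# Sketch: read back an array saved with ndarray.dump().
# dump() pickles the array, so allow_pickle=True is required on load.
import numpy as np

arr = np.random.rand(4, 8, 8, 8, 1)            # small stand-in for g_objects
arr.dump('biasfree_200')                       # same call as in the training code
loaded = np.load('biasfree_200', allow_pickle=True)
print(loaded.shape)                            # (4, 8, 8, 8, 1)
```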
I want to know how to view this data too. Did you resolve it? Can you share with me?
In the training or test phase, I change the saved data format to .mat. There is a Python script to visualize the voxel data. Please see my website: https://github.com/tasx0823/voxel-visualization. Hope it helps.
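Saving the generator output as .mat can be done with `scipy.io.savemat`; a minimal sketch (the variable name `voxels` and the small array shape are just examples, not what the repository actually uses):

```python
# Sketch: save a batch of generated voxel grids to .mat instead of ndarray.dump().
import numpy as np
from scipy.io import savemat, loadmat

g_objects = np.random.rand(2, 8, 8, 8, 1)       # small stand-in for sess.run output
savemat('biasfree_200.mat', {'voxels': g_objects})

# Read it back, e.g. from a visualization script:
voxels = loadmat('biasfree_200.mat')['voxels']
```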
You mean you saved the data as .mat rather than with dump()? And were your training results good? Could you share them? Can we communicate through other media, such as QQ or Facebook?
Ok. We can communicate in QQ. My qq is 532474454.
Hi, I'm stuck with the same problem. Could you please share the code you used to save it as a .mat file?
It may not be the best way to solve it, but I did it by making matplotlib 3D scatter plots and saving them as images to another folder after every n epochs.
Hey, could you please share the implementation of it with me? I tried to do that but the whole code broke. I want to see what I am doing wrong.
Thanks!
Below is the function I use for plotting. I didn't bother with a dynamic solution and just hardcoded 4 images as output via range(4).
```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3d projection)


def plot_arrays_3d(arrs, epoch, img_dir):
    """Save 3D scatter plots of the first 4 voxel grids in `arrs` as PNGs."""
    for i in range(4):
        squeezed = np.squeeze(arrs[i], -1)   # drop the trailing channel axis
        x, y, z = squeezed.nonzero()         # coordinates of occupied voxels
        fig = plt.figure()
        ax = fig.add_subplot(111, projection='3d')
        ax.scatter(x, y, -z, zdir='z', c='red')
        plt.savefig(img_dir + '/biasfree_{0}_{1}.png'.format(epoch, i))
        plt.close(fig)                       # free the figure between iterations
```
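As an alternative to scatter plots, matplotlib's `Axes3D.voxels` (matplotlib ≥ 2.1) draws occupied cells as solid cubes, which can be easier to read for voxel data. A standalone sketch with a dummy occupancy grid (the grid size and threshold here are arbitrary examples):

```python
# Sketch: render a binary voxel grid with ax.voxels instead of a 3D scatter plot.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3d projection)

grid = np.random.rand(16, 16, 16) > 0.9  # dummy occupancy grid (~10% filled)
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.voxels(grid, facecolors="red", edgecolor="k")
plt.savefig("voxels_example.png")
plt.close(fig)
```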
That's fantastic, thank you so much!
Can I ask you a quick question, since you are active on this thread and have used the code: did you get chairs as output in the end? I ran my model with the same parameters as the code, but my plots come out as cubes, not chairs. I can see chair shapes around epoch 4000, but after that they start turning into cubes. Did you change the parameters, or am I doing something wrong?
I used this model to generate 3D objects other than the given ones, and I ran into the same problem: objects started to be generated, but collapsed into cubes after a short time. It was necessary to use lower learning rates for the generator and discriminator to generate objects. The downside was that it took 17,000 epochs to get near what I was looking for, but in the end the results looked good, although the computing time was 20 h on a Tesla V100. The learning rates I used in mit_biasfree.py are:
```python
g_lr = 0.0002
d_lr = 0.000001
z_size = 100
```
It took me a while to fit the hyperparameters to get acceptable results and I don't think they're anywhere near an optimum. Trial and Error makes this task pretty tedious.
If we change the learning rates as mentioned here, will it work? Is there a separate inference script to generate the model? Is testGAN the inference script that generates the chair?
Please answer my questions. Thanks in advance.