Hanling Wang
> How can I use this code for the CIFAR-10 dataset? > This is my code, which runs on CIFAR-10, but I encountered some errors while running it, as a...
I was confused by the same issue, but after rethinking it, the code seems right to me. Here's the link: https://github.com/pjreddie/darknet/blob/a3714d0a2bf92c3dea84c4bea65b2b0c64dbc6b1/src/blas.c#L9 Consider YOLOv2, where w=h=26, c=512 and out_w=out_h=13, out_c=2048. The...
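For intuition, here is a minimal NumPy sketch of the shape transformation that reorg layer performs (a space-to-depth with stride 2). This only illustrates the shapes; darknet's `reorg_cpu` uses its own index mapping, so the element order may differ from this sketch.

```python
import numpy as np

def reorg(x, stride=2):
    """Space-to-depth sketch: (c, h, w) -> (c*stride*stride, h//stride, w//stride)."""
    c, h, w = x.shape
    # split each spatial axis into (coarse, fine) blocks of size `stride`
    x = x.reshape(c, h // stride, stride, w // stride, stride)
    # move the fine blocks next to the channel axis
    x = x.transpose(0, 2, 4, 1, 3)
    return x.reshape(c * stride * stride, h // stride, w // stride)

# YOLOv2 numbers from the comment above: 26x26x512 -> 13x13x2048
x = np.random.rand(512, 26, 26)
y = reorg(x)
print(y.shape)  # (2048, 13, 13)
```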
> I get these private-field errors even after making the respective changes as mentioned above: > Compiling minigrep v0.1.0 (C:\Users\veeks\USERPROFILES\projects\minigrep) > error[E0616]: field `query` of struct `minigrep::Config` is private...
I've successfully converted it to a TensorRT version for inference. Kindly check it here: https://github.com/kongyanye/EfficientDet-TensorRT
disc_cost = tf.reduce_mean(disc_fake) - tf.reduce_mean(disc_real) + LAMBDA*gradient_penalty

gradient_penalty can't be negative, but the other two terms can be.
D_loss includes the Wasserstein loss (K.mean(y_true * y_pred)), so it can be negative.
In the code given by the author of WGAN-GP (https://github.com/igul222/improved_wgan_training/blob/master/gan_64x64.py), the losses are defined as below:

disc_cost = tf.reduce_mean(disc_fake) - tf.reduce_mean(disc_real) + LAMBDA*gradient_penalty
gen_cost = -tf.reduce_mean(disc_fake)

When generating output for the discriminator,...
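A small numeric sketch of those two loss definitions, using NumPy in place of TensorFlow and made-up critic scores, shows why disc_cost can go negative: once the critic scores reals above fakes, the first two terms sum to a negative number that the (non-negative) gradient-penalty term need not offset.

```python
import numpy as np

LAMBDA = 10.0
# hypothetical critic outputs on one batch (illustrative values only)
disc_real = np.array([1.5, 2.0, 1.0])    # critic scores for real samples
disc_fake = np.array([-0.5, 0.0, -1.0])  # critic scores for generated samples
gradient_penalty = 0.02                  # small once the gradient norm is near 1

disc_cost = disc_fake.mean() - disc_real.mean() + LAMBDA * gradient_penalty
gen_cost = -disc_fake.mean()

print(disc_cost)  # -1.8: negative, because reals score above fakes
print(gen_cost)   # 0.5
```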
The loss here is computed over a single batch (64), and with such a small batch it is quite noisy. If you want a more stable loss curve, apply a moving average to the loss or increase the batch size.
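The moving-average suggestion above can be sketched as a simple exponential moving average over the per-batch losses (the function name and sample values here are illustrative, not from the repo):

```python
import random

def ema(values, beta=0.9):
    """Exponential moving average: smooths a noisy per-batch loss curve."""
    out, avg = [], None
    for v in values:
        avg = v if avg is None else beta * avg + (1 - beta) * v
        out.append(avg)
    return out

random.seed(0)
# hypothetical noisy per-batch losses fluctuating around 1.0
losses = [1.0 + random.uniform(-0.5, 0.5) for _ in range(200)]
smoothed = ema(losses)
# after the warm-up, the smoothed curve varies much less than the raw one
```

A larger beta (e.g. 0.99) averages over more batches and gives an even flatter curve, at the cost of lagging further behind the true trend.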
I figured out the problem. The API source file should be placed under $GOPATH/src/git.fd.io/govpp.git, and the new package is indeed called "interfaces" instead of "interface".