一口酥

16 comments by 一口酥

> Hi,
>
> dependency:
> pytorch 1.4.0,
> CUDA 10.2
> Pytorch_encoding master branch
>
> the following code is run on a single GPU (GeForce RTX 2080, 8GB):
> ...

Thanks a lot for your reply; I'll write down as many details as I can think of. The dataset I use is `celebA 256x256`, 30000 images in total. Landmarks are computed by...

Hi, thanks for your patience and help. I'll try a small dataset and a large landmark loss weight, and remove the regularization on translation and rotation. To be sure I understand your suggestion:...
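The "remove the regularization on translation and rotation" step could look like the sketch below: an L2 penalty that covers only the shape, expression, and albedo coefficients, with the pose terms behind an optional flag. All names and weight values here are illustrative assumptions, not the actual NextFace code.

```python
# Hypothetical sketch: L2 coefficient regularizer with the pose (translation/
# rotation) terms made optional. Names and weights are placeholders.

def mean_sq(xs):
    """Mean of squared values, the building block of an L2 coefficient penalty."""
    return sum(x * x for x in xs) / len(xs)

def regularization(shape, expression, albedo, translation, rotation,
                   w_shape=0.001, w_exp=0.001, w_albedo=0.001,
                   regularize_pose=False, w_pose=0.001):
    """With regularize_pose=False, translation and rotation are left unconstrained."""
    reg = (w_shape * mean_sq(shape)
           + w_exp * mean_sq(expression)
           + w_albedo * mean_sq(albedo))
    if regularize_pose:
        reg += w_pose * (mean_sq(translation) + mean_sq(rotation))
    return reg
```

With the flag off, even large pose values contribute nothing to the penalty, which is the behavior the comment describes wanting to test.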

I tried `resnet152` with the loss:

```python
lmloss = landmarkLoss(self.BFM.landmarksAssociation, cam_vertices, landmarks, focals, self.cam_center)
# same reg as NextFace code runstep2
reg = 0.0001 * sh_coeffs.pow(2).mean() + 0.001 *...
```
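For intuition, the landmark term above boils down to a mean squared pixel distance between projected and ground-truth 2D landmarks. The minimal sketch below shows just that core; the real `landmarkLoss` in the snippet also handles the camera projection via `focals` and `cam_center`, which is omitted here.

```python
# Minimal sketch of a 2D landmark loss: mean squared pixel distance between
# predicted (projected) and ground-truth landmark positions.

def landmark_loss(pred, target):
    """pred, target: equal-length lists of (x, y) pixel coordinates."""
    assert len(pred) == len(target)
    total = 0.0
    for (px, py), (tx, ty) in zip(pred, target):
        total += (px - tx) ** 2 + (py - ty) ** 2
    return total / len(pred)
```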

Thanks so much! One small question: does `3. initialize the fully connected layer for the focal to a positive number` mean setting the weights of the FC layer to a positive value, or...

I appreciate your patience. In the Encoder class:

```python
from torchvision.models import resnet152, ResNet152_Weights

self.encoder = resnet152(weights=ResNet152_Weights.DEFAULT)
self.shapeFC = nn.Linear(1000, 80)
self.expFC = nn.Linear(1000, 75)
self.albedoFC = nn.Linear(1000, 80)
......
nn.init.zeros_(self.focalFC.weight)...
```
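One common reading of "initialize the focal FC to a positive number" is: zero the weights (as in the snippet above) and set the bias so the head outputs a sensible positive focal at step 0, with a softplus keeping it positive afterwards. The sketch below is framework-agnostic and purely illustrative; `FocalHead`, `init_focal`, and the 1000-d feature size are assumptions, not the actual NextFace architecture.

```python
import math

# Hypothetical focal head: zero weights + a bias chosen so that
# softplus(bias) == init_focal at initialization; softplus keeps the
# predicted focal positive throughout training.

def softplus(x):
    return math.log1p(math.exp(x))

class FocalHead:
    def __init__(self, in_features=1000, init_focal=500.0):
        self.weight = [0.0] * in_features  # zero weights, as in the snippet
        # pick the bias so softplus(bias) equals init_focal at step 0
        # (for very large init_focal, softplus(x) ~= x, so use it directly)
        self.bias = math.log(math.expm1(init_focal)) if init_focal < 700 else init_focal

    def __call__(self, features):
        pre = sum(w * f for w, f in zip(self.weight, features)) + self.bias
        return softplus(pre)  # always a positive focal
```

At initialization the weights contribute nothing, so the head returns `init_focal` for any input, which gives the optimizer a reasonable positive starting point.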

Do you mean the weights of those losses? They are all multiplied by the same weight of `1`: ```loss = lmLoss + regLoss + photoLoss``` Good news: with your instructions, my...
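Making those implicit weights of `1` explicit keeps them tunable later. A trivial sketch (the weight values are placeholders, not recommendations):

```python
# Weighted sum of the three loss terms; with the defaults this reduces to
# the plain sum loss = lmLoss + regLoss + photoLoss used in the comment.

def total_loss(lm_loss, reg_loss, photo_loss, w_lm=1.0, w_reg=1.0, w_photo=1.0):
    return w_lm * lm_loss + w_reg * reg_loss + w_photo * photo_loss
```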

Hey there, I have a new question while trying the decoder. It converges, but the `diffuse texture loss` is much smaller than the `spec texture loss`. For instance, at the very beginning of...
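One generic way to cope with two loss terms of very different magnitudes is to rescale each by a running estimate of its own size so both contribute comparably. The sketch below is a standard balancing trick offered as a possible diagnosis aid, not something taken from the NextFace code; the class name and momentum value are assumptions.

```python
# Hypothetical balancer: divide each texture term by an exponential moving
# average of its magnitude, so neither the diffuse nor the spec loss dominates.

class BalancedPair:
    def __init__(self, momentum=0.9):
        self.momentum = momentum
        self.avg_diffuse = None
        self.avg_spec = None

    def __call__(self, diffuse_loss, spec_loss):
        # update running magnitude estimates of each term
        if self.avg_diffuse is None:
            self.avg_diffuse, self.avg_spec = diffuse_loss, spec_loss
        else:
            m = self.momentum
            self.avg_diffuse = m * self.avg_diffuse + (1 - m) * diffuse_loss
            self.avg_spec = m * self.avg_spec + (1 - m) * spec_loss
        # normalize each term by its running magnitude (eps avoids divide-by-zero)
        eps = 1e-8
        return (diffuse_loss / (self.avg_diffuse + eps)
                + spec_loss / (self.avg_spec + eps))
```

In a PyTorch setting the running averages should be detached from the graph so the normalization acts as a fixed per-step weight rather than part of the gradient.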