
Official Implementation for "HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach"

17 HairFastGAN issues (sorted by recently updated)

Hello HairFastGAN Team, First of all, thank you for your incredible work on the HairFastGAN project. It's a fascinating and valuable tool for the community. I would like to request...

First of all, thank you again for open-sourcing such excellent work. I am trying to run **blending_train.py**, but when I reach the following section:
```
class Blending_dataset(Dataset):
    def __init__(self, exps, ...
```

```
(hairenv) PS E:\Python Project\HairFastGAN> pip list
Package              Version
-------------------- ------------
addict               2.4.0
appdirs              1.4.4
build                1.2.1
CacheControl         0.14.0
certifi              2024.2.2
charset-normalizer   3.3.2
cleo                 2.1.0
click                8.1.7
clip                 1.0
colorama             ...
```

Can you describe the training process? How can one train this on their own dataset...?

Regarding the latent_avg.pt weights, how should I obtain Savg?
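
Not an official answer, but in pSp/e4e-style codebases the latent_avg tensor is usually just the mean output of the StyleGAN mapping network over many random z samples. Below is a minimal sketch of that estimate, assuming a rosinality-style StyleGAN2 `Generator` whose mapping network is exposed as `generator.style`; that attribute name and the sample count are assumptions, and whether S_avg is expected in W space or StyleSpace is for the authors to confirm.

```python
import torch

@torch.no_grad()
def estimate_latent_avg(generator, n_samples=100_000, z_dim=512, device="cuda"):
    # Draw many z vectors, map them to W, and average the result; pSp/e4e-style
    # repos typically save this (1, z_dim) tensor as latent_avg.pt.
    # `generator.style` is the assumed name of the mapping network (z -> w).
    z = torch.randn(n_samples, z_dim, device=device)
    w = generator.style(z)
    return w.mean(dim=0, keepdim=True)
```

The result could then be written out with `torch.save(estimate_latent_avg(g), 'latent_avg.pt')`.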

Is no one maintaining this anymore? I'm opening an issue to show support. Not sure how it will perform; I'll give it a try later and see whether it runs.

I successfully set up the project and loaded the models without any errors. However, the inference output is just a flat-colored image (see attached screenshot). I used the provided images...

Pillow 10.0.0 is throwing an error:
```
ImportError: cannot import name 'is_directory' from 'PIL._util' (/usr/local/lib/python3.11/dist-packages/PIL/_util.py)
```
You can update this line:
```
!pip install --upgrade Pillow face_alignment dill==0.2.7.1 addict fpie ...
```

I have tried many times to install the project; however, I ran into many issues with ninja, the C++ library, etc. Would it be possible to create a Docker image for this project?

Hello, it won't start; I tried limiting VRAM consumption, but nothing helped. What can I do? I am trying to run it on an RTX 4060 under Ubuntu 24.04 LTS:
```python
torch.cuda.OutOfMemoryError: CUDA out ...
```
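
Not an official fix, but on an 8 GB card like the RTX 4060 the usual first steps are to run inference with gradients disabled, use half precision, and relax the CUDA allocator. A minimal sketch under those assumptions follows; `run_low_memory` and its arguments are hypothetical, not part of the repo.

```python
import os

# Let the CUDA caching allocator grow segments instead of fragmenting
# (supported in recent PyTorch releases); set this before CUDA is initialized.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

import torch

@torch.inference_mode()
def run_low_memory(model, *inputs):
    """Run a forward pass with no autograd state and fp16 activations."""
    model.eval()
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        return model(*inputs)
```

If it still does not fit, the remaining options are lowering the working resolution or moving parts of the pipeline to CPU, both of which would need changes inside the repo itself.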