
Can you post a link to the demo results?

Open kaisark opened this issue 8 years ago • 4 comments

AWS P2 instance might be the best way to go for training the model...

Can you post a link to the demo results (gif/mp4)? Some folks might want to see the quality of results before paying for AWS...

Example: replacing Nic Cage as Neo in The Matrix.

I posted two of my results from running convert on my Nvidia TX1 (GPU).

Thanks.


```
python faceswap.py convert -i ~/faceswap/photo/trump/ -o ~/faceswap/output/ -m ~/faceswap/models/
Using TensorFlow backend.
Input Directory: /home/nvidia/faceswap/photo/trump
Output Directory: /home/nvidia/faceswap/output
Starting, this may take a while...
2018-02-01 00:19:08.897026: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:857] ARM64 does not support NUMA - returning NUMA node zero
2018-02-01 00:19:08.897206: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties:
name: NVIDIA Tegra X1
major: 5 minor: 3 memoryClockRate (GHz) 0.9984
pciBusID 0000:00:00.0
Total memory: 3.89GiB
Free memory: 1010.12MiB
2018-02-01 00:19:08.897284: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0
2018-02-01 00:19:08.897339: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0: Y
2018-02-01 00:19:08.897401: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0)
2018-02-01 00:19:59.365563: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.14GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-02-01 00:19:59.413316: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.13GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-02-01 00:19:59.453769: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.07GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-02-01 00:19:59.453875: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 547.21MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-02-01 00:19:59.475675: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.04GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
```
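Those `bfc_allocator` warnings come from TensorFlow trying to pre-reserve more GPU memory than the TX1's shared ~4 GB can supply. A minimal sketch of one common mitigation in TensorFlow 1.x, assuming a session-based setup (this is illustrative, not code from the faceswap repo):

```python
# Hypothetical config fragment: tell TF 1.x to allocate GPU memory on demand
# instead of grabbing a large block up front, which is what triggers the
# BFC allocator warnings on small-memory boards like the Tegra X1.
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grow allocations as needed
# Alternatively, hard-cap the fraction of GPU memory TF may claim:
# config.gpu_options.per_process_gpu_memory_fraction = 0.5
session = tf.Session(config=config)
```

The warnings are informational, as the log itself notes, but on a 1 GB-free device they usually also mean convert is running well below full speed.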

```
Images found:     376
Images processed: 376
Faces detected:   308
```
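From those counts, the face detector found a usable face in roughly 82% of frames; the remainder are passed through unconverted. A quick check of the arithmetic:

```python
# Sanity check on the convert run's detection rate, using the numbers
# reported in the log above.
images_processed = 376
faces_detected = 308

rate = faces_detected / images_processed
print(f"Detection rate: {rate:.1%}")  # about 81.9% of frames had a detected face
```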

Done!

kaisark avatar Jan 31 '18 19:01 kaisark

I'm defaulting to P2s; at a 10x price difference, the P3s don't seem worth it. That said, there's nothing stopping someone from using one by modifying the variable at the top of code/aws.js.
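The swap the comment describes is just changing one instance-type value before launch. A rough sketch of the same idea in Python with boto3; `INSTANCE_TYPE`, `AMI_ID`, and `KEY_NAME` are placeholders, not values from the repo's aws.js:

```python
# Hypothetical sketch: the instance type is a single constant you can flip
# between p2 (K80) and p3 (V100, ~10x the price) before launching.
INSTANCE_TYPE = "p2.xlarge"   # change to "p3.2xlarge" for a V100
AMI_ID = "ami-XXXXXXXX"       # placeholder: a Deep Learning AMI in your region
KEY_NAME = "my-key"           # placeholder SSH key pair name

launch_kwargs = {
    "ImageId": AMI_ID,
    "InstanceType": INSTANCE_TYPE,
    "KeyName": KEY_NAME,
    "MinCount": 1,
    "MaxCount": 1,
}

# Launching would then be a single boto3 call (left commented here):
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# ec2.run_instances(**launch_kwargs)
print(launch_kwargs["InstanceType"])
```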

Regarding a demo, I'd be happy to. For the sake of comparison, would you be able to share the facesets and video you used to create the above?

permosegaard avatar Feb 01 '18 09:02 permosegaard

FYI, I think Reddit closed down the deepfakes community...

Whole project with training images and trained model (~300MB): https://anonfile.com/p7w3m0d5be/face-swap.zip

https://github.com/deepfakes/faceswap-playground/issues/1

kaisark avatar Feb 09 '18 07:02 kaisark

Did you have a chance to run the demo?

dataset: https://drive.google.com/file/d/1ZQiIxn31vda5cwEMbbjj1l4qMkunU8lu/view?usp=sharing

kaisark avatar Feb 19 '18 04:02 kaisark

Kind of late to the party, but this repo looks promising. There's another repo, https://github.com/MitchellX/deepfake-models, that's yet to be published and looks kind of amazing: https://mitchellx.github.io/#video

This code may be of help to me, but I wonder how long a 10-second video would take. I guess I gotta dig deeper. And if there were pre-trained models that didn't need the training step, they could just be swapped in... like making everyone/anyone Nic Cage. With no training, how long would that take? I found the AWS cards old and slow compared to a 2080/3080. Presumably AWS will buy up new cards to replace the K80s soon.
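The "how long for a 10-second clip" question comes down to frame count times per-frame convert time. A back-of-envelope estimate, where the per-frame figure is purely an assumption (it varies wildly between a K80 and a 3080):

```python
# Rough convert-time estimate for a short clip. seconds_per_frame is an
# assumed number for illustration, not a measured benchmark.
clip_seconds = 10
fps = 30
seconds_per_frame = 0.5   # assumption: per-frame detect+swap time

frames = clip_seconds * fps
estimate = frames * seconds_per_frame
print(f"{frames} frames, ~{estimate:.0f} s of convert time")
```

Training dominates the total cost; with an already-trained model, convert itself is a one-pass job over the frames like this.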

johndpope avatar Jan 10 '21 07:01 johndpope