yuangan

31 comments by yuangan

I met the same problem. My TensorFlow version is 1.4.

> Did not work, the problem still exists. Very strange. ![extract](https://user-images.githubusercontent.com/47731180/57853297-f3b7ca00-7817-11e9-82bb-08f973c65ff7.png) ![N_0000000376_N_0000000283](https://user-images.githubusercontent.com/47731180/57853310-ffa38c00-7817-11e9-8267-cf3f86e3c244.jpg)

I meet the same problem in testing. I succeed with the CelebA data from Google Drive, but fail on my own...

This might be caused by the GPU device. I tried a 2080 Ti, and the results were correct.

Hi, thank you for your attention. As an application of our proposed modules, we achieve this in a direct way: we optimize the latent code z with the CLIP loss. Given...
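
For reference, a minimal sketch of this kind of latent optimization, assuming a generic frozen generator; the `ToyGenerator`, the text prompt, the latent size, and the optimizer settings below are placeholders for illustration, not the actual EAT/CLIP-editing code:

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model.requires_grad_(False)

# Placeholder generator, purely for illustration: maps a latent z to a 3x64x64 image.
class ToyGenerator(torch.nn.Module):
    def __init__(self, z_dim=512):
        super().__init__()
        self.fc = torch.nn.Linear(z_dim, 3 * 64 * 64)
    def forward(self, z):
        return torch.tanh(self.fc(z)).view(-1, 3, 64, 64)

generator = ToyGenerator().to(device)
generator.requires_grad_(False)

# Encode the target text prompt once; the prompt itself is just an example.
text_tokens = clip.tokenize(["a happy talking face"]).to(device)
with torch.no_grad():
    text_feat = clip_model.encode_text(text_tokens)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# Optimize only the latent code z; the generator and CLIP stay frozen.
z = torch.randn(1, 512, device=device, requires_grad=True)
optimizer = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    image = generator(z)
    image = F.interpolate(image, size=224, mode="bilinear", align_corners=False)
    # CLIP's own preprocessing (resize/normalization pipeline) is omitted here for brevity.
    img_feat = clip_model.encode_image(image)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    loss = 1.0 - (img_feat * text_feat).sum(dim=-1).mean()  # cosine-distance CLIP loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```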

Thank you for your continued attention. The answer is yes. We are considering releasing a script for zero-shot video editing this week. This is an interesting phenomenon, and it...

I really appreciate your feedback. @G-force78 I've uploaded the zero-shot editing code, and you can find more details [here](https://github.com/yuangan/EAT_code?tab=readme-ov-file#zero-shot-editing). It has been a long journey for me to develop and...

> Hi, I'm not sure what this refers to?
>
> Traceback (most recent call last): File "/content/EAT_code/prompt_st_dp_eam3d_mapper_full.py", line 162, in train(text, config, generator, None, kp_detector, audio2kptransformer, mapper, sidetuning, opt.checkpoint,...

> I did update the files, maybe I've missed something but here it is
>
> https://github.com/yuangan/EAT_code/blob/622d5460d8308177e71edc5ee40ed0422a54ca82/train_transformer.py#L262
>
> GeneratorFullModelBatchDeepPromptSTEAM3D

Hi, I can find [GeneratorFullModelBatchDeepPromptSTEAM3D](https://github.com/yuangan/EAT_code/blob/main/modules/model_transformer.py#L655) and [GeneratorFullModelBatchDeepPromptSTEAM3DNewStyle3](https://github.com/yuangan/EAT_code/blob/main/modules/model_transformer.py#L1479) in `modules/model_transformer.py`. Could...
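
For a quick sanity check, a minimal import of those classes (assuming the repository root is the current working directory; this snippet is only an illustration, not part of the original reply):

```python
# The module path and class names are taken from the links above; this only
# verifies that the classes are importable, nothing more.
from modules.model_transformer import (
    GeneratorFullModelBatchDeepPromptSTEAM3D,
    GeneratorFullModelBatchDeepPromptSTEAM3DNewStyle3,
)

print(GeneratorFullModelBatchDeepPromptSTEAM3D, GeneratorFullModelBatchDeepPromptSTEAM3DNewStyle3)
```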

Hi, this setting should be there for compatibility with resuming training in the first stage, and it does not affect the second-stage training. In the emotion adaptation stage, the a2kp model adds some EAM layers and only those layers are optimized, so the set of optimized parameters has changed and the first-stage optimizer_a2kp can no longer be used. What you see here is normal.
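
To illustrate the idea, here is a minimal sketch of freezing the original parameters and building a fresh optimizer over only the newly added layers; the `ToyA2KP` module, the `eam` attribute name, and the learning rate are hypothetical placeholders, not the repo's actual code:

```python
import torch
import torch.nn as nn

# Toy stand-in for the a2kp model: a stage-1 backbone plus an added "EAM" layer.
class ToyA2KP(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(64, 64)   # trained in stage 1
        self.eam = nn.Linear(64, 64)        # added for emotion adaptation

model = ToyA2KP()

# Freeze everything except the EAM layers, then build a fresh optimizer over
# only those parameters; the stage-1 optimizer covered a different parameter
# set, so its saved state cannot simply be reused here.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("eam")

optimizer_a2kp_eam = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=2e-4
)
```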

mel is the mel-spectrogram of the speech audio and is one of the network's inputs. sync_mel is used to compute the sync loss. For the reason behind sync_mel's shape, please see the wav2vec paper and its code. For the implementation, see the [definition](https://github.com/yuangan/EAT_code/blob/8d1fd414b01e381f2fd6f2ea661458d683d6c261/frames_dataset_transformer25.py#L628) and the [usage](https://github.com/yuangan/EAT_code/blob/8d1fd414b01e381f2fd6f2ea661458d683d6c261/frames_dataset_transformer25.py#L722).
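
For context, a minimal sketch of extracting a mel-spectrogram from speech with librosa; the sample rate, FFT size, hop length, and number of mel bins below are common defaults and only assumptions, since the repo's own extraction in `frames_dataset_transformer25.py` may use different parameters:

```python
import librosa
import numpy as np

# "example.wav" is a placeholder path to any 16 kHz speech clip.
wav, sr = librosa.load("example.wav", sr=16000)

# 80-bin mel-spectrogram with a 50 ms window and 12.5 ms hop (assumed values).
mel = librosa.feature.melspectrogram(
    y=wav, sr=sr, n_fft=800, hop_length=200, n_mels=80
)
log_mel = np.log(np.clip(mel, 1e-5, None))  # log-compress, as is common for speech features

print(log_mel.shape)  # (80, T): 80 mel bins over T frames
```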