
modelscope: running LoRA training fails

growmuye opened this issue 1 year ago • 2 comments

/mnt/workspace/facechain
Looking in indexes: https://mirrors.aliyun.com/pypi/simple

Generating train split: 1 examples [00:00, 278.06 examples/s]
02/28/2024 19:57:59 - INFO - __main__ - ***** Running training *****
02/28/2024 19:57:59 - INFO - __main__ - Num examples = 1
02/28/2024 19:57:59 - INFO - __main__ - Num Epochs = 200
02/28/2024 19:57:59 - INFO - __main__ - Instantaneous batch size per device = 1
02/28/2024 19:57:59 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 1
02/28/2024 19:57:59 - INFO - __main__ - Gradient Accumulation steps = 1
02/28/2024 19:57:59 - INFO - __main__ - Total optimization steps = 200
2024-02-28 19:58:00,217 - modelscope - INFO - Use user-specified model revision: v1.0.0
Resuming from checkpoint /mnt/workspace/.cache/modelscope/damo/face_frombase_c4/face_frombase_c4.bin
Steps: 0%| | 0/200 [00:00<?, ?it/s]
/opt/conda/lib/python3.10/site-packages/diffusers/models/attention_processor.py:1871: FutureWarning: LoRAAttnProcessor is deprecated and will be removed in version 0.26.0. Make sure use AttnProcessor instead by setting LoRA layers to self.{to_q,to_k,to_v,to_out[0]}.lora_layer respectively. This will be done automatically when using LoraLoaderMixin.load_lora_weights
  deprecate(
Traceback (most recent call last):
  File "/mnt/workspace/facechain/facechain/train_text_to_image_lora.py", line 1224, in <module>
    main()
  File "/mnt/workspace/facechain/facechain/train_text_to_image_lora.py", line 1036, in main
    accelerator.backward(loss)
  File "/opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 1964, in backward
    loss.backward(**kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward
    torch.autograd.backward(
  File "/opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Steps: 0%| | 0/200 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/opt/conda/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 47, in main
    args.func(args)
  File "/opt/conda/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1023, in launch_command
    simple_launcher(args)
  File "/opt/conda/lib/python3.10/site-packages/accelerate/commands/launch.py", line 643, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/bin/python', '/mnt/workspace/facechain/facechain/train_text_to_image_lora.py', '--pretrained_model_name_or_path=ly261666/cv_portrait_model', '--revision=v2.0', '--sub_path=film/film', '--output_dataset_name=/mnt/workspace/facechain/worker_data/qw/training_data/ly261666/cv_portrait_model/person1', '--caption_column=text', '--resolution=512', '--random_flip', '--train_batch_size=1', '--num_train_epochs=200', '--checkpointing_steps=5000', '--learning_rate=1.5e-04', '--lr_scheduler=cosine', '--lr_warmup_steps=0', '--seed=42', '--output_dir=/mnt/workspace/facechain/worker_data/qw/ly261666/cv_portrait_model/person1', '--lora_r=4', '--lora_alpha=32', '--lora_text_encoder_r=32', '--lora_text_encoder_alpha=32', '--resume_from_checkpoint=fromfacecommon']' returned non-zero exit status 1.
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/gradio/queueing.py", line 407, in call_prediction
    output = await route_utils.call_process_api(
  File "/opt/conda/lib/python3.10/site-packages/gradio/route_utils.py", line 226, in call_process_api
    output = await app.get_blocks().process_api(
  File "/opt/conda/lib/python3.10/site-packages/gradio/blocks.py", line 1550, in process_api
    result = await self.call_function(
  File "/opt/conda/lib/python3.10/site-packages/gradio/blocks.py", line 1185, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/opt/conda/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/opt/conda/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2134, in run_sync_in_worker_thread
    return await future
  File "/opt/conda/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/opt/conda/lib/python3.10/site-packages/gradio/utils.py", line 661, in wrapper
    response = f(*args, **kwargs)
  File "/mnt/workspace/facechain/app.py", line 804, in run
    train_lora_fn(base_model_path=base_model_path,
  File "/mnt/workspace/facechain/app.py", line 207, in train_lora_fn
    raise gr.Error("训练失败 (Training failed)")
gradio.exceptions.Error: '训练失败 (Training failed)'

growmuye · Feb 28 '24 11:02
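For context: this RuntimeError is raised by torch.autograd when backward() is called on a loss that is not connected to any parameter with requires_grad=True, which usually means every UNet/LoRA parameter ended up frozen. A minimal, generic PyTorch sketch of the failure mode (not facechain code) looks like this:

import torch

# Freeze every parameter so no gradient path can exist.
model = torch.nn.Linear(4, 1)
for p in model.parameters():
    p.requires_grad_(False)

x = torch.randn(2, 4)
loss = model(x).mean()   # loss.grad_fn is None because nothing requires grad
loss.backward()          # RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn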

Hi, did you figure this out: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn?

nikocraft · Mar 16 '24 00:03

You can refer to https://github.com/modelscope/facechain/issues/527; you need to add a small piece of code.

ultimatech-cn · Mar 19 '24 03:03
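The linked issue is not reproduced here, but as a generic, hypothetical sanity check (the function name and arguments below are assumptions, not names from facechain's train_text_to_image_lora.py), one way to confirm the LoRA weights are actually trainable before calling accelerator.backward(loss) is:

import torch

def check_trainable(module: torch.nn.Module, loss: torch.Tensor) -> None:
    # Hypothetical helper: `module` stands for whatever object holds the LoRA weights
    # in the training script. Run this right before the backward pass.
    trainable = [name for name, p in module.named_parameters() if p.requires_grad]
    if not trainable:
        raise RuntimeError("no parameter requires grad; the LoRA layers were never unfrozen")
    if loss.grad_fn is None:
        raise RuntimeError("loss has no grad_fn; it is detached from the trainable parameters")

If either check fires, the LoRA parameters need to be unfrozen or re-attached to the model before the backward pass.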

Please try out the newest train-free version, facechain-fact, which runs inference in about 10 seconds.

sunbaigui · Jun 04 '24 09:06