person model training error
GPU memory is sufficient.
** Setting base model to SD1.5 **
--------uuid: qw
----------work_dir: /content/facechain/worker_data/qw/ly261666/cv_portrait_model/person1
2023-12-23 13:51:02,739 - modelscope - INFO - Use user-specified model revision: v1.0.0
/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py:65: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'CPUExecutionProvider'
warnings.warn(
2023-12-23 13:51:07,248 - modelscope - INFO - PyTorch version 2.1.0+cu121 Found.
2023-12-23 13:51:07,251 - modelscope - INFO - TensorFlow version 2.15.0 Found.
2023-12-23 13:51:07,251 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
2023-12-23 13:51:07,290 - modelscope - INFO - Loading done! Current index file version is 1.10.0, with md5 1f7ecbe335b689008f5303bd30793944 and a total number of 946 components indexed
2023-12-23 13:51:09.337756: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-12-23 13:51:09.337810: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-12-23 13:51:09.339627: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-12-23 13:51:10.646206: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/content/facechain/app.py:1276: GradioDeprecationWarning: The style method is deprecated. Please set these arguments in the constructor instead.
output_images = gr.Gallery(label='Output', show_label=False).style(columns=3, rows=2, height=600,
[['/content/facechain/resources/inpaint_template/5.jpg'], ['/content/facechain/resources/inpaint_template/4.jpg'], ['/content/facechain/resources/inpaint_template/2.jpg'], ['/content/facechain/resources/inpaint_template/1.jpg'], ['/content/facechain/resources/inpaint_template/3.jpg']]
/content/facechain/app.py:1379: GradioDeprecationWarning: The style method is deprecated. Please set these arguments in the constructor instead.
output_images = gr.Gallery(
[['resources/tryon_garment/garment4.png'], ['resources/tryon_garment/garment1.png'], ['resources/tryon_garment/garment2.png'], ['resources/tryon_garment/garment3.png']]
/content/facechain/app.py:1530: GradioDeprecationWarning: The style method is deprecated. Please set these arguments in the constructor instead.
output_images = gr.Gallery(
2023-12-23 13:51:15,736 - modelscope - INFO - Use user-specified model revision: v4.0
2023-12-23 13:51:18,822 - modelscope - INFO - Use user-specified model revision: v1.0.1
Downloading: 100%|████████████████████████████████████████████████████████████████| 121k/121k [00:00<00:00, 2.15MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████| 118/118 [00:00<00:00, 642kB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████| 146k/146k [00:00<00:00, 2.52MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████| 217M/217M [00:02<00:00, 94.7MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████| 97.8M/97.8M [00:00<00:00, 111MB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████| 12.4k/12.4k [00:00<00:00, 42.0MB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████| 51.2M/51.2M [00:00<00:00, 96.8MB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████| 4.90k/4.90k [00:00<00:00, 18.6MB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████| 104M/104M [00:00<00:00, 117MB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████| 76.4k/76.4k [00:00<00:00, 2.24MB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████| 82.0k/82.0k [00:00<00:00, 2.31MB/s]
Process Process-1:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/modelscope/utils/import_utils.py", line 450, in _get_module
requires(module_name_full, requirements)
File "/usr/local/lib/python3.10/dist-packages/modelscope/utils/import_utils.py", line 353, in requires
raise ImportError(''.join(failed))
ImportError:
modelscope.models.nlp.chatglm2.tokenization requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones
that match your environment.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/content/facechain/facechain/inference.py", line 25, in _data_process_fn_process
Blipv2()(input_img_dir)
File "/content/facechain/facechain/data_process/preprocessing.py", line 205, in init
self.skin_retouching = pipeline('skin-retouching-torch', model='damo/cv_unet_skin_retouching_torch', model_revision='v1.0.1')
File "/usr/local/lib/python3.10/dist-packages/modelscope/pipelines/builder.py", line 163, in pipeline
clear_llm_info(kwargs)
File "/usr/local/lib/python3.10/dist-packages/modelscope/pipelines/builder.py", line 227, in clear_llm_info
from .nlp.llm_pipeline import ModelTypeHelper
File "/usr/local/lib/python3.10/dist-packages/modelscope/pipelines/nlp/llm_pipeline.py", line 15, in
modelscope.models.nlp.chatglm2.tokenization requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones that match your environment.
instance_data_dir /content/facechain/worker_data/qw/training_data/ly261666/cv_portrait_model/person1
** project dir: /content/facechain
** params: >base_model_path:ly261666/cv_portrait_model, >revision:v2.0, >sub_path:film/film, >output_img_dir:/content/facechain/worker_data/qw/training_data/ly261666/cv_portrait_model/person1, >work_dir:/content/facechain/worker_data/qw/ly261666/cv_portrait_model/person1, >lora_r:4, >lora_alpha:32
The following values were not passed to accelerate launch and had defaults used instead:
--num_processes was set to a value of 1
--num_machines was set to a value of 1
--mixed_precision was set to a value of 'no'
--dynamo_backend was set to a value of 'no'
To avoid this warning pass in values for each of the problematic parameters or run accelerate config.
2023-12-23 13:51:42.913890: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-12-23 13:51:42.913941: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-12-23 13:51:42.915917: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-12-23 13:51:44.138992: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-12-23 13:51:44,753 - modelscope - INFO - PyTorch version 2.1.0+cu121 Found.
2023-12-23 13:51:44,755 - modelscope - INFO - TensorFlow version 2.15.0 Found.
2023-12-23 13:51:44,755 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
2023-12-23 13:51:44,793 - modelscope - INFO - Loading done! Current index file version is 1.10.0, with md5 1f7ecbe335b689008f5303bd30793944 and a total number of 946 components indexed
12/23/2023 13:51:46 - INFO - __main__ - Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda
Mixed precision type: no
2023-12-23 13:51:47,726 - modelscope - INFO - Use user-specified model revision: v2.0
{'dynamic_thresholding_ratio', 'variance_type', 'clip_sample_range', 'sample_max_value', 'thresholding'} was not found in config. Values will be initialized to default values.
{'force_upcast'} was not found in config. Values will be initialized to default values.
{'reverse_transformer_layers_per_block', 'attention_type', 'dropout'} was not found in config. Values will be initialized to default values.
Traceback (most recent call last):
File "/content/facechain/facechain/train_text_to_image_lora.py", line 1224, in
Do you know how I can fix this error?
Has it been resolved? I have the same error.
No, it is still the same error. What's happening? Can you help me figure out how to fix it?
Maybe you can try: pip install sentencepiece
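For reference, a quick sanity check after installing (a minimal sketch, assuming the same FaceChain/modelscope environment; the model ID and revision are copied from the traceback above, not verified independently):

```python
# Sketch: confirm sentencepiece is importable and that the skin-retouching
# pipeline from the traceback can now be constructed.
import importlib.util

# The ImportError originates in modelscope's llm_pipeline import, which needs sentencepiece.
assert importlib.util.find_spec("sentencepiece") is not None, "run: pip install sentencepiece"

from modelscope.pipelines import pipeline

# Same call that failed in facechain/data_process/preprocessing.py
skin_retouching = pipeline(
    'skin-retouching-torch',
    model='damo/cv_unet_skin_retouching_torch',
    model_revision='v1.0.1',
)
print("skin-retouching pipeline constructed OK")
```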
This works, the issue is resolved.
Please try out the newest train-free, 10s-inference version, facechain-fact.