facechain
RuntimeError: Cannot re-initialize CUDA in forked subprocess.
When integrating with sd-webui, the following error is reported. My sd-webui is installed via Docker, and CUDA 11.8 is installed inside the container.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1435, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1121, in call_function
prediction = await utils.async_iteration(iterator)
File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 350, in async_iteration
return await iterator.__anext__()
File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 343, in __anext__
return await anyio.to_thread.run_sync(
File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 326, in run_sync_iterator_async
return next(iterator)
File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 695, in gen_wrapper
yield from f(*args, **kwargs)
File "/stable-diffusion-webui/extensions/facechain/app.py", line 478, in launch_pipeline_talkinghead
output = future.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
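The error message itself points at the fix: the worker process must be started with the 'spawn' start method instead of the default 'fork', so that CUDA is initialized fresh in the child. Below is a minimal, hypothetical sketch of that workaround, assuming the extension dispatches the pipeline through concurrent.futures.ProcessPoolExecutor (which the futures frames in the traceback suggest); the function and argument names are placeholders, not the actual facechain code.

```python
# Hedged sketch: run a CUDA-using worker with the 'spawn' start method.
# run_talkinghead and its argument are placeholders for illustration only.
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

def run_talkinghead(args):
    # CUDA is initialized inside the spawned child process,
    # not inherited from a forked parent that already touched CUDA.
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
    return {"device": device, "args": args}  # placeholder for the real pipeline call

if __name__ == "__main__":
    ctx = multiprocessing.get_context("spawn")  # avoid the default 'fork' on Linux
    with ProcessPoolExecutor(max_workers=1, mp_context=ctx) as pool:
        future = pool.submit(run_talkinghead, {"some": "args"})
        output = future.result()
        print(output)
```

Alternatively, calling torch.multiprocessing.set_start_method("spawn", force=True) early in the process (before any workers are created) has the same effect; whether that is appropriate here depends on how sd-webui and the extension set up their subprocesses.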
@wangxingjun778 Please check on this when you are free
Please try out the newest train-free version, facechain-fact, which offers 10-second inference.