[bug]: RuntimeError: please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
Is there an existing issue for this?
- [X] I have searched the existing issues
OS
macOS
GPU
cpu
VRAM
2GB
What version did you experience this issue on?
v2.3.5.post1
What happened?
This issue occurs in v2.3.5.post1; it does not exist in InvokeAI v2.3.5.
Steps to reproduce:
- invokeai --web
- Initializing, be patient...
Initialization file /Users/xixili/AI/InvokeAI/invokeai.init found. Loading... Internet connectivity is True InvokeAI, version 2.3.5 InvokeAI runtime directory is "/Users/xixili/AI/InvokeAI" GFPGAN Initialized CodeFormer Initialized ESRGAN Initialized Using device_type cpu xformers not installed NSFW checker is disabled Current VRAM usage: 0.00G Loading diffusers model from /Users/xixili/AI/InvokeAI/models/converted_ckpts/lyriel_v16 | Using more accurate float32 precision | Default image dimensions = 512 x 512 Model loaded in 9.27s Loading embeddings from /Users/xixili/AI/InvokeAI/embeddings Textual inversion triggers: Setting Sampler to k_lms (LMSDiscreteScheduler) Probing /Users/xixili/AI/stable-diffusion-webui/models for import |/Users/xixili/AI/stable-diffusion-webui/models appears to be a directory. Will scan for models to import Probing /Users/xixili/AI/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt for import | /Users/xixili/AI/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt appears to be a checkpoint file on disk | Already imported. Skipping sd-v1-5-inpainting successfully imported Probing /Users/xixili/AI/stable-diffusion-webui/models/Stable-diffusion/512-depth-ema.ckpt for import | /Users/xixili/AI/stable-diffusion-webui/models/Stable-diffusion/512-depth-ema.ckpt appears to be a checkpoint file on disk | Already imported. Skipping 512-depth-ema successfully imported Probing /Users/xixili/AI/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt for import | /Users/xixili/AI/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt appears to be a checkpoint file on disk | Already imported. Skipping sd-v1-4 successfully imported Probing /Users/xixili/AI/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4-full-ema.ckpt for import | /Users/xixili/AI/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4-full-ema.ckpt appears to be a checkpoint file on disk | Already imported. 
Skipping sd-v1-4-full-ema successfully imported Probing /Users/xixili/AI/stable-diffusion-webui/models/Stable-diffusion/projectUnrealEngine5_projectUnrealEngine5B.ckpt for import | /Users/xixili/AI/stable-diffusion-webui/models/Stable-diffusion/projectUnrealEngine5_projectUnrealEngine5B.ckpt appears to be a checkpoint file on disk | Already imported. Skipping projectUnrealEngine5_projectUnrealEngine5B successfully imported Probing /Users/xixili/AI/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt for import | /Users/xixili/AI/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt appears to be a checkpoint file on disk | Already imported. Skipping 768-v-ema successfully imported Probing /Users/xixili/AI/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt for import | /Users/xixili/AI/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt appears to be a checkpoint file on disk | Already imported. Skipping v1-5-pruned-emaonly successfully imported Probing /Users/xixili/AI/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.ckpt for import | /Users/xixili/AI/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.ckpt appears to be a checkpoint file on disk | Already imported. Skipping v2-1_768-ema-pruned successfully imported Probing /Users/xixili/AI/stable-diffusion-webui/models/VAE/vae-ft-mse-840000-ema-pruned.ckpt for import | /Users/xixili/AI/stable-diffusion-webui/models/VAE/vae-ft-mse-840000-ema-pruned.ckpt appears to be a checkpoint file on disk | Scanning Model: /Users/xixili/AI/stable-diffusion-webui/models/VAE/vae-ft-mse-840000-ema-pruned.ckpt | Model scanned ok ** /Users/xixili/AI/stable-diffusion-webui/models/VAE/vae-ft-mse-840000-ema-pruned.ckpt is a legacy checkpoint file but not a known Stable Diffusion model. Please provide the configuration file type or path. 
Probing /Users/xixili/AI/stable-diffusion-webui/models/VAE/emaPrunedVAE_emaPruned.ckpt for import | /Users/xixili/AI/stable-diffusion-webui/models/VAE/emaPrunedVAE_emaPruned.ckpt appears to be a checkpoint file on disk | Scanning Model: /Users/xixili/AI/stable-diffusion-webui/models/VAE/emaPrunedVAE_emaPruned.ckpt | Model scanned ok ** /Users/xixili/AI/stable-diffusion-webui/models/VAE/emaPrunedVAE_emaPruned.ckpt is a legacy checkpoint file but not a known Stable Diffusion model. Please provide the configuration file type or path. Probing /Users/xixili/AI/stable-diffusion-webui/models/VAE/kl-f8-anime.ckpt for import | /Users/xixili/AI/stable-diffusion-webui/models/VAE/kl-f8-anime.ckpt appears to be a checkpoint file on disk | Scanning Model: /Users/xixili/AI/stable-diffusion-webui/models/VAE/kl-f8-anime.ckpt | Model scanned ok ** /Users/xixili/AI/stable-diffusion-webui/models/VAE/kl-f8-anime.ckpt is a legacy checkpoint file but not a known Stable Diffusion model. Please provide the configuration file type or path. Probing /Users/xixili/AI/stable-diffusion-webui/models/VAE/kl-f8-anime2.ckpt for import | /Users/xixili/AI/stable-diffusion-webui/models/VAE/kl-f8-anime2.ckpt appears to be a checkpoint file on disk | Scanning Model: /Users/xixili/AI/stable-diffusion-webui/models/VAE/kl-f8-anime2.ckpt | Model scanned ok ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ /Users/xixili/AI/InvokeAI/.venv/bin/invokeai:8 in
│ │ │ │ 5 from ldm.invoke.CLI import main │ │ 6 if name == 'main': │ │ 7 │ sys.argv[0] = re.sub(r'(-script.pyw|.exe)?$', '', sys.argv[0]) │ │ ❱ 8 │ sys.exit(main()) │ │ 9 │ │ │ │ /Users/xixili/AI/InvokeAI/.venv/lib/python3.10/site-packages/ldm/invoke/CLI.py:179 in main │ │ │ │ 176 │ │ ) │ │ 177 │ │ │ 178 │ if path := opt.autoconvert: │ │ ❱ 179 │ │ gen.model_manager.heuristic_import( │ │ 180 │ │ │ str(path), commit_to_conf=opt.conf │ │ 181 │ │ ) │ │ 182 │ │ │ │ /Users/xixili/AI/InvokeAI/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py:754 in │ │ heuristic_import │ │ │ │ 751 │ │ │ │ for m in list(Path(thing).rglob(".ckpt")) + list( │ │ 752 │ │ │ │ │ Path(thing).rglob(".safetensors") │ │ 753 │ │ │ │ ): │ │ ❱ 754 │ │ │ │ │ if model_name := self.heuristic_import( │ │ 755 │ │ │ │ │ │ str(m), │ │ 756 │ │ │ │ │ │ commit_to_conf=commit_to_conf, │ │ 757 │ │ │ │ │ │ config_file_callback=config_file_callback, │ │ │ │ /Users/xixili/AI/InvokeAI/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py:787 in │ │ heuristic_import │ │ │ │ 784 │ │ checkpoint = None │ │ 785 │ │ if model_path.suffix.endswith((".ckpt", ".pt")): │ │ 786 │ │ │ self.scan_model(model_path, model_path) │ │ ❱ 787 │ │ │ checkpoint = torch.load(model_path) │ │ 788 │ │ else: │ │ 789 │ │ │ checkpoint = safetensors.torch.load_file(model_path) │ │ 790 │ │ # additional probing needed if no config file provided │ │ │ │ /Users/xixili/AI/InvokeAI/.venv/lib/python3.10/site-packages/torch/serialization.py:809 in load │ │ │ │ 806 │ │ │ │ │ │ return _load(opened_zipfile, map_location, _weights_only_unpickl │ │ 807 │ │ │ │ │ except RuntimeError as e: │ │ 808 │ │ │ │ │ │ raise pickle.UnpicklingError(UNSAFE_MESSAGE + str(e)) from None │ │ ❱ 809 │ │ │ │ return load(opened_zipfile, map_location, pickle_module, **pickle_load │ │ 810 │ │ if weights_only: │ │ 811 │ │ │ try: │ │ 812 │ │ │ │ return _legacy_load(opened_file, map_location, _weights_only_unpickler, │ │ │ │ 
/Users/xixili/AI/InvokeAI/.venv/lib/python3.10/site-packages/torch/serialization.py:1172 in │ │ _load │ │ │ │ 1169 │ │ │ 1170 │ unpickler = UnpicklerWrapper(data_file, **pickle_load_args) │ │ 1171 │ unpickler.persistent_load = persistent_load │ │ ❱ 1172 │ result = unpickler.load() │ │ 1173 │ │ │ 1174 │ torch._utils._validate_loaded_sparse_tensors() │ │ 1175 │ │ │ │ /Users/xixili/AI/InvokeAI/.venv/lib/python3.10/site-packages/torch/serialization.py:1142 in │ │ persistent_load │ │ │ │ 1139 │ │ │ typed_storage = loaded_storages[key] │ │ 1140 │ │ else: │ │ 1141 │ │ │ nbytes = numel * torch._utils._element_size(dtype) │ │ ❱ 1142 │ │ │ typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location │ │ 1143 │ │ │ │ 1144 │ │ return typed_storage │ │ 1145 │ │ │ │ /Users/xixili/AI/InvokeAI/.venv/lib/python3.10/site-packages/torch/serialization.py:1116 in │ │ load_tensor │ │ │ │ 1113 │ │ # TODO: Once we decide to break serialization FC, we can │ │ 1114 │ │ # stop wrapping with TypedStorage │ │ 1115 │ │ typed_storage = torch.storage.TypedStorage( │ │ ❱ 1116 │ │ │ wrap_storage=restore_location(storage, location), │ │ 1117 │ │ │ dtype=dtype, │ │ 1118 │ │ │ _internal=True) │ │ 1119 │ │ │ │ /Users/xixili/AI/InvokeAI/.venv/lib/python3.10/site-packages/torch/serialization.py:217 in │ │ default_restore_location │ │ │ │ 214 │ │ 215 def default_restore_location(storage, location): │ │ 216 │ for _, _, fn in _package_registry: │ │ ❱ 217 │ │ result = fn(storage, location) │ │ 218 │ │ if result is not None: │ │ 219 │ │ │ return result │ │ 220 │ raise RuntimeError("don't know how to restore data location of " │ │ │ │ /Users/xixili/AI/InvokeAI/.venv/lib/python3.10/site-packages/torch/serialization.py:182 in │ │ _cuda_deserialize │ │ │ │ 179 │ │ 180 def _cuda_deserialize(obj, location): │ │ 181 │ if location.startswith('cuda'): │ │ ❱ 182 │ │ device = validate_cuda_device(location) │ │ 183 │ │ if getattr(obj, "_torch_load_uninitialized", False): │ │ 184 │ │ │ with 
torch.cuda.device(device): │ │ 185 │ │ │ │ return torch.UntypedStorage(obj.nbytes(), device=torch.device(location)) │ │ │ │ /Users/xixili/AI/InvokeAI/.venv/lib/python3.10/site-packages/torch/serialization.py:166 in │ │ validate_cuda_device │ │ │ │ 163 │ device = torch.cuda._utils._get_device_index(location, True) │ │ 164 │ │ │ 165 │ if not torch.cuda.is_available(): │ │ ❱ 166 │ │ raise RuntimeError('Attempting to deserialize object on a CUDA ' │ │ 167 │ │ │ │ │ │ 'device but torch.cuda.is_available() is False. ' │ │ 168 │ │ │ │ │ │ 'If you are running on a CPU-only machine, ' │ │ 169 │ │ │ │ │ │ 'please use torch.load with map_location=torch.device('cpu' │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
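For reference, the traceback bottoms out in torch's location-restore dispatch: every tensor storage in the checkpoint carries a saved device tag, and any storage tagged with a 'cuda' location goes through validate_cuda_device, which raises when CUDA is unavailable. A minimal sketch of that guard (a hypothetical simplification for illustration, not torch's actual code):

```python
# Simplified sketch of torch.serialization's CUDA restore guard
# (hypothetical reduction of _cuda_deserialize / validate_cuda_device).

def restore_location(location: str, cuda_available: bool) -> str:
    """Return the device a storage would be restored to, raising the
    same way torch does when a CUDA-tagged storage meets a CPU-only box."""
    if location.startswith("cuda"):
        if not cuda_available:
            raise RuntimeError(
                "Attempting to deserialize object on a CUDA device "
                "but torch.cuda.is_available() is False."
            )
        return location
    return "cpu"

# On a CPU-only machine any 'cuda:0'-tagged storage raises; passing
# map_location='cpu' to torch.load rewrites the tag before this
# branch is ever reached, which is why the error message suggests it.
```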
Screenshots
Additional context
git pull warning: unknown value given to http.version: 'http/1.1' remote: Enumerating objects: 849, done. remote: Counting objects: 100% (847/847), done. remote: Compressing objects: 100% (325/325), done. remote: Total 849 (delta 556), reused 782 (delta 519), pack-reused 2 Receiving objects: 100% (849/849), 444.87 KiB | 636.00 KiB/s, done. Resolving deltas: 100% (556/556), completed with 48 local objects. From https://github.com/invoke-ai/InvokeAI 84b801d8..ff0e79fa main -> origin/main 9ecca132..efabf250 Convert-Model-Endpoint -> origin/Convert-Model-Endpoint fea9a6bf..47b0d5a9 feat/controlnet-nodes -> origin/feat/controlnet-nodes
- [new branch] feat/latents -> origin/feat/latents
- [new branch] feat/nodes/results-service -> origin/feat/nodes/results-service
- [new branch] feat/nodes/results-table -> origin/feat/nodes/results-table
- [new branch] fix/ui/send-to-canvas -> origin/fix/ui/send-to-canvas c8f765cc..bdf33f13 lstein/new-model-manager -> origin/lstein/new-model-manager
- [new branch] maryhipp/optional-middleware -> origin/maryhipp/optional-middleware 18f0cbd9..0ce628b2 v2.3 -> origin/v2.3 warning: unknown value given to http.version: 'http/1.1'
- [new tag] v2.3.5.post1 -> v2.3.5.post1 Updating 84b801d8..ff0e79fa Fast-forward .github/workflows/test-invoke-pip.yml | 20 +- .gitignore | 2 + invokeai/app/api/dependencies.py | 15 +- invokeai/app/api_app.py | 34 +-- invokeai/app/cli/commands.py | 16 ++ invokeai/app/cli/completer.py | 9 +- invokeai/app/cli_app.py | 55 +++-- invokeai/app/invocations/compel.py | 4 +- invokeai/app/invocations/math.py | 43 ++-- invokeai/app/services/config.py | 521 ++++++++++++++++++++++++++++++++++++++++ invokeai/app/services/graph.py | 2 + invokeai/app/services/invocation_services.py | 6 +- invokeai/app/services/model_manager_initializer.py | 45 ++-- invokeai/backend/init.py | 3 - invokeai/backend/args.py | 1391 ----------------------------------------------------------------------------------------------------------- invokeai/backend/config/invokeai_configure.py | 234 +++++++++--------- invokeai/backend/config/legacy_arg_parsing.py | 390 ++++++++++++++++++++++++++++++ invokeai/backend/config/model_install_backend.py | 28 ++- invokeai/backend/generate.py | 1234 ----------------------------------------------------------------------------------------------- invokeai/backend/globals.py | 122 ---------- invokeai/backend/image_util/patchmatch.py | 5 +- invokeai/backend/image_util/txt2mask.py | 8 +- invokeai/backend/model_management/convert_ckpt_to_diffusers.py | 25 +- invokeai/backend/model_management/model_manager.py | 80 +++---- invokeai/backend/prompting/conditioning.py | 7 +- invokeai/backend/restoration/codeformer.py | 14 +- invokeai/backend/restoration/gfpgan.py | 9 +- invokeai/backend/restoration/realesrgan.py | 12 +- invokeai/backend/safety_checker.py | 7 +- invokeai/backend/stable_diffusion/concepts_lib.py | 8 +- invokeai/backend/stable_diffusion/diffusers_pipeline.py | 7 +- invokeai/backend/stable_diffusion/diffusion/shared_invokeai_diffusion.py | 6 +- invokeai/backend/training/textual_inversion_training.py | 18 +- invokeai/backend/util/devices.py | 10 +- invokeai/frontend/CLI/CLI.py | 
1291 --------------------------------------------------------------------------------------------------- invokeai/frontend/CLI/readline.py | 497 -------------------------------------- invokeai/frontend/CLI/sd_metadata.py | 30 --- invokeai/frontend/install/model_install.py | 7 +- invokeai/frontend/merge/merge_diffusers.py | 28 +-- invokeai/frontend/training/textual_inversion.py | 38 +-- invokeai/frontend/web/public/locales/en.json | 2 +- invokeai/frontend/web/src/app/components/App.tsx | 3 +- invokeai/frontend/web/src/app/constants.ts | 9 +- invokeai/frontend/web/src/features/canvas/store/canvasPersistDenylist.ts | 10 - invokeai/frontend/web/src/features/gallery/components/CurrentImageButtons.tsx | 22 +- invokeai/frontend/web/src/features/gallery/components/CurrentImageDisplay.tsx | 8 +- invokeai/frontend/web/src/features/gallery/components/CurrentImagePreview.tsx | 21 +- invokeai/frontend/web/src/features/gallery/components/{ImageGalleryPanel.tsx => GalleryPanel.tsx} | 101 +------- invokeai/frontend/web/src/features/gallery/components/HoverableImage.tsx | 7 +- invokeai/frontend/web/src/features/gallery/components/ImageGalleryContent.tsx | 40 +++- invokeai/frontend/web/src/features/gallery/components/ImageMetaDataViewer/ImageMetadataViewer.tsx | 6 +- invokeai/frontend/web/src/features/gallery/store/galleryPersistDenylist.ts | 9 - invokeai/frontend/web/src/features/gallery/store/gallerySelectors.ts | 80 ------- invokeai/frontend/web/src/features/gallery/store/gallerySlice.ts | 25 +- invokeai/frontend/web/src/features/gallery/store/resultsPersistDenylist.ts | 8 +- invokeai/frontend/web/src/features/gallery/store/uploadsPersistDenylist.ts | 7 +- invokeai/frontend/web/src/features/lightbox/store/lightboxPersistDenylist.ts | 5 - invokeai/frontend/web/src/features/nodes/store/nodesPersistDenylist.ts | 5 - invokeai/frontend/web/src/features/nodes/util/graphBuilders/buildCanvasGraph.ts | 8 +- 
invokeai/frontend/web/src/features/nodes/util/nodeBuilders/buildImageToImageNode.ts | 6 +- invokeai/frontend/web/src/features/nodes/util/nodeBuilders/buildInpaintNode.ts | 52 ++-- invokeai/frontend/web/src/features/nodes/util/nodeBuilders/buildTextToImageNode.ts | 4 +- invokeai/frontend/web/src/features/parameters/components/Parameters/Core/{ParamSampler.tsx => ParamScheduler.tsx} | 20 +- invokeai/frontend/web/src/features/parameters/components/Parameters/Core/ParamSchedulerAndModel.tsx | 4 +- invokeai/frontend/web/src/features/parameters/components/Parameters/ImageToImage/InitialImagePreview.tsx | 28 ++- invokeai/frontend/web/src/features/parameters/components/ProcessButtons/CancelButton.tsx | 1 + invokeai/frontend/web/src/features/parameters/components/ProcessButtons/InvokeButton.tsx | 2 + invokeai/frontend/web/src/features/parameters/store/generationPersistDenylist.ts | 5 - invokeai/frontend/web/src/features/parameters/store/generationSlice.ts | 34 +-- invokeai/frontend/web/src/features/parameters/store/postprocessingPersistDenylist.ts | 5 - invokeai/frontend/web/src/features/parameters/store/setAllParametersReducer.ts | 6 +- invokeai/frontend/web/src/features/system/store/modelSlice.ts | 42 +--- invokeai/frontend/web/src/features/system/store/modelsPersistDenylist.ts | 5 - invokeai/frontend/web/src/features/system/store/systemPersistDenylist.ts | 2 - invokeai/frontend/web/src/features/system/store/systemSlice.ts | 74 ------ invokeai/frontend/web/src/features/ui/components/ParametersDrawer.tsx | 7 +- invokeai/frontend/web/src/features/ui/store/uiPersistDenylist.ts | 9 +- invokeai/frontend/web/src/features/ui/store/uiSlice.ts | 65 +---- invokeai/frontend/web/src/features/ui/store/uiTypes.ts | 7 - invokeai/frontend/web/src/services/api/core/request.ts | 4 +- invokeai/frontend/web/src/services/api/index.ts | 2 + invokeai/frontend/web/src/services/api/models/Graph.ts | 3 +- invokeai/frontend/web/src/services/api/models/ImageToImageInvocation.ts | 2 +- 
invokeai/frontend/web/src/services/api/models/InpaintInvocation.ts | 2 +- invokeai/frontend/web/src/services/api/models/LatentsOutput.ts | 10 +- invokeai/frontend/web/src/services/api/models/LatentsToLatentsInvocation.ts | 2 +- invokeai/frontend/web/src/services/api/models/NoiseOutput.ts | 8 + invokeai/frontend/web/src/services/api/models/RandomIntInvocation.ts | 15 ++ invokeai/frontend/web/src/services/api/models/TextToImageInvocation.ts | 2 +- invokeai/frontend/web/src/services/api/models/TextToLatentsInvocation.ts | 2 +- invokeai/frontend/web/src/services/api/schemas/$Graph.ts | 2 + invokeai/frontend/web/src/services/api/schemas/$LatentsOutput.ts | 10 + invokeai/frontend/web/src/services/api/schemas/$NoiseOutput.ts | 10 + invokeai/frontend/web/src/services/api/schemas/$RandomIntInvocation.ts | 16 ++ invokeai/frontend/web/src/services/api/services/SessionsService.ts | 5 +- pyproject.toml | 8 +- tests/preflight_prompts.txt | 4 - tests/test_config.py | 79 +++++++ tests/validate_pr_prompt.txt | 6 +- 99 files changed, 1665 insertions(+), 5562 deletions(-) create mode 100644 invokeai/app/services/config.py delete mode 100644 invokeai/backend/args.py create mode 100644 invokeai/backend/config/legacy_arg_parsing.py delete mode 100644 invokeai/backend/generate.py delete mode 100644 invokeai/backend/globals.py delete mode 100644 invokeai/frontend/CLI/CLI.py delete mode 100644 invokeai/frontend/CLI/readline.py delete mode 100644 invokeai/frontend/CLI/sd_metadata.py rename invokeai/frontend/web/src/features/gallery/components/{ImageGalleryPanel.tsx => GalleryPanel.tsx} (52%) rename invokeai/frontend/web/src/features/parameters/components/Parameters/Core/{ParamSampler.tsx => ParamScheduler.tsx} (70%) create mode 100644 invokeai/frontend/web/src/services/api/models/RandomIntInvocation.ts create mode 100644 invokeai/frontend/web/src/services/api/schemas/$RandomIntInvocation.ts delete mode 100644 tests/preflight_prompts.txt create mode 100644 tests/test_config.py
Contact Details
No response
I reverted to 84b801d8, but the error persists.
Steps:
- git reflog, which shows: ff0e79fa (HEAD -> main, origin/main, origin/HEAD) HEAD@{21}: pull: Fast-forward 84b801d8 HEAD@{22}: clone: from https://github.com/invoke-ai/InvokeAI
- git checkout main
- git reset --hard 84b801d8
invokeai.init has a parameter --always_use_cpu, but it does not seem to work in my case.
# InvokeAI initialization file
# This is the InvokeAI initialization file, which contains command-line default values.
# Feel free to edit. If anything goes wrong, you can re-initialize this file by deleting
# or renaming it and then running invokeai-configure again.
# Place frequently-used startup commands here, one or more per line.
# Examples:
# --outdir=D:\data\images
# --no-nsfw_checker
# --web --host=0.0.0.0
# --steps=20
# -Ak_euler_a -C10.0
--outdir="/Users/xixili/AI/InvokeAI/outputs" --embedding_path="/Users/xixili/AI/InvokeAI/embeddings" --precision=auto --max_loaded_models=2 --no-nsfw_checker --xformers --ckpt_convert
--always_use_cpu --autoconvert "/Users/xixili/AI/stable-diffusion-webui/models"
As a workaround, I changed the default map_location to 'cpu' in /Users/xixili/AI/InvokeAI/.venv/lib/python3.10/site-packages/torch/serialization.py, as shown below.
def load(
f: FILE_LIKE,
map_location: MAP_LOCATION = 'cpu',  # default changed from None to 'cpu'
pickle_module: Any = None,
*,
weights_only: bool = False,
**pickle_load_args: Any
) -> Any:
Another workaround is to patch the torch.load call around line 786 of the model manager (/Users/xixili/AI/InvokeAI/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py):
if Globals.always_use_cpu is True:
    checkpoint = torch.load(model_path, map_location=torch.device('cpu'))
else:
    checkpoint = torch.load(model_path)
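Both workarounds boil down to choosing a map_location up front instead of trusting the device tags baked into the checkpoint. A small helper along these lines (the function name and call site are illustrative, not InvokeAI's actual API) would let heuristic_import honor --always_use_cpu:

```python
# Hypothetical helper: pick a map_location for torch.load so that
# checkpoint probing honors an always-use-CPU flag and never tries
# to restore CUDA-tagged storages on a CPU-only machine.

def choose_map_location(always_use_cpu: bool, cuda_available: bool):
    """Return 'cpu' whenever CUDA must not (or cannot) be used,
    else None, torch.load's default, which restores each storage
    to the device it was saved on."""
    if always_use_cpu or not cuda_available:
        return "cpu"
    return None

# Sketch of how model_manager.py could use it:
#   checkpoint = torch.load(
#       model_path,
#       map_location=choose_map_location(Globals.always_use_cpu,
#                                        torch.cuda.is_available()))
```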