[bug]: InvokeAI incorrectly recognizes the model type
Is there an existing issue for this problem?
- [X] I have searched the existing issues
Operating system
Windows
GPU vendor
Nvidia (CUDA)
GPU model
4090
GPU VRAM
24GB
Version number
4.2.0
Browser
Edge latest
Python dependencies
InvokeAI incorrectly recognizes the model type
What happened
InvokeAI refuses to load some models (they are unselectable in the UI). Editing the model type to correct it fails with an error.
What you expected to happen
Non-standard models are not uncommon; InvokeAI should at least allow the user to forcibly override the base model type. Ideally, InvokeAI would also provide a button to revert to the auto-detected default in case the user makes a mistake.
How to reproduce the problem
Let's take the WildCardX-XL LoRA as an example:
- Download and import the LoRA from: https://civitai.com/models/264681?modelVersionId=298448
- Note that InvokeAI recognizes the model as an SD-1 LoRA, when it is actually an SDXL LoRA (screenshot 1).
- Trying to edit the model type gives a "Model Update Failed" error, then the type forcibly reverts back to SD-1 (screenshot 2, screenshot 3).
Additional context
No response
Discord username
No response
Error message:
[InvokeAI]::ERROR --> 1 validation error for LoRALyCORISConfig variant Object has no attribute 'variant' [type=no_such_attribute, input_value=<ModelVariantType.Normal: 'normal'>, input_type=ModelVariantType] For further information visit https://errors.pydantic.dev/2.6/v/no_such_attribute
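For context: a pydantic v2 error of type `no_such_attribute` with this exact message shape typically occurs when `validate_assignment` is enabled and a value is assigned to an attribute the config class does not define. The sketch below is a minimal, hypothetical reproduction (the class name and fields are placeholders, not InvokeAI's actual `LoRALyCORISConfig`), assuming the update path tries to set a `variant` field that the LoRA config model lacks:

```python
from pydantic import BaseModel, ConfigDict, ValidationError

class LoRAConfigSketch(BaseModel):
    # Hypothetical stand-in for a LoRA config model; note that it
    # deliberately defines no 'variant' field.
    model_config = ConfigDict(validate_assignment=True)
    base: str = "sd-1"

cfg = LoRAConfigSketch()
try:
    # Mimics an update payload carrying a 'variant' value into a
    # config class that has no such field.
    cfg.variant = "normal"
except ValidationError as e:
    # The first error's type is 'no_such_attribute', matching the log:
    # "Object has no attribute 'variant' [type=no_such_attribute, ...]"
    print(e.errors()[0]["type"])
```

If this is indeed the failure mode, the fix would likely be either dropping `variant` from the update payload for LoRA configs or adding the field to the config class; only the InvokeAI maintainers can confirm which applies.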
[2024-12-11 18:09:12,490]::[ModelInstallService]::ERROR --> Model install error: black-forest-labs/FLUX.1-schnell InvalidModelConfigException: Unknown base model for /Volumes/frankXmini/frankmini/invokeai/models/tmpinstall_7z13gb4r/FLUX.1-schnell