
ValueError: Unrecognized configuration class <class 'transformers_modules.microsoft.Florence-2-base-ft.9803f52844ec1ae5df004e6089262e9a23e527fd.configuration_florence2.Florence2LanguageConfig'> for this kind of AutoModel: AutoModelForCausalLM.

Open wakakaming opened this issue 10 months ago • 2 comments

(omni) C:\OmniParser\omnitool\omniparserserver>python -m omniparserserver
[2025-03-23 20:48:56,363] [WARNING] easyocr.py:80 - Neither CUDA nor MPS are available - defaulting to CPU. Note: This module is much faster with a GPU.
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\OmniParser\omnitool\omniparserserver\omniparserserver.py", line 32, in <module>
    omniparser = Omniparser(config)
  File "C:\OmniParser\util\omniparser.py", line 13, in __init__
    self.caption_model_processor = get_caption_model_processor(model_name=config['caption_model_name'], model_name_or_path=config['caption_model_path'], device=device)
  File "C:\OmniParser\util\utils.py", line 65, in get_caption_model_processor
    model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float32, trust_remote_code=True)
  File "D:\AI\envs\omni\Lib\site-packages\transformers\models\auto\auto_factory.py", line 576, in from_pretrained
    raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers_modules.microsoft.Florence-2-base-ft.9803f52844ec1ae5df004e6089262e9a23e527fd.configuration_florence2.Florence2LanguageConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of AriaTextConfig, BambaConfig, BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CohereConfig, Cohere2Config, CpmAntConfig, CTRLConfig, Data2VecTextConfig, DbrxConfig, DiffLlamaConfig, ElectraConfig, Emu3Config, ErnieConfig, FalconConfig, FalconMambaConfig, FuyuConfig, GemmaConfig, Gemma2Config, Gemma3Config, Gemma3TextConfig, GitConfig, GlmConfig, GotOcr2Config, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, GraniteConfig, GraniteMoeConfig, GraniteMoeSharedConfig, HeliumConfig, JambaConfig, JetMoeConfig, LlamaConfig, MambaConfig, Mamba2Config, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MllamaConfig, MoshiConfig, MptConfig, MusicgenConfig, MusicgenMelodyConfig, MvpConfig, NemotronConfig, OlmoConfig, Olmo2Config, OlmoeConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, Phi3Config, PhimoeConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, Qwen2Config, Qwen2MoeConfig, RecurrentGemmaConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, StableLmConfig, Starcoder2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig, ZambaConfig, Zamba2Config.

How can I solve this problem?

wakakaming avatar Mar 23 '25 12:03 wakakaming

Same error here

P4l1ndr0m avatar Mar 23 '25 15:03 P4l1ndr0m

Same here, and after a quick search I found other GitHub projects hitting the same issue: it comes from an incompatibility between Florence-2's remote code and transformers (which released a new version a few days ago). Rolling back to transformers 4.49.0 should resolve it:

python -m pip install --upgrade transformers==4.49.0
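To confirm the rollback took effect before restarting the server, a quick pre-flight check along these lines may help. This is just a sketch, not part of OmniParser: the 4.49.0 pin comes from this thread, and the comparison assumes plain X.Y.Z version strings.

```python
# Sketch: warn if the installed transformers version is newer than the
# 4.49.0 pin this thread reports as compatible with Florence-2's remote code.
from importlib import metadata


def newer_than(installed: str, pinned: str = "4.49.0") -> bool:
    """Return True if `installed` is strictly newer than `pinned`.

    Assumes plain dotted versions; compares the first three numeric parts.
    """
    as_tuple = lambda v: tuple(int(part) for part in v.split(".")[:3])
    return as_tuple(installed) > as_tuple(pinned)


if __name__ == "__main__":
    try:
        version = metadata.version("transformers")
    except metadata.PackageNotFoundError:
        print("transformers is not installed")
    else:
        if newer_than(version):
            print(f"transformers {version} is newer than 4.49.0; "
                  "Florence-2 loading may fail")
        else:
            print(f"transformers {version} looks OK")
```

Running this before `python -m omniparserserver` turns the opaque AutoModel error into a clear version warning.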

mferris77 avatar Mar 23 '25 18:03 mferris77