[Bug] Qwen Image FP8 Offload Crash: infer_block() Unexpected Keyword Argument temb on RTX 5090 (main branch)
Description
When enabling offload in examples/qwen_image/qwen_2511_fp8.py and running the Qwen Image FP8 example, the program crashes during inference. The error indicates an API/signature mismatch between the offload inference path and the base transformer inference implementation, where infer_block() is called with an unsupported keyword argument temb.
Steps to Reproduce
- Check out the main branch of LightX2V.
- Modify examples/qwen_image/qwen_2511_fp8.py to enable offload (so the pipeline uses the offload inference path).
- Run:
python3 examples/qwen_image/qwen_2511_fp8.py
Expected Result
With offload enabled, the pipeline should run normally through the diffusion steps (e.g., progress beyond step_index: 1 / 8) and complete image generation without crashing.
Actual Result
The program crashes during inference at the first step with:
TypeError: QwenImageTransformerInfer.infer_block() got an unexpected keyword argument 'temb'
(Previously, an additional mismatch was observed: infer_with_blocks_offload() takes 7 positional arguments but 8 were given. After adjusting the call site, the run proceeds to the current temb keyword error.)
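For illustration, a stripped-down sketch of how the earlier positional-count mismatch arises (hypothetical names and parameters, not LightX2V's actual signatures):

```python
class InferSketch:
    # 7 positional parameters counting `self` (hypothetical stand-in for
    # infer_with_blocks_offload in the offload transformer_infer.py)
    def infer_with_blocks_offload(self, weights, hidden, encoder_hidden,
                                  temb_img, temb_txt, rotary_emb):
        return hidden


infer = InferSketch()
try:
    # The call site passes one extra positional argument (8 including `self`),
    # which Python rejects before the method body ever runs.
    infer.infer_with_blocks_offload("w", "h", "e", "ti", "tt", "rot", "extra")
except TypeError as exc:
    # Message ends with: takes 7 positional arguments but 8 were given
    print(exc)
```

Trimming the call site back to the declared arity (as described above) clears this error and exposes the subsequent `temb` keyword mismatch.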
Environment Information
- Operating System: Ubuntu (version not specified)
- GPU: NVIDIA RTX 5090
- Python: 3.10
- Branch: main
- Commit ID: main branch (exact commit not pinned)
Log Information
2025-12-24 04:26:27.828 | INFO | lightx2v.models.runners.qwen_image.qwen_image_runner:run:175 - ==> step_index: 1 / 8
Traceback (most recent call last):
File "/data/aimodels/Qwen/LightX2V/examples/qwen_image/qwen_2511_fp8.py", line 48, in <module>
pipe.generate(
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 120, in decorate_context
return func(*args, **kwargs)
File "/data/aimodels/Qwen/LightX2V/lightx2v/pipeline.py", line 377, in generate
self.runner.run_pipeline(input_info)
File "/data/aimodels/Qwen/LightX2V/lightx2v/models/runners/qwen_image/qwen_image_runner.py", line 299, in run_pipeline
latents, generator = self.run_dit()
File "/data/aimodels/Qwen/LightX2V/lightx2v/models/runners/qwen_image/qwen_image_runner.py", line 94, in _run_dit_local
latents, generator = self.run(total_steps)
File "/data/aimodels/Qwen/LightX2V/lightx2v/models/runners/qwen_image/qwen_image_runner.py", line 181, in run
self.model.infer(self.inputs)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 120, in decorate_context
return func(*args, **kwargs)
File "/data/aimodels/Qwen/LightX2V/lightx2v/models/networks/qwen_image/model.py", line 356, in infer
noise_pred = self._infer_cond_uncond(latents_input, inputs["text_encoder_output"]["prompt_embeds"], infer_condition=True)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 120, in decorate_context
return func(*args, **kwargs)
File "/data/aimodels/Qwen/LightX2V/lightx2v/models/networks/qwen_image/model.py", line 374, in _infer_cond_uncond
hidden_states = self.transformer_infer.infer(
File "/data/aimodels/Qwen/LightX2V/lightx2v/models/networks/qwen_image/infer/transformer_infer.py", line 255, in infer
hidden_states = self.infer_func(block_weights, hidden_states, encoder_hidden_states, temb_img_silu, temb_txt_silu, image_rotary_emb)#, self.scheduler.modulate_index)
File "/data/aimodels/Qwen/LightX2V/lightx2v/models/networks/qwen_image/infer/offload/transformer_infer.py", line 40, in infer_with_blocks_offload
encoder_hidden_states, hidden_states = self.infer_block(
TypeError: QwenImageTransformerInfer.infer_block() got an unexpected keyword argument 'temb'
Additional Information
- The only intentional change was enabling offload in examples/qwen_image/qwen_2511_fp8.py.
- The stack trace suggests the offload implementation calls infer_block() with a temb keyword, but the base infer_block() signature does not accept it, indicating the offload path and base inference code may be out of sync on the main branch.
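The keyword mismatch can be reproduced in miniature with a hypothetical sketch (illustrative names only; the real signature lives in lightx2v/models/networks/qwen_image/infer/transformer_infer.py):

```python
class BaseInferSketch:
    # Base signature without `temb` (hypothetical stand-in for
    # QwenImageTransformerInfer.infer_block on main)
    def infer_block(self, weights, hidden_states, encoder_hidden_states):
        return encoder_hidden_states, hidden_states


infer = BaseInferSketch()
try:
    # The offload call site passes `temb` by keyword, so Python raises
    # before the method body runs.
    infer.infer_block("w", hidden_states="h",
                      encoder_hidden_states="e", temb="t")
except TypeError as exc:
    # Message ends with: got an unexpected keyword argument 'temb'
    print(exc)
```

The fix is to bring the two files back in sync: either the base infer_block() gains a temb parameter, or the offload call site stops passing it.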
The issue has been fixed, and you may reinstall lightx2v now. If there are cached files from previous versions, delete them first, then run:
cd LightX2V && rm -rf build/ lightx2v.egg-info/ && pip install .