
Image Generation Issue: Generated Images Became Abnormal Texture

Open · SaltyFishOTL opened this issue 1 year ago · 6 comments

Your question

Description: After updating my Mac to macOS Sonoma 14, I encountered a problem with ComfyUI. Regardless of the parameters, sampler, and scheduler combinations I use, the generated images always have a strange texture, as shown in the attached screenshot.

Steps to Reproduce:

  1. Open ComfyUI.
  2. Select any model and VAE.
  3. Try different sampler and scheduler combinations.
  4. Set different parameters (e.g., steps, prompts, etc.).
  5. Generate the image.

Expected Result: The generated images should match the prompt description without any abnormal texture.

Actual Result: The generated images always have a strange texture (see attached screenshots).

Workflow File: To demonstrate the issue, I used the most basic text-to-image workflow. The workflow file (test.json) is attached to this issue.
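For anyone who wants to reproduce this without the UI, here is a minimal sketch that queues the attached workflow on the local ComfyUI server. It assumes test.json was exported in API format ("Save (API Format)" in the UI) and that the server is listening on the default http://127.0.0.1:8188.

```python
# Minimal sketch: queue the attached workflow on a running ComfyUI server.
# Assumes test.json is in API format and the server uses the default address.
import json
import urllib.request

with open("test.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # The server replies with the queued prompt id if the graph was accepted.
    print(resp.read().decode("utf-8"))
```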

System Information:

  • Operating System Version: macOS Sonoma 14 (Tried both 14.5 and 14.6)
  • ComfyUI Version: v0.0.3 (which I believe is the latest)
  • Model and VAE Version: anyloraCheckpoint_bakedvaeBlessedFp16.safetensors (and many others)
  • Prompt:

    Positive: anime style, young girl, surprised expression, black jacket, white T-shirt, red pencil, classroom setting, school environment, bookshelves, other students, natural lighting, bright colors, high detail, expressive face, smooth shading, (masterpiece: 2), best quality, ultra highres, original, extremely detailed, perfect lighting
    Negative: blurry, illustration, toy, clay, low quality, flag, nasa, mission patch

  • KSampler Info: (screenshot attached, taken 2024-08-02 at 10:40 PM)

Console Output:

myhomefolder@MacBook-Pro ~ % python3.11 /Applications/ComfyUI-master/main.py 
Total VRAM 16384 MB, total RAM 16384 MB
pytorch version: 2.3.1
Set vram state to: SHARED
Device: mps
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
****** User settings have been changed to be stored on the server instead of browser storage. ******
****** For multi-user setups add the --multi-user CLI argument to enable multiple user profiles. ******
[Prompt Server] web root: /Applications/ComfyUI-master/web

Import times for custom nodes:
   0.0 seconds: /Applications/ComfyUI-master/custom_nodes/websocket_image_save.py

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type EPS
Using split attention in VAE
Using split attention in VAE
Requested to load SD1ClipModel
Loading 1 new model
Requested to load BaseModel
Loading 1 new model
100%|███████████████████████████████████████████| 25/25 [00:23<00:00,  1.08it/s]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 29.97 seconds
got prompt
model_type EPS
Using split attention in VAE
Using split attention in VAE
Requested to load SD1ClipModel
Loading 1 new model
Requested to load BaseModel
Loading 1 new model
100%|███████████████████████████████████████████| 25/25 [00:17<00:00,  1.40it/s]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 22.26 seconds
got prompt
model_type EPS
Using split attention in VAE
Using split attention in VAE
Requested to load SD1ClipModel
Loading 1 new model
Requested to load BaseModel
Loading 1 new model
100%|███████████████████████████████████████████| 25/25 [00:18<00:00,  1.38it/s]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 23.47 seconds

### Logs

_No response_

### Other

_No response_

SaltyFishOTL commented on Aug 02, 2024

Update PyTorch to version 2.4.
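If it helps narrow things down, a minimal sketch to confirm which PyTorch build is actually loaded and whether the MPS backend is usable (run it with the same python3.11 interpreter that launches main.py):

```python
# Minimal check: report the PyTorch build and whether the MPS backend is
# compiled in and available on this Mac.
import torch

print("torch version:", torch.__version__)
print("MPS built:    ", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())
```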

Creative-comfyUI commented on Aug 03, 2024

Update PyTorch to version 2.4.

I updated PyTorch to version 2.4 as suggested, but the issue persists. The generated images still have the same abnormal texture (see attached screenshot). Console Output:

myhomefolder@MacBook-Pro ~ % python3.11 /Applications/ComfyUI-master/main.py 
Total VRAM 16384 MB, total RAM 16384 MB
pytorch version: 2.4.0
Set vram state to: SHARED
Device: mps
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
[Prompt Server] web root: /Applications/ComfyUI-master/web
/opt/homebrew/lib/python3.11/site-packages/kornia/feature/lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  @torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)

Import times for custom nodes:
   0.0 seconds: /Applications/ComfyUI-master/custom_nodes/websocket_image_save.py

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type EPS
Using split attention in VAE
Using split attention in VAE
Requested to load SD1ClipModel
Loading 1 new model
Requested to load BaseModel
Loading 1 new model
100%|███████████████████████████████████████████| 25/25 [00:19<00:00,  1.31it/s]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 24.18 seconds
got prompt
model_type EPS
Using split attention in VAE
Using split attention in VAE
Requested to load SD1ClipModel
Loading 1 new model
Requested to load BaseModel
Loading 1 new model
100%|███████████████████████████████████████████| 25/25 [00:18<00:00,  1.34it/s]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 25.43 seconds

SaltyFishOTL commented on Aug 03, 2024

Downgrade PyTorch to 2.3: pip install torch==2.3.1 torchaudio==2.3.1 torchvision==0.18.1 (see https://github.com/comfyanonymous/ComfyUI/issues/4165#issuecomment-2264948167)
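A small sketch to confirm the pinned versions from the pip command above actually landed in the environment ComfyUI runs from:

```python
# Minimal sketch: verify the downgraded packages are the ones the interpreter
# imports. Expected after the pip command above: torch 2.3.1, torchaudio 2.3.1,
# torchvision 0.18.1.
import torch
import torchaudio
import torchvision

print("torch:      ", torch.__version__)
print("torchaudio: ", torchaudio.__version__)
print("torchvision:", torchvision.__version__)
```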

RekarBotany commented on Aug 03, 2024

Downgrade PyTorch to 2.3: pip install torch==2.3.1 torchaudio==2.3.1 torchvision==0.18.1 #4165 (comment)

From my first post, you can see that my PyTorch was 2.3 from the beginning.

SaltyFishOTL commented on Aug 03, 2024

I updated PyTorch to version 2.4 as suggested, but the issue persists. The generated images still have the same abnormal texture.

Update Python to the latest version, 3.12...

Creative-comfyUI commented on Aug 03, 2024

Update Python to the latest version, 3.12...

Updating my Python hasn't made any change (see attached screenshot).

(comfyui) myhomefolder@MacBook-Pro ~ % python3 --version
Python 3.12.4
(comfyui) myhomefolder@MacBook-Pro ~ % python3 /Applications/ComfyUI-master/main.py
Total VRAM 16384 MB, total RAM 16384 MB
pytorch version: 2.3.1
Set vram state to: SHARED
Device: mps
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
[Prompt Server] web root: /Applications/ComfyUI-master/web

Import times for custom nodes:
   0.0 seconds: /Applications/ComfyUI-master/custom_nodes/websocket_image_save.py

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type EPS
Using split attention in VAE
Using split attention in VAE
Requested to load SD1ClipModel
Loading 1 new model
Requested to load BaseModel
Loading 1 new model
  0%|                                                    | 0/25 [00:00<?, ?it/s]/Users/myhomefolder/comfyui/lib/python3.12/site-packages/torchsde/_brownian/brownian_interval.py:608: UserWarning: Should have tb<=t1 but got tb=14.614643096923828 and t1=14.614643.
  warnings.warn(f"Should have {tb_name}<=t1 but got {tb_name}={tb} and t1={self._end}.")
100%|███████████████████████████████████████████| 25/25 [00:37<00:00,  1.48s/it]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 43.26 seconds
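
For what it's worth, the log above still reports pytorch version 2.3.1 even though 2.4.0 was installed earlier, which suggests the (comfyui) virtual environment imports a different torch than the one that was upgraded. A minimal sketch to check which interpreter and torch install are actually in use (run it the same way main.py is launched):

```python
# Minimal sketch: print which Python interpreter and which torch install are
# actually being used, to rule out a mix-up between the system python3.11 and
# the (comfyui) virtual environment seen in the console output above.
import sys
import torch

print("interpreter:", sys.executable)
print("python:     ", sys.version.split()[0])
print("torch:      ", torch.__version__)
print("torch path: ", torch.__file__)
```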

SaltyFishOTL commented on Aug 04, 2024

This issue is being marked stale because it has not had any activity for 30 days. Reply below within 7 days if your issue still isn't solved, and it will be left open. Otherwise, the issue will be closed automatically.

github-actions[bot] commented on Mar 15, 2025