
[Bug fix] "previous_timestep()" in DDPM scheduling compatible with "trailing" and "linspace" options

Open AnandK27 opened this issue 1 year ago • 5 comments

What does this PR do?

Fixes the bug where previous_timestep() returns the wrong timestep under the "trailing" and "linspace" options when the number of inference steps does not evenly divide 1000. previous_timestep() only has to be accurate for these options during inference, and since the timesteps array is available at inference time, I simply look the value up there, as is already done for custom_timesteps. During training, it is 1 less than the current timestep.

(This is the simplest solution I could come up with; if you need the previous timestep computed for each case without the timesteps array, let me know!)
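As a sketch of the approach described above (a pure-Python approximation using a list in place of the tensor, not the exact PR diff — the function name and signature here are simplified for illustration):

```python
def previous_timestep(timesteps, timestep):
    """Sketch of the fix: during inference, look up the next entry in the
    descending timesteps schedule; during training (no schedule set),
    fall back to timestep - 1."""
    if timesteps is not None:
        index = timesteps.index(timestep)
        # The last entry has no successor; return -1 as the sentinel value.
        if index == len(timesteps) - 1:
            return -1
        return timesteps[index + 1]
    return timestep - 1

# Inference: the "trailing" schedule for 7 steps over 1000 train steps
schedule = [999, 856, 713, 570, 428, 285, 142]
print(previous_timestep(schedule, 999))  # 856
print(previous_timestep(schedule, 142))  # -1
# Training: no schedule available
print(previous_timestep(None, 500))      # 499
```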

Test Script

import torch
from diffusers import DDPMScheduler

# Initialize the DDPM scheduler
scheduler = DDPMScheduler.from_pretrained("google/ddpm-celebahq-256")

# Test params
scheduler.config.timestep_spacing = 'trailing' # or 'linspace'
scheduler.set_timesteps(num_inference_steps = 7) # any count that does not evenly divide 1000

# Set up a dummy pipeline
generator = torch.manual_seed(0)

# Initialize some dummy latents (normally this would come from your model)
latents = torch.randn((1, 3, 64, 64))

# Inference loop
for i, t in enumerate(scheduler.timesteps):
    # In a real scenario, you'd run your model here to get model_output
    model_output = torch.randn_like(latents)
    
    # Get the previous timestep
    prev_timestep = scheduler.previous_timestep(t)
    
    # Print current and previous timesteps
    print(f"Step {i}:")
    print(f"  Current timestep: {t}")
    print(f"  Previous timestep: {prev_timestep}")
    
    # Scheduler step
    latents = scheduler.step(model_output, t, latents, generator=generator).prev_sample

print("Test complete.")

Output

Step 0:
  Current timestep: 999
  Previous timestep: 856
Step 1:
  Current timestep: 856
  Previous timestep: 713
Step 2:
  Current timestep: 713
  Previous timestep: 570
Step 3:
  Current timestep: 570
  Previous timestep: 428
Step 4:
  Current timestep: 428
  Previous timestep: 285
Step 5:
  Current timestep: 285
  Previous timestep: 142
Step 6:
  Current timestep: 142
  Previous timestep: -1
Test complete.
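To see why a fixed stride goes wrong here, the "trailing" schedule can be reconstructed with NumPy (this mirrors, roughly, how diffusers builds it for DDPM — treat the exact rounding as an assumption):

```python
import numpy as np

num_train_timesteps, num_inference_steps = 1000, 7
step_ratio = num_train_timesteps / num_inference_steps

# "trailing" spacing: descend from 1000 in float steps, round, shift by 1
timesteps = np.round(np.arange(num_train_timesteps, 0, -step_ratio)).astype(np.int64) - 1
print(timesteps.tolist())  # [999, 856, 713, 570, 428, 285, 142]

# The old previous_timestep used a fixed integer stride, which drifts
# whenever num_inference_steps does not divide num_train_timesteps:
stride = num_train_timesteps // num_inference_steps  # 142
print(999 - stride)  # 857, but the actual next timestep in the schedule is 856
```

Because the true schedule is rounded per-entry, the gap between consecutive timesteps alternates between 142 and 143, so any constant stride is off for part of the schedule.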

Fixes #9261

Before submitting

  • [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • [X] Did you read the contributor guideline?
  • [X] Did you read our philosophy doc (important for complex PRs)?
  • [ ] Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
  • [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
  • [X] Did you write any new necessary tests?

Who can review?

@yiyixuxu @RochMollero @bghira Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

AnandK27 avatar Sep 07 '24 07:09 AnandK27

I think it indeed fixes the issue. That said:

  • it does not fix DDIM or any other scheduler likely to have the same mistake, such as PNDM.

  • the current code around schedulers feels generally too complicated and duplicated across schedulers. Would you guys be interested in simplifying it (DDIM and others could inherit some functions from DDPM) and/or adding some test framework? (For another PR, maybe.)

Otherwise it's good to me for this PR



RochMollero avatar Sep 07 '24 08:09 RochMollero

Yeah, the change has to be propagated to the remaining schedulers. There is also a lot of duplication in the schedulers, which will need an overhaul to modularize. The maintainers can probably open issues for these.

AnandK27 avatar Sep 07 '24 19:09 AnandK27

thanks so much for your PR! will take a look! all our newer schedulers have a step counter, so they would not have this issue: https://github.com/huggingface/diffusers/blob/d08ad65819cde8c762c9185407ff689c2a9a4706/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py#L283
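The step-counter pattern mentioned here avoids arithmetic on timestep values entirely: the scheduler tracks its position in the schedule and reads the next entry by index. A minimal sketch of the idea (class and attribute names are simplified for illustration, not the diffusers implementation):

```python
class StepIndexScheduler:
    """Toy scheduler that finds the previous timestep via an index counter
    instead of recomputing it from the current timestep value."""

    def __init__(self, timesteps):
        self.timesteps = timesteps  # descending inference schedule
        self._step_index = 0

    def step(self):
        t = self.timesteps[self._step_index]
        # Look up the successor by position; -1 marks the end of sampling.
        if self._step_index + 1 < len(self.timesteps):
            prev_t = self.timesteps[self._step_index + 1]
        else:
            prev_t = -1
        self._step_index += 1
        return t, prev_t

sched = StepIndexScheduler([999, 856, 713, 570, 428, 285, 142])
print(sched.step())  # (999, 856)
print(sched.step())  # (856, 713)
```

Since the index always points into the actual schedule, this stays correct for any spacing mode and any step count, at the cost of the scheduler becoming stateful across step() calls.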

yiyixuxu avatar Sep 09 '24 16:09 yiyixuxu

I will close the PR then

AnandK27 avatar Sep 09 '24 19:09 AnandK27

hey! sorry, I just noticed this. It appears my comment https://github.com/huggingface/diffusers/pull/9384#issuecomment-2338558745 was misleading; I only meant that we do not need to apply the change to the remaining schedulers (not all of them, anyway). This is still very much an issue with DDPM.

feel free to re-open this

yiyixuxu avatar Oct 16 '24 00:10 yiyixuxu

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] avatar Nov 09 '24 15:11 github-actions[bot]

can you take a look? @hlky

yiyixuxu avatar Dec 03 '24 03:12 yiyixuxu

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.