Custom Optimizer
Hi everybody,
I want to define my own custom optimizer. How can I do that? Can you explain this or provide an example?
@rockerBOO @kohya-ss
You can import a module using optimizer_type:
optimizer_type = "prodigyplus.ProdigyPlusScheduleFree"
You would install your optimizer as a package, or keep the file in the directory, so it can be imported.
You can set arguments using optimizer_args:
--optimizer_args "weight_decay=0.1"
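If it helps, here is a minimal sketch of what such a module could look like. The file name my_optimizer.py, the class name, and the assumption that the part of optimizer_type before the last dot is imported as a module and the final name is instantiated with the parameters plus any optimizer_args are illustrative guesses on my part; the update rule is just plain SGD to keep the example short.

# Hypothetical my_optimizer.py, referenced as --optimizer_type "my_optimizer.MyOptimizer"
import torch

class MyOptimizer(torch.optim.Optimizer):
    def __init__(self, params, lr=1e-3, weight_decay=0.0):
        defaults = dict(lr=lr, weight_decay=weight_decay)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = closure() if closure is not None else None
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                # decoupled weight decay, then a plain SGD update (illustration only)
                if group["weight_decay"] != 0:
                    p.mul_(1 - group["lr"] * group["weight_decay"])
                p.add_(p.grad, alpha=-group["lr"])
        return loss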
@rockerBOO Could you help by giving an example of a SequentialLR command (with two different schedulers as arguments)? For some reason it didn't work for me 😒 (for example, the CosineAnnealingWarmRestarts command that I wrote works fine). I guess the way I pass the two other schedulers as arguments is not correct.
Hi everybody,
I want to define my own custom optimizer. How can I do that? Can you explain this or provide an example?
@FidanVural You can download and see an example of a custom scheduler, REX with restarts, made by a user at https://civitai.com/articles/5603/how-to-never-overundertrain-a-lora. Read the comments there for how to use and install that custom scheduler. I think the calculation formula in the Python code is not written very well, but it works. You can then edit the code and the formula in custom.py as you wish.
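For a rough idea of what the formula in such a custom.py could look like, here is a sketch of a REX-style decay with restarts built on LambdaLR. The function name, arguments, and the exact profile are my own guesses, not the article's actual code, so treat it purely as a starting point for editing.

# Hypothetical custom.py: a REX-style decay repeated over num_cycles restarts.
# The real scheduler from the article may use a different formula and interface.
from torch.optim.lr_scheduler import LambdaLR

def rex_with_restarts(optimizer, num_training_steps, num_cycles=1, last_epoch=-1):
    cycle_len = max(1, num_training_steps // num_cycles)

    def lr_lambda(step):
        t = (step % cycle_len) / cycle_len        # progress within the current cycle, in [0, 1)
        return (1.0 - t) / (1.0 - 0.5 * t)        # REX-style profile: 1.0 at cycle start, 0.0 at cycle end

    return LambdaLR(optimizer, lr_lambda, last_epoch)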
Thank you so much @rockerBOO @MegaCocos. I will look at what you have mentioned.
SequentialLR
Those require instances of the other schedulers, so that wouldn't work with the current system unless you made a custom one that initializes the two and then provides an interface for SequentialLR.
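To illustrate, a custom module along these lines could work. This is only a sketch: the module and function names are made up, and it assumes the custom scheduler mechanism passes the optimizer plus any extra arguments to the callable you point it at.

# Hypothetical my_schedulers.py: builds the two child schedulers as instances
# and hands them to SequentialLR together with the step at which to switch.
from torch.optim.lr_scheduler import SequentialLR, LinearLR, CosineAnnealingLR

def warmup_then_cosine(optimizer, warmup_steps=100, total_steps=1000, last_epoch=-1):
    warmup = LinearLR(optimizer, start_factor=0.01, end_factor=1.0, total_iters=warmup_steps)
    cosine = CosineAnnealingLR(optimizer, T_max=total_steps - warmup_steps)
    return SequentialLR(optimizer, schedulers=[warmup, cosine],
                        milestones=[warmup_steps], last_epoch=last_epoch)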
Thank you for the explanation, I suspected this.
@rockerBOO I have one more question about the lr_scheduler options for sd-scripts, could you help clear it up? When I use
--optimizer_type adafactor --optimizer_args "relative_step=False" "scale_parameter=False" "warmup_init=False" --lr_warmup_steps 0.1 --lr_scheduler cosine --lr_decay_steps 0.5
the option --lr_decay_steps 0.5 does not work correctly for me. As I understand it, in this case I should have a constant max lr for 0.4 of the steps, but that is not what happens. First the lr goes up for 0.1 of the steps until it reaches the max lr (the warmup is OK), and then it immediately starts lowering until the end (cosine). What am I doing wrong? How can I get this constant max lr between warmup and cosine start?
How can i get this constant max lr between warmup and cosine start?
You would need to use another lr scheduler, as cosine doesn't have a gap between warmup end and cosine start. You can see the options for the built-in ones at https://huggingface.co/docs/transformers/main_classes/optimizer_schedules.
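If none of the built-in ones fit, a hand-rolled version of exactly that shape is also possible. This is a plain PyTorch sketch (not an existing sd-scripts option); the fractions mirror the ones from your command, and the names are illustrative.

# Warmup -> constant hold at max lr -> cosine decay over the last part of training.
import math
from torch.optim.lr_scheduler import LambdaLR

def warmup_hold_cosine(optimizer, total_steps, warmup_frac=0.1, decay_frac=0.5, last_epoch=-1):
    warmup_steps = int(total_steps * warmup_frac)
    decay_steps = int(total_steps * decay_frac)
    hold_end = total_steps - decay_steps              # constant max lr until this step

    def lr_lambda(step):
        if step < warmup_steps:                       # linear warmup 0 -> 1
            return step / max(1, warmup_steps)
        if step < hold_end:                           # hold at max lr
            return 1.0
        t = (step - hold_end) / max(1, decay_steps)   # cosine decay 1 -> 0 over decay_steps
        return 0.5 * (1.0 + math.cos(math.pi * min(t, 1.0)))

    return LambdaLR(optimizer, lr_lambda, last_epoch)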
@rockerBOO Thank you very much, I have found there what I needed 😊👍