DPM Solver doesn't support FP16 mode
I'm running on an 8 GB card (a GTX 1070), so I have limited VRAM.
One of the changes I make is to use FP16 (half precision) to reduce VRAM usage. For example, in txt2img.py I have modified the code like this:
seed_everything(opt.seed)
torch.set_default_tensor_type(torch.HalfTensor)  # added: new CPU tensors default to FP16
config = OmegaConf.load(f"{opt.config}")
model = load_model_from_config(config, f"{opt.ckpt}")
model = model.half()  # added: cast the model weights to FP16
The torch.set_default_tensor_type(...) and model.half() lines are the additions.
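As a quick sanity check (my own addition, not part of txt2img.py), printing the dtype of any parameter after the cast should report FP16:

print(next(model.parameters()).dtype)  # expected: torch.float16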
This has generally worked okay, except when trying to use --dpm.
With --dpm, an error is raised from dpm_solver.py at its calls to torch.linspace: torch.linspace isn't supported for FP16 when running on the CPU (as that part of the DPM Solver does).
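For reference, the failure can be reproduced outside the script (the exact error text may differ between PyTorch versions, but on mine it's a RuntimeError saying linspace isn't implemented for 'Half'):

import torch

torch.set_default_tensor_type(torch.HalfTensor)  # same global default as in my txt2img.py change
torch.linspace(0., 1., 10)                       # fails: CPU linspace has no Half implementation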
As a workaround I have modified those lines to specify dtype=torch.float explicitly, like this:
self.t_array = torch.linspace(0., 1., self.total_N + 1, dtype=torch.float)[1:].reshape((1, -1))
This seems to be working okay, but I don't know if this is the best fix.
It would be good to get an official fix for this in the repo, along with a command-line option that officially supports running in FP16.
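For what it's worth, here is a rough sketch of what such an option could look like; the --half flag name and its placement are just my suggestion, not anything that currently exists in the repo:

parser.add_argument(
    "--half",
    action="store_true",
    help="load and run the model in FP16 to reduce VRAM usage",
)

# ... later, where the model is set up:
seed_everything(opt.seed)
if opt.half:
    torch.set_default_tensor_type(torch.HalfTensor)
config = OmegaConf.load(f"{opt.config}")
model = load_model_from_config(config, f"{opt.ckpt}")
if opt.half:
    model = model.half()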
Thanks
Use k-diffusion. Works great and rarely exceeds 8 GB of VRAM.