hau
is there an easy way to pull new updates / new components that get added to the library, since it isn't released as a package?
what's the supported context window length for each model?
- stateful: load models once and then generate repeatedly - maybe also use torch.compile?
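the "load once, generate many" idea above could look something like this - a minimal sketch with hypothetical names (`StatefulPipeline`, `load_model`), not the library's actual API; the `torch.compile` call is shown as an optional comment:

```python
class StatefulPipeline:
    """Keeps a loaded model in memory so repeated generate() calls skip reloading."""

    def __init__(self, load_fn):
        self._load_fn = load_fn  # function that builds/loads the model (expensive)
        self._model = None

    def _get_model(self):
        if self._model is None:
            self._model = self._load_fn()  # runs only on the first call
            # optionally: self._model = torch.compile(self._model)
        return self._model

    def generate(self, prompt):
        return self._get_model()(prompt)


# demo with a counter standing in for an expensive model load
load_count = 0

def load_model():
    global load_count
    load_count += 1
    return lambda prompt: f"image<{prompt}>"

pipe = StatefulPipeline(load_model)
pipe.generate("a cat")
pipe.generate("a dog")
print(load_count)  # the model was loaded only once
```

with this pattern the load cost (and any compile warm-up) is paid once per process instead of once per generation.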
for those with limited VRAM, any plans to support quantized versions of models?
I'm getting `TypeError: streamdiffusion.acceleration.tensorrt.compile_unet() got multiple values for keyword argument 'opt_batch_size'` - any idea why?
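for context, this class of TypeError usually means the same argument reaches the function both positionally and by keyword. a minimal repro with a stand-in function (the signature is an assumption, not the real `compile_unet`):

```python
def compile_unet(unet, opt_batch_size=1):
    """Stand-in whose signature is shaped like the error suggests (assumption)."""
    return opt_batch_size

msg = ""
try:
    # 8 fills opt_batch_size positionally, then the keyword supplies it again
    compile_unet("unet", 8, opt_batch_size=4)
except TypeError as e:
    msg = str(e)
print(msg)  # ...got multiple values for argument 'opt_batch_size'
```

so the fix is usually to check the call site for an extra positional argument that overlaps with `opt_batch_size` (e.g. after a library update changed the parameter order).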
Would it be possible to export a nn.Module that uses odeint under the hood to ONNX?
webgpu?
do you think switching the engine to WebGPU would help with performance? Babylon is supposed to support WebGPU out of the box