
[bug]: 5.0 release ignores quantization

Open zethfoxster opened this issue 1 year ago • 2 comments

Is there an existing issue for this problem?

  • [X] I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

rtx 4090

GPU VRAM

24g

Version number

5

Browser

chrome

Python dependencies

No response

What happened

Loading fp8 models uses the same amount of VRAM as loading the full unquantized versions of Flux, capping out my 24 GB.

What you expected to happen

It should run at about 20 GB or less, depending on which of the quantized (Q) models I choose.
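For context, a rough back-of-the-envelope check of the expected footprint. This is only a sketch: it assumes a transformer of roughly 12 billion parameters (the approximate size of Flux dev's transformer) and ignores the text encoders (CLIP/T5), the VAE, activations, and CUDA overhead, all of which add to real usage.

```python
# Approximate weight storage by dtype for a ~12B-parameter model.
# fp16/bf16 use 2 bytes per parameter, fp8 uses 1 byte.
BYTES_PER_PARAM = {"fp16/bf16": 2, "fp8": 1}

def weight_footprint_gib(num_params: float, dtype: str) -> float:
    """Return the approximate weight storage in GiB for the given dtype."""
    return num_params * BYTES_PER_PARAM[dtype] / 1024**3

params = 12e9  # assumption: rough parameter count, not an exact figure
for dtype in BYTES_PER_PARAM:
    print(f"{dtype}: ~{weight_footprint_gib(params, dtype):.1f} GiB")
```

If the fp8 weights are upcast on load (or the unquantized checkpoint is used instead), the weight footprint roughly doubles, which together with the text encoders would fill a 24 GB card as described in this report.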

How to reproduce the problem

No response

Additional context

No response

Discord username

No response

zethfoxster · Sep 25 '24

Yeah, because Invoke 5.0 cannot read the internal CLIP and T5 models inside the fp8 checkpoint. Generation is painfully slow now. https://github.com/invoke-ai/InvokeAI/issues/6940

LiJT · Sep 25 '24

I'm running into the same situation; we need fp8 support.

RCBelmont · Sep 25 '24

I noticed this as well.

Vigilence · Nov 06 '24