this post was submitted on 03 Mar 2024

Stable Diffusion

[–] idkman@lemmy.dbzer0.com 1 points 6 months ago (1 children)

What I dislike about lower quantization is the quality degradation. In my limited experience, I find 7B models dumb (I've only tested Q4_K_M GGUF); they need to be given proper context before a constructive conversation can move forward (whether chat or instruct).

If this issue can be circumvented at lower quantization levels, I'm all in.

In the context of SD, going below fp16 would only make things faster at the cost of quality, and I personally like to go in depth with my prompts. For simpler prompts, sure; even Lightning and Turbo are good in that regard.

[–] turkishdelight@lemmy.ml 1 points 6 months ago

You can't shrink a model to 1/8 the size and expect it to run at the same quality. Quantization lets me move from a cloud GPU to my laptop's crappy CPU/iGPU, so I'm OK with that tradeoff.
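
The quality-vs-size tradeoff both comments describe can be seen in a toy example. This is a minimal sketch of naive symmetric round-to-nearest quantization with a single per-tensor scale; it is not the actual GGUF Q4_K_M scheme (which uses block-wise scales and more elaborate grouping), just an illustration of how reconstruction error grows as the bit-width shrinks.

```python
# Toy illustration: quantize a weight tensor to a signed N-bit grid,
# dequantize it back, and measure the mean absolute error.
# Assumption: simple symmetric per-tensor quantization, NOT real Q4_K_M.
import random

def quantize_dequantize(weights, bits):
    """Round weights to a signed `bits`-bit grid, then map back to floats."""
    qmax = 2 ** (bits - 1) - 1                   # e.g. 7 for 4-bit signed
    scale = max(abs(w) for w in weights) / qmax  # one scale for the whole tensor
    return [max(-qmax, min(qmax, round(w / scale))) * scale for w in weights]

random.seed(0)
weights = [random.gauss(0, 0.02) for _ in range(10_000)]  # toy weight tensor

for bits in (8, 4, 2):
    approx = quantize_dequantize(weights, bits)
    err = sum(abs(w - a) for w, a in zip(weights, approx)) / len(weights)
    print(f"{bits}-bit mean abs error: {err:.6f}")
```

Running this shows the error climbing as you drop from 8-bit to 4-bit to 2-bit, which is the degradation being discussed: fewer representable levels means coarser rounding of every weight.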