**Edit:** Thought it more appropriate to update an old post than start a new one! Since I seem to enjoy suffering under Windows, I've still been looking for any possible optimisation of Stable Diffusion models with AMD GPUs that doesn't require Linux. This article seems quite promising: https://community.amd.com/t5/gaming/how-to-running-optimized-automatic1111-stable-diffusion-webui-on/ba-p/625585 It points to an optimised use of DirectML, as @jasparagus@lemmy.world mentioned, but if performance is as good as claimed I'd hope for more widespread adoption.
A few things have me curious though, and the more knowledgeable of you might answer faster than my trial and error attempts!
- I understand there's a general need to convert the model to ONNX so that it isn't using PyTorch, though the article (under section 2) notes that quantisation converts 'most layers from FP32 to FP16'. I'm guessing in most cases it wouldn't even be noticeable, but wouldn't that mean an overall reduction in the model's quality?
- Are ONNX versions of models (like SDXL) available anywhere, so that the conversion step could be skipped entirely and the model substituted straight into section 5 of the article? I assume not; the Hugging Face pages for SD/SDXL mention the ability to convert, but I've only seen the .safetensors files listed.
- Pure speculation now: would it ever be possible for A1111 to incorporate this process? I assume not, if models of a specific format are needed...
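On the first bullet: yes, FP32 → FP16 conversion does discard precision, but float16 still keeps roughly 3–4 significant decimal digits, which is usually within the noise for image generation. A minimal numpy sketch (the example values are my own, not from the article) shows what actually gets lost:

```python
# Illustrates what FP32 -> FP16 quantisation does to individual weights.
# float16 keeps ~3-4 significant decimal digits (vs ~7 for float32),
# has a max finite value of 65504, and flushes values below ~6e-8 to zero.
import numpy as np

w32 = np.array([0.1234567, 1e-5, 65504.0, 1e-8], dtype=np.float32)
w16 = w32.astype(np.float16)  # the same cast an ONNX FP16 converter performs per weight

for a, b in zip(w32, w16):
    print(f"fp32={float(a):<14.7g} fp16={float(b):<14.7g} abs err={abs(float(a) - float(b)):.3g}")
```

The tiny 1e-8 weight rounds to 0.0 and anything above 65504 would overflow to inf, which is why converters typically say "most layers" rather than all: numerically sensitive layers (e.g. normalisation) are often kept in FP32.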
In any case, I thought the article might be of interest to some of you. Out of curiosity I may try SD.Next and see if the experience differs greatly from A1111.
Hello world! Forgive an obvious question: I'm just hoping to find out whether any specific support for AMD GPUs on Windows has been confirmed? I've only seen it mentioned with regard to Linux specifically.
I'm sure it's a matter of time, particularly after SDXL 1.0 is released, but I'd appreciate any more information now all the same. Holding out hope it's a little smoother than the SD 1.5 forks. Thank you!
Which GPU specifically?
Hi, I have a 6800 XT.
Me too. I recommend using ROCm on Linux. Windows and DirectML is painfully slow in comparison.
Thanks. Sounds like I've got more reading to do 😃
I used this video, although it is a bit frustrating at times. https://youtu.be/2XYbtfns1BU