this post was submitted on 29 Jul 2023
35 points (97.3% liked)

Stable Diffusion

Hello everyone!

My name's Benjamin, and I'm the developer of ENFUGUE, a self-hosted Stable Diffusion Web UI built around an intuitive canvas interface, while still trying to deliver the power and deep customization of the popular tab-and-slider web UIs.

I'm taking it out of Alpha and into Beta with the v0.2 release, which brings SDXL support while still maintaining most of the 1.5 feature set by allowing you to configure multiple checkpoints for various diffusion plans. It also has a ton of changes since 0.1 suggested by other users, like the ability to point ENFUGUE to the directories of other Web UI installations to share models and other files.

This is not monetized software in any way; I simply built the tool I wanted to use, and wanted to share it. Thanks for taking a look!

top 22 comments
[–] Stampela@startrek.website 4 points 1 year ago (1 children)

Saving this for later; you got my attention with the combo of Mac and portable. Anything I can get rid of by deleting a single folder, in case I don't like it, is a great plus.

[–] benjamin@lemmy.dbzer0.com 2 points 1 year ago (1 children)

I had hoped that would sell a few people on it! I agree entirely on the motivation - I was able to test it on work machines without even needing to log out of an unprivileged user thanks to the portable install working nicely for me. MPS is of course slower than an equivalent CUDA device, but I was able to ensure the entire E2E test plan passed on Mac, including all ControlNets, inpainting, schedulers, upscaling, etc.

If you want SDXL on Mac, your mileage will definitely vary. I ran out of memory while loading the checkpoint on my M1 Pro 12GB. It might have worked if I'd allotted it a dangerously large amount of memory, but I could also have crashed the machine, and I didn't feel like risking that. In theory there's nothing stopping it from working; you just might need an M2 Max to get it off the ground.

Please let me know if you encounter any unforeseen issues!

[–] Stampela@startrek.website 1 points 1 year ago (1 children)

So, here are my findings. Easy install, portable enough (having to specify a bunch of folders and create them manually could be better), and at first sight the interface is nice. And that's really where I stop, because it took... what, I think it was over 5 minutes to initialize the first render? All to error out (gracefully! Kudos for that) because it was out of memory. I couldn't find anything else to close, so basically on the M2 with 8GB it won't run. That was 512x512 with an SD 1 model. https://apps.apple.com/app/id6444050820 works on all sorts of Apple devices, is free, and is kept very updated... I mention it because it's fast (14 seconds for 512x512 at 20 steps with a V1 model on the M2), and it can do SDXL with the refiner even on 8GB (once; I doubt many will do it a second time, but a couple of minutes for 1024x1024 at 20 steps is still "doable"). I'll stick with that on Apple stuff :)

I had hoped to try it on the Steam Deck, but I saw no mention of AMD at all. Still! I'm probably going to try the TensorRT stuff on Windows; my 3060 should do it, and I don't know how to do it with Automatic1111 XD

[–] benjamin@lemmy.dbzer0.com 2 points 1 year ago* (last edited 1 year ago) (1 children)

I can't thank you enough for linking that!!!

It made me realize that there must be a way to effectively downcast without getting NaNs. There's just no way that app could work on the devices it does with SDXL without having figured it out, so I scoured the web for references and dug in to figure it out myself. I'm happy to say I got it working on my M1 Pro! That also means memory usage is cut down by about a third, and speed is up by about 50% on Mac in general, thanks to being able to work in half-precision instead of full.
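
For the curious, the gist of the fix in diffusers terms is loading the pipeline in float16 before moving it to the MPS device. This is a minimal sketch, not necessarily ENFUGUE's exact internals, and the model ID and prompt are just placeholders:

    import torch
    from diffusers import StableDiffusionPipeline

    # Load the weights in half precision, then move to Apple's Metal backend.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # placeholder model ID
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("mps")
    pipe.enable_attention_slicing()  # trades a little speed for lower peak memory

    image = pipe("a lighthouse at dusk", num_inference_steps=20,
                 height=512, width=512).images[0]
    image.save("out.png")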

I was able to do the same 512x512 at 20 steps in 17 seconds using a fine-tuned model (Realistic Vision 5). SDXL took its sweet time, coming in at almost 3 minutes, so yeah, probably not my usual workflow - but SDXL isn't even in my usual workflow on my 3090 Ti Windows/Ubuntu hybrid machine. I still use TensorRT and fine-tuned SD 1.5 models - 512x512 is roughly 3 seconds on that, but the beautiful part is doing a 2000-iteration upscale, where TensorRT caps out at ~30 it/s on Windows or ~40 it/s on Linux.

I have a little more testing to do on this, but I'm going to release a 0.2.1 build in the next couple of days. I would love it if you would give it another shot - I'll send you a message with a link, if that's okay with you!

With respect to AMD - that's a complicated answer. I'm working with some AMD users to test out the combination of dependencies that will work for them. I'm not sure anyone has managed to successfully use the GPU for AI on the Steam Deck, but I do know ROCm is officially unsupported on the Deck and will be for the foreseeable future. I've seen people successfully run Stable Diffusion with CPU inference on it, which ENFUGUE will allow - but those same people reported it took half an hour to generate a single image, so I'm not sure it's worth trying.

[–] Stampela@startrek.website 2 points 1 year ago (1 children)

Really glad that could help! Since I've got your attention: I couldn't get TensorRT to work on Windows. There's at least a 50% chance I didn't install it properly, BUT at the same time your GUI was showing my 1650 instead of the 3060. After looking for some setting for CUDA devices and finding none, I gave up. Generation times and usage pointed clearly at a normal 3060 task, even though the GUI showed the temperature for the 1650.
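
For anyone debugging the same thing, here's a quick way to see which CUDA devices PyTorch enumerates - a minimal sketch, assuming a stock PyTorch install:

    import torch

    # List every CUDA device PyTorch can see, in enumeration order.
    for i in range(torch.cuda.device_count()):
        print(i, torch.cuda.get_device_name(i))

    # To hide one card entirely, set CUDA_VISIBLE_DEVICES before launching,
    # e.g. CUDA_VISIBLE_DEVICES=1 to expose only the second device above.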

But anyway! One thing I'd like to ask for (now that there's a viable way to use it on my Mini) is an option to allow other computers to access it, and better yet an API like in Automatic1111. That way I could run some kind of LLM on the 3060 (I like Pygmalion 6B) and Stable Diffusion on the Mac.

All that aside, thanks for making a viable alternative to Draw Things. As much as I like it and its interface, choice is always good... and yours has the potential to be usable remotely :D

[–] benjamin@lemmy.enfugue.ai 2 points 1 year ago (1 children)

I'm back! 0.2.1 is now released, which defaults macOS to half-precision. It also includes SDXL LoRA and ControlNet support, which I did get working on my Mac. :) It's available at https://github.com/painebenjamin/app.enfugue.ai/releases/tag/0.2.1.

As for the API - there always was one! It was just never documented until now. There are still a few endpoints left to document, but the big ones are covered. Documentation is at https://github.com/painebenjamin/app.enfugue.ai/wiki/JSON-API.
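
As a rough illustration of driving it from another machine, here's a sketch using Python's requests - the port, endpoint path, and payload keys below are hypothetical placeholders, so refer to the wiki above for the real schema:

    import requests

    # Hypothetical sketch: the endpoint, port, and payload are illustrative only.
    resp = requests.post(
        "http://192.168.1.50:45554/api/invoke",
        json={"prompt": "a lighthouse at dusk", "width": 512, "height": 512},
    )
    resp.raise_for_status()
    print(resp.json())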

[–] Stampela@startrek.website 1 points 1 year ago (1 children)

So, feedback. To begin with, it works! That's a massive improvement and allowed me to actually try it. Civitai.com downloading works quite nicely, and... the generation is kinda slow. Slower than my iPhone 13 Pro with Draw Things: a minute, give or take 10 seconds. The poor phone crunches the same model in 30-something seconds.

Don't get me wrong, I appreciate that it works at all, and it's also easy to set up, but there's a fair amount of performance left on the table. Depending on how much work there is to do, it might make sense to chase further performance, but that's something only you can decide :D

[–] benjamin@lemmy.dbzer0.com 1 points 1 year ago (1 children)

You're the best, thanks so much for trying it and getting it working!

I don't think it's ever not worth chasing improved performance, so I'm definitely going to continue looking for optimizations. While cannibalizing code from Comfy and A1111, I saw a lot (and I mean a lot) of shortcuts being taken over the official Stability code release that improve performance in specific situations. I'm going to see how I can turn some of those shortcuts into options users can tune to their hardware.

This latest release has attracted some more developer attention (and also some inquiries from hosting providers about offering ENFUGUE in the cloud!). I'm hoping some of the authors of those improvements find their way to the ENFUGUE repository and are inspired to contribute.

With that being said, TensorRT will definitely knock your socks off in terms of speed if you haven't used it before and you've got the hardware for it. I'd be happy to troubleshoot whatever went wrong with your Windows install - there should be up to three enfugue-engine.log files in your ~/.cache/ directory with more information about what went wrong, if you'd like to share them here (or we can start a GitHub thread, if you have an account).

Thank you again for all your help!

[–] Stampela@startrek.website 0 points 1 year ago (1 children)

Now that I knew where to look, I did some fixing by myself! The main issue was that I had CUDA 10 and 12, but not 11. Then, after going insane over that tiny difference... I landed on something I lack the knowledge to decipher: "PyInstallerImportError: Failed to load dynlib/dll 'C:\Program Files\NVIDIA GPU Computing Toolkit\TensorRT-8.6.1.6\lib\nvinfer_plugin.dll'. Most likely this dynlib/dll was not found when the application was frozen."

All I can say is that the file is there.

[–] benjamin@lemmy.enfugue.ai 2 points 1 year ago (2 children)

Hey! I am able to reproduce that error by using the CUDA 12 version of TensorRT.

PyInstallerImportError: Failed to load dynlib/dll 'C:\\TensorRT-8.6.1.6\\lib\\nvinfer_plugin.dll'. Most likely this dynlib/dll was not found when the application was frozen.

Please make sure you downloaded the top file here, not the bottom.

I was able to modify my PATH to point to the right TensorRT, then restart the server, and it worked for me (no machine restart needed).
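
For anyone else hitting this, here's a quick way to check which TensorRT is on your PATH and whether the plugin DLL actually loads - a minimal sketch; adjust the install path to match yours:

    import ctypes
    import os

    # Show every TensorRT directory currently on PATH.
    for entry in os.environ["PATH"].split(os.pathsep):
        if "tensorrt" in entry.lower():
            print("On PATH:", entry)

    # Loading the plugin directly raises OSError if it, or one of its
    # dependencies (e.g. the CUDA 11 runtime), can't be found.
    ctypes.CDLL(r"C:\TensorRT-8.6.1.6\lib\nvinfer_plugin.dll")
    print("nvinfer_plugin loaded OK")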

Please let me know if that works for you :)

[–] Stampela@startrek.website 1 points 1 year ago (1 children)

My request was dumb - the UI is glitching a little, but hot damn, 12 iterations per second! Impressive.

[–] benjamin@lemmy.dbzer0.com 2 points 1 year ago (1 children)

YOU GOT IT WORKING?

You are the first person to stick with it through to the end and get it working. Seriously. Thank you so much for confirming that it works on some machine besides mine and monster servers in the cloud.

The configuration is obviously a pain point, but running TensorRT on Windows at all puts us on the cutting edge. I'm hoping Nvidia makes it easier soon, or at least relaxes the license so I'm not running afoul of it if I redistribute the required DLLs (for comparison, Nvidia publishes TensorRT binary libraries for Linux directly on pip, no license required).

It's also a pain that 11.7 is the best CUDA version for Stable Diffusion with TensorRT. I couldn't get 11.8, 12.0, or 12.1 to work at all on Windows with TensorRT (they work fine on their own). On Linux they would work, but at best they gave me the same speed as regular GPU inference, and at worst they were slower, completely defeating the point.
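
If you want to sanity-check the pairing on your own machine, here's a minimal sketch, assuming PyTorch and the tensorrt Python package are both installed:

    import torch
    import tensorrt

    # Confirm which CUDA runtime PyTorch was built against,
    # and which TensorRT build is importable alongside it.
    print("PyTorch CUDA:", torch.version.cuda)   # e.g. "11.7"
    print("TensorRT:", tensorrt.__version__)     # e.g. "8.6.1"
    print("Device:", torch.cuda.get_device_name(0))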

[–] Stampela@startrek.website 1 points 1 year ago

Not going to lie, I almost gave up a few times. But I can also be stubborn... Anyway, since this is apparently the first confirmation that it works, it's probably helpful if I mention that it's a 12GB 3060. :)

[–] Stampela@startrek.website 1 points 1 year ago

... I'll check later, but I do remember grabbing the "right one" as I had version 12, so this might very well be it.

[–] Bishma@discuss.tchncs.de 3 points 1 year ago (1 children)

I will give this a try later when I've got some time. This looks like it covers all my gripes about the Easy Diffusion UX.

[–] benjamin@lemmy.dbzer0.com 2 points 1 year ago

If you find the time, please let me know how it goes! And that was specifically a part of my motivation; it was mostly gripes with Easy Diffusion/A1111/Vlad's/SD.Next/Invoke/Comfy that drove the original design.

[–] fhein@lemmy.world 2 points 1 year ago (1 children)

Looks cool! If it's browser based, have you got any plans to dockerize it?

[–] benjamin@lemmy.dbzer0.com 2 points 1 year ago (1 children)

Yes! That is the very next big feature to tackle after adding macOS support (and the surprise that was needing to add SDXL support). I've been trying to weave between addressing bug reports and feature requests while also trying to understand what hardware people are actually trying to use. It seems I've covered the vast majority of use cases for casual tinkerers and self-hosters; now it's time to make the Docker build for advanced users and people who want to run this on a remote server.

In theory, the portable installation should "just work" in Docker, though the Nvidia runtime could cause trouble - but I'll publish Docker containers to the repository starting with 0.2.1.

Thank you for the feedback!

[–] fhein@lemmy.world 2 points 1 year ago (1 children)

Sounds good, looking forward to trying it! Personally, I like to use Docker on my Linux desktop PC for web-server-based apps: it makes it easy to run and update everything without relying on custom installers and updaters, and it usually gives better control over which port to use and where to store data. I've been using AbdBarho's Docker files for A1111 and ComfyUI, which make it very easy to share models and other large files between the two.

I've used CUDA in Docker quite a lot, and it has even helped me solve problems - e.g. some llama apps needed the CUDA toolkit, which wasn't available for Fedora 38. I think the biggest challenge with Docker is making sure the right dependencies get built into the image and that all run-time data is confined to mounted volumes. If you need any help with Docker, let us know; I'm not some kind of super pro, but I have a fair amount of experience with it.

If you're collecting info about users' hardware: I have a Ryzen 7 7700X, 32GB of RAM, and an RTX 3080 with 12GB of VRAM.

[–] benjamin@lemmy.dbzer0.com 2 points 1 year ago

Hi! The Docker version is out! 😁 Just run docker pull ghcr.io/painebenjamin/app.enfugue.ai:latest to get it. There's more documentation on ports, volumes, etc. on the wiki.
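
For reference, a typical invocation might then look something like this - the port mapping here is an assumption on my part, so check the wiki for the documented flags and volume mounts:

    docker run --rm --gpus all -p 45554:45554 ghcr.io/painebenjamin/app.enfugue.ai:latest

The --gpus all flag requires the Nvidia container toolkit on the host.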

[–] FactorSD@lemmy.dbzer0.com 1 points 1 year ago (1 children)

It does seem to work fairly well, although I will say that it doesn't fit my workflow at all, so I haven't done a lot of testing. I do think there are some UI things you could look at, though. Engine and Dimensions shouldn't be minimizable lists, because the fields only take up as much space as the labels do. Also, your tooltips are outrageously large, covering about 75% of the width of a 1080p monitor, which makes them quite hard to actually read.

[–] benjamin@lemmy.dbzer0.com 1 points 1 year ago

Thank you so much for the feedback! I know there is always work to be done on the UI; I'm too close to it at this point to view it objectively, so I really do appreciate it. I'm going to see what I can do to make things less snug.

I would love to hear a little more about your workflow, if you find the time to indulge me. I'm guessing it's something I've never even thought of, and there could be something in there that ENFUGUE could benefit from. Don't feel bad if you want to keep your secrets instead, though :)

Cheers!