vluz

joined 1 year ago
[–] vluz@kbin.social 2 points 9 months ago

Got one more for you: https://gossip.ink/
I use it via a docker/podman container I've made for it: https://hub.docker.com/repository/docker/vluz/node-umi-gossip-run/general

[–] vluz@kbin.social 3 points 10 months ago (4 children)

I got cancelled too and chose Hetzner instead. Will not do business with a company that can't get their filters working decently.

[–] vluz@kbin.social 5 points 11 months ago (1 children)

Lovely! I'll go read the code as soon as I have some coffee.

[–] vluz@kbin.social 2 points 1 year ago

I do SDXL generation in 4GB at the extreme expense of speed, using a number of memory optimizations.
I've done this kind of stuff since SD 1.4, for the fun of it. I like to see how low I can push vram use.

SDXL takes around 3 to 4 minutes per generation including the refiner, but it works within those constraints.
The graphics cards used are hilariously bad for the task: a 1050 Ti with 4GB and a 1060 with 3GB of VRAM.

I have an implementation running on the 3GB card, inside a podman container, with no RAM offloading, 1 vCPU, and 4GB of RAM.
The graphical UI (Streamlit) runs on a laptop outside the server to save resources.

Working on an example implementation of SDXL as we speak, and also on SDXL generation on mobile.
That is the reason I've looked into this news; SSD-1B might be a good candidate for my dumb experiments.
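Roughly, the kind of setup I mean looks like this with the diffusers library. This is just an illustrative sketch, not my exact stack: the model ID and the particular mix of optimizations are assumptions, and the CPU-offload line is an extra option I don't use on the 3GB card.

```python
# Minimal low-VRAM SDXL sketch using Hugging Face diffusers.
# The model ID and the exact optimization mix are illustrative assumptions;
# each toggle trades generation speed for lower VRAM use.
def make_low_vram_sdxl_pipeline(model_id="stabilityai/stable-diffusion-xl-base-1.0"):
    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # fp16 weights halve memory vs float32
        variant="fp16",
        use_safetensors=True,
    )
    pipe.enable_attention_slicing()  # compute attention in slices instead of all at once
    pipe.enable_vae_slicing()        # decode latents in slices to cap VAE memory
    # Optional: stream weights to the GPU layer by layer (very slow, lowest VRAM).
    # I run without this on the 3GB card.
    pipe.enable_sequential_cpu_offload()
    return pipe

if __name__ == "__main__":
    pipe = make_low_vram_sdxl_pipeline()
    image = pipe("a lighthouse at dusk", num_inference_steps=30).images[0]
    image.save("out.png")
```

The slicing options cost relatively little speed; sequential offload is what pushes generation into the minutes-per-image range.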

[–] vluz@kbin.social 4 points 1 year ago (1 children)

Oh my Gwyn, this comment section is just amazing.

[–] vluz@kbin.social 6 points 1 year ago (1 children)

Goddammit! Don't tell that one, I use it to impress random people at parties.

[–] vluz@kbin.social 2 points 1 year ago

Not joking, although I understand it seems very silly at face value.
Dark Souls 3 PvP, specifically SL60+6 at gank town (after Pontiff).
It used to be my go-to wind down after a work day.
It made me smile and actually relaxed me enough to go to bed and sleep, especially after a hard day.

[–] vluz@kbin.social 6 points 1 year ago

HateLLM will be a smash. /s

[–] vluz@kbin.social -2 points 1 year ago
[–] vluz@kbin.social 1 points 1 year ago

That's wonderful to know! Thank you again.
I'll follow your instructions; this implementation is exactly what I was looking for.

[–] vluz@kbin.social 2 points 1 year ago (2 children)

Absolutely stellar write up. Thank you!

I have a couple of questions.
Imagine I have a powerful consumer GPU to throw at this solution, a 4090 Ti for the sake of example.
- How many containers can share one physical card, assuming max memory is not exceeded?
- What does one virtual GPU look like inside the container? Can I run standard stuff like PyTorch, TensorFlow, and CUDA code in general?

[–] vluz@kbin.social 5 points 1 year ago

Don't trust Brave, never will.
