this post was submitted on 24 Aug 2023
15 points (80.0% liked)

Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ


Is this even still a thing? It seems to be pretty well dead. Poe-API shat the bed, GPT4FREE got shut down and its replacement seems pretty much non-functional, proxies are a weird secret-club thing (despite being based almost entirely on scraped corporate keys), etc.

I mean, this really does suck. I've gotten a lot out of bots, talking through stuff I don't have anyone to talk to about IRL.

all 11 comments
[–] mexicancartel@lemmy.dbzer0.com 3 points 10 months ago

Try huggingchat from huggingface

https://huggingface.co/chat/
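
If you'd rather hit it from a script than the web UI, something like this should work as a rough sketch. To be clear, this is my own assumption, not anything the comment or HuggingChat documents: it uses the `huggingface_hub` client against the hosted Inference API, and the model name is just one example of a hosted chat model.

```python
# Rough sketch (an assumption, not from the thread): querying a hosted open
# model via the Hugging Face Inference API instead of the HuggingChat web UI.
from huggingface_hub import InferenceClient

# A free HF account token works for rate-limited use; the model is an example.
client = InferenceClient(model="HuggingFaceH4/zephyr-7b-beta", token="hf_...")
reply = client.text_generation(
    "What free ChatGPT alternatives are still around?",
    max_new_tokens=200,
)
print(reply)
```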

[–] nothacking@discuss.tchncs.de 3 points 10 months ago (1 children)

Check out OpenAssistant, a free-to-use, open-source LLM-based assistant. You can even run it locally, so no one else can see what you're doing.
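
For anyone curious what "run it locally" looks like in practice, here's a minimal sketch using one of the published OpenAssistant checkpoints with `transformers`. The checkpoint name and prompt format are the ones used for that model family; whether your hardware can actually hold a 12B model is another question.

```python
# Minimal sketch of running a published OpenAssistant checkpoint locally.
# Note: a 12B model needs a lot of RAM/VRAM; this is illustrative only.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
)
# The oasst pythia models expect this prompter/assistant token format.
prompt = "<|prompter|>What should I read next?<|endoftext|><|assistant|>"
print(generator(prompt, max_new_tokens=200)[0]["generated_text"])
```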

[–] Ganbat@lemmyonline.com 2 points 10 months ago (1 children)

I have an R9 380 that I'm never going to be able to replace. Local isn't really an option.

[–] theangriestbird@beehaw.org 4 points 10 months ago (1 children)

My experience is with gpt4all (which also runs locally), but I believe the GPU doesn't matter because you aren't training the model yourself. You download a trained model and run it locally. The only cap they warn you about is RAM - you'll want at least 16 GB of RAM, and even then you might want to stick to a lighter model.
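
The gpt4all Python bindings make this a few lines; sketch below. The model filename is just one small example from the gpt4all catalog (my pick, not something named in the thread) - it downloads on first use, and inference runs on CPU, which is why RAM is the limit being described.

```python
# Hedged sketch of local CPU inference with the gpt4all Python bindings.
# The model file (an example choice, ~2 GB) is fetched on first run.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
with model.chat_session():
    print(model.generate("Why is the sky blue?", max_tokens=200))
```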

[–] Ganbat@lemmyonline.com 2 points 10 months ago* (last edited 10 months ago) (1 children)

No, LLM text generation is generally done on the GPU, as that's the only way to get any reasonable speed. That's why there's a specifically-made Pyg model for running on CPU. That said, one generation can take anywhere from five to twenty minutes on CPU. It's moot anyway, as I only have 8 GB of RAM.
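
For context on why 8 GB is the dealbreaker (back-of-the-envelope numbers of mine, not the commenter's): for CPU inference the whole model has to sit in RAM, and weight size is roughly parameter count times bits per weight.

```python
# Rough sizing math: in-memory size of the weights alone, ignoring
# activations, KV cache, and OS overhead (all assumptions, not thread data).
def model_size_gb(params_billion: float, bits_per_weight: int) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit: ~{model_size_gb(7, bits):.1f} GB")
# 16-bit: ~13.0 GB, 8-bit: ~6.5 GB, 4-bit: ~3.3 GB - so on an 8 GB machine,
# only an aggressively quantized model leaves room for anything else.
```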

[–] theangriestbird@beehaw.org 4 points 10 months ago

I'm just telling you, it ran fine on my laptop with no discrete GPU 🤷 RAM seemed to be the only limiting factor. But yeah, if you're stuck with 8 GB, it would probably be rough. I mean, it's free, so you could always give it a shot? I think it might just use your page file, which would be slow but might still produce results?

[–] wviana@lemmy.eco.br 0 points 10 months ago