L_Acacia

joined 1 year ago
[–] L_Acacia@lemmy.one 6 points 1 week ago

Revolt tries to be a Discord clone/replacement and suffers from some of the same issues. Matrix happens to have a lot of features in common, but is focused on privacy and security at its core.

[–] L_Acacia@lemmy.one 2 points 1 month ago

Mistral models don't have much of a filter, don't worry lmao

[–] L_Acacia@lemmy.one 3 points 1 month ago (4 children)

There is no chance they are the ones training it. It costs hundreds of millions to get a decent model. Seems like they will be using Mistral, who have scraped pretty much 100% of the web to use as training data.

[–] L_Acacia@lemmy.one 3 points 2 months ago

Buying a second-hand 3090/7900 XTX will be cheaper for better performance if you are not building the rest of the machine.

[–] L_Acacia@lemmy.one 2 points 2 months ago

You are limited by bandwidth, not compute, with LLMs, so an accelerator won't change the inference tokens/s.
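Back-of-the-envelope version of why: every generated token needs one full read of the weights, so decode speed is capped by memory bandwidth over model size. The numbers below are illustrative, not benchmarks.

```python
# Rough ceiling on single-stream decode speed for a memory-bandwidth-bound LLM.
# Numbers are illustrative assumptions, not measured results.

def max_tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Theoretical upper bound on decode throughput."""
    return bandwidth_gb_s / model_size_gb

model_gb = 4.0  # roughly a 7B model quantized to 4-bit

print(max_tokens_per_second(936, model_gb))  # RTX 3090 (~936 GB/s): ~234 tok/s ceiling
print(max_tokens_per_second(80, model_gb))   # dual-channel DDR5 (~80 GB/s): ~20 tok/s ceiling
```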

[–] L_Acacia@lemmy.one 6 points 2 months ago

I use similar features on Discord quite extensively (custom emotes/stickers) and I don't feel they are just a novelty. They let us have inside jokes / custom reactions to specific events, and I really miss them when trying out open source alternatives.

[–] L_Acacia@lemmy.one 3 points 2 months ago

To be fair to Gemini, even though it is worse than Claude and GPT, the weird answers were caused by bad engineering and not by bad model training. They were forcing the incorporation of Google search results even though the base model would most likely have gotten it right.

[–] L_Acacia@lemmy.one 4 points 7 months ago

WhatsApp is Europe's iMessage.

[–] L_Acacia@lemmy.one 4 points 7 months ago

You can take a look at the exllama and llama.cpp source code on GitHub if you want to see how it is implemented.
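Not their actual code, but the core idea both projects build on is block-wise weight quantization. A toy Python sketch of a symmetric 4-bit scheme (the real formats, e.g. llama.cpp's Q4 variants, pack the bits and handle offsets differently):

```python
import numpy as np

def quantize_block_4bit(weights: np.ndarray):
    """Quantize one block of weights to integers in [-8, 7] plus one float scale."""
    scale = max(float(np.abs(weights).max()) / 7.0, 1e-8)
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_block_4bit(q: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct approximate weights at inference time."""
    return q.astype(np.float32) * scale

block = np.random.randn(32).astype(np.float32)  # llama.cpp-style block size of 32
q, scale = quantize_block_4bit(block)
print(np.abs(block - dequantize_block_4bit(q, scale)).max())  # small reconstruction error
```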

[–] L_Acacia@lemmy.one 5 points 7 months ago

If you have good enough hardware, this is a rabbit hole you could explore: https://github.com/oobabooga/text-generation-webui/

[–] L_Acacia@lemmy.one 3 points 7 months ago

Around 48 GB of VRAM if you want to run it in 4-bit.
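Rough rule of thumb behind that kind of estimate, assuming ~0.5 bytes per parameter at 4-bit plus some headroom for KV cache and activations (the exact figure depends on the quant format and context length):

```python
def vram_estimate_gb(params_billion: float, bits: int = 4, overhead_gb: float = 2.0) -> float:
    """Very rough VRAM estimate: weight storage plus a margin for KV cache/activations."""
    weight_gb = params_billion * bits / 8  # 1B params at 8-bit is roughly 1 GB
    return weight_gb + overhead_gb

print(vram_estimate_gb(70))  # ~37 GB for a 70B model in 4-bit
print(vram_estimate_gb(7))   # ~5.5 GB for a 7B model in 4-bit
```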

[–] L_Acacia@lemmy.one 2 points 7 months ago

To run this model locally at GPT-4 writing speed you need at least 2 x 3090 or 2 x 7900 XTX. VRAM is the limiting factor in 99% of cases for inference. You could try a smaller model like Mistral-Instruct or SOLAR with your hardware, though.
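If you want to try one of those smaller models, a minimal sketch using the llama-cpp-python bindings (the model path and prompt are placeholders, point it at whatever quantized GGUF build you download):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # any 4-bit GGUF build you downloaded
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if it fits in VRAM
)

out = llm(
    "[INST] Explain why VRAM is the limiting factor for local LLM inference. [/INST]",
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```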
