this post was submitted on 17 Jul 2024
It was struggling harder than I was ;-)
I've noticed that these language models don't work well on articles with dense information and complex sentence structure. Sometimes they leave out the most important point.
They're useful as a TLDR, but they shouldn't be taken as fact, at least not yet, and probably not for the foreseeable future.
A bit off topic, but I read a comment in another community where someone asked ChatGPT something and confidently posted the answer. The problem: the answer was wrong. That's why it's so important to mark ~~AI~~ LLM-generated text (which the TLDR bots do).
Not calling ML models and LLMs "AI" would also help. (Now I've gone even further off topic.)
I think the Internet would benefit a lot if people backed up their information with sources!
Yeah, that's right. Having to post sources rules out using LLMs for the most part, since most of them do a terrible job of providing them, even when the information is correct for once.