this post was submitted on 02 Jan 2025
18 points (72.5% liked)
Asklemmy
I'm not talking about AI in general here. I know some form of AI has been around for ages, and ML definitely has field-specific use cases. The objective here is to discuss how people feel about gen-AI-produced content in contrast to human-made content, perhaps also pondering the hypothetical scenario in which the gen AI infrastructure is used ethically. I hope the notion of generative AI is reasonably clear, but it includes LLMs, image generators (not computer vision), audio generators, and any multimodal combination of these.
That's a good start, but where do you draw the line? If I use a template, is that AI? What if I write a letter based on that template and use a grammar checker to fix the grammar? Is that AI? And then I use a thesaurus tool to automatically beef up the vocabulary. Is that AI?
In other words, you can't just say "LLM" and assume the term draws a clear line. LLMs have been around and used for various things for quite a while, and some of those uses don't feel unnatural at all.
So I'm afraid we still have a definitional problem. And I don't think it is easy to solve. There are so many interesting edge cases.
Let's consider an old one: weather forecasting. The forecasts are, in a sense, produced by AI models. Or just models, if you don't want to call them AI; it doesn't matter. That information can then be displayed in a table, automatically, on a website. That's a script, not really AI, but you could argue the whole system now counts as AI. So then let's use an LLM to put it in paragraph form, since the table is boring. I believe Weather.com did exactly this recently and labeled it an "AI forecast". But is this really an LLM being used in a new way? Is it actually harmful when it's essentially the same general process we've had for decades? Of course it's benign. But it is an LLM, technically...
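To make the "script, not really AI" step concrete, here is a minimal sketch of the decades-old version of that pipeline: structured forecast rows get formatted into a table and into prose by plain string templates, no LLM involved. The data and function names are hypothetical, invented for illustration.

```python
# Hypothetical forecast rows, as they might come out of a weather model.
forecasts = [
    {"day": "Mon", "high_c": 4, "low_c": -2, "condition": "snow"},
    {"day": "Tue", "high_c": 7, "low_c": 1, "condition": "cloudy"},
]

def as_table(rows):
    """Render forecast rows as a fixed-width text table (the classic website view)."""
    lines = [f"{'Day':<5}{'High':>6}{'Low':>6}  Condition"]
    for r in rows:
        lines.append(
            f"{r['day']:<5}{r['high_c']:>5}C{r['low_c']:>5}C  {r['condition']}"
        )
    return "\n".join(lines)

def as_paragraph(rows):
    """Turn the same rows into prose with a plain template -- no LLM required."""
    parts = [
        f"{r['day']} will be {r['condition']} with a high of "
        f"{r['high_c']}C and a low of {r['low_c']}C"
        for r in rows
    ]
    return ". ".join(parts) + "."

print(as_table(forecasts))
print(as_paragraph(forecasts))
```

The point is that the output of the template version and of an LLM version can be indistinguishable to the reader, which is exactly why labeling the system "AI" or "not AI" is so slippery.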