
cross-posted from: https://lemmy.ml/post/2811405

"We view this moment of hype around generative AI as dangerous. There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers and impact consumers by creating lesser quality products and allowing more erroneous outputs. For example, earlier this year America’s National Eating Disorders Association fired helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors. "

radarsat1@lemmy.ml 1 points 1 year ago

I would argue that what's going on is that they are compressing information. And it just so happens that the most compact way to represent a generative system (mathematical relations, for example) is to model its generative structure. For instance, it's much more efficient to represent addition by figuring out how to add two numbers than by memorizing every possible pair of numbers and their sum. So implicit in compression is the need to discover generalizations. But the network has limited capacity and limited "looping power", and it doesn't really know what a number is, so it has to figure all of this out from examples, and as a result it often arrives at approximate versions of these generalizations. It will therefore appear intelligent right up until it hits something that doesn't quite fit whatever approximation it came up with, and then it suddenly gets something wrong that seems outside the pattern you thought it understood, because it's hard to predict which concepts it has captured at a deep level and which it only has a surface grasp of.

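To put rough numbers on the addition example, here's a toy Python sketch (purely my own illustration, nothing to do with how a network actually stores anything): the memorized table grows quadratically with the range it covers and still fails outside it, while the rule is a few lines and handles any pair.

```python
# "Memorization": store every sum of pairs drawn from 0..99.
lookup = {(a, b): a + b for a in range(100) for b in range(100)}
print(len(lookup))          # 10000 entries, and still useless for 100 + 100

# "Generalization": one rule covers every pair, seen or unseen.
def add(a, b):
    return a + b

print(add(123456, 654321))  # works far outside anything "memorized"
```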
In other words, I think it is "kind of" thinking, if thinking can be considered a kind of computation. It doesn't always capture concepts completely, because it's not quite good enough at generalizing what it's learned, but it's just good enough to appear really smart within a certain distribution of inputs.

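As a rough analogy for that last point (again just a toy sketch I'm making up, not an actual model): fit a simple curve to sin(x) on a limited range, and it looks accurate inside that range while falling apart outside it.

```python
import numpy as np

# Fit a degree-5 polynomial to sin(x) on [0, pi]: the "training distribution".
x_train = np.linspace(0, np.pi, 50)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=5)

# Inside the fitted range the approximation looks very smart...
print(np.polyval(coeffs, 1.0), np.sin(1.0))
# ...outside it, the same approximation confidently gets it wrong.
print(np.polyval(coeffs, 2 * np.pi), np.sin(2 * np.pi))
```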
Which, in a way, isn't so different from us, but is maybe not the same as how we learn and naturally integrate information.