this post was submitted on 06 Mar 2023
12 points (92.9% liked)

Technology

top 9 comments
[–] shreddy_scientist@lemmy.ml 2 points 2 years ago (1 children)

The fact that ChatGPT doesn't provide sources is detrimental. We're in the early stages, where errors are not uncommon, and citing the information used to generate its responses would help instill confidence in them. This is a primary reason I prefer perplexity.ai over ChatGPT, plus there's no need to create an account with Perplexity either!

[–] Ephera@lemmy.ml 2 points 2 years ago (2 children)

I mean, I just looked at one example: https://www.perplexity.ai/?s=u&uuid=aed79362-b763-40a2-917a-995bf2952fd7

While it does provide sources, it partly didn't actually take its information from them (the second source contradicts the first), or it lost a lot of context along the way; for example, the first source only talks about black ants, not all ants.

That's not the point I'm trying to make here. I just didn't want to dismiss your suggestion without even looking at it.

Because yeah, my understanding is that we're not in "early stages". We're in the late stages of what these Large Language Models are capable of. And we'll need significantly more advances in AI before they truly start to understand what they're writing.
Because these LLMs are just chaining words together that seem to occur together in the wild. Some fine-tuning of datasets and training techniques may still be possible, but the underlying problem won't go away.
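The "chaining words that seem to occur together" idea can be illustrated, very loosely, with a toy Markov-chain text generator. Real LLMs are transformers predicting tokens from learned statistics over huge corpora, not literal word chains, so this is a caricature of the claim rather than a description of ChatGPT; the function names and corpus here are made up:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the training text."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=8):
    """Emit words by repeatedly sampling a word that has followed the current one."""
    word = start
    output = [word]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Every generated sentence is locally plausible (each word pair occurred in the training text), yet the generator has no idea what any of it means; that is the gap the comment above is pointing at, just at a vastly smaller scale.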

You need to actually understand what these sources say to compare them and determine that they are conflicting and why. It's also not enough to pick one of these sources and summarize it. As a human, it's an essential part of research to read multiple sources to get a proper feel for the 'correct' answer and why there is not just one correct answer.

[–] KLISHDFSDF@lemmy.ml 4 points 2 years ago (1 children)

You could say human brains are also "just chaining words together that seem to occur together in the wild". What is thinking, if not bringing together ideas (words) that we've learned from our environment (the wild)?

[–] Ephera@lemmy.ml 2 points 2 years ago (1 children)

The major difference I see is that current AIs are only narrow AI. They have very few sensors and are optimized for very few tasks.

Broad AI or human intelligence involves tons of sensors/senses which may not directly be involved in a given task, but still allow you to judge its success independently. We also need to perform many different tasks, some of which may be similar to a new task we need to tackle.
And humans spend several decades running around with those senses in different situations, performing different tasks, constantly evaluating their own success.

For example, writing a poem. ChatGPT et al. can do that. But they can't listen to someone reading their poem, to judge how the rhythm of the words activates their reward system for successful pattern predictions, like it does for humans.

They also don't have complex associations with certain words. When we speak of a dreary sky, we associate coldness from sensing it with our skin, and we associate a certain melancholy, from our brain not producing the right hormones to keep us fully awake.
A narrow AI doesn't have a multitude of sensors + training data for it, so it cannot have such impressions.

[–] blank_sl8@lemmy.ml 1 points 2 years ago

Google especially is working on multimodal models that do language, image, audio, etc. understanding in the same model. Their latest work, PaLM-E, demonstrates that learning in one domain (e.g. images) can indirectly benefit the model's performance in other domains (e.g. text) without additional training in those domains.
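PaLM-E's actual architecture is far more involved, but the basic idea of feeding vision into a language model can be sketched as projecting image features into the same embedding space as text tokens, then running both through one sequence model. Everything below (the dimensions, the stand-in encoder, the embedding table) is a made-up illustration, not PaLM-E's real design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, nothing like PaLM-E's real sizes).
IMAGE_DIM, TOKEN_DIM, N_PATCHES = 64, 32, 4

def encode_image(image_patches):
    """Stand-in vision encoder: projects each image patch into token space."""
    w_vision = rng.normal(size=(IMAGE_DIM, TOKEN_DIM))
    return image_patches @ w_vision  # shape (N_PATCHES, TOKEN_DIM)

def embed_tokens(token_ids, vocab_size=100):
    """Stand-in language-model embedding table lookup."""
    table = rng.normal(size=(vocab_size, TOKEN_DIM))
    return table[token_ids]  # shape (len(token_ids), TOKEN_DIM)

# Image vectors are prepended to the text embeddings, so one sequence
# model processes both modalities together (the "soft prompt" idea).
image = rng.normal(size=(N_PATCHES, IMAGE_DIM))
text = embed_tokens(np.array([5, 17, 42]))
sequence = np.concatenate([encode_image(image), text], axis=0)
print(sequence.shape)  # (7, 32): 4 image "tokens" + 3 text tokens
```

Because the shared sequence model sees image and text vectors in the same space, gradient updates from one modality can shape representations the other modality also uses, which is roughly why cross-domain transfer is possible at all.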

[–] shreddy_scientist@lemmy.ml 0 points 2 years ago

In your search result, click "view detailed" in the upper right; that should be more what you're looking for. I hope setting detailed view as the default becomes an option.

[–] Ephera@lemmy.ml 2 points 2 years ago (1 children)

Will letting a large language model handle the boilerplate allow writers to focus their attention on the really creative parts?

I especially don't see why there should be boilerplate in your text at all. If a word literally does not change the meaning or perception of your text, why are you writing that word? And if it does change the meaning or perception, then you should be the one choosing it.

[–] KLISHDFSDF@lemmy.ml 3 points 2 years ago (1 children)

While it is true that using unnecessary words can clutter your writing and make it less effective, it's also important to consider the context and purpose of your writing. In some cases, boilerplate language or repetitive phrasing can serve a useful purpose, such as providing clarity or emphasizing a point.

Additionally, it's important to consider your audience and their familiarity with the subject matter. While a particular word or phrase may seem superfluous to you, it may be necessary for readers who are less familiar with the topic.

Ultimately, the goal of effective writing is to communicate your message clearly and efficiently, and sometimes that means using words or phrases that may seem redundant or unnecessary to some readers. As a writer, it's important to strike a balance between clarity and concision, and to make thoughtful choices about the language you use to convey your ideas.

[–] Ephera@lemmy.ml 1 points 2 years ago

I agree with all of that, except that I wouldn't call it "boilerplate" if it's clarifying or emphasizing a point for some target group.

Might be that I'm just very intolerant of boilerplate as a reader. I'm part of the younger generations with short attention spans, and tons of authors artificially pad their online articles, because more time spent reading useless words means more time potentially spent looking at ads, which means more money for them.

So, I think, what I'm really arguing is: Fuck the whole monetisation of attention.