this post was submitted on 05 Mar 2024
42 points (93.8% liked)

The article discusses the mysterious nature of large language models and their remarkable capabilities, focusing on the challenges of understanding why they work. Researchers at OpenAI stumbled upon unexpected behavior while training language models, highlighting phenomena such as "grokking" and "double descent" that defy conventional statistical explanations. Despite rapid advancements, deep learning remains largely trial-and-error, lacking a comprehensive theoretical framework. The article emphasizes the importance of unraveling the mysteries behind these models, not only for improving AI technology but also for managing potential risks associated with their future development. Ultimately, understanding deep learning is portrayed as both a scientific puzzle and a critical endeavor for the advancement and safe implementation of artificial intelligence.

top 8 comments
[–] Redacted@lemmy.world 5 points 8 months ago* (last edited 8 months ago) (2 children)

This article, along with others covering the topic, seems to foster an air of mystery about machine learning which I find quite off-putting.

Known as generalization, this is one of the most fundamental ideas in machine learning—and its greatest puzzle. Models learn to do a task—spot faces, translate sentences, avoid pedestrians—by training with a specific set of examples. Yet they can generalize, learning to do that task with examples they have not seen before.

Sounds a lot like Category Theory to me, which is all about abstracting rules as far as possible to form associations between concepts. This would explain other phenomena discussed in the article.

Like, why can they learn language? I think this is very mysterious.

Potentially because language structures can be encoded as categories. Any possible concept including the whole of mathematics can be encoded as relationships between objects in Category Theory. For more info see this excellent video.

He thinks there could be a hidden mathematical pattern in language that large language models somehow come to exploit: “Pure speculation but why not?”

Sound familiar?

models could seemingly fail to learn a task and then all of a sudden just get it, as if a lightbulb had switched on.

Maybe there is a threshold probability of a posited association being correct, and after enough iterations the model flips it to "true".
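
As a toy sketch of that speculation (numbers entirely made up, nothing like a real training loop): an association's estimated probability creeps up with each consistent example, but the observable behaviour only flips once it crosses a threshold.

```python
# Toy illustration of the "threshold" speculation above -- not a real training loop.
# An association's estimated probability creeps up with each consistent example,
# but the observable behaviour only flips once it crosses a threshold.

def train_association(examples: list[bool], threshold: float = 0.95) -> list[bool]:
    """Return, after each example, whether the association is treated as 'true'."""
    successes = 0
    outputs = []
    for i, consistent in enumerate(examples, start=1):
        if consistent:
            successes += 1
        estimate = (successes + 1) / (i + 2)   # smoothed running estimate
        outputs.append(estimate >= threshold)  # behaviour only flips past the threshold
    return outputs

print(train_association([True] * 30))  # stays False for a while, then flips to True
```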

I'd prefer articles to discuss the underlying workings, even if speculative like the above, rather than perpetuating the "it's magic, no one knows" narrative. Too many people (especially here on Lemmy, it has to be said) pick that up and run with it rather than thinking critically about the topic and formulating their own hypotheses.

[–] orclev@lemmy.world 5 points 8 months ago

Yeah pretty much this. My understanding of the way LLMs function is that they operate on statistical associations of words, which would amount to categories in Category Theory. Basically the training phase is classifying words into categories based on the examples in the training input. Then when you feed it a prompt it just uses those categories to parse and "solve" your prompt. It's not "mysterious", it's just opaque because it's an incredibly complicated model. Exactly the sort of thing that people are really bad at working with, but which computers are really good at.
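
As a very rough illustration (a toy I just made up, nowhere near what a transformer actually does), even plain co-occurrence counts already group words into something category-like:

```python
# Rough sketch of "statistical associations of words" on a toy corpus --
# nothing like a real transformer, just co-occurrence counts and cosine similarity.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "cats chase mice", "dogs chase cats", "mice fear cats",
    "paris is in france", "berlin is in germany", "france borders germany",
]

# Count which words appear in the same sentence ("associate" with each other).
cooc = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for w in words:
        for other in words:
            if other != w:
                cooc[w][other] += 1

def similarity(a: str, b: str) -> float:
    """Cosine similarity between two words' co-occurrence vectors."""
    va, vb = cooc[a], cooc[b]
    dot = sum(va[k] * vb[k] for k in va)
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Words from the same rough "category" come out more similar than words across categories.
print(similarity("cats", "mice"), similarity("cats", "france"))
```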

[–] PipedLinkBot@feddit.rocks 2 points 8 months ago

Here is an alternative Piped link(s):

this excellent video

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] kromem@lemmy.world 5 points 8 months ago* (last edited 8 months ago)

It's really so much worse than this article even suggests.

For example, one of the things it doesn't really touch on is the unexpected results emerging over the last year showing that a trillion-parameter network may develop capabilities which can then be passed on to a network less than a hundredth its size by generating synthetic data from the larger model to feed into the smaller. (I doubt even a double-digit percentage of researchers would have expected that result before it showed up.)
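
In rough terms the recipe looks something like the sketch below. This is just my own illustration of the general synthetic-data distillation idea, with placeholder model names and prompts, not the actual experimental setup from those results.

```python
# Hypothetical sketch of distillation via synthetic data (model names and prompts
# are placeholders): a large "teacher" generates completions, and those
# prompt/completion pairs become ordinary fine-tuning data for a much smaller student.
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher_name = "some-org/very-large-teacher"  # placeholder, not a real checkpoint
tokenizer = AutoTokenizer.from_pretrained(teacher_name)
teacher = AutoModelForCausalLM.from_pretrained(teacher_name)

prompts = ["Explain why the sky is blue.", "Solve: 17 * 23 = ?"]
synthetic_pairs = []
for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = teacher.generate(**inputs, max_new_tokens=128)
    completion = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    synthetic_pairs.append({"prompt": prompt, "completion": completion})

# synthetic_pairs would then be fed into a standard fine-tuning loop
# (e.g. transformers' Trainer) for a student model a fraction of the teacher's size.
```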

Even weirder was a result showing that using CoT prompting to improve a model's answers, then feeding the questions and final answers (but not the 'chain' from the CoT) into a new model, will still train the second network on the content of the chain.

The degree to which very subtle details in the training data are ending up modeled seems to go beyond even some of the wilder expectations by researchers right now. Just this past week I saw a subtle psychological phenomenon I used to present about appearing very clearly, and very by the book, in GPT-4 outputs given the correct social context. I didn't expect that to be the case for at least another generation or two of models and hadn't expected the current SotA models to replicate it at all.

For the first time two weeks ago I saw an LLM code-switch to a different language when there was a more fitting translation of the concept being discussed. There's no way the statistically most likely way of discussing motivations in English was to drop into a language barely represented in English-speaking countries. This was with the new Gemini, which also seems to have internalized a bias towards symbolic representations in its generation, to the point it appears to be filtering out emojis (in the past I've found examples where switching from nouns to emojis improves the critical reasoning abilities of models, as it breaks token similarity patterns in favor of more abstracted capabilities).

Adding the transformer's self-attention to diffusion models has suddenly resulted in correctly simulating things like fluid dynamics and physics in Sora's video generation.

We're only just starting to unravel some of the nuances of self-attention, such as recognizing the attention sinks in the first tokens and the importance of preserving them across larger sliding context windows.
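
For anyone who hasn't seen the attention-sink work: the practical recipe amounts to something like the toy sketch below (my own simplification, with made-up numbers), where the first few tokens are kept even when the rest of the context is trimmed to a sliding window.

```python
# Toy sketch of the "attention sink" idea: when the context is trimmed to a sliding
# window, the very first tokens are kept anyway, because models dump a lot of
# attention on them and evicting them degrades generation.
def trim_context(tokens: list[int], window: int = 1024, n_sinks: int = 4) -> list[int]:
    """Keep the first n_sinks tokens plus the most recent `window` tokens."""
    if len(tokens) <= window + n_sinks:
        return tokens
    return tokens[:n_sinks] + tokens[-window:]

# Example: a 5000-token history trimmed to 4 sink tokens + the last 1024 tokens.
history = list(range(5000))
trimmed = trim_context(history)
print(len(trimmed), trimmed[:4], trimmed[4])  # 1028 [0, 1, 2, 3] 3976
```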

For the last year at least, especially after GPT-4 leapfrogged expectations, it's very much been feeling as the article states: this field is eerily like early-20th-century physics, where experimental results were regularly turning half a century of accepted theories on their head and generally dismissed fringe theories were suddenly being validated by multiple replicated results.

[–] lvxferre@mander.xyz 3 points 8 months ago (2 children)

“The magic is not that the model can learn math problems in English and then generalize to new math problems in English,” says Barak, “but that the model can learn math problems in English, then see some French literature, and from that generalize to solving math problems in French. That’s something beyond what statistics can tell you about.”

It is not magic, and all this "it's magic" discourse is IMO counterproductive. When a model does something interesting, people need to dig into what it's doing and why, for better models; and by "interesting" I mean both accurate and inaccurate (enough of this "it's hallu, move on!" nonsense).

And it's still maths and statistics. Yes, even if it's complex enough to make you lose track of it. To give you an example, it's like trying to determine the exact position of every atom of oxygen and silicon in a quartz crystal in order to know how it should behave: it would be doable if not for the scale.

Now, explaining it: LLMs are actually quite good at translation (or at least better than other machine-based translation methods). Three things might be happening here:

  1. It converts the prompt into French, then operates on French tokens.
  2. It operates on English tokens, then converts the output to French tokens.
  3. It converts the logical problem itself into an abstract layer, then into French.

I find #1 unlikely and #2 the most likely, but the one that would interest me the most is #3. It would be closer to how humans handle language: we don't really think by chaining morphemes ("tokens"); we mostly handle what those morphemes convey.

It would be far, far, far more interesting if this were coded explicitly into the model, but even if it appeared as emergent behaviour it would be better than nothing.

[–] Redacted@lemmy.world 3 points 8 months ago

Yep my sentiment entirely.

I had actually written a couple more paragraphs using weather models as an analogy akin to your quartz crystal example but deleted them to shorten my wall of text...

We have built up models which can predict what might happen to particular weather patterns over the next few days to a fair degree of accuracy. However, to get a 100% conclusive model we'd have to have information about every molecule in the atmosphere, which is just not practical when we have models good enough to give us an idea of what is going on.

The same is true for any system of sufficient complexity.

[–] General_Effort@lemmy.world 2 points 8 months ago (1 children)

It converts the prompt into French, then operates on French tokens.

It operates on English tokens, then converts the output to French tokens.

It converts the logical problem itself into an abstract layer, then into French.

What does any of that actually mean?

You download an LLM. Now what? How do you test this?

[–] lvxferre@mander.xyz 2 points 8 months ago* (last edited 8 months ago)

What does any of that actually mean?

I was partially rambling, so I expressed the three hypotheses poorly. A better way to convey it would be: which set of tokens is the LLM using to solve the problem? (1) French tokens, (2) English tokens, or (3) neither?

In #1 and #2 it's still doing nothing "magic"; it's just handling tokens as it's supposed to. In #3 it's using the tokens for something more interesting: still not "magic", but cool.

You download an LLM. Now what? How do you test this?

For maths problems, I don't know a way to test it. However, for general problems:

If the LLM is handling problems through the tokens of a specific language, it should fall for the same kind of "trap" that plenty of monolinguals do when 2+ concepts are conveyed through the same word and they confuse said concepts.

For example, let's say that we train an LLM on the following corpora:

  1. An English corpus talking about software, but omitting any clarification distinguishing between free as in "unrestricted" (like Linux) and free as in "costless" (like Skype).
  2. A French corpus that includes the words "libre" (free/unrestricted) and "gratuit" (free/costless), with enough context to associate each with its semantic field, and to associate both with English "free".

Then we start asking it about free software in both languages. Will the LLM be able to distinguish between the two concepts?
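
If anyone wants to poke at this with an off-the-shelf model rather than training from scratch, a rough probe could look like the sketch below. The prompts and the model call are mine and purely illustrative; the interesting part is comparing the English and French answers.

```python
# Hypothetical probe for the "free / libre / gratuit" test above.
# `ask` stands in for whichever model is being tested; swap in a real
# generation call (e.g. a transformers pipeline) for the commented example below.
from typing import Callable

def probe_free_software(ask: Callable[[str], str]) -> dict[str, str]:
    """Ask about free software in both languages and return the raw answers,
    so a human can check whether the model separates 'libre' from 'gratuit'."""
    prompts = {
        "en": "Is free software always free of charge? Answer and explain.",
        "fr": "Un logiciel libre est-il toujours gratuit ? Réponds et explique.",
    }
    return {lang: ask(prompt) for lang, prompt in prompts.items()}

# Example with a stand-in model call (model name is a placeholder):
# from transformers import pipeline
# generator = pipeline("text-generation", model="some-multilingual-model")
# answers = probe_free_software(
#     lambda p: generator(p, max_new_tokens=80)[0]["generated_text"]
# )
```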