this post was submitted on 17 Nov 2024
9 points (68.0% liked)


Will progress in artificial intelligence continue to accelerate, or have we already hit a plateau? Computer scientist Jennifer Golbeck interrogates some of the most high-profile claims about the promises and pitfalls of AI, cutting through the hype to clarify what's worth getting excited about — and what isn't.

[–] horse_battery_staple@lemmy.world 15 points 12 hours ago* (last edited 12 hours ago) (1 children)

YES

The transformers these LLMs are built on are more novel than they are efficient. Without repeatable, reproducible results there is little hope for improvement. There isn't enough energy in the world to get to AGI using a transformer model, and we're also running out of LLM-free datasets to train on.

https://arxiv.org/html/2211.04325v2

https://arxiv.org/pdf/2302.06706v1
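To give a rough sense of the efficiency problem: self-attention cost grows quadratically with context length. A back-of-the-envelope sketch in Python (illustrative only; the 2·n²·d FLOP estimate and the d_model value are my own assumptions, not figures from the linked papers):

```python
# Toy illustration: per-layer self-attention FLOPs scale with the
# square of the sequence length, which is one reason scaling
# transformers is so energy-hungry. Numbers are hypothetical.

def attention_flops(seq_len: int, d_model: int) -> int:
    """Approximate FLOPs for one self-attention layer:
    Q·K^T scores (~n^2 * d) plus the weighted sum over V (~n^2 * d)."""
    return 2 * seq_len * seq_len * d_model

d_model = 4096  # hypothetical hidden size
for seq_len in (1_000, 10_000, 100_000):
    print(f"context {seq_len:>7,}: ~{attention_flops(seq_len, d_model):.2e} FLOPs/layer")
```

Every 10× increase in context length costs roughly 100× more compute per layer, so throwing longer contexts and bigger models at the problem runs into the energy wall fast.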

I really love that training LLMs on LLM output has been shown to make them unravel into nonsense (so-called model collapse). Rather than thinking that through before releasing, all these megacorps chased short-term profit first, and now the Internet is polluted with LLM output everywhere. I don't know that they'll be able to assemble a clean training set newer than 2021.
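The collapse effect is easy to reproduce in miniature: fit a simple model to data, sample from the fit, refit on the samples, and repeat. A toy sketch (a Gaussian stands in for the LLM; this mirrors the flavor of the recursive-training experiments in the model-collapse literature, not any specific paper's code):

```python
import numpy as np

# Toy model-collapse sketch: fit a Gaussian to data, sample a new
# dataset from the fit, refit, and repeat -- a one-dimensional
# stand-in for training LLMs on LLM-generated text.

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=100)  # original "human" data

for generation in range(20):
    mu, sigma = data.mean(), data.std()  # "train" on the current data
    print(f"gen {generation:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
    # The next generation sees only the previous model's output.
    data = rng.normal(loc=mu, scale=sigma, size=100)
```

Each generation can only learn from what the previous one emitted, so the fitted sigma takes a noisy walk that on average decays toward zero and the tails of the original distribution disappear.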