this post was submitted on 01 Oct 2023
1111 points (97.5% liked)

Technology

[–] phoenixz@lemmy.ca 9 points 1 year ago (6 children)

Mega corporations should not be allowed to use nuclear power plants purely for themselves.

Also, if you need that much power to do something that a human brain does with under 100 watts, I really think you're doing it wrong

[–] Simran@lemm.ee 21 points 1 year ago (3 children)

If you're so smart why don't you come up with a way to do it under 100 watts???

Also, this is training them, not using them. Running an AI consumes significantly less power than training it, sort of like how it takes humans more effort to learn something than to put it into practice.
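The training-vs-inference gap can be put in rough numbers. A minimal back-of-envelope sketch in Python; the parameter count, token counts, and the ~6ND training / ~2N-per-token inference FLOP rules of thumb are all assumptions, not measured figures:

```python
# Rough comparison of training compute vs. one-response inference compute
# for a large language model. Every number is an order-of-magnitude
# assumption, not a measurement.

params = 175e9           # assumed GPT-3-scale parameter count
train_tokens = 300e9     # assumed tokens seen during training
infer_tokens = 1000      # assumed length of one generated response

train_flops = 6 * params * train_tokens  # ~6*N*D training rule of thumb
infer_flops = 2 * params * infer_tokens  # ~2*N FLOPs per generated token

ratio = train_flops / infer_flops
print(f"training is roughly {ratio:.0e}x the compute of a single response")
```

Under these assumed numbers, training costs on the order of a billion times the compute of answering one prompt, which is why the energy bill concentrates in the training phase.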

[–] joemo@lemmy.sdf.org 9 points 1 year ago

Yep. They are trying to train the AI quickly, not over the course of 18 years.

Also, it's still early for these AI/LLM tools. The first few iterations of things are generally not very efficient. Once you can prove it works, THEN you can make improvements, make it more efficient, etc.

However, I think that feeding in more power instead of optimizing is the wrong approach. I feel Microsoft could get further ahead if it could find ways to train models more efficiently 🤷

[–] SineSwiper@discuss.tchncs.de 9 points 1 year ago

People tend to forget the millions of years of horribly inefficient evolution it took to develop the human brain.

[–] Potatisen@lemmy.world 1 points 1 year ago (1 children)

I don't think the point being made is "be smarter" but rather to stay within the margins of available power.

[–] Zima@kbin.social 4 points 1 year ago* (last edited 1 year ago) (1 children)

I think his point is that the person he responded to is proposing well-meaning, feeling-based policies without any real knowledge of the negative impacts those policies would have.

[–] Potatisen@lemmy.world 1 points 1 year ago

I feel like I'm in that butterfly meme, going "is this what reasonable conversation is like?"

<3

[–] j4k3@lemmy.world 19 points 1 year ago (2 children)

Organic technology is hard. If you can figure out how to grow a compute system you will take human technology hundreds of years into the future. Silicon tech is the stone age of compute.

The brain has a slow clock rate to keep within its power limitations, but it is a parallel computational beast compared to current models.

It takes around ten years for new hardware to really take shape in our current age. AI hasn't really established what direction it is going in yet. The open-source offline model is the likely winner, meaning the hardware design and scaling factors are still unknown. We probably won't see a good solution for years. We are patching video hardware as a stopgap until AI-specific hardware is readily available.
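The brain-vs-silicon efficiency point above can be sketched with rough arithmetic. All figures below are commonly cited order-of-magnitude assumptions (neuron counts, firing rates, GPU throughput), not measurements:

```python
# Crude energy-efficiency comparison: human brain vs. a datacenter GPU.
# Every figure here is an order-of-magnitude assumption.

neurons = 86e9          # assumed neuron count in the human brain
synapses_per = 1e4      # assumed synapses per neuron
fire_hz = 10            # assumed average firing rate, Hz
brain_watts = 20        # assumed brain power draw

brain_ops = neurons * synapses_per * fire_hz  # ~synaptic events per second
brain_eff = brain_ops / brain_watts           # events per joule

gpu_flops = 312e12      # assumed dense FP16 throughput of a datacenter GPU
gpu_watts = 400         # assumed GPU board power
gpu_eff = gpu_flops / gpu_watts               # FLOPs per joule

print(f"brain ~{brain_eff:.1e} ops/J vs. GPU ~{gpu_eff:.1e} FLOP/J")
```

Under these assumptions the brain comes out a few hundred times more energy-efficient per operation, with the caveat that a synaptic event and a FLOP are not really comparable units.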

[–] ricdeh@lemmy.world 2 points 1 year ago

I am so excited for the advances that neuromorphic processors will bring, which is not exactly my field, but adjacent to it. The concept of modelling chips after the human brain instead of traditional computing doctrines sounds extremely promising, and I would love to get to work on systems like Intel's Loihi or IBM's TrueNorth! If you think about it, it's a bit ridiculous how corporations like Nvidia are currently approaching AI with graphics processors. I mean, it makes more sense than general-purpose CPUs, but it is at the very least a suboptimal solution.

[–] Astroturfed@lemmy.world 2 points 1 year ago

I bet it'd be a whole lot easier to grow an organic computer if you didn't have to worry about pesky things like people thinking you grew genetically engineered slaves.

[–] SlopppyEngineer@discuss.tchncs.de 14 points 1 year ago* (last edited 1 year ago)

The whole language model scene started with "we accidentally found something that kinda works" and is now in full "somebody please accidentally find a way to make it use less power" mode.

[–] Deiv@lemmy.ca 9 points 1 year ago (1 children)

Why shouldn't they be allowed? Nuclear power plants are a great option and mean less demand on worse energy sources.

[–] topinambour_rex@lemmy.world 9 points 1 year ago (2 children)

Because safety and profits don't go in the same direction. They would cut corners to reduce costs, which is how you end up with a nuclear accident. And then it would be on the taxpayer to foot the bill.

[–] frezik@midwest.social 1 points 1 year ago

SMRs are pretty safe. That's not the issue. It's that they're thinking about using a whole fucking nuclear reactor to train AI to sell you shit.

[–] SineSwiper@discuss.tchncs.de -2 points 1 year ago (1 children)

Microsoft is big enough that the government would force them to pay up. There is just too much public pressure for that kind of disaster to get waved away.

Also, there are nuclear designs that are far safer than water-cooled reactors. WCRs are literally the worst possible design for a nuclear reactor, and we were stupid enough to choose them over dry-material reactors in the '60s.

[–] TheGrandNagus@lemmy.world 0 points 1 year ago

> Microsoft is big enough that the government would force them to pay up.

Lmao

Just like they made the banks pay up? Like how they made the oil companies pay up?

Right now it's commonplace for oil rigs and nuclear plants to be decommissioned at the taxpayer's expense, sometimes even funded entirely by them, rather than by the company.

[–] Potatisen@lemmy.world 1 points 1 year ago

Not a terrible argument. How do we limit compute vs. output?