this post was submitted on 24 Nov 2023
-3 points (49.2% liked)

Technology

[–] nicetriangle@kbin.social 166 points 9 months ago (1 children)

Geez the reporting around this has been ridiculously sensationalist

[–] DemBoSain@midwest.social 3 points 9 months ago

Superintelligent AI Just Pried the Keyboard from my Cold, Dead Hands

[–] toothbrush@lemmy.blahaj.zone 66 points 9 months ago (1 children)

Just BS. They're trying to come up with an explanation for why Altman was fired that is not: we caught him doing lots of illegal stuff.

[–] GONADS125@lemmy.world 36 points 9 months ago (3 children)

I think it's a hype move at this point. Like the guy who claimed he believed Google's chatbot was sentient.

I read another article that said they'd had a computational breakthrough, in which their model can now carry out basic grade-school math. No other model can actually work through math equations, not even basic arithmetic.

This is a significant development, but it's not like they're on the cusp of developing superintelligence now. I bet they're taking this small inch toward superintelligence and hyping it like they've just leapt miles forward.

[–] dustyData@lemmy.world 6 points 9 months ago* (last edited 9 months ago) (1 children)

The thing is, this could actually be a several-mile jump. But where they want to go is not the grocery down the road; they're trying to fly to another galaxy. It's more like hyping that you're going to land on the moon next year, at a time when you've just figured out that rubbing two sticks together makes a fire. Technically it's truly a leap, but we are so far away still.

[–] GONADS125@lemmy.world 4 points 9 months ago

Technically it's truly a leap, but we are so far away still.

I completely agree and was trying to convey that. Not trying to downplay the significance of the development, but they are far from superintelligence and they're going to hype it up as much as they can.

[–] Siegfried@lemmy.world 1 points 9 months ago (1 children)

Is that the chatbot they had to shut down because it wandered a little too much into 4chan?

[–] RobotToaster@mander.xyz 4 points 9 months ago

That was Microsoft's Tay.

[–] Korne127@lemmy.world 0 points 9 months ago (1 children)

The worst part about it is that there have already been two winters in AI development (in the early 2000s, and sometime in the '70s or '80s, I think) because of exactly this: they always hyped up AI and said it would solve all the world's problems in short order, and when that obviously didn't happen, people got disappointed in it and pulled funding…

[–] c0mbatbag3l@lemmy.world 1 points 9 months ago (1 children)

Well, the models we have now are already useful for things, so it's unlikely they'll just disappear now.

We didn't have the computing technology to make it happen back then; they just didn't know it at the time.

[–] Korne127@lemmy.world 1 points 9 months ago* (last edited 9 months ago) (1 children)

That's not my point. We already had good AIs and a lot of development in that area of research 50 years ago. Chess computers started beating the best humans in the early 2000s. It's not a particularly new field. But development and research in artificial intelligence has completely stopped twice before, and each time it took over a decade for research in the field to really start up again.
The reason this happened was overly big promises; even when they succeeded at some things, they promised way too much. If they keep promising way too much in the current AI hype as well, I can see the exact same thing happening again: people getting disappointed and the field getting sidelined for another decade.
I'm not saying the current successes will disappear, but future development might, for a good while, just as it happened back then.

[–] c0mbatbag3l@lemmy.world 2 points 9 months ago

None of the previous stabs at AI were more than a parlour trick. Modern AI is capable not only of full and natural conversation but of turning that into completed tasks, based on how well the human operator can describe the problem and explain the proposed solution.

It's not always perfect, but it gets close enough for a professional to make use of it by cutting out the research phase of a given project, or by getting the bulk of the work done without the hours it would otherwise have taken. Refining the solution might take ten to fifteen minutes, but you don't have to be a math genius to see the benefits.

Plus, the models we have now are exploding in niche use cases. We have image generation, voice generation, and code generation, all at near-human standards. I've had it walk me through deploying Python scripts via VS Code, then had it walk me through setting up a Git repository, then asked it to take me through a D&D/choose-your-own-adventure scenario with specific choices having consequences down the line. It was a little basic, but I gave it a pre-established universe and the general premise, and it researched the rest on its own, filling in the gaps in ways I hadn't even suggested based on what it found about the universe.

That last one isn't a productive use case, sure. The point is that what we have now isn't just some one-off computer like a chess bot or a Smash Bros CPU set to its highest level; it's a seed for every future machine-learning algorithm that will be used to design models for specialized scenarios. It's become ingrained in our society now, and it's unlikely to just disappear like the earlier efforts you're describing.

[–] DrCake@lemmy.world 34 points 9 months ago (1 children)

So was it all just a marketing stunt?

[–] otter@lemmy.ca 30 points 9 months ago* (last edited 9 months ago)

CEO ousting shenanigans = 📉

Release rumor = 📈

They're not publicly traded, but I assume public sentiment still has an effect on things (e.g. partnerships, users buying memberships, etc.)

[–] gedaliyah@lemmy.world 33 points 9 months ago (2 children)

But can it open the pod bay doors?

[–] Enkers@sh.itjust.works 17 points 9 months ago (1 children)
[–] Bonehead@kbin.social 6 points 9 months ago (1 children)
[–] reflex@kbin.social 1 points 9 months ago

Dave's not here, man.

What about Buster?

[–] random_character_a@lemmy.world 4 points 9 months ago (1 children)

Take your upvote and go watch more artsy '60s sci-fi, you brilliant sod.

[–] Nomad@infosec.pub 1 points 9 months ago

Is this a Dave reference?

[–] Melt@lemm.ee 19 points 9 months ago

Hope it replaces the most expensive job position: CEO

[–] satans_crackpipe@lemmy.world 16 points 9 months ago (1 children)

What are these con artists up to? And why are so many people self replicating the propaganda?

[–] boatswain@infosec.pub 4 points 9 months ago

self replicating the propaganda?

You can't self-replicate anything other than yourself. You replicate things; we use "self-replicating" because it's shorthand for "thing that replicates itself."

[–] ZILtoid1991@kbin.social 14 points 9 months ago (1 children)

The "superintelligence" in question: the same old tech, but with a larger context window, which will make it hallucinate a bit less often.

[–] Tattorack@lemmy.world 11 points 9 months ago* (last edited 9 months ago)

Alright, so the article doesn't really prove anything; it just says OpenAI claims something and then fills the rest with words.

Let's be clear here: we don't even have an AGI. That is to say, artificial general intelligence, a man-made intelligence that is at least as capable and general-purpose as Human intelligence.

That would be an intelligence that is self-aware and can actually think and understand. Data from Star Trek would be an AGI.

THESE motherfuckers are now claiming they've made a breakthrough toward creating an SI, a superintelligence: an artificial, man-made intelligence that not only has the self-awareness and understanding of an AGI, but is vastly more intelligent than a Human, and likely has awareness that surpasses Human awareness.

I think not.

[–] RiikkaTheIcePrincess@kbin.social 9 points 9 months ago (1 children)

Why do I keep looking at these threads? The way people talk about this stuff on all sides is so asinine. Nearly every good point is accompanied by missing a big one, or just ricochets off the good one, flying off into space and hitting a fully automated luxury gay space communist. Hopes, dreams, assumptions, and ignorance all just headbutting each other and getting nowhere.

Oh yeah, I wanted to know what "superintelligence" was and whether I should care. Welp.

[–] Dadifer@lemmy.world 1 points 9 months ago

I think the takeaway is that they're trying to create a LLM that can answer questions that it wasn't trained on.

[–] reflex@kbin.social 5 points 9 months ago* (last edited 9 months ago) (2 children)

Yawn.

Let me know when we get a real Terminator or Matrix situation.

[–] Moof_Kenubi@lemmy.world 3 points 9 months ago

heck, I'd settle for half a Short Circuit

[–] gedaliyah@lemmy.world -1 points 9 months ago (1 children)
[–] gedaliyah@lemmy.world -2 points 9 months ago
[–] sentient_loom@sh.itjust.works 4 points 9 months ago

Almost sounds like the whole thing was a performance.

[–] teft@startrek.website 3 points 9 months ago

I for one welcome our new ~~robotic~~ super intelligence overlords.

[–] TallonMetroid@lemmy.world 3 points 9 months ago

I'll believe it when Judgement Day happens and I die ~~in nuclear fire~~ when the wi-fi turns against me.

[–] Enkers@sh.itjust.works 1 points 9 months ago

Hahaha. Yeah. :(

[–] ShaunaTheDead@kbin.social -1 points 9 months ago

Who the hell would have guessed that we'd have to deal with not one but two potentially civilization-ending threats in our lifetimes? I want off this crazy ride, please!

[–] amir_s89@lemmy.ml -1 points 9 months ago (1 children)

The whole organizational structure, and how it functions, just isn't so smart after all. Has the management team considered Lean methodology for their business objectives?

[–] FaceDeer@kbin.social 3 points 9 months ago (1 children)

The problem that precipitated all this is that they don't have business objectives. They have a "mission." The board of directors of OpenAI isn't beholden to shareholders, and though the staff mocked the board's statement that allowing the company to be destroyed "would be consistent with the mission," it's actually true.

[–] amir_s89@lemmy.ml 1 points 9 months ago

Appreciate your clarification.