this post was submitted on 20 Nov 2023
1153 points (98.3% liked)

[–] Pohl@lemmy.world 257 points 11 months ago (8 children)

If you ever needed a lesson in the difference between power and authority, this is a good one.

The leaders of this coup read the rules and saw that they could use the board to remove Altman; they had the authority to make the move and “win” the game.

It seems that they, like many fools, mistook authority for power. The “rules” said they could do it! Alas, they did not have the power to execute the coup. All the rules in the world cannot make the organization follow you.

Power comes from people who grant it to you. Authority comes from paper. Authority is the guidelines for the use of power; without power, it is pointless.

[–] FishFace@lemmy.world 97 points 11 months ago (3 children)

Well, surely it's premature to be making grand statements like this until it actually causes a reversal?

[–] Tyfud@lemmy.one 56 points 11 months ago (1 children)

Even if it doesn't, the consequences of the board ignoring this are catastrophic for the company. One way or another, the workers will have a victory here.

[–] meco03211@lemmy.world 56 points 11 months ago (1 children)

Supreme executive power derives from a mandate from the masses, not some farcical aquatic ceremony.

[–] h3mlocke@lemm.ee 39 points 11 months ago

Strange women, lying in ponds, distributing swords, is no basis for government!

[–] ribboo@lemm.ee 249 points 11 months ago (13 children)

It’s rather interesting here that the board, which has a fairly strong scientific presence and not much of a commercial one, is getting such hate.

People are quick to jump on for-profit companies that do everything in their power to earn a buck. Well, here you have a company that fires their CEO for going too much in the direction of earning money.

Yet everyone is up in arms over it. We can’t have our cake and eat it too, folks.

[–] rookie@lemmy.world 57 points 11 months ago (1 children)

Well, here you have a company that fires their CEO for going too much in the direction of earning money.

Yeah, honestly, that's music to my ears. Imagine a world where organizations weren't in the business of pursuing capital at any cost.

[–] PersnickityPenguin@lemm.ee 49 points 11 months ago

Sounds like the workers all want to end up with highly valued stocks when it goes IPO. Which is, and I'm just guessing here, the only reason anyone is doing AI right now.

[–] justawittyusername@lemmy.world 30 points 11 months ago

I immediately thought that the board was bad, then read the context…

so are the employees backing Altman because it means more money for the company/them? Or is there another reason?

[–] theneverfox@pawb.social 27 points 11 months ago (2 children)

This was my first thought... But then why are the employees taking a stand against it?

There's got to be more to this story

[–] gmtom@lemmy.world 30 points 11 months ago* (last edited 11 months ago) (4 children)

Bandwagoning. The narrative is so easy to spin: "Hey, the evil board of directors forced our beloved CEO to leave. If they do that to /US/, we need to do it back to /them/."

I think that would get most people with moral concerns on board; the rest are just tech bros who would fully support a money-grubbing, unethical CEO if they thought they might get a bigger bonus out of it.

[–] PersnickityPenguin@lemm.ee 28 points 11 months ago

They all want to become millionaires. Think IPO.

[–] Even_Adder@lemmy.dbzer0.com 168 points 11 months ago (9 children)

You're not going to develop AI for the benefit of humanity at Microsoft. If they go there, we'll know "Open"AI's mission was all a lie.

[–] Gork@lemm.ee 103 points 11 months ago (2 children)

Yeah Microsoft is definitely not going to be benevolent. But I saw this as a foregone conclusion since AI is so disruptive that heavy commercialization is inevitable.

We likely won't have free access like we do now; it will be enshittified like everything else, and we'll need to pay yet another subscription just to access it.

[–] MeatsOfRage@lemmy.world 104 points 11 months ago* (last edited 11 months ago) (4 children)

"Hey Bing AI can I get a recipe that includes cinnamon"

"Sure! Before we begin did you hear about the great Black Friday deals at Sephora"

"Not interested"

"No problem. You're using query 9 of 20 this month. Do you want to proceed?"

"Yes"

"Before we begin, Bing Max+ has a one month trial starting at just $1 for your first month*. Want to give that a try?"

"Not now"

"No problem. With cinnamon you can make Cinnamon Rolls"

"What else?"

"Sure! You are using query 10 of 20 this month. Before I continue did you hear the McRib is back for a limited time at McDonald's. (ba, da, ba, ba, ba) I'm lovin' it."

[–] Valthorn@feddit.nu 49 points 11 months ago

Please drink one verification can!

[–] conditional_soup@lemm.ee 137 points 11 months ago (21 children)

I'd like to know exactly why the board fired Altman before I pass judgment one way or the other, especially given the mad rush by the investor class to reinstate him. It makes me especially curious that the employees are sticking up for him. My initial intuition was that MSFT convinced Altman to cross bridges that he shouldn't have (for $$$$), but I doubt that a little more now that the employees are sticking up for him. Something fucking weird is going on, and I'm dying to know what it is.

[–] los_chill@programming.dev 47 points 11 months ago* (last edited 11 months ago) (2 children)

Altman wanted profit. The board prioritized (rightfully, and true to their mission) responsible, non-profit stewardship of AI. Employees now side with Altman out of greed and view the board as denying them their mega payday. Microsoft is dangling jobs for employees who want to jump ship and make as much money as possible. This whole thing seems pretty simple: greed (Altman, Microsoft, employees) vs the original non-profit mission (the board).

Edit: spelling

[–] scarabic@lemmy.world 42 points 11 months ago* (last edited 11 months ago) (3 children)

Wanting to know why is reasonable, but it’s sus that we don’t already know. Why haven’t they made that clear? How did they think they could do this without a solid explanation? Why hasn’t one been delivered to put the rumors to rest?

It stinks of incompetence or petty personal drama. Otherwise we’d know the very good reason they had by now.

[–] Ullallulloo@civilloquy.com 23 points 11 months ago

The only explanation I can come up with is that the workers and Altman both agreed on monetizing AI as much as possible. They're worried that if the board doesn't resign, the company will remain a non-profit that is more conservative about selling its products, and they won't get their share of the money that could be made.

[–] just_change_it@lemmy.world 121 points 11 months ago (1 children)

It's supposed to be a nonprofit benefiting humanity, not a payday for owners or workers. The board isn't making money off of it.

Giving microsoft control is a bad idea. (duh?)

Giving a single person control is a bad idea, per Sam Altman.

[–] slaacaa@lemmy.world 94 points 11 months ago* (last edited 11 months ago) (5 children)

My take on what happened (we are now at step 8):

  1. Sam wants to push for more & quicker profit with MS and VC backing, but board resists, constant conflicts
  2. Sam aligns with MS; they hatch a plan to gut OpenAI for its know-how, ppl, and tech, leaving the non-profit part bleeding out in the gutter
  3. Sam & MS set a trap: Sam crosses some red lines, maybe taking commercial decisions without board approval. Potentially there was also some whispering in key ears (e.g., Ilya) by seemingly helpful advisors/VCs to push & pull at the same time on both sides
  4. Board has enough after Sam doesn’t back down, fires him & the other co-founder guy
  5. MS and VCs go full attack to discredit the board. After some info gathering, the board realizes it has been utterly fucked
  6. Some chaos, quick decisions appointing/replacing ppl, trying to manage the fire, even talking to Sam (btw this might have been a fallback option for MS: the board reinstates him with more control and guardrails, weakening the power of the non-profit)
  7. Sam joins MS, masks are off
  8. Employees on the sinking ship revolt, even Ilya realizes he was manipulated/fucked
  9. OpenAI dead, key ppl join MS, tech and rest of the company bought for scraps. Non-profit part dead. Capitalist victory

Source: subjective interpretation/deduction based on the available info and my experience working as a management consultant for 10 years (dealing with a lot of exec politics, though nothing this serious)

[–] NounsAndWords@lemmy.world 111 points 11 months ago

You also informed the leadership team that allowing the company to be destroyed "would be consistent with the mission."

You are God damned right that shutting everything down is one of the roles of a non-profit Board focused on AI safety.

[–] Jolteon@lemmy.zip 99 points 11 months ago (3 children)

Later: All 195 employees of OpenAI in support of the board of directors.

[–] HiddenLayer5@lemmy.ml 90 points 11 months ago* (last edited 11 months ago) (19 children)

https://time.com/6247678/openai-chatgpt-kenya-workers/

To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.

OpenAI’s outsourcing partner in Kenya was Sama, a San Francisco-based firm that employs workers in Kenya, Uganda and India to label data for Silicon Valley clients like Google, Meta and Microsoft. Sama markets itself as an “ethical AI” company and claims to have helped lift more than 50,000 people out of poverty.

The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour depending on seniority and performance. For this story, TIME reviewed hundreds of pages of internal Sama and OpenAI documents, including workers’ payslips, and interviewed four Sama employees who worked on the project. All the employees spoke on condition of anonymity out of concern for their livelihoods.

[...]

Documents reviewed by TIME show that OpenAI signed three contracts worth about $200,000 in total with Sama in late 2021 to label textual descriptions of sexual abuse, hate speech, and violence. Around three dozen workers were split into three teams, one focusing on each subject. Three employees told TIME they were expected to read and label between 150 and 250 passages of text per nine-hour shift. Those snippets could range from around 100 words to well over 1,000. All of the four employees interviewed by TIME described being mentally scarred by the work. Although they were entitled to attend sessions with “wellness” counselors, all four said these sessions were unhelpful and rare due to high demands to be more productive at work. Two said they were only given the option to attend group sessions, and one said their requests to see counselors on a one-to-one basis instead were repeatedly denied by Sama management.

[...]

One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.

[...]

That month, Sama began pilot work for a separate project for OpenAI: collecting sexual and violent images—some of them illegal under U.S. law—to deliver to OpenAI. The work of labeling images appears to be unrelated to ChatGPT.

Gonna leave this here.

[–] Clbull@lemmy.world 47 points 11 months ago* (last edited 11 months ago) (16 children)

So they paid Kenyan workers $2 an hour to sift through some of the darkest shit on the internet.

Ugh.

[–] ExLisper@linux.community 24 points 11 months ago

What? And here I am doing it for free...

[–] InvaderDJ@lemmy.world 77 points 11 months ago (2 children)

The biopic on this whole thing is going to be hilarious. The rumors are that the board didn’t like how fast the CEO was moving with AI and that they were afraid of the consequences of possible AGI (which I don’t think these new LLMs are even close to), but that doesn’t feel like how modern boards of directors behave, so I don’t trust it.

It’s just baffling how this golden goose was halfway strangled in the nest.

They are a non-profit board set up precisely to exercise caution over rapid AI development.

[–] chiliedogg@lemmy.world 26 points 11 months ago (3 children)

Or this is essentially a hostile takeover by Microsoft. OpenAI is a non-profit with non-shareholders as its board. They don't have a profit motive to develop AI quickly and without safety measures. But the tech they've developed has quickly become the hottest product on the planet.

Microsoft was clearly prepared to take on all the employees the second this happened.

[–] nucleative@lemmy.world 69 points 11 months ago

I feel like this is Satya's wet dream. He woke up on Friday like normal and went to bed on Sunday owning what, 85% of OpenAI's top people? Acquisitions aren't usually that easy.

It seems obvious Sam would want to grow his company to infinity. That's what VC people do. The board expecting otherwise is strange in hindsight. Now they can oversee the slow, measured adoption of a much smaller business while the rest of the team shoots for the stars.

Anyways, RIP y'all. Skynet launches next year.

[–] NeoNachtwaechter@lemmy.world 65 points 11 months ago (15 children)

Muuuhahahaha.... What a shitshow this organisation has become.

[–] helenslunch@feddit.nl 51 points 11 months ago* (last edited 11 months ago) (1 children)

:grabs popcorn:

Nothing more entertaining than employees standing up against management.

[–] Heresy_generator@kbin.social 72 points 11 months ago* (last edited 11 months ago) (5 children)

It's just weird when it's employees standing up against management on behalf of a rich scumbag scammer.

[–] CorneliusTalmadge@lemmy.world 36 points 11 months ago (1 children)

Image Text:

To the Board of Directors at OpenAI,

OpenAI is the world's leading AI company. We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position.

The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company. Your conduct has made it clear you did not have the competence to oversee OpenAI.

When we all unexpectedly learned of your decision, the leadership team of OpenAI acted swiftly to stabilize the company. They carefully listened to your concerns and tried to cooperate with you on all grounds. Despite many requests for specific facts for your allegations, you have never provided any written evidence. They also increasingly realized you were not capable of carrying out your duties, and were negotiating in bad faith.

The leadership team suggested that the most stabilizing path forward - the one that would best serve our mission, company, stakeholders, employees and the public - would be for you to resign and put in place a qualified board that could lead the company forward in stability. Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed "would be consistent with the mission."

Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgement and care for our mission and employees. We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join. We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.

  1. Mira Murati
  2. Brad Lightcap
  3. Jason Kwon
  4. Wojciech Zaremba
  5. Alec Radford
  6. Anna Makanju
  7. Bob McGrew
  8. Srinivas Narayanan
  9. Che Chang
  10. Lilian Weng
  11. Mark Chen
  12. Ilya Sutskever
[–] 9thSun@midwest.social 34 points 11 months ago

This whole situation happened so fast and it confuses me

[–] Sanyanov@lemmy.world 33 points 11 months ago

Ain't this simply theater covering a de facto acquisition of OpenAI by Microsoft, circumventing potential legal issues?

This started months ago.

[–] SocialMediaRefugee@lemmy.world 25 points 11 months ago (5 children)

Seeing as Sam and Greg now work for Microsoft, I'd say this is late.

[–] xenomor@lemmy.world 24 points 11 months ago

I don’t know enough about why the board did this, or what Altman was up to, to form a meaningful opinion about what happened. However, I do know that anything that empowers Microsoft in this industry is a bad thing. Microsoft is a bad actor in every regard and will always behave in ways that ultimately produce worse products than we would get otherwise. Given the potential implications of these technologies, and all the reasons to not trust Microsoft to protect public interests, this news is terrible.
