this post was submitted on 13 Aug 2023
320 points (73.3% liked)

Technology

[–] hokage@lemmy.world 199 points 1 year ago (7 children)

What a silly article. $700,000 per day is ~$256 million a year. That's peanuts compared to the $10 billion they got from MS. With no new funding they could run for about a decade, and this is one of the most promising new technologies in years. MS would never let the company fail for lack of funding; it's basically MS's LLM play at this point.
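The runway claim is easy to sanity-check. A quick sketch using the figures quoted in the comment ($700k/day reported cost, $10B reported Microsoft investment; illustrative, not audited numbers):

```python
# Back-of-the-envelope runway math from the figures quoted in the comment.
daily_cost = 700_000                 # reported ChatGPT operating cost, USD/day
annual_cost = daily_cost * 365       # ~$255.5M per year
funding = 10_000_000_000             # Microsoft's reported investment, USD

runway_years = funding / annual_cost
print(f"annual cost: ${annual_cost:,}")      # annual cost: $255,500,000
print(f"runway: {runway_years:.1f} years")   # runway: 39.1 years
```

On these numbers alone the runway is closer to four decades than one; the commenter's "about a decade" presumably assumes costs beyond just running this one model.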

[–] p03locke@lemmy.dbzer0.com 97 points 1 year ago (1 children)

When you get articles like this, the first thing you should ask is "Who the fuck is Firstpost?"

[–] altima_neo@lemmy.zip 31 points 1 year ago (1 children)

Yeah, where the hell do these posters find these articles anyway? It's always blogs that repost stuff from somewhere else.

[–] Wats0ns@sh.itjust.works 35 points 1 year ago (1 children)

OpenAI's biggest spending is infrastructure, which is rented from... Microsoft. Even if the company folds, it will have given most of the invested money back to Microsoft.

[–] fidodo@lemm.ee 23 points 1 year ago

MS is basically getting a ton of equity in exchange for cloud credits. That's a ridiculously good deal for MS.

[–] monobot@lemmy.ml 12 points 1 year ago

While the title is clickbait, they do say right at the beginning:

*Right now, it is pulling through only because of Microsoft's $10 billion funding*

Pretty hard to miss, and then they go on to explain their point, which might be wrong, but still stands. $700k is only one model; there are others, plus making new ones and running the company. It's easily over $1B a year without making a profit. Still not significant, since people will pour money into it even after those $10B.

[–] simple@lemm.ee 119 points 1 year ago (2 children)

There's no way Microsoft is going to let it go bankrupt.

[–] jmcs@discuss.tchncs.de 56 points 1 year ago (2 children)

If there's no path to make it profitable, they will buy all the useful assets and let the rest go bankrupt.

[–] JeffCraig@citizensgaming.com 12 points 1 year ago (1 children)

Microsoft reported profitability in their AI products last quarter, with a substantial gain in revenue from it.

It won't take long for them to recoup their investment in OpenAI.

If OpenAI had been more responsible in how they released ChatGPT, they wouldn't be facing this problem. Completely opening Pandora's box because they were racing to beat everyone else out was extremely irresponsible, and if they go bankrupt because of it, then whatever.

There's plenty of money to be made in AI without everyone just fighting over how to do it in the most dangerous way possible.

I'm also not sure nVidia is making the right decision tying their company to AI hardware. Sure, they're making mad money right now, but just like the crypto space, that can dry up instantly.

[–] dartos@reddthat.com 12 points 1 year ago

I don’t think you’re right about nvidia. Their hardware is used for SO much more than AI. They’re fine.

Plus their own AI products are popping off rn. DLSS and their frame generation one (I forget the name) are really popular in the gaming space.

I think they also have a new DL-based process for creating stencils for silicon photolithography which, in my limited knowledge, seems like a huge deal.

[–] Tigbitties@kbin.social 23 points 1 year ago (3 children)

That's $260 million. There are 360 million paid seats of Microsoft 365, so they'd have to raise prices by $0.73 per seat per year to cover the cost.
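The per-seat figure follows directly from dividing the two quoted numbers; both come from the comment above, so treat this as illustrative:

```python
# Spreading the quoted annual cost across the quoted Microsoft 365 seat count.
annual_cost = 260_000_000   # ~$260M/year quoted ChatGPT operating cost
paid_seats = 360_000_000    # quoted Microsoft 365 paid-seat count

per_seat = annual_cost / paid_seats
print(f"${per_seat:.2f} per seat per year")  # $0.72 per seat per year
```

Strictly, $0.72 × 360M only raises $259.2M, so rounding up to the commenter's $0.73 is what actually covers the bill.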

[–] SinningStromgald@lemmy.world 23 points 1 year ago

So they'll raise the cost by $100/yr.

[–] Elderos@lemmings.world 79 points 1 year ago (6 children)

That would explain why ChatGPT started regurgitating cookie-cutter garbage responses more often than usual a few months after launch. It has really started feeling more like a chatbot lately; it almost felt like talking to a human 6 months ago.

[–] glockenspiel@lemmy.world 57 points 1 year ago (3 children)

I don't think it does. I doubt it is purely a cost issue. Microsoft is going to throw billions at OpenAI, no problem.

What has happened, based on the info we get from the company, is that they keep tweaking their algorithms in response to how people use them. ChatGPT was amazing at first. But it would also easily tell you how to murder someone and get away with it, create a plausible sounding weapon of mass destruction, coerce you into weird relationships, and basically anything else it wasn't supposed to do.

I've noticed it has become worse at rubber-ducking non-trivial coding prompts. I've noticed that my juniors have a hell of a time functioning without access to it, and they'd rather ask questions of seniors than try to find information or solutions themselves, essentially using Sr. devs as a replacement for the chatbot.

It's a good tool for getting people on-ramped if they've never coded before, and maybe for rubber ducking, in my experience. But it's far too volatile for consistent work, especially with a black box of a company constantly hampering its outputs.

[–] Windex007@lemmy.world 53 points 1 year ago (4 children)

As a Sr. Dev, I'm always floored by stories of people trying to integrate chatGPT into their development workflow.

It's not a truth machine. It has no conception of correctness. It's designed to make responses that look correct.

Would you hire a dev with no comprehension of the task, who can not reliably communicate what their code does, can not be tasked with finding and fixing their own bugs, is incapable of having accountability, can not be reliably coached, is often wrong and refuses to accept or admit it, can not comprehend PR feedback, and who requires significantly greater scrutiny of their work because it is by explicit design created to look correct?

ChatGPT is by pretty much every metric the exact opposite of what I want from a dev in an enterprise development setting.

[–] JackbyDev@programming.dev 31 points 1 year ago

Search engines aren't truth machines either. StackOverflow reputation is not a truth machine either. These are all tools to use. Blind trust in any of them is incorrect. I get your point, I really do, but it's just as foolish as believing everyone using StackOverflow just copies and pastes the top rated answer into their code and commits it without testing then calls it a day. Part of mentoring junior devs is enabling them to be good problem solvers, not just solving their problems. Showing them how to properly use these tools and how to validate things is what you should be doing, not just giving them a solution.

[–] SupraMario@lemmy.world 11 points 1 year ago

Don't underestimate C levels who read a Bloomberg article about AI to try and run their entire company off of it...then wonder why everything is on fire.

[–] Gsus4@feddit.nl 13 points 1 year ago* (last edited 1 year ago)

But what did they expect would happen, that more people would subscribe to pro? In the beginning I thought they just wanted to survey-farm usage to figure out what the most popular use cases were and then sell that information or repackage use-cases as an individual added-value service.

[–] merthyr1831@lemmy.world 73 points 1 year ago (2 children)

I mean apart from the fact it's not sourced or whatever, it's standard practice for these tech companies to run a massive loss for years while basically giving their product away for free (which is why you can use openAI with minimal if any costs, even at scale).

Once everyone's using your product over competitors who couldn't afford to outlast your own venture capitalists, you can turn the price up and rake in cash since you're the biggest player in the market.

It's just Uber's business model.

[–] some_guy@lemmy.sdf.org 22 points 1 year ago (3 children)

The difference is that the VC bubble has mostly ended. There isn't "free money" to keep throwing at a problem post-pandemic. That's why there's an increased focus on Uber (and others) making a profit.

[–] flumph@programming.dev 21 points 1 year ago

In this case, Microsoft owns 49% of OpenAI, so they're the ones subsidizing it. They can also offer at-cost hosting and in-roads into enterprise sales. Probably a better deal at this point than VC cash.

[–] yiliu@informis.land 13 points 1 year ago

This is what caused spez at Reddit and Musk at Twitter to go into desperation mode and start flipping tables over. Their investors are starting to want results now, not sometime in the distant future.

[–] nodimetotie@lemmy.world 10 points 1 year ago (1 children)

Speaking of Uber, I believe it turned a profit for the first time this year. That is, it had never made a profit since its creation, whenever that was.

[–] ineedaunion@lemmy.world 11 points 1 year ago

All it's ever done is rob from its employees so it can give money to stockholders. Just like every corporation.

[–] Billy_Gnosis@lemmy.world 47 points 1 year ago (2 children)

If AI were so great, it would find a solution to operate at a fraction of the cost it does now.

[–] Death_Equity@lemmy.world 68 points 1 year ago (19 children)

Wait, has anybody bothered to ask AI how to fix itself? How much Avocado testing does it do? Can AI pull itself up by its own boot partition, or does it expect the administrator to just give it everything?

[–] wizardbeard@lemmy.dbzer0.com 12 points 1 year ago (3 children)

Really says something that none of your responses yet seem to have caught that this was a joke.

load more comments (1 replies)
[–] whispering_depths@lemmy.world 38 points 1 year ago

huh, so with the 10bn from Microsoft they should be good for... just over 30 years!

[–] figaro@lemdro.id 31 points 1 year ago

Pretty sure Microsoft will be happy to come save the day and just buy out the company.

[–] li10@feddit.uk 29 points 1 year ago (9 children)

I don’t understand Lemmy’s hate boner over AI.

Yeah, it’s probably not going to take over like companies/investors want, but you’d think it’s absolutely useless based on the comments on any AI post.

Meanwhile, people are actively making use of ChatGPT and finding it to be a very useful tool. But because sometimes it gives an incorrect response that people screenshot and post to Twitter, it’s apparently absolute trash…

[–] Zeth0s@lemmy.world 12 points 1 year ago* (last edited 1 year ago) (12 children)

AI is literally one of the most incredible creations of humanity, and people shit on it as if they know better. It's genuinely an astonishing historical and cultural achievement, a peak of human ingenuity.

No idea why such hate...

One can hate the Disney CEO for misusing AI, but why shit on AI itself?

[–] wizardbeard@lemmy.dbzer0.com 14 points 1 year ago (5 children)

It's shit on because it is not actually AI as the general public tends to use the term. This isn't Data from Star Trek, or anything even approaching Asimov's three laws.

The immediate defense against this statement is people going into mental gymnastics and hand waving about "well we don't have a formal definition for intelligence so you can't say they aren't" which is just... nonsense rhetorically because the inverse would be true as well. Can't label something as intelligent if we have no formal definition either. Or they point at various arbitrary tests that ChatGPT has passed and claim that clearly something without intelligence could never have passed the bar exam, in complete and utter ignorance of how LLMs are suited to those types of problem domains.

Also, I find that anyone bringing up the limitations and dangers is immediately lumped into this "AI haters" group, as if belief in AI were some sort of black and white religion or required some sort of ideological purity. Like having honest conversations about these systems' problems intrinsically means you want them to fail. That's BS.


Machine Learning and Large Language Models are amazing, they're game changing, but they aren't magical panaceas and they aren't even an approximation of intelligence despite appearances. LLMs are especially dangerous because of how intelligent they appear to a layperson, which is why we see everyone rushing to apply them to entirely non-fitting use cases as a race to be the first to make the appearance of success and suck down those juicy VC bux.

Anyone trying to say different isn't familiar with the field or is trying to sell you something. It's the classic case of the difference between tech developers/workers and tech news outlets/enthusiasts.

The frustrating part is that people caught up in the hype train of AI will say the same thing: "You just don't understand!" But then they'll start citing the unproven potential future that is being bandied around by people who want to keep you reading their publication or who want to sell you something, not any technical details of how these (amazing) tools function.


At least in my opinion that's where the negativity comes from.

[–] Zuberi@lemmy.world 25 points 1 year ago (3 children)

This article is dumb as shit

[–] BetaDoggo_@lemmy.world 14 points 1 year ago

No sources, and even given their own numbers they could continue running ChatGPT for another 30 years. I doubt they're anywhere near a net profit, but they're far from bankruptcy.

[–] Transform2942@lemmy.ml 11 points 1 year ago

Right!? I believe it has the hallmark repetitive blandness indicating AI wrote it (because ouroboros)

[–] SocialMediaRefugee@lemmy.world 22 points 1 year ago (5 children)

A couple of my coworkers will have to write their own code again and start reading documentation

[–] Cheesus@lemmy.world 17 points 1 year ago

A company that just raised $10b from Microsoft is struggling with $260m a year? That's almost 40 years of runway.

[–] theneverfox@pawb.social 17 points 1 year ago (4 children)

This is alarming...

One of the things companies have started doing lately is signaling "we could go bankrupt", then jumping ahead a stage on enshittification.

[–] FaceDeer@kbin.social 16 points 1 year ago

I don't think OpenAI needs any excuses to enshittify, they've been speedrunning ever since they decided they liked profit instead of nonprofit.

[–] Browning@lemmy.world 15 points 1 year ago (1 children)

They are choosing to spend that much. That doesn't suggest that they expect financial problems.

[–] banneryear1868@lemmy.world 10 points 1 year ago* (last edited 1 year ago)

Of course it will; all these companies are funded by tech giants and venture capital firms. They don't make money, they cost money.

[–] Widowmaker_Best_Girl@lemmy.world 9 points 1 year ago* (last edited 1 year ago) (1 children)

Well, I was happily paying them to lewd up the chatbots, but then they emailed me telling me to stop. I guess they don't want my money.

[–] NGC2346@sh.itjust.works 9 points 1 year ago (4 children)

It's fine, I've got my own LLaMA at home, it does almost the same as GPT
