this post was submitted on 14 Mar 2024

Technology

top 4 comments
[–] pixxelkick@lemmy.world 1 points 8 months ago

Note that ChatGPT indeed implemented a state parameter, but their state was not a random value, and therefore could be guessed by the attacker.

Bruh wut, rookie mistake.

State is supposed to be cryptographically random and should expire fairly quickly.

I've always used a random GUID that expires after 10-15 minutes for state; if they try to complete the OAuth flow with an expired state value, I reject it and ask them to try again.
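A minimal sketch of that pattern (the in-memory store, function names, and 10-minute TTL here are illustrative choices, not anyone's actual implementation):

```python
import secrets
import time

# Issued state values -> issue timestamp. A real app would keep this in a
# server-side session or a cache like Redis, not a module-level dict.
_issued_states = {}

STATE_TTL_SECONDS = 10 * 60  # reject state values older than 10 minutes


def issue_state() -> str:
    """Generate a cryptographically random state value and record when it was issued."""
    state = secrets.token_urlsafe(32)  # unguessable, unlike a static or sequential value
    _issued_states[state] = time.time()
    return state


def validate_state(state: str) -> bool:
    """Accept a state only if we issued it and it hasn't expired; single-use."""
    issued_at = _issued_states.pop(state, None)  # consume on lookup
    if issued_at is None:
        return False  # never issued, or already used
    return (time.time() - issued_at) <= STATE_TTL_SECONDS
```

Making the state single-use (popped on lookup) also blocks replay of an intercepted callback, which the TTL alone wouldn't.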

Also yeah, the redirect URI trick is common; that's why OAuth APIs must always have an allowlist of valid redirect URLs. And not just the domain: the whole URL.

That's why when you create a Google API token you have to specify explicitly which URLs it's valid for. That way any other redirect URI gets rejected, preventing an injection attack where a third party supplies their own redirect URI to a victim.
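The whole-URL check really is just an exact match against registered values. A sketch (the allowlist contents and callback URL are hypothetical):

```python
# Full redirect URIs registered for this OAuth client. Exact strings,
# not domains: same host with a different path must still be rejected.
ALLOWED_REDIRECT_URIS = {
    "https://app.example.com/oauth/callback",
}


def redirect_uri_allowed(uri: str) -> bool:
    """Exact-match check against the registered allowlist."""
    return uri in ALLOWED_REDIRECT_URIS
```

Note that `https://app.example.com/other-path` fails this check even though the domain matches; prefix or domain-only matching is exactly what attackers exploit.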

OAuth is pretty explicit about all of this in its spec. It really sucks that people treat these as optional, "not important" details.

It's important. Do it. Always.

[–] EdibleFriend@lemmy.world -1 points 8 months ago (1 children)

I still love how stupid 'hacking' these things is. Like the poem shit. That's the future. Tell a bot to say something a bunch of times and it spits out someone's address.

[–] pixxelkick@lemmy.world 3 points 8 months ago* (last edited 8 months ago) (1 children)

Not related to the article at all mate.

This article is about how many plugins have been discovered to have implemented OAuth in a very insecure way, and simply using them can expose sensitive info you've linked to your ChatGPT account.

E.g.:

  1. You connect your github account to your chatgpt account (so you can ask chatgpt questions about your private codebase)

  2. You install and use one of many other compromisable weakly implemented plugins

  3. Attacker uses the weak plugin to compromise your whole account and can now access anything you attached to it, e.g. the private git repos you hooked up in step 1...

Most of the attack vectors involve a basic (hard-to-notice) phishing attack using weak OAuth URLs.

The tricky part is that the URLs truly are, and look, legit. It isn't a fake URL; it actually links to the legit page, but with added query params (the part after the ? in the URL) that compromise how it behaves.
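To illustrate the shape of such a link (the host, client ID, and attacker domain below are all made up, not from the article): the scheme, host, and path are the real authorization page, so the link passes a glance check, while the attacker-controlled part hides in the query string.

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical phishing link: legit authorization endpoint, but with a
# redirect_uri query param pointing at a server the attacker controls.
link = ("https://auth.example.com/oauth/authorize"
        "?client_id=plugin123"
        "&redirect_uri=https://evil.example.net/steal")

parts = urlsplit(link)
params = parse_qs(parts.query)

print(parts.netloc)               # auth.example.com  <- what the victim checks
print(params["redirect_uri"][0])  # https://evil.example.net/steal  <- the payload
```

If the provider did strict whole-URL allowlisting, that `redirect_uri` would be rejected server-side no matter how legit the link looked.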

[–] catloaf@lemm.ee 1 points 8 months ago

Yeah, it's a legit exploit.

But it could also be mitigated by not giving your sensitive data to ChatGPT in the first place.