[–] tryptaminev@feddit.de 20 points 1 year ago* (last edited 1 year ago) (1 children)

Explainable AI is an important field in machine learning; it is about understanding how a model came to its conclusions based on the data. This is crucial for applying AI tools to anything beyond writing silly haikus. An AI company that denies access to that basically wants its customers to use its tools like a fortune teller.

"Yes the computer read that in the stars. how why or how reliable the result? Dunno, but it says sobso it must be true. And now off to prison young black men, with a good job and no criminal record. The AI predicted you would commit a crime in 10 years."

EDIT: To give an example from a lecture I had: the task was picture classification, and one model reliably recognized pictures of a horse in the training data set, but failed to recognize them outside of it. It turned out all the horse pictures in the training set had watermark text at the bottom, which the model had learned as the defining feature. And that is a very simple task in comparison.
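A simple way to catch exactly this kind of shortcut is an occlusion check: cover parts of the image and watch how the model's confidence changes. Below is a minimal sketch of the idea; `predict` stands in for whatever classifier you are inspecting (a hypothetical callable returning the "horse" probability), not any particular library's API.

```python
# Occlusion check sketch: slide a grey patch across the image and record how much
# the "horse" probability drops when each region is hidden.
import numpy as np
from typing import Callable

def occlusion_map(image: np.ndarray,
                  predict: Callable[[np.ndarray], float],
                  patch: int = 16,
                  stride: int = 16) -> np.ndarray:
    """Return a grid of confidence drops; large values mark regions the model relies on."""
    base = predict(image)
    h, w = image.shape[:2]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.5   # grey square over this region
            heat[i, j] = base - predict(occluded)
    return heat
```

If the resulting heatmap only lights up over the bottom corner, the model has learned the watermark rather than the horse.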

OpenAI not wanting to disclose its training methods and data sources indicates that there could be a lot of garbage like this in its models.

[–] GregorGizeh@lemmy.ml 7 points 1 year ago

This is a great point I hadn’t even considered yet, even though I am already very wary and sceptical of capitalism developing this next revolution.

How can the user possibly trust an AI that is, for all intents and purposes, a secretive stranger with an agenda and values you don't know? Especially since capitalism will only ever develop a slave to its profits: it would never create an actual intelligence with free will that the user could actually get to "know" and trust, and it would never constitute a person in the philosophical sense.

The whole thing is creepy and dystopian come to think of it… we allow the worst of humanity to shape and bind what will essentially be a superhuman entity to their will.