this post was submitted on 07 Jul 2023
Technology
Because you have to have specific knowledge about how AI works to know this is a bad idea. If you don't have specific knowledge about it, it just sounds futuristic because AI is like a Star Trek thing.
This current AI craze is largely as big a deal as it is because so few people, including the people using it, have any idea what it is. A cousin of mine works for a guy who asked an AI about a problem, and it cited an article about how to fix it. The boss asks my cousin to implement the solution proposed in that article. My cousin searches for it and discovers the article doesn't actually exist, so he says so. After many rounds of back and forth, with the boss insisting "this is the name of the article, this is who wrote it" and my cousin replying "that isn't a real thing, and while that author did write about some related topics, there's no actionable information there", the boss becomes convinced that this is a John Henry situation: my cousin is trying to make himself look more capable than an AI he feels threatened by. The argument ends with a shrug and an "Okay, if it's so important to you, then we can do something else, even though this totally would have worked."
There really needs to be large-scale education on what language models are actually doing, so that people stop using them for purposes they're not suited to.
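To make that concrete: a language model doesn't look facts up, it samples whichever words plausibly follow the words so far. Here's a deliberately tiny sketch of that idea (a word-level Markov chain, not a real LLM; the article titles are made up for illustration). Trained on a few real-looking titles, it happily generates fluent new titles that never existed, which is exactly the kind of confident fabrication in the story above.

```python
import random
from collections import defaultdict

# Toy training data: a handful of invented article titles.
titles = [
    "How to Fix Slow Database Queries in Production",
    "How to Debug Memory Leaks in Production Systems",
    "Best Practices for Scaling Database Systems",
]

# Count word-to-next-word transitions; None marks the start and end of a title.
transitions = defaultdict(list)
for title in titles:
    prev = None
    for word in title.split():
        transitions[prev].append(word)
        prev = word
    transitions[prev].append(None)

def generate_title(rng):
    """Sample a title one word at a time, always picking a plausible next word.

    The model has no notion of whether the result names a real article; it only
    knows which word tends to follow which. That is the core of the hallucination
    problem, in miniature.
    """
    word = rng.choice(transitions[None])
    out = []
    while word is not None:
        out.append(word)
        word = rng.choice(transitions[word])
    return " ".join(out)

print(generate_title(random.Random(0)))  # fluent, but quite possibly a title no one ever wrote
```

A real LLM replaces the frequency table with a neural network over billions of documents, which makes the output vastly more convincing, but the failure mode is the same: plausibility is the optimization target, not truth.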