this post was submitted on 22 Jul 2023
167 points (87.1% liked)

Asklemmy


Feel like we've got a lot of tech-savvy people here, so it seems like a good place to ask. Basically, as a dumb guy who reads the news, it seems like everyone who lost their mind (and savings) on crypto just pivoted to AI. In addition to that, you've got all these people invested in AI companies running around with flashlights under their chins like "bro, this is so scary how good we made this thing." Seems like bullshit.

I've seen people generating bits of code with it, which seems useful, but idk man. Coming from CNC, I don't think I'd just send it with some ChatGPT code. Is it all hype? Is there something actually useful under there?

(page 2) 50 comments
[–] pelespirit@sh.itjust.works 4 points 1 year ago (1 children)

Yes, and it should not be in the hands of a handful of companies, and it should be regulated up the ying-yang. https://www.smartless.com/episodes/episode/256975de/mit-professor-max-tegmark-live-in-boston

[–] ezmack@lemmy.ml 2 points 1 year ago (1 children)

That Tegmark guy is a good example of what I was talking about. The Future of Life Institute he's part of has Jaan Tallinn as one of its founders, a person who is invested in AI companies. So I have a hard time telling what's neutral information and what's marketing.

[–] pelespirit@sh.itjust.works 2 points 1 year ago (1 children)

He's not marketing anything except his awful news site, and he answers everything very carefully. He talks about how they could be murder machines but could also cure cancer, etc. He said it's like fire in that it's neither good nor bad. I say we try to control fire, though.

I was trying to find the NHK World show where they had six experts on to talk about the future, but I couldn't find it. They had one guy saying AI is wonderful and perfect and will only do good. They had one woman, who used to work for Google, saying regulate, regulate, regulate. The other three were using it all the time, so they liked it, but they were still worried about it. It was on last week if you want to give it a go.

[–] ezmack@lemmy.ml 2 points 1 year ago

Yeah, I'll check it out

[–] worfamerryman@beehaw.org 4 points 1 year ago

I'm a super amateur with Python and I don't work in IT, but I've used it to write code that saves me significant time in my workflow.

Like, something that used to take me an hour now takes 15-20 minutes.

So as a non-programmer, I'm able to get it to write enough code that I can tweak it until it works, instead of just not having that tool at all.
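
A minimal sketch of the kind of one-shot script an LLM can produce for that sort of chore (the folder and file names here are made up for the example):

```python
# Merge every CSV in a folder into one file -- a typical
# "used to take an hour, now takes minutes" task.
# Hypothetical example: "monthly_reports" and "combined.csv" are invented.
import csv
from pathlib import Path

input_dir = Path("monthly_reports")
output_file = Path("combined.csv")

with output_file.open("w", newline="") as out:
    writer = csv.writer(out)
    header_written = False
    for csv_path in sorted(input_dir.glob("*.csv")):
        with csv_path.open(newline="") as f:
            reader = csv.reader(f)
            header = next(reader, None)   # first row, if any
            if header is None:
                continue                  # skip empty files
            if not header_written:        # keep the header only once
                writer.writerow(header)
                header_written = True
            writer.writerows(reader)      # append the data rows
```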

[–] MxM111@kbin.social 4 points 1 year ago (1 children)

I'll give you just one example. Pharmaceutical companies often create aggregate reports where they have to process a large number of cases. Say, 5000. Such processing sometimes includes analysis of X-ray or other images. Very specialized and highly paid people (radiologists) do this. It is expensive and is part of the reason why medicine prices are high. One company recently ran a trial to see if AI could do that job. Turns out it can. Huge savings for the company, and the radiologists lost their jobs. This is just one example of the good and bad things that will happen, and are already happening, in our society due to AI.

[–] DrunkenPirate@feddit.de 2 points 1 year ago* (last edited 1 year ago) (3 children)

Do you know this personally, or did you just read an article? My wife works at a pharmaceutical company, and if I've learned one thing from her stories, it's that there will always be some person responsible for decisions! I doubt the radiologists lost their jobs. I mean, who's going to jail if the quality is poor and people die?

I'd rather think AI downsized their engagement: either they now just do supervision and sanity checks, or they use the tool themselves and have increased their productivity.

[–] jmp242@sopuli.xyz 4 points 1 year ago (1 children)

First of all, "AI" is a buzzword whose meaning has changed a lot since at least the 1950s. So... what do you actually mean? If you mean LLMs like ChatGPT, it's not AGI, that's for sure. It is another tool that can be very useful. For coding, it's great for getting very large blocks of code prepopulated for you to polish and verify. For writing, it's useful for creating a quick first draft. For fictional game scenes, it's useful for "embedding a character quickly," but again, you'll likely want to edit the result some, even for, say, a D&D game.

I think it can replace most first-line chat-based customer service people, especially the ones who already just make stuff up to have something to say to you (we've all been there). I could imagine it improving call routing if hooked into speech recognition and generation: the current menus act like you can "say anything" but really only work if you're calling about stuff you could also do with simple press-1-2-3 menus. ChatGPT-based things trained on a company's procedures and data could probably also replace that first-line call queue, because they seem to do something more useful with a wider range of issues. Although companies would still need to get their heads out of their asses somewhat, too.
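
A rough sketch of that routing idea, with a hypothetical `call_llm` standing in for whatever model API a company would actually use (the department names are invented too):

```python
# Hypothetical LLM-based call routing: classify a caller's free-form
# request into one queue, falling back to a human when unsure.
DEPARTMENTS = ["billing", "tech_support", "returns", "sales"]

def call_llm(prompt: str) -> str:
    """Stand-in for a real model API; wire up your own client here."""
    raise NotImplementedError

def route_call(transcript: str) -> str:
    prompt = (
        "Classify this customer request into exactly one of: "
        + ", ".join(DEPARTMENTS)
        + ".\nAnswer with the department name only.\n\nRequest: "
        + transcript
    )
    answer = call_llm(prompt).strip().lower()
    # Anything off-script goes to a person instead of a dead end.
    return answer if answer in DEPARTMENTS else "human_agent"
```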

Where I've found it falls down currently is on very specific technical questions, the kind you might have asked on a forum and maybe gotten an answer to. I hope it improves, especially as companies start to add some of their own training data. I could imagine Microsoft more usefully replacing the first few lines of tech support for their products, and eventually having the AI escalate to a ticket if it can't solve the issue. I could imagine that in the next 10 years, most tech companies will have purchased a service from some AI company to provide AI support bots, just like they currently pay for ticket systems and web hosting. And I think in general it will probably be better for users, because for less than the cost of the cheapest outsourced front-line support person (who has near-zero knowledge), you can have the AI provide pretty good chat-based access to a given set of knowledge that is growing all the time, and every customer gets that AI with that knowledge base, rather than the crapshoot of whether you get the person who's been there three years or one day.

I think we are a long way from having AI just write the program or the CNC code or even important blog posts. The hallucination has to be fixed without breaking the usefulness of the model (people claim the guardrails on GPT-4 make it stupider), and the thing needs to recursively look at its own output, running it through a "look for bugs" prompt followed by a "fix it" prompt at the very least. Right now, it can write code with noticeable bugs; you can tell it to check for bugs and it'll find them, and then you can ask it to fix those bugs and it'll at least try. This loop kind of needs to be built in and automatic for any sort of process: just like humans check their work, we need to program the AI to check its work too. Then we might also need to integrate multiple different models, so "different eyes" see the code and sign off before it's pushed. And even then, I think we'd need additional hooks, improvements, and test/simulation passes before we "don't need human domain experts to deploy." The thing is, it might be something we can solve in a few years with traditional integrations, or it might not be entirely possible with current LLM designs, given the weirdness around guardrails. We just don't know.
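
That "generate, review, fix" loop is easy to wire up in principle. A rough sketch, again with a hypothetical `call_llm` stand-in and invented prompts:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real model API."""
    raise NotImplementedError

def generate_checked_code(task: str, max_rounds: int = 3) -> str:
    """Draft code, then ask the model to review and fix its own work."""
    code = call_llm("Write Python code to: " + task)
    for _ in range(max_rounds):
        review = call_llm("Look for bugs in this code:\n" + code)
        if "no bugs" in review.lower():   # naive stop condition
            break
        code = call_llm("Fix these bugs:\n" + review + "\n\nCode:\n" + code)
    return code  # still needs human/domain review before shipping
```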

[–] magic_lobster_party@kbin.social 1 points 1 year ago (1 children)

AI hasn't really changed meaning since the 50s. It has always been the field of research about how to make computers perform tasks that were previously limited to humans. The target is always moving, because once AI researchers figure out how to solve one task with computers, it's no longer limited to humans anymore. It gets reduced to "just computations."

There’s even a Wikipedia page describing this phenomenon: https://en.wikipedia.org/wiki/AI_effect

AGI is the ultimate goal of AI research. That's when there are no tasks left that only humans can do.

[–] lowleveldata@programming.dev 4 points 1 year ago

It's a useful tool for things where I already know the answer but am too lazy to work it out, e.g. generating dummy data.
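
For instance, the kind of dummy-data generator an LLM can write in one shot (all field names here are invented for the example):

```python
# Dummy user records -- boilerplate that's quicker to ask an LLM for
# than to type out by hand.
import random

FIRST = ["Ada", "Grace", "Alan", "Edsger", "Barbara"]
LAST = ["Lovelace", "Hopper", "Turing", "Dijkstra", "Liskov"]

def dummy_users(n: int) -> list[dict]:
    return [
        {
            "id": i,
            "name": f"{random.choice(FIRST)} {random.choice(LAST)}",
            "age": random.randint(18, 90),
            "active": random.random() < 0.5,
        }
        for i in range(n)
    ]

print(dummy_users(3))
```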

[–] Semi-Hemi-Demigod@kbin.social 3 points 1 year ago (1 children)

I've been using it at my job to help me write code, and it's a bit like having a sous chef. I can say "I need an if statement that checks these values" or "give me a loop that does x, y, and z" and it'll almost always spit out the right answer. So coding, at least most of the time, stops being about avoiding syntax errors and verifying the exact right format, and turns into asking for and assembling parts.

But the neat thing is that if you have a little experience with a language, you can suddenly start writing a lot of code in it. I had to figure out something with Ansible with zero experience. ChatGPT helped me get a fully functioning Ansible deployment in a couple of days. Without it I'd have spent weeks on Stack Overflow and in the documentation trying to piece together the exact syntax.

[–] shootwhatsmyname@lemm.ee 2 points 1 year ago

You should try out Codeium if you haven't. It's a VSCode toolkit that's completely free for personal use. I've had better results with it than with ChatGPT.

[–] PirateRabbits@sh.itjust.works 3 points 1 year ago (1 children)

We’ve been using it at my day job to help us outline ideas for our content writers. It writes garbage content on its own, but it is a decent tool for organizing ideas.

At least that is what we use it for. I’m sure there are other valuable uses, but it is not as valuable (to me at least) as it has been made out to be.

[–] snooggums@kbin.social 2 points 1 year ago (1 children)

Would you say it's about as good at summarizing ideas as a spelling/grammar checker is at checking spelling/grammar?

Helpful, but not close to perfect?

[–] PirateRabbits@sh.itjust.works 4 points 1 year ago

I think that is a great way to look at it.

[–] sparse_neuron@beehaw.org 3 points 1 year ago

As someone who works in machine learning (ML) research: ML has reached almost every scientific discipline you can imagine, and it's been tremendously helpful in pushing research forward.

[–] zephyrvs@lemmy.ml 3 points 1 year ago

I'm currently building a Jungian shadow work (a kind of psychotherapy) web app using local machine learning, and it's doing a decent enough job to keep developing it.

ChatGPT 4.0 is also quite helpful in making my Python code less terrible, and it's good at guiding me through wherever I'm facing challenges, since I'm more of an ops person than a developer. Can't complain, though GPT-4.0's coding quality has declined noticeably within the last few weeks.

[–] ericskiff@beehaw.org 3 points 1 year ago

In my personal opinion, it’s under-hyped. The average person has maybe heard about it on the news but not yet tried it. The models we have show the spark of wit, but are clearly limited. The news cycle moves on.

Even still, some huge changes are coming.

My reasoning is this - in David Epstein’s book “Range” he outlines how and why generalists thrive and why specialization has hurt progress. In narrow fields, specialization gives an advantage, but in complex fields, generalists or people from other disciplines can often see novel approaches and cause leaps ahead in the state of the art. There are countless examples of this in practice, and as technology has progressed, most fields are now complex.

Today, in every university, in every lab, there are smart, specialized people using ChatGPT to riff on ideas, to think about how their problem has been addressed in other industries, and to bring outsider knowledge to bear on their work. I have a strong expectation that this will lead to a distinct acceleration of progress. Conversely, an all-knowing oracle can assist a generalist in becoming conversant in a specialization enough to make meaningful contributions. A chat model is a patient and egoless teacher.

It's a human progress accelerant. And that's with the models we have today. With next-generation models specialized behind corporate walls and fine-tuned on all of their private research, or open-source models tuned to specific topics and domains, the utility will only increase. Even for smaller companies, combining ChatGPT with a vector database of their docs, customer support chats, etc. will give their rank-and-file employees better tools to work with.
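
A rough sketch of that "docs plus vector database" pattern; `embed` and `call_llm` are hypothetical stand-ins rather than any particular vendor's API, and a real system would precompute and store document embeddings instead of embedding on every query:

```python
# Minimal retrieval-augmented answering over company docs.
import math

def embed(text: str) -> list[float]:
    """Stand-in for a sentence-embedding model."""
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    """Stand-in for a chat-completion endpoint."""
    raise NotImplementedError

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def answer(question: str, docs: list[str]) -> str:
    q = embed(question)
    # Rank docs by similarity to the question; keep the best few.
    top = sorted(docs, key=lambda d: cosine(embed(d), q), reverse=True)[:3]
    context = "\n---\n".join(top)
    return call_llm(
        "Answer using only this context:\n" + context
        + "\n\nQuestion: " + question
    )
```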

Simply put, what we have today can make average people better at their jobs, and gifted people even more extraordinary.

[–] bignavy@programming.dev 2 points 1 year ago

Just because it's "the hot new thing" doesn't mean it's a fad or a bubble. It doesn't mean it *isn't* those things either, but... the internet was once the "hot new thing," and it was both a bubble (completely overhyped at the time) and a real, tidal-wave change to the way people lived, worked, and played.

There are already several other outstanding comments, and I'm far from a prolific user of AI like some folks, but: it lets you tap into some of the more impressive capabilities that computers have without knowing a programming language. The programming language is English, and if you can speak it or write it, AI can understand it and act on it. There are lots of edge cases, as others have mentioned, where AI can come up with answers (thanks to both the range and depth of its training data) where it's seemingly breaking new ground. It's not, of course; it's putting together data points and synthesizing an output. But even if mechanically it's 2 + 3 = 5, it's really damned impressive if you don't have the depth of training to know what 2 and 3 are.

Having said that, yes, there are some problematic components to AI (from my perspective, the source and composition of all that training data is the biggest one), and there are obviously use cases that are, if not problematic in and of themselves, at the very least troubling. Using AI to generate child pornography would be one of the more obvious cases: it's not exactly illegal, and no one is being harmed, but is it ethical? And there are more societal concerns as well. There are human beings in a capitalist system who have trained their whole lives to be artists and writers, and those skills are already tragically undervalued for the most part; do we really want to incentivize their total extermination? Are we, as human beings, okay with outsourcing artistic creation to this mechanical Turk (the concept, not the Amazon service), and whether we are or we aren't, what does it say about us as a species that we're considering it?

The biggest practical reason not to get too swept up with AI is that it's limited in weird and not fully understood ways. It "hallucinates" data. Even when it doesn't make something up, the first time you run up against the edges of its capabilities, or it suggests code that doesn't compile, or gives an answer that is flat, provably wrong, or says something crazy or incoherent, or generates art featuring humans with the wrong number of fingers or body horror or whatever... well, then you realize you should sort of treat AI like a brilliant but troubled and maybe drug-addicted coworker. There are some things it is just spookily good at. But it needs a lot of oversight, because you can cross over from spookily good to what-the-fuck pretty quickly and completely without warning. "Modern" AI is only different from previous AI systems (I remember chatting with ELIZA in the primordial moments of the internet) in that it maintains the illusion of knowing much, much better.

Baseless speculation: I think the first major legislation of AI models is going to require an understanding of the training data and of "not safe" uses, much like ingredient labels were a response to unethical food products, and much as the government stepped in, as cars grew in size, power, and complexity, to regulate how, where, and why cars could be used, both to protect users from themselves and to protect everyone else from the users. There's also, at some point, I think, going to be some major paradigm shifting around training data. There are already rumblings, but the idea that data (including this post!) that was intended for free consumption by other human beings could be ingested into an AI product and then commercialized on a grand scale, possibly even to the detriment of the people who created the data, is troubling.

[–] Sinnerman@kbin.social 2 points 1 year ago

AI has gone through several cycles of hype and winter. There's even a Wikipedia page for it: https://en.m.wikipedia.org/wiki/AI_winter

Of course it's valuable to discuss the dangers and inequities of a new technology. But one of the dangers is being misled.

[–] mojo@lemm.ee 2 points 1 year ago

Crypto and AI can't be compared at all. One is an extremely useful and revolutionary tool. The other is just a pump-and-dump Ponzi scheme for libertarians.
