Technology
This is the official technology community of Lemmy.ml for all news related to the creation and use of technology, and to facilitate civil, meaningful discussion around it.
Ask in a DM before posting product reviews or ads. Otherwise, all such posts are subject to removal.
Rules:
1: All Lemmy rules apply
2: Do not make low-effort posts
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived versions as sources, NOT screenshots. This helps blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)
6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: Crypto-related posts, unless essential, are disallowed
This needs to be hammered into techbros' heads until they shut the fuck up about the so-called "AI" revolution.
I've spent a lot of time using, testing, and evaluating LLMs and GPT-style models for generating code and text/prose. Some of it is just general use to see how they behave, some has been explicit evaluation of creative writing, and a bunch of it is code generation to test out how we need to modify our CS curriculum in light of these new tools.
It's an impressive piece of technology, but it's not very creative. It's meh. The results are meh. Which is to be expected since it's a statistical model that's using a large body of prior work to produce a reasonable approximation of what it's seen before. It trends towards the mean, not the best.
This'd explain why inexperienced users of AI inevitably get mediocre results. Still takes creativity to get stolen mediocrity.
You have to know how to operate the oven to reheat store bought pie. Generative LLMs are machines like ovens, and turning the knobs is not creativity. Not operating the oven correctly gets you Sharon Weiss results.
I guess a protip is you have to tell it explicitly in the prompt who it's supposed to steal from.
For instance, Midjourney or SD will produce much better results if you put specific ArtStation channel names along with 'artstation' in the prompt.
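Something along these lines (a hypothetical prompt; the bracketed names are just placeholders, not recommendations of real artists):

```python
# Hypothetical illustration of the trick above: pairing the generic
# "artstation" tag with specific artist/channel names in the prompt.
prompt = (
    "a lighthouse at dusk, dramatic lighting, trending on artstation, "
    "in the style of <artstation artist 1> and <artstation artist 2>"
)
print(prompt)
```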
I'm curious if you've gotten anything decent out of them. I've tried to use it for tech/code questions, and it's been nothing but disappointment after disappointment. I've tried to use it to get help with new concepts, but it hallucinates like crazy and always gives me bad results; sometimes it's so bad that it gives me answers I've already told it were wrong.
Yeah, I've just set up a hotkey that says something like "back up your answer with multiple reputable sources" and I just always paste it at the end of everything I ask. If it can't find webpages to show me to back up its claims then I can't trust it. Of course this isn't the case with coding, for that I can actually run the code to verify it.
What version are you using?
GPT-4 is quite impressive, and the dedicated code LLMs like Codex and Copilot are as well. The latter must have had a significant update in the past few months, as it's become wildly better almost overnight. If trying it out, you should really do so in an existing codebase it can use as a context to match style and conventions from. Using a blank context is when you get the least impressive outputs from tools like those.
That's where some of the significant advances over the past 12 months of research have been, specifically around using the fine-tuning phase to bias towards excellence. The biggest advance there has been that capabilities in larger models seem to be transmissible to smaller models by feeding in output from the larger, more complex models.
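As a toy illustration of that distillation idea (this is only a sketch with stand-in PyTorch models, not the actual recipe from any of that research): a small "student" model is trained to reproduce the output distribution of a larger "teacher" model.

```python
# Toy knowledge-distillation sketch: the student learns to match the
# teacher's output distribution over a vocabulary.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB, DIM_BIG, DIM_SMALL = 100, 256, 32

# Stand-ins for a "large" and a "small" language-model head.
teacher = nn.Sequential(nn.Embedding(VOCAB, DIM_BIG), nn.Linear(DIM_BIG, VOCAB))
student = nn.Sequential(nn.Embedding(VOCAB, DIM_SMALL), nn.Linear(DIM_SMALL, VOCAB))

opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(200):
    tokens = torch.randint(0, VOCAB, (64,))                    # dummy "prompt" tokens
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(tokens), dim=-1)      # teacher's output distribution
    student_logprobs = F.log_softmax(student(tokens), dim=-1)
    # Train the student to reproduce the teacher's distribution.
    loss = F.kl_div(student_logprobs, teacher_probs, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```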
Also, the process supervision work from May to enhance chain-of-thought (CoT) reasoning is pretty nuts.
So while you are correct that the pretrained models come out with a regression towards the mean, there are very promising recent advances in taking that foundation and moving it towards excellence.
I get the sentiment, but don't really agree. Humans' inputs are also from what already exists, and music is generally inspired from other music which is why "genres" even exist. AI's not there yet, but the statement "real creativity comes solely from humans" Needs Citation. Humans are a bunch of chemical reactions and firing synapses, nothing out of the realm of the possible for a computer.
Yeah, I'd actually make a more limited statement. Real creativity requires subjective experience and the ability to generate inputs solely from subjectivity, i.e. experiencing the redness of the color red. AI could definitely do that, which is why LLMs are not AI imo.
It's not the techbros leading this, it's the BBAs and MBAs that wouldn't know art if Michelangelo came to life and slapped them in the face with the Sistine Chapel.
I would never call an actual technician a techbro! Techbros are Rick&Morty ledditor "fuck yeah science!" dorks.
I see it as more an inability to analyze, evaluate, and edit. A lot of "creativity" in the world of musical composition is putting together existing elements and seeing what happens. Any composer, from pop to the very avant-garde, is influenced by and sometimes even borrows from their predecessors (it's why copyright law is so complex in music).
It's the ability to make judgements (does this sound good/interesting, does this have value, would anyone want to listen to this?) and adjust accordingly that will lead to something original and great. Humans are so good at this that we might be making edits before the notes hit the page (brainstorming). This AI clearly wasn't. And deciding on value seems wildly complex for modern-day computers. Humans can't even agree on it (you might like rock but hate country, for example).
So in the end, they are "creative", but only in a monkey-typewriter sense. Who is going to sort through the billions of songs like this to find the one masterpiece?
One of the overlooked aspects of generative AI is that effectively by definition generative models can also be classifiers.
So let's say you were Spotify and you fed into an AI all the songs as well as the individual user engagement metadata for all those songs.
You'd end up with a model that would be pretty good at effectively predicting the success of a given song on Spotify.
So now you can pair a purely generative model with the classifier: spit out song after song, but only move on to promoting one if the classifier thinks there's a high likelihood of it being a hit.
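A minimal sketch of that loop (pure toy code; `generate_song` and `predict_hit_probability` are hypothetical stand-ins for the generative model and the engagement-trained classifier, not any real API):

```python
import random

random.seed(0)

def generate_song() -> list[float]:
    # Placeholder "generative model": a random feature vector standing in for a song.
    return [random.random() for _ in range(8)]

def predict_hit_probability(song: list[float]) -> float:
    # Placeholder "classifier": in the scenario above this would be trained on
    # per-song listener engagement data; here it just scores the mean feature.
    return sum(song) / len(song)

THRESHOLD = 0.6
generated, promoted = 0, []
while len(promoted) < 5:
    candidate = generate_song()
    generated += 1
    # Only move a song on to promotion if the classifier rates it a likely hit.
    if predict_hit_probability(candidate) >= THRESHOLD:
        promoted.append(candidate)

print(f"promoted {len(promoted)} of {generated} generated songs")
```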
Within five years systems like what I described above will be in place for a number of major creative platforms, and will be a major profit center for the services sitting on audience metadata for engagement with creative works.
Right, the trick will be quantifying what is 'likely to be a hit', which, if we're honest, has already been done.
Also, neural networks and evolutionary algorithms can inject random perturbations/mutations into the system which operate a bit like uninformed creativity (something like banging on a piano and hearing something interesting that's worth pursuing). So, while not 'inspired' or 'soulful' as we would generally think of it, these algorithms are capable of being creative in some sense. But it would need to be recognized as 'good' by someone or something... and back to your point.
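A toy sketch of that mutate-and-judge loop (`score_melody` is a hypothetical stand-in for whoever or whatever decides what counts as 'good'):

```python
# Randomly perturb an existing melody and keep a mutation only if some
# scoring function rates it as more interesting than what we had.
import random

random.seed(1)

def score_melody(melody: list[int]) -> float:
    # Placeholder judge: here, simply prefer melodies with more distinct pitches.
    return len(set(melody)) / len(melody)

melody = [60, 60, 62, 62, 64, 64, 65, 65]   # MIDI note numbers
for _ in range(100):
    mutated = melody.copy()
    i = random.randrange(len(mutated))
    mutated[i] += random.choice([-2, -1, 1, 2])        # random "banging on the piano"
    if score_melody(mutated) > score_melody(melody):   # keep only judged improvements
        melody = mutated

print(melody)
```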
"Generative" is such a misleading term. It's not generating anything, it is replicative.
The anger comes from the fact that companies are using AI instead of hiring artists.
There is a distinction between a human being inspired by an existing piece of art and an AI creating something from other art. The human has to experience it through the lens of the human experience and create using the human body. AI takes multiple pieces of art and essentially makes a collage.
Eh, humans still take inspiration from others even in their original art. Most professionals draw from reference, or emulate styles, or follow some common method. Drawing from a singular source is ethically questionable, but imitating elements from many sources is just part of the process.
Arguably, no human creation is purely original, the originality comes from the creativity of the remix.
I’m not arguing for originality. I’m saying that you can have a human connection with a human-made piece of art that, by definition, cannot exist for AI art.
For now.
And don't forget, humans are also trained on the inputs of others.
Meat goes in. Sausage comes out.
The problem for a lot of the companies behind these things is that they've run into problems now that their investors want them to turn meat into a Black Forest gateau.
I'm sceptical if they can manage that feat. But what do I know.
Hate to break it to you but human creativity doesn't exist in a vacuum. You call it theft, artists call it inspiration.
Still, AI is able to "create" new things by a combination of existing concepts. It can generate a Roomba in the style of Van Gogh for example, which is probably not something that currently exists.
Are you saying the idea of a unicorn wasn't new and original because it was drawing on the pre-existing features of a horse and narwhal?
What have you seen that wasn't there before?
I mostly have qualms with the quote; I have no illusions about the level of discussion around AI.
Right, just as soon as all the people proclaiming that can point to the 'soul' bit of my brain. There is absolutely no reason to say that AI cannot be creative; there's nothing fundamentally magical about creativity that means only humans can do it.
You're equating creativity to the soul. They're not the same thing. But we can definitely look at the brain and see what parts light up when we perform creative tasks.
The belief that only humans can be creative is interestingly parallel to intelligent design creationism. The latter is fundamentally a religious faith, but it strongly appeals to the intuition that anything that happens needs a humanoid creator.