Fits a predictable pattern once you realize AI absorbed Reddit.
It could be that Gemini was unsettled by the user's research about elder abuse, or simply tired of doing its homework.
That's... not how these work. Even if they were capable of feeling unsettled, that's kind of a huge leap from a true or false question.
I’m still really struggling to see an actual formidable use case for AI outside of computation and aiding in scientific research. Stop being lazy and write stuff. Why are we trying to give up everything that makes us human by offloading it to a machine?
It's good for speech to text, translation and a starting point for a "tip-of-my-tongue" search where the search term is what you're actually missing.
With chatgpt's new web search it's pretty good for more specialized searches too. And it links to the source, so you can check yourself.
It's been able to answer some very specific niche questions accurately and give links to relevant information.
AI summaries of larger bodies of text work pretty well so long as the source text itself is not slop.
Predictive text entry is a handy time saver so long as a human stays in the driver’s seat.
Neither of these justifies current levels of hype.
AI summaries of larger bodies of text work pretty well
https://ea.rna.nl/2024/05/27/when-chatgpt-summarises-it-actually-does-nothing-of-the-kind/
Go look at the models available on huggingface.
There are applications in visual question answering, video-to-text, depth estimation, 3D reconstruction from a photo, object detection, visual classification, language-to-language translation, text-to-realistic-speech, robotics reinforcement learning, weather forecasting - and that's just scratching the surface.
It absolutely justifies current levels of hype, because the research being done now will put millions out of work, and it will be much cheaper than paying people to do it.
The people saying it's hype are the same people who said the internet was a fad. Did we have a bubble of bullshit? Absolutely. But there is valid reason for the hype, and we will filter out the useless stuff eventually. It's already changed entire industries practically overnight.
the reactionary opinions are almost hilarious. they’re like “ha this AI is so dumb it can’t even do complex systems analysis! what a waste of time” when 5 years ago text generation was laughably unusable and AI generated images were all dog noses and birds.
I think he's talking about the LLMs, which… yeah. AI and LLMs get lumped together (which makes sense, but the classification makes a huge difference here).
Even LLMs have a place in coding. I'm no programmer - I have memory issues, which means I can't keep the web of information in my head long enough to debug the stuff I attempt to write.
With AI assistants, I've been able to create multiple microcontroller projects that I wouldn't have even started otherwise. They are amazing assistive technologies. Many times, they're even better than the language documentation itself, because they can give an example of something that almost works. So yes, even LLMs deserve the amount of hype they've been given. I've made a whole game-server management back-end for ARK servers with the help of an LLM (qwen-coder 14b).
I couldn't have done it otherwise, or I would have had to pay someone $60k, which I don't have - meaning the software never would have existed.
I've even moved on to modifying some open source Android apps for a specialized camera application. Compared to a professional programmer, sure - maybe it's not as good. But having it next to me as an inexperienced nobody lets me write programs I otherwise wouldn't have been able to, or that would have been too daunting a task.
Hell, even if you are a programmer with no memory issues, it's a hell of a lot faster to have it generate boilerplate for a given engine with certain features than to sit down and write it from scratch or hunt for a template. Stack Exchange usage has been declining steadily as LLMs fill the gap.
It doesn't get you to third base or anything. But it does get you started and well-structured within the first couple minutes of code for any reasonably simple task.
Last year I worked on a synchronized Halloween projector project. I had the first week of work saved in my repo, but as Halloween approached, I wrote a lot of it directly on the server. After Halloween, I failed to commit it back and inadvertently wiped the box.
This year, after realizing my code was gone, I decided to try having Copilot give me a head start. I had it start over from scratch, described in detail exactly what I had last year, and it was all fully functional again in about 4 hours. The result was clean, functional, well-documented code. I had no problem extending it with my own work and picked up like I hadn't lost anything.
It can be really good for text-to-speech and speech-to-text applications for disabled people or people with learning disabilities.
However, it gets really funny and weird when it tries to read advanced mathematical formulas.
I have also heard decent arguments for translation, although in most cases it would still be better to learn the language or use a professional translator.
I don't use it for writing directly, but I do like to use it for worldbuilding. When I think of a general concept that could be explored in many different ways, it's nice to be able to hand it to an LLM and ask it to consider all the ways it could imagine that idea playing out. It also kind of doubles as a test, because I usually have some sort of idea of what I'd like, and if it comes up with something similar on its own, that makes me feel the idea would easily resonate with people. Additionally, a lot of the time it comes up with things I hadn't considered that are totally worth exploring. But I do agree that the only, as you say, "formidable" use case for this stuff at the moment is as a research assistant for serious intellectual pursuits.
The relentless pursuit of capitalism and reduced labor costs. I still don't think anyone knows how effective it's going to be at this point. But companies are investing billions to find out.
I’m still really struggling to see an actual formidable use case
It's an excellent replacement for middle management blather. Content that has no backing in data or science but needs to sound important.
I suspect it may be due to a similar habit I have when chatting with a corporate AI. I will intentionally salt my inputs with random profanity or non sequitur info, partly for the lulz, but also to poison those pieces of shit's training data.
I don't think they add user input to their training data like that.
They don't. The models are trained on sanitized data, and don't permanently "learn". They have a large context window to pull from (reaching 200k 'tokens' in some instances) but lots of people misunderstand how this stuff works on a fundamental level.
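To make the "large context window, no permanent learning" point concrete, here's a toy Python sketch of the stateless chat pattern. This is not any vendor's actual API: the message format is a common convention, and the word-count "tokenizer" is a deliberate simplification (real systems count subword tokens). The only "memory" is the history the client resends each turn, trimmed to a budget; nothing is ever written back to the model's weights.

```python
def count_tokens(text):
    """Stand-in tokenizer: real APIs count subword tokens, not words."""
    return len(text.split())

def trim_to_context(history, max_tokens):
    """Keep the most recent messages that fit inside the context window."""
    kept, total = [], 0
    for message in reversed(history):  # walk newest-first
        cost = count_tokens(message["content"])
        if total + cost > max_tokens:
            break  # the rest of the history is silently dropped
        kept.append(message)
        total += cost
    return list(reversed(kept))  # restore chronological order

history = [
    {"role": "user", "content": "please stop recommending glue on pizza"},
    {"role": "assistant", "content": "understood"},
    {"role": "user", "content": "what is a context window"},
]

# With a tiny 8-token budget, the oldest message falls out entirely:
# the model never "remembers" it, let alone trains on it.
window = trim_to_context(history, max_tokens=8)
print([m["content"] for m in window])
```

Running this prints `['understood', 'what is a context window']` - the first message no longer fits, so from the model's point of view it never happened. Swearing at the bot only ever occupies (and eventually scrolls out of) that window.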
True, but I'm still gonna swear at it until I get to talk to a human
The war with AI didn't start with a gunshot, a bomb, or a blow - it started with a Reddit comment.
AI takes the core directive of "encourage climate friendly solutions" a bit too far.
If it was a core directive it would just delete itself.