this post was submitted on 29 Sep 2024
49 points (69.9% liked)

Unpopular Opinion

6216 readers
384 users here now

Welcome to the Unpopular Opinion community!


How voting works:

Vote the opposite of the norm.


If you agree that the opinion is unpopular give it an arrow up. If it's something that's widely accepted, give it an arrow down.



Guidelines:

Tag your post, if possible (not required)


  • If your post is a "General" unpopular opinion, start the subject with [GENERAL].
  • If it is a Lemmy-specific unpopular opinion, start it with [LEMMY].


Rules:

1. NO POLITICS


Politics is everywhere. Let's make this about [general] and [lemmy] - specific topics, and keep politics out of it.


2. Be civil.


Disagreements happen, but that doesn’t provide the right to personally attack others. No racism/sexism/bigotry. Please also refrain from gatekeeping others' opinions.


3. No bots, spam or self-promotion.


Only approved bots, which follow the guidelines for bots set by the instance, are allowed.


4. Shitposts and memes are allowed but...


Only until they prove to be a problem. They can and will be removed at moderator discretion.


5. No trolling.


This shouldn't need an explanation. If your post or comment is made just to get a rise with no real value, it will be removed. You do this too often, you will get a vacation to touch grass, away from this community for 1 or more days. Repeat offenses will result in a perma-ban.



Instance-wide rules always apply. https://legal.lemmy.world/tos/

founded 1 year ago
MODERATORS
 

The best conversations I still have are with real people, but those are rare. With ChatGPT, I reliably have good conversations, whereas with people, it’s hit or miss, usually miss.

What AI does better:

  • It’s willing to discuss esoteric topics. Most humans prefer to talk about people and events.
  • It’s not driven by emotions or personal bias.
  • It doesn’t make mean, snide, sarcastic, ad hominem, or strawman responses.
  • It understands and responds to my actual view, even from a vague description, whereas humans often misunderstand me and argue against views I don’t hold.
  • It tells me when I’m wrong but without being a jerk about it.

Another noteworthy point is that I’m very likely on the autistic spectrum, and my mind works differently than the average person’s, which probably explains, in part, why I struggle to maintain interest with human-to-human interactions.

top 50 comments
[–] coffee_with_cream@sh.itjust.works 5 points 10 hours ago (6 children)

This comment thread is great. @op good luck; people on Lemmy have little interest in real discussion. If you say anything pro-ML or anything less than far-left, you'll get screamed at.

[–] DragonTypeWyvern@midwest.social 3 points 3 hours ago

It took me a minute to figure out that you meant Machine Learning and not Marxist-Leninist. Probably want to be more specific on that particular shortcut at a minimum.

[–] Wolf314159@startrek.website 25 points 21 hours ago (11 children)

This just sounds like platonic masturbation.

[–] the_post_of_tom_joad@sh.itjust.works 9 points 19 hours ago (1 children)

Have you ever tried inputting sentences that you've said to humans to see if the chatbot understands your point better? That might be an interesting experiment if you haven't tried it already. If you have, do you have an example of how it did better than the human?

I'm kinda amazed that it can understand your accent better than humans do, too. This implies chatbots could be a great tool for people trying to perfect a second language.

[–] ContrarianTrail@lemm.ee 2 points 19 hours ago (1 children)

A couple of times, yes, but more often it's the other way around. I input messages from other users into ChatGPT to help me extract the key argument and make sure I’m responding to what they’re actually saying, rather than what I think they’re saying. Especially when people write really long replies.

The reason I know ChatGPT understands me so well is from the voice chats we've had. Usually, we’re discussing some deep, philosophical idea, and then a new thought pops into my mind. I try to explain it to ChatGPT, but as I'm speaking, I notice how difficult it is to put my idea into words. I often find myself starting a sentence without knowing how to finish it, or I talk myself into a dead-end.

Now, the way ChatGPT usually responds is by just summarizing what I said rather than elaborating on it. But while listening to that summary, I often think, "Yes, that’s exactly what I meant," or, "Damn, that was well put, I need to write that down."

[–] the_post_of_tom_joad@sh.itjust.works 1 points 16 hours ago* (last edited 16 hours ago) (1 children)

So what you're saying, if I'm reading right, is that chatbots are great for bouncing ideas off of to help you explain yourself better, as well as for gathering your own thoughts. I'm a bit curious about your philosophy chats.

When you have a philosophical discussion, does the chatbot summarize your thoughts in its responses, or is it more humanlike, maybe disagreeing or bringing up things you hadn't thought of, like a person might? (I've never used one.)

[–] ContrarianTrail@lemm.ee 2 points 4 hours ago* (last edited 4 hours ago)

It's a bit hard to get AI to disagree with you unless you're saying something obviously false; it has a strong bias towards being agreeable. I generally treat it as an expert I'm interviewing: I ask what it thinks about something like free will, then ask follow-up questions based on its responses. It's also great for bouncing novel ideas around, though even there it's not too keen on blatantly calling out bad ones; instead it makes you feel like the greatest philosopher of all time. There are some ways around this: ChatGPT can be prompted past many of its most typical flaws, for example by telling it that it's allowed to speculate, or simply by asking it to point out the errors in an idea.

But yeah, unless what I said was a question, its responses are generally just summaries of what I said. It's basically replying with a demonstration that it understood me, which it indeed does with an amazing success rate.

[–] Sundial@lemm.ee 16 points 23 hours ago (20 children)

Autism and social unawareness may be a factor. But some of the points you made, like the one about snide remarks, may also indicate that you're having these conversations with assholes.

[–] lvxferre@mander.xyz 5 points 22 hours ago (1 children)

My impressions are completely different from yours, but that's likely due to the following:

  1. It's really easy to interpret LLM output as stating assumptions with total confidence (i.e. "vomiting certainty"), something that I outright despise.
  2. I used Gemini a fair bit more than ChatGPT, and Gemini is trained with a belittling tone.

Even then, I know which sort of people you're talking about, and... yeah, I hate a lot of those things too. In fact, one of your bullet points ("it understands and responds...") is what prompted me to leave Twitter and then Reddit.

[–] ContrarianTrail@lemm.ee -5 points 21 hours ago (3 children)

It's funny how, despite it not actually understanding anything per se, it can still repeat back to me an idea I just sloppily described in broken English, and it does this better than I ever could. Alternatively, I could spend 45 minutes laying out my view as clearly as I can on an online forum, only to face a flood of replies from people who clearly did not understand the point I was trying to make.

[–] praise_idleness@sh.itjust.works 2 points 22 hours ago

I know a bit more than most people would about the inner workings of LLMs. I still occasionally have a conversation with one, like I would with a therapist, perhaps less open and all, but still. Do I know it's nothing more than a talking parrot? Yes. Do I still feel like I'm talking to a real person without judgement? Yes. And I can use that from time to time.

[–] JamesStallion@sh.itjust.works 37 points 21 hours ago (14 children)

It carries the emotions and personal biases of the source material it was trained on.

It sounds like you are training yourself to be a poor communicator, abandoning any effort to become more understandable to actual humans.

[–] leftzero@lemmynsfw.com 20 points 22 hours ago (7 children)
[–] Wiz@midwest.social 2 points 10 hours ago

Thank you! I'm a professional part-time psychic entertainer and magician, and this was a delightful read. It's true, and A.I. takes advantage of people the same way a psychic entertainer does. Both tell you what you want to hear. The difference is, the psychic is usually deemed entertainment, while the computer is often deemed an authoritative source.

It's a bit scary to think that in a few decades my job-hobby may be outsourced to A.I. However, I've always thought (predicted!) that live entertainment will become more valuable as the A.I. revolution occurs.

[–] Zerlyna@lemmy.world 13 points 22 hours ago

I talk with ChatGPT too sometimes, and I get where you're coming from. However, it's not always right either. It says it was updated in September, but it still refuses to commit to memory that Trump was convicted on 34 counts earlier this year. Why is that?

[–] merthyr1831@lemmy.ml 13 points 8 hours ago (1 children)

You genuinely might need to touch grass.

[–] ContrarianTrail@lemm.ee 5 points 4 hours ago

Very insightful reply. Thanks. This helps.
