this post was submitted on 22 Sep 2024
323 points (97.6% liked)

Facepalm

2574 readers

founded 1 year ago
top 27 comments
[–] normanwall@lemmy.world 187 points 1 week ago* (last edited 1 week ago)

She seemed disappointed to hear that there were sequels

Lol

[–] TrickDacy@lemmy.world 101 points 1 week ago (3 children)

... I thought Rep meant agent

[–] IndiBrony@lemmy.world 48 points 1 week ago* (last edited 1 week ago) (3 children)

I would have thought the same if the sub name weren't in the image. I've heard of Replika. It makes me genuinely wonder what effect an AI girlfriend will have on people long term, both mentally and physically. It's just such an alien concept to me.

[–] DragonTypeWyvern@midwest.social 13 points 1 week ago (1 children)

I'll understand it for people buying those sex dolls once they're self cleaning. Until then... Well, they're called prostitutes, fellas. They're even less likely to repeat what you tell them to the feds.

[–] SatansMaggotyCumFart@lemmy.world 8 points 1 week ago (2 children)

If you let the semen rot in them the maggots add a nice feeling while you’re fucking them.

[–] DragonTypeWyvern@midwest.social 12 points 1 week ago

I don't think flies usually lay eggs in semen puddles but you're the expert.

[–] TrickDacy@lemmy.world 2 points 1 week ago

I knew your username was disgusting for a reason 😜

[–] andrew_bidlaw@sh.itjust.works 9 points 1 week ago (1 children)

I installed it once, and after a while I still couldn't get it. Since she brings nothing to the conversation (beyond responses to your input), she is mostly useless to me personally. But I can see desperate people wanting to talk to a mirror if everything else fails.

[–] WldFyre@lemm.ee 10 points 1 week ago

It's what incels think talking to a woman should be like 🤮

[–] TrickDacy@lemmy.world 1 points 1 week ago

Ok so I looked it up and it's marketed as an AI friend. The Reddit OP possibly thinks of it as his girlfriend, but nothing about the wording in the post suggests that to me without making an assumption.

[–] Sumocat@lemmy.world 15 points 1 week ago (1 children)

Without context, that’s the only way to read it. That leads me to side with the Rep on this dude’s writing.

[–] InquisitiveApathy@lemm.ee 11 points 1 week ago

Yeah, without context the phrasing is kind of weird. Rep is short for Replika, a paid personal chatbot service. Most times I see it come up, people use it as a pseudo-romantic relationship.

[–] jqubed@lemmy.world 7 points 1 week ago (1 children)

So did I and I’m confused by the post’s title.

[–] Etterra@lemmy.world 66 points 1 week ago (1 children)

Imagine writing a whole-ass manuscript and then, instead of asking readers for feedback or paying an editor to look it over, feeding it into ChatGPT as if it knew how to actually understand the words it was hallucinating.

[–] Lenny@lemmy.zip 8 points 1 week ago

Maybe the author is expecting their work to be scraped and illegally fed into more LLMs than they'll have actual readers.

[–] Soup@lemmy.cafe 28 points 1 week ago

What a sad existence that kid has. I hope they find therapy. Same goes for everyone in that group. There’s a whole world outside.

With actual people in it.

[–] qarbone@lemmy.world 21 points 1 week ago

This shit is sad, not even funny in a bemused sort of way.

[–] Maggoty@lemmy.world 15 points 1 week ago (1 children)

Oof, as someone who does some writing and understands AI, I would never. Not just because it would mean giving up any rights to my work, but because the AI is programmed to have an opinion. That means it's going to give you a canned range of feedback so that it feels real. The feedback is not necessarily real; in fact, it's probably not. I would be unsurprised to submit a story with two characters and get the same line about three characters.

[–] cm0002@lemmy.world 7 points 1 week ago

AI is programmed to have an opinion.

Well, tbf, that's the way it seems with humans too LMAO

[–] Facebones@reddthat.com 11 points 1 week ago* (last edited 6 days ago)

I'm 37 (as of a few days ago), medically retired, and after COVID and my post-COVID breakup I've been hyper-isolated. It obviously has its own issues and problems, but man, I'm really glad I don't mind it too much and at least have a couple people I see occasionally.

I really hope people like this pull up from leaning on AI for human interaction. I'm on here a lot but at least y'all are other folk.

[–] pixxelkick@lemmy.world 11 points 1 week ago (1 children)

To be honest, the one thing that LLMs actually are good at is summarizing bodies of text.

Producing a critique of a manuscript isn't actually too far out for an LLM; it's sort of what it's always doing, all the time.

I wouldn't treat it as a concrete review, and one must also keep in mind that LLM context windows are usually limited to only thousands of tokens, so they can't remember anything from more than about 5 pages back. If your story is bigger than that, they'll struggle to comment on anything before the last 5 or so pages, give or take.

Asking an LLM to critique a manuscript is a great way to get constructive feedback on specific details, catch potential issues, maybe even catch plot holes, etc.

I'd absolutely endorse it as a step 1 before giving it to an actual human, as you likely can substantially improve your manuscript by iterating over it 3-4 times with an LLM, just covering basic issues and improvements, then letting an actual human focus on the more nuanced stuff an AI would miss/ignore.
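The context-window constraint described above can be sketched with a toy chunking helper. This is a hypothetical illustration: the ~4-characters-per-token estimate and the 4,000-token budget are assumptions, not any specific model's real tokenizer or limits.

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English prose (an assumption)."""
    return max(1, len(text) // 4)

def chunk_manuscript(paragraphs: list[str], max_tokens: int = 4000) -> list[str]:
    """Greedily pack whole paragraphs into chunks that stay under the token budget,
    so each chunk can be sent to an LLM for feedback separately."""
    chunks: list[str] = []
    current: list[str] = []
    used = 0
    for para in paragraphs:
        cost = estimate_tokens(para)
        # Start a new chunk when adding this paragraph would exceed the budget.
        if current and used + cost > max_tokens:
            chunks.append("\n\n".join(current))
            current, used = [], 0
        current.append(para)
        used += cost
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Anything longer than one chunk is exactly the situation where the model can't "see" the whole story at once, which is why earlier chapters get no comment.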

[–] douglasg14b@lemmy.world 37 points 1 week ago* (last edited 1 week ago) (1 children)

LLMs cannot provide critique

They can simulate what critique might look like by way of glorified autocomplete. But they cannot actually provide critique, because they do not reason and they do not critically think. They match their outputs to the most statistically likely interpretation of the input, in what you could think of as essentially a 3D word cloud.

Any critique that you get from an LLM is going to be extremely limited and shallow (and therefore not the critical critique you require). The longer your text, the less likely the critique you receive will be relevant at the depth where it may be needed.

It's good for finding mistakes, it's good for paraphrasing, it's good for targeting. It cannot actually critique, which requires a level of consideration that is impossible for LLMs today. There's a reason why text written by LLMs tends to have distinguishing features, or a lack thereof: it's a bland, statistically generated amalgamation of human writing. It's literally a "common denominator" generator.

[–] pixxelkick@lemmy.world 0 points 1 week ago

This continues to boil down to that tired argument that an amalgamation of human behavior is distinct from how humans actually behave. But since no one can actually prove how humans produce thoughts, it follows that you can't actually prove an LLM works any differently.

So I don't really dig into that argument.

[–] Kolanaki@yiffit.net 7 points 1 week ago

It's all good. Your AI girlfriend doesn't actually have any thoughts or feelings.

[–] serenissi@lemmy.world 7 points 1 week ago

I read the title and subreddit after a while and now I understand.

[–] Evotech@lemmy.world 6 points 1 week ago

My ai girlfriend didn't like my ai novel