quotheraven404

joined 1 year ago
[–] quotheraven404@lemmy.ca 4 points 10 months ago

He does have pretty flawless porcelain skin. 🤔

[–] quotheraven404@lemmy.ca 10 points 10 months ago (4 children)

Well that sucks, I just paid for a full year. Was this announced beforehand or did they disappear with no warning? What else are they going to drop?

[–] quotheraven404@lemmy.ca 10 points 10 months ago

This kinda gives me Cardcaptor Sakura vibes.

[–] quotheraven404@lemmy.ca 8 points 10 months ago

I use "in essence" for i.e.

[–] quotheraven404@lemmy.ca 23 points 10 months ago

My grandma gave him this bed, and he's actually using it!

[–] quotheraven404@lemmy.ca 9 points 11 months ago

Not my gumdrop buttons!

[–] quotheraven404@lemmy.ca 4 points 1 year ago (2 children)

Do you do the same routine every day? I've been interested in trying this but I don't know where to start.

[–] quotheraven404@lemmy.ca 2 points 1 year ago (1 child)

Thanks for the link, that sounds like exactly what I was asking for but gone way wrong!

What do you think is missing to prevent these kinds of outcomes? Is AI simply incapable of categorizing topics as 'harmful to humans' on its own without a human's explicit guidance? It seems like the philosophical nuances of things like consent or dependence or death would be difficult for a machine to learn if it isn't itself sensitive to them. How do you train empathy in something so inherently unlike us?

[–] quotheraven404@lemmy.ca 4 points 1 year ago (1 child)

Every time I'm using someone else's tech and I see ads I actually quite enjoy them in a nostalgic way. I kind of miss the days when I used to know all the jingles for all the pizza chains and furniture stores in my area.

[–] quotheraven404@lemmy.ca 2 points 1 year ago (3 children)

Yeah I haven't played with it much but it feels like ChatGPT is already getting pretty close to this kind of functionality. It makes me wonder what's missing to take it to the next level over something like Siri or Alexa. Maybe it needs to be more proactive than just waiting for prompts?

I'd be interested to know if current AI would be able to recognize the symptoms of different mental health issues and utilize the known strategies to deal with them. Like if a user shows signs of anxiety or depression, could the AI use CBT tools to conversationally challenge those thought processes without it really feeling like therapy? I guess just like self-driving cars this kind of thing would be legally murky if it went awry and it accidentally ended up convincing someone to commit suicide or something haha.

[–] quotheraven404@lemmy.ca 1 point 1 year ago

Exactly! I think mental health issues would be reduced drastically if everyone had a devoted friend for support at all times.

Things like misinformation and radicalization would go down too, if the AI always had global context for everything.

[–] quotheraven404@lemmy.ca 6 points 1 year ago (7 children)

I'd want a familiar/daemon that was running an AI personality to act as a personal assistant, friend and interactive information source. It could replace therapy and be a personalized tutor, and it would always be up to date on the newest science and global happenings.
