mydataisplain

joined 1 year ago
[–] mydataisplain@lemmy.world 2 points 1 year ago

The legless avatars seem to be mostly a VR thing.

It would be cool if you could mix the two. What if you could meet a group of friends at a coffee shop, but if one of your friends was out of town, you could have them join you virtually?

The avatar may or may not have legs. We could leave that choice up to the individual. Maybe they want legs. Maybe they want to be a little floaty ghost. Maybe they want to present as a talking frog.

[–] mydataisplain@lemmy.world 3 points 1 year ago

Oh yeah! There are all kinds of cool games you could play with AR.

My old school used to get really into "assassin". Some organizer would arrange everyone into a big circle, but everyone was only told their connection in one direction (i.e. everyone knew their target but nobody knew who was targeting them). This was in NYC. Kids would pull out Rayline Tracer Guns in the subway and pop each other. AR would be a much better way to do that.

Games like Pokemon Go would be much cooler with AR.

[–] mydataisplain@lemmy.world 1 points 1 year ago

It's hard to guess what the internal motivation is for these particular people.

Right now it's hard to know who is disseminating AI-generated material. Some people are explicit when they post it but others aren't. The AI companies are easily identified, and there's at least the perception that regulating them can solve the problem of copyright infringement at the source. I doubt that's true. More and more actors are able to train AI models, and some of them aren't even under US jurisdiction.

I predict that we'll eventually have people vying to get their work used as training data. Think about what that means. If you write something and an AI is trained on it, the AI considers it "true". Going forward, when people send prompts to that model, it will return responses based on what it considers "true". Clever people can and will use that to influence public opinion. Consider how effective it's been to manipulate public thought with existing information technologies. Now imagine large segments of the population relying on AIs as trusted advisors for their daily lives, and how effective it would be to influence the training of those AIs.

[–] mydataisplain@lemmy.world 26 points 1 year ago (5 children)

The big tech companies keep trying to sell AR as a gateway to their private alternate realities. That misses the whole point of AR. It's supposed to augment reality, not replace it.

Everyone who has played video games knows what AR is supposed to look like. Create an API to let developers build widgets and allow users to rearrange them on a HUD.

Obvious apps that would get a ton of downloads:
floatynames - floats people's names over their heads
targettingreticle - puts a customizable icon in the center of your screen so you know it's centered
graffiti - virtual tagging and you control who sees it
breadcrumbs - replaces the UI of your map software to just show you a trail to your destination
catears - add an image overlay that makes it look like your friends have cat ears
healthbars - they're a really familiar visual element that you can tie to any metric (which may or may not be health related)
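
A minimal sketch of what that widget API could look like, in TypeScript. Every name here (HudWidget, registerWidget, the anchor types) is hypothetical and not from any real SDK; the point is just the division of labor: apps supply small render functions, while the HUD owns placement, visibility, and permissions so the user stays in control.

```typescript
// Hypothetical AR HUD widget API. None of these names come from a real SDK;
// they just illustrate the "developers build widgets, users arrange them" idea.

// Where a widget is allowed to draw itself.
type Anchor =
  | { kind: "screen"; x: number; y: number }     // fixed HUD position, user-movable
  | { kind: "person"; personId: string }         // pinned above a recognized person
  | { kind: "world"; lat: number; lon: number }; // pinned to a real-world location

interface HudWidget {
  id: string;                     // e.g. "floatynames", "healthbars"
  permissions: string[];          // data the user must grant, e.g. ["contacts", "location"]
  render(anchor: Anchor): string; // returns the overlay to draw (simplified to a string here)
}

// The OS-side registry: apps register widgets, the user decides placement and visibility.
class Hud {
  private widgets = new Map<string, { widget: HudWidget; anchor: Anchor; visible: boolean }>();

  registerWidget(widget: HudWidget, defaultAnchor: Anchor): void {
    this.widgets.set(widget.id, { widget, anchor: defaultAnchor, visible: false });
  }

  // User actions: show/hide widgets and drag them around the HUD.
  setVisible(id: string, visible: boolean): void {
    const entry = this.widgets.get(id);
    if (entry) entry.visible = visible;
  }

  moveWidget(id: string, anchor: Anchor): void {
    const entry = this.widgets.get(id);
    if (entry) entry.anchor = anchor;
  }

  // Called every frame; only widgets the user has enabled get drawn.
  renderFrame(): string[] {
    return Array.from(this.widgets.values())
      .filter((e) => e.visible)
      .map((e) => e.widget.render(e.anchor));
  }
}

// Example: the "floatynames" widget from the list above.
const hud = new Hud();
hud.registerWidget(
  {
    id: "floatynames",
    permissions: ["contacts"],
    render: (anchor) =>
      anchor.kind === "person" ? `name tag above ${anchor.personId}` : "",
  },
  { kind: "person", personId: "friend-1" },
);
hud.setVisible("floatynames", true);
console.log(hud.renderFrame()); // ["name tag above friend-1"]
```

The design choice that matters is that layout lives in the HUD rather than in the apps; that's what makes the widgets rearrangeable instead of each app taking over the whole view.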

I imagine being able to meet my friends at a cafe that I've never been to. It's easy to find because I just follow a trail of dots down the street. As I get closer I can see a giant icon of a coffee cup, so I know I'm on the right block. Not everyone is there yet, but I can see that the last of our friends is on the bus 2 blocks away. I only met one of them once, a few months ago, but I can see their name and pronouns. We sit around discussing latte art. I get up for another cup and see from their health bar that one of my friends is out of coffee, so I get them a refill. On the way out I scrawl a positive review and leave it floating on the sidewalk.

[–] mydataisplain@lemmy.world 3 points 1 year ago (1 children)

Have you looked into AIHorde?
It's clearly harder to use than the commercial alternatives, but at first glance it doesn't seem too bad.
It looks about as complicated as setting up any of the other volunteer compute projects (like SETI@home).
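
For a sense of what "harder to use" means in practice, here's a rough sketch of submitting an image request to the AI Horde's public REST API with plain fetch. The base URL, endpoint paths, the anonymous apikey of 0000000000, and the response shapes are written from memory of the docs and may have changed, so treat all of them as assumptions and check the current API reference before relying on this.

```typescript
// Rough sketch of an AI Horde image request (endpoints and fields assumed from memory;
// verify against the current API docs). Flow: submit an async job, poll, fetch results.

const HORDE = "https://aihorde.net/api/v2"; // assumed base URL
const API_KEY = "0000000000";               // the anonymous key (lowest queue priority)

async function generateImage(prompt: string): Promise<string[]> {
  // 1. Submit the job.
  const submit = await fetch(`${HORDE}/generate/async`, {
    method: "POST",
    headers: { "Content-Type": "application/json", apikey: API_KEY },
    body: JSON.stringify({ prompt, params: { n: 1, width: 512, height: 512 } }),
  });
  const { id } = (await submit.json()) as { id: string };

  // 2. Poll until a volunteer worker has finished it (give up after ~5 minutes).
  let done = false;
  for (let attempt = 0; attempt < 60 && !done; attempt++) {
    await new Promise((r) => setTimeout(r, 5000)); // be polite; volunteers do the work
    const check = await fetch(`${HORDE}/generate/check/${id}`);
    ({ done } = (await check.json()) as { done: boolean });
  }
  if (!done) throw new Error("generation timed out");

  // 3. Fetch the finished generations (image URLs).
  const result = await fetch(`${HORDE}/generate/status/${id}`);
  const { generations } = (await result.json()) as { generations: { img: string }[] };
  return generations.map((g) => g.img);
}

generateImage("latte art, photorealistic").then((urls) => console.log(urls));
```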

[–] mydataisplain@lemmy.world 11 points 1 year ago (2 children)

Is it practically feasible to regulate the training? Is it even necessary? Perhaps it would be better to regulate the output instead.

It will be hard to know whether any particular GET request is ultimately used to train an AI or to train a human. It's currently easy to see if a particular output is plagiarized (e.g. https://plagiarismdetector.net/). It's also much easier to enforce. We don't need to care if or how any particular model plagiarized work. We can just check whether plagiarized work was produced.

That could be implemented directly in the software, so it doesn't even output plagiarized material. The legal framework around it is also clear and fairly established. Instead of creating regulations around training, we can use the existing regulations around the human who tries to disseminate copyrighted work.

That's also consistent with how we enforce copyright for humans. There's no law against looking at other people's work and memorizing entire sections. It's also generally legal to reproduce other people's work (e.g. for backups). It only potentially becomes illegal if someone distributes it, and it's only plagiarism if they claim it as their own.
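
As a toy illustration of that output-side check (not any real product's method): compare the model's draft against an index of protected text and refuse to return it if too much overlaps. The n-gram size and threshold below are made-up numbers, and a real system would need fuzzier matching, but the enforcement point is the same: the check sits on the output, not on the training data.

```typescript
// Toy output-side filter: block a generated draft if too many of its word n-grams
// already appear in an index of protected works. Thresholds are purely illustrative.

function ngrams(text: string, n = 8): Set<string> {
  const words = text.toLowerCase().split(/\s+/).filter(Boolean);
  const grams = new Set<string>();
  for (let i = 0; i + n <= words.length; i++) {
    grams.add(words.slice(i, i + n).join(" "));
  }
  return grams;
}

// In practice this index would be built offline over the protected corpus.
function buildIndex(protectedWorks: string[]): Set<string> {
  const index = new Set<string>();
  for (const work of protectedWorks) {
    for (const gram of ngrams(work)) index.add(gram);
  }
  return index;
}

// Returns the draft unchanged if it looks original, otherwise refuses to emit it.
function filterOutput(draft: string, index: Set<string>, maxOverlap = 0.05): string {
  const grams = ngrams(draft);
  if (grams.size === 0) return draft;
  let hits = 0;
  for (const gram of grams) if (index.has(gram)) hits++;
  return hits / grams.size > maxOverlap
    ? "[output withheld: overlaps a protected work]"
    : draft;
}

// Usage: the check runs on what the model is about to say, not on how it was trained.
const index = buildIndex(["the quick brown fox jumps over the lazy dog and runs away fast"]);
console.log(filterOutput("the quick brown fox jumps over the lazy dog and runs away fast today", index));
```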