FaceDeer

joined 4 months ago
[–] FaceDeer@fedia.io 5 points 10 hours ago

Cloaks are actually quite historical; they're very easy to make and useful in a variety of conditions.

[–] FaceDeer@fedia.io 4 points 10 hours ago

When you delve into the details of what those bullet points actually entailed, they were all far, far worse in medieval times.

[–] FaceDeer@fedia.io 35 points 1 day ago (1 children)

I suppose Biden could have him officially assassinated. That's legal now.

[–] FaceDeer@fedia.io 11 points 3 days ago

Oh, neat. The first one blew up the door, and then the second one literally flew inside and went down the hallway to reach the cache.

[–] FaceDeer@fedia.io 5 points 4 days ago

Yup. Fortunately unsubscribing from politics subreddits is generally advisable whether one has been banned from them or not.

[–] FaceDeer@fedia.io 4 points 4 days ago (3 children)

Being slightly wrong means more of an endorphin rush when people realize they can pounce on the flaw they've spotted, I guess.

Don't sweat downvotes, they're especially meaningless on the Fediverse. I happen to like a number of applications for AI technology and cryptocurrency, so I've certainly collected quite a few of those and I'm still doing okay. :)

[–] FaceDeer@fedia.io 32 points 4 days ago (5 children)

There was a politics subreddit I was on that had a "downvoting is not allowed" rule. There's literally no way to tell who's downvoting on Reddit, or even whether downvoting is happening at all unless the score drops below 0 or triggers the "controversial" indicator.

I got permabanned from that subreddit when someone who'd said something offensive asked "why am I being downvoted???" and I tried to explain to them why that was the case. No trial, one million years dungeon, all modmail ignored. I guess they don't get to enforce that rule often, so they leapt at the opportunity to find an excuse.

[–] FaceDeer@fedia.io 15 points 4 days ago (5 children)

Downvotes for not getting it right, I presume.

Which makes me concerned that the "Hole for Pepnis" answer has so many upvotes.

[–] FaceDeer@fedia.io 54 points 4 days ago (7 children)

Those holes look open to me.

[–] FaceDeer@fedia.io 1 points 4 days ago

> Especially because seeing the same information in different contexts helps map the links between those contexts and helps dispel incorrect assumptions.

Yes, but this is exactly the point of deduplication - you don't want identical inputs, you want variety. If you want the AI to understand the concept of cats, you don't keep showing it the same picture of a cat over and over; all that tells it is that you want exactly that picture. You show it a whole bunch of different pictures whose only commonality is that there's a cat in them, and then the AI can figure out what "cat" means.
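To make that concrete, here's a minimal sketch of exact deduplication by hashing. The function name and toy corpus are my own illustration, and real pipelines also do near-duplicate detection (MinHash and the like), which this omits:

```python
import hashlib

def deduplicate(examples):
    """Keep only the first occurrence of each exact duplicate."""
    seen = set()
    unique = []
    for example in examples:
        # Hash the raw text; identical inputs collapse to a single entry.
        digest = hashlib.sha256(example.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(example)
    return unique

corpus = ["a cat on a mat", "a cat on a mat", "a tabby asleep in a sunbeam"]
print(deduplicate(corpus))  # the repeated caption survives only once
```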

> They need to fundamentally change big parts of how learning happens and how the algorithm learns to fix this conflict.

Why do you think this?

[–] FaceDeer@fedia.io 2 points 5 days ago (2 children)

There actually isn't a downside to de-duplicating data sets; overfitting is simply a flaw. Generative models aren't supposed to "memorize" stuff - if you really want a copy of an existing picture there are far easier and more reliable ways to accomplish that than giant GPU server farms. These models don't derive any benefit from drilling on the same subset of data over and over. It makes them less creative.
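As an aside, the standard way that flaw gets caught in practice is by watching held-out validation loss: when it turns upward while training loss keeps falling, the model has started memorizing. A minimal sketch with invented loss numbers (everything here is illustrative, not any particular training setup):

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the epoch with the best validation loss.

    Overfitting shows up as validation loss turning upward; stopping
    once it hasn't improved for `patience` epochs is one standard fix.
    """
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break
    return best_epoch

# Invented numbers: the upturn after epoch 3 is memorization setting in.
print(early_stop_epoch([2.1, 1.7, 1.4, 1.3, 1.35, 1.5, 1.8]))  # -> 3
```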

I want to normalize the notion that copyright isn't an all-powerful fundamental law of physics like so many people seem to assume these days, and if I can get big companies like Meta to throw their resources behind me in that argument then all the better.

[–] FaceDeer@fedia.io 1 points 5 days ago* (last edited 5 days ago) (4 children)

Remember when piracy communities thought that the media companies were wrong to sue switch manufacturers because of that?

It baffles me that there's such an anti-AI sentiment going around that it would cause even folks here to go "you know, maybe those litigious copyright cartels had the right idea after all."

We should be cheering that we've got Meta on the side of fair use for once.

> look up sample recovery attacks.

Look up "overfitting." It's a flaw in generative AI training that modern AI trainers have done a great deal to resolve, and even in the cases of overfitting it's not all of the training data that gets "memorized." Only the stuff that got hammered into the AI thousands of times in error.
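Following that logic, a trainer worried about memorization would go looking for the over-represented items specifically. A toy sketch (the function and the threshold are mine, purely for illustration):

```python
from collections import Counter

def memorization_risks(examples, threshold=1000):
    """Flag examples repeated often enough to risk being memorized.

    Memorization mostly hits items that recur thousands of times in
    training, not the corpus as a whole; the cutoff is arbitrary.
    """
    counts = Counter(examples)
    return [item for item, n in counts.items() if n >= threshold]

corpus = ["famous meme image"] * 2500 + ["one-off vacation photo"]
print(memorization_risks(corpus))  # -> ['famous meme image']
```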
