ritswd

joined 1 year ago
[–] ritswd@lemmy.world 21 points 1 year ago (5 children)

No, it wasn’t like that. Remember that while computer technology was fairly mainstream, it wasn’t nearly as ingrained into our lives as today. So people were talking about a worst-case scenario that involved technological things: potential power outages, administrations maybe shutting down, some public transportation maybe shutting down, and so on. To me, it felt like people were getting ready to be majorly inconvenienced, but they weren’t at all freaking out.

I do remember the first few days of January 2000 felt like a good fun joke. “All that for this!”

[–] ritswd@lemmy.world 56 points 1 year ago

I’ve been telling people that the notion that the ER lets poor people die in the US is false; instead, it makes you wish you had.

[–] ritswd@lemmy.world 6 points 1 year ago (1 children)

Mint uses an OAuth token (I think through Plaid). This is not the same thing as sharing a username/password, and is authorized by your bank, since they provide the OAuth flow; otherwise OAuth wouldn’t work in the first place.
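For illustration, here’s a minimal sketch of the first step of an OAuth 2.0 authorization-code flow, the kind a bank would host. Every endpoint URL, client ID, and scope below is made up for the example; this is not Plaid’s or any bank’s actual API. The point is that the user authenticates on the bank’s own page, and the aggregator only ever receives a token, never the password:

```python
from urllib.parse import urlencode

def build_authorization_url(auth_endpoint: str, client_id: str,
                            redirect_uri: str, scope: str, state: str) -> str:
    """Build the URL where the bank's OAuth flow starts.

    The user logs in on the bank's page; the aggregator never sees
    the username/password, only the code/token the bank hands back.
    """
    params = {
        "response_type": "code",  # authorization-code grant
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,           # CSRF protection, echoed back by the bank
    }
    return f"{auth_endpoint}?{urlencode(params)}"

# All values below are hypothetical.
url = build_authorization_url(
    "https://bank.example/oauth/authorize",
    client_id="aggregator-app",
    redirect_uri="https://aggregator.example/callback",
    scope="accounts:read transactions:read",
    state="xyz123",
)
```

After the user approves, the bank redirects to `redirect_uri` with a short-lived code, which the aggregator exchanges server-side for the actual token; that second step is why the bank has to cooperate for any of this to work.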

[–] ritswd@lemmy.world 13 points 1 year ago* (last edited 1 year ago)

Oh, I just watched this!

I’m pretty much aligned with what Niko said in it: if the point is entertainment value (as proven by the sci-fi stuff added to the shot), then I find it off-putting that someone is trying to sell me real-life suffering and death as sci-fi entertainment, enough that it makes me not want to go see the movie. Not out of protest, but because it’s just gross.

[–] ritswd@lemmy.world 14 points 1 year ago

Oh, that was me, sorry guys.

[–] ritswd@lemmy.world 1 points 1 year ago

Right, and in my case to be clear, it was all businesses headquartered in the US, doing business in Europe, and getting compliant with Europe’s GDPR. I have no idea if it was any different if the businesses were headquartered in Europe (guessing no), but I thought I’d confirm that was the situation.

[–] ritswd@lemmy.world 1 points 1 year ago

Yeah, different counsels had different interpretations there. They ranged from “well, the users put it there and we don’t store it anywhere else, so nobody is preventing them from removing it, we don’t need to do anything”, through “oh, this field is actually durably stored somewhere else (such as an OLAP DB), so either we need to scrub it there too when someone changes a value, or we can just add a little ‘don’t share personal information in this field’ label on the form”, all the way to doing that kind of thing on all fields.

Overall, the feeling was that we needed to make a best effort depending on how likely it was for a field to durably contain personal info, so that it would pass a judge’s smell test as having been done in good faith, as is often the case in legal matters.

[–] ritswd@lemmy.world 32 points 1 year ago (8 children)

Reposting what I posted here a while ago.

Companies abiding by the GDPR are not required to delete your account or content at all, only Personally Identifiable Information (PII). Lemmy instances are unlikely to ask for info such as real name, phone number, postal address, etc; the only PII I can think of is the email that some (not all) instances request. Since it’s not a required field on all instances, I’m going to guess that the value of this field does not travel to other instances.

Therefore, if you invoked the GDPR to request your PII to be deleted, all that would need to happen is for the admin of your instance to overwrite the email field of your account with something random, and it would all be in compliance. Or they could also choose to delete your account, if they prefer.
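To make it concrete, here’s a sketch of the kind of one-field scrub I’m describing, with a made-up account shape; real instances obviously store accounts differently:

```python
import secrets

def scrub_email(account: dict) -> dict:
    """Overwrite the only PII field with a random, non-identifying value.

    The account itself (username, posts, etc.) can stay untouched,
    since none of that is PII in this scenario.
    """
    scrubbed = dict(account)  # copy, so the caller's dict is untouched
    scrubbed["email"] = f"deleted-{secrets.token_hex(8)}@invalid.example"
    return scrubbed

# Hypothetical account record.
account = {"username": "some_user", "email": "real.person@example.com"}
scrubbed = scrub_email(account)
```

A random replacement (rather than an empty string) keeps any uniqueness constraint on the email column happy while still severing the link to the person.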

Source: I’m a software engineer who was at one point tasked with aligning multi-billion-dollar businesses with the GDPR, businesses that had hundreds of millions of dollars in liability if they got it wrong and therefore took it very seriously. I am not a lawyer or a compliance officer, but we took our directions directly from them, and across several companies, that’s what they all told us.

[–] ritswd@lemmy.world 1 points 1 year ago

Yeah, I think that’s probably more accurate than what I was thinking, and that leaving belongs to acceptance rather than depression.

[–] ritswd@lemmy.world 2 points 1 year ago

I was actually aware of that, which is why I wrote depression/acceptance, meaning they probably moved from bargaining to either one of those, thinking either of those two stages could prompt people to leave. By fast-tracking, I meant that move happened faster than it would have if the rebranding hadn’t happened. It’s still a fascinating bit; I have known about the stages of grief for a while, but only learned recently (like, this year) that they don’t have to happen in order.

[–] ritswd@lemmy.world 8 points 1 year ago (1 children)

Nitpicking: I’d rephrase “playing an instrument” to “playing a first instrument”. I struggled like heck to learn the guitar as a young adult, while kids in my music class were having a much easier time; but once I got it after a while, all the instruments I learned after that, even in my 40s, were a ton easier.

[–] ritswd@lemmy.world 31 points 1 year ago (5 children)

I think it’s spot on. It’s people who were already going through the stages of grief, were kinda stuck in “bargaining” (like: “nah, Twitter is not really dead, it’ll come back”), and the symbolism there about Twitter really being gone-gone fast-tracked them to depression/acceptance.

 

To be clear, I’m no expert. But I know a bit.

The way LLMs (like ChatGPT, GPT-4, etc.) work is that they continuously decide what the next best-sounding word might be, and they print it, over and over and over, until it makes sentences and paragraphs. And the way that next-word decision works under the hood is with a deep neural net, which was initially a theoretical tool designed to imitate the neural circuits that make up our biological nervous system and brain. The actual code for LLMs is rather small; it’s mostly about storing and managing representations of neurons, and rearranging the connections between neurons as the model learns more, just like the brain does.
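A toy version of that next-word loop, with a tiny hand-written probability table standing in for the neural net (the table and vocabulary are invented for the example; a real LLM learns billions of weights instead of a lookup table):

```python
import random

# Toy stand-in for the neural net: given the last word, a distribution
# over possible next words. A real LLM conditions on the whole context,
# not just one word, but the generation loop is the same shape.
NEXT_WORD = {
    "<start>": [("the", 0.6), ("a", 0.4)],
    "the": [("cat", 0.5), ("dog", 0.5)],
    "a": [("cat", 0.5), ("dog", 0.5)],
    "cat": [("sat", 0.7), ("<end>", 0.3)],
    "dog": [("sat", 0.7), ("<end>", 0.3)],
    "sat": [("<end>", 1.0)],
}

def generate(rng: random.Random, max_words: int = 10) -> str:
    """Repeatedly sample the next word until the sentence ends."""
    word, out = "<start>", []
    for _ in range(max_words):
        choices, weights = zip(*NEXT_WORD[word])
        word = rng.choices(choices, weights=weights)[0]
        if word == "<end>":
            break
        out.append(word)
    return " ".join(out)

sentence = generate(random.Random(0))
```

Nothing in that loop “knows” what a cat is; each step is just picking a plausible next word, which is exactly the point of the comment below.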

I was listening to the first part of this “This American Life” episode this morning that covers it really well: https://podcasts.apple.com/us/podcast/this-american-life/id201671138?i=1000618286089 In it, Microsoft AI experts also express excitement and confusion about how GPT-4 seems to actually reason about things, rather than just bullshitting the next word to make it look like it reasons, like it’s supposed to be designed to do.

And so I was thinking: the reason why it works might be the other way around. It’s not that LLMs are smart enough to reason instead of bullshit, it’s that human reasoning actually works by constantly bullshitting too, one word at a time. Imitate the human brain exactly, and I guess we shouldn’t be surprised that we land on a familiar-looking kind of intelligence - or lack thereof. Right?

 

I seem to hear from a variety of people that they struggle to fall asleep at night; but difficulty falling asleep sounds like an evolutionary downside. Even for hunter-gatherers, being able to sleep whenever and wherever sounds like it would have been an advantage.

Is it a recent product of modern times and people didn’t actually struggle with it a while back? In which case, what of modern life is causing this? If not, what is the evolutionary advantage of not falling asleep easily?

 

The way both of them work is complex, so people can get impressed. But for both of them, all the complexity is inside the device and there isn’t much to put together; the way they hook up to your house is really simple.

Why YSK: so you don’t live too long with a broken toilet or garbage disposal, thinking it will be too hard to replace. Those two are some of the simplest things to DIY.
