planish

joined 1 year ago
[–] planish@sh.itjust.works 1 points 2 months ago

The PITA fix only works if you can dig up a CD drive to put it in, though. Most people don't have one and are SOL.

[–] planish@sh.itjust.works 37 points 2 months ago (6 children)

That's what the BSOD is. It tries to bring the system back to a nice safe freshly-booted state where e.g. the fans are running and the GPU is not happily drawing several kilowatts and trying to catch fire.

[–] planish@sh.itjust.works 4 points 2 months ago (1 children)

Foreign to who?

[–] planish@sh.itjust.works 1 points 4 months ago

It shouldn't be hard to implement the APIs; the problem would be sourcing the models to sit behind them. You can't just steal them off Windows, or you'll presumably have Copyright Problems. I guess you could try to train clones against the Windows models' results?

[–] planish@sh.itjust.works 0 points 4 months ago

KDE and GNOME haven't been stable or usable for the past 20 years, but will become so this year for some reason?

[–] planish@sh.itjust.works 7 points 4 months ago (3 children)

So Copilot Runtime is... Windows bundling a bunch of models like an OCR model and an image generation model, and then giving your program an API to call them.
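
I haven't touched it, but presumably the shape is something like this sketch (every name here is invented; this is not the actual Copilot Runtime API):

```python
# Hypothetical sketch of "the OS ships the models, your app calls an API".
from dataclasses import dataclass


@dataclass
class OcrLine:
    text: str
    confidence: float


class SystemModelRuntime:
    """Stand-in for an OS-provided model runtime your program would talk to."""

    def ocr(self, image_bytes: bytes) -> list[OcrLine]:
        raise NotImplementedError("the OS would supply this")

    def generate_image(self, prompt: str) -> bytes:
        raise NotImplementedError("the OS would supply this")
```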

[–] planish@sh.itjust.works 26 points 4 months ago

Do they "give high rankings" to CloudFlare sites because they just boost up whoever is behind CloudFlare, or because the sites happen to be good search hits, maybe that load quickly, and they don't go in and penalize them for... telling CloudFlare that you would like them to send you the page when you go to the site?

Counting the number of times different result links are clicked is expected search engine behavior. Recording which search strings get sent from the results pages of which other search strings is also probably fine; because of the way forms and referrers work (the URL of the page you searched from has the old query in it), the old query will be sent in the Referer header by all browsers by default, even if the site neither wanted it nor intends to record it. There's a sketch of the mechanics below. Recording what text is highlighted is weird, but probably not a genuine threat.
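
Roughly, with a made-up search site:

```python
# Why the old query leaks: the results page URL *is* the referrer for the
# next search. Hypothetical site and queries, just to show the mechanics.
results_page = "https://search.example/?q=first+query"  # page the user is on

# Submitting the search form on that page issues, by browser default:
#   GET https://search.example/?q=second+query
#   Referer: https://search.example/?q=first+query   <- old query rides along
```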

The remote favicon fetch design in their browser app was fixed like 4 years ago.

The "accusation" of "fingerprinting" was along the lines of "their site called a canvas function oh no". It's not "fingerprinting" every time someone tries to use a canvas tag.

What exactly is "all data available in my session" when I click on an ad? Is it basically the stuff a site I go to can see anyway? Sounds like it's nothing exciting or some exciting pieces of data would be listed.

This analysis misses the important point that none of this stuff is getting cross-linked to user identities or profiles. The problem with Google isn't that they examine how their search results pages are interacted with in general or that they count Linux users, it's that they keep a log of what everyone individually is searching, specifically. Not doing that sounds "anonymous" to me, even if it isn't Tor-strength anonymity that's resistant to wiretaps.

There's an important difference between "we're trying to not do surveillance capitalism but as a centralized service data still comes to our servers to actually do the service, and we don't boycott all of CloudFlare, AWS, Microsoft, Verizon, and Yahoo", as opposed to "we're building shadow profiles of everyone for us and our 1,437 partners". And I feel like you shouldn't take privacy advice from someone who hosts it unencrypted.

[–] planish@sh.itjust.works 4 points 4 months ago

Well you can start by trying on purpose to make an SCP wiki level horror scene. Then the bugs are features!

[–] planish@sh.itjust.works 1 points 4 months ago

But now the Windows one is getting scrapped, whereas Waydroid is presumably sticking around.

[–] planish@sh.itjust.works 0 points 5 months ago (1 children)

How do snaps make money for Canonical?

[–] planish@sh.itjust.works 4 points 6 months ago

If you try to run a model on a GPU with way more memory than the host system has, you'll probably run into the problem that nobody anticipated your strategy. I'm not sure many execution frameworks can go straight from disk to GPU RAM. Also, storage speed for loading the model might be an issue on an SoC that boots off e.g. an SD card.

An eGPU dock should do CUDA just as well as an internal GPU, as far as I know. But you would need the drivers installed.
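
If you do want to poke at the disk-to-GPU idea, the closest thing I know of in PyTorch looks roughly like this (needs PyTorch 2.1+ for the mmap/assign flags; the tiny model is a stand-in for something that wouldn't fit in host RAM):

```python
import torch
import torch.nn as nn

# Toy stand-in; imagine a checkpoint far bigger than host RAM.
model = nn.Linear(4096, 4096)
torch.save(model.state_dict(), "model.pt")

# mmap=True maps the checkpoint file instead of reading it all into host RAM;
# assign=True reuses the mmapped tensors rather than copying into fresh ones.
state = torch.load("model.pt", map_location="cpu", mmap=True)
model.load_state_dict(state, assign=True)

# Works the same whether the CUDA device is internal or on an eGPU dock,
# as long as the drivers are installed.
if torch.cuda.is_available():
    model.to("cuda")  # tensors get paged off disk and copied into GPU RAM
```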

 
 

Obviously it wouldn't be allowed in this community, but how feasible would it be to make a community on a friendly instance and start shipping data through it somehow? If it works for NNTP it ought to work for ActivityPub, right?

Potential problems:

  1. Community full of base64'd posts immediately gets blocked by everybody's home instance.
  2. Community host immediately gets sued for handing out data it might not have a license for.
  3. Other instances that carry the community immediately get sued (see #2).
  4. Community host is in the US and follows DMCA and deletes all the posts that are complained about.

Maybe it would work as a way to distribute NZBs or other things that are useful but not themselves copyrightable? But the problem with NZBs is you have to keep them away from the people who want to send DMCAs to the Usenet providers about them, or they stop working. So shipping them around in a basically public protocol like ActivityPub would not be good for them.
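
For a sense of scale, the chunking itself is the trivial part (chunk size invented; real per-post limits vary by instance):

```python
import base64
import os

# Sketch of the scheme: chop a file into post-sized base64 chunks.
CHUNK = 48_000  # raw bytes per post; made-up number

data = os.urandom(1_000_000)  # stand-in for the file being shipped
posts = [
    base64.b64encode(data[i:i + CHUNK]).decode("ascii")
    for i in range(0, len(data), CHUNK)
]
print(f"{len(posts)} posts at ~{len(posts[0]):,} characters each (base64 adds ~33%)")
```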

 

Steps to reproduce:

  1. Start a Node project that uses at least five direct dependencies.
  2. Leave it alone for three months.
  3. Come back and try to install it.

Something in the dependency tree will yell at you that it is deprecated or discontinued. That thing will not be one of your direct dependencies.

NPM will tell you that you have at least one security vulnerability. At least one of the vulnerabilities will be impossible to trigger in your particular application, and at least one won't be fixable by updating the versions of your dependencies.

(I am sure I exaggerate, but not by much!)
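
(If you want to watch it happen in slow motion: `npm outdated` lists the drift, `npm audit` itemizes the vulnerabilities, and `npm audit fix` resolves only the ones that a version bump can reach.)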

Why is it like this? How many hours per week does this running-to-stay-in-place cost the average Node project? How many hours per week of developer time is the minimum viable Node project actually supposed to have available?

 

Through witchcraft and dark magic, Zig contains a C standard library and cross compiler for every architecture in 45 megabytes.
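
The party trick is that cross-compiling then becomes a one-liner, something like `zig cc -target aarch64-linux-musl hello.c -o hello`, with no separate toolchain or sysroot to install.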

 

Julia Evans has done it again.

cross-posted from: https://derp.foo/post/88689

There is a discussion on Hacker News, but feel free to comment here as well.

 

Doesn't seem like that acronym is used for anything important at the moment; I'm sure we can grab it.

 

That's right, folks, I want to see you post your... old dreams.

 
 

Many AI image generators, including the big UIs for Stable Diffusion, helpfully embed metadata in the images so that you can load them up again and get all the settings you need to regenerate the image.

But Lemmy's built-in pict-rs image hoster, and most image hosters that resize or re-encode images or try to stop people from doxing themselves with photos' embedded GPS coordinates, will remove all the metadata. This is counterproductive for AI image generation, because part of the point of sharing the images is so other people can build on the prompts.
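
For reference, here's roughly where that metadata lives and why a naive re-save loses it. The big Stable Diffusion UIs typically stash the settings in a PNG text chunk, commonly one named "parameters" (Pillow sketch; the filenames are hypothetical):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("generated.png")  # a PNG out of a Stable Diffusion UI

# The generation settings live in PNG text chunks, exposed as img.text.
print(img.text.get("parameters", "<already stripped>"))

# A plain img.save() drops those chunks; keeping them means copying by hand.
meta = PngInfo()
for key, value in img.text.items():
    meta.add_text(key, value)
img.save("resaved.png", pnginfo=meta)
```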

What are some good places to host images that don't strip metadata?

 

Most of the Lemmy instances seem to require an email to sign up. That's fine, except most of the places you would go to sign up for email want you to... already have an email. And often a phone number. And almost always a first name, last name, and birthday.

I promise not to do bad stuff, but I don't want that sort of information to end up publicly associated with the accounts where I write stuff, when everyone inevitably loses their databases to hackers. Pseudonymity is good, actually; on the Internet nobody knows you're a dog, etc.

Is anyone doing normal webmail registration anymore? Set username and password, receive email for free? I don't even need to send anything to sign up for accounts elsewhere.

 

Right now, NSFW-marked communities are by default(?) not shown by their home instance to non-logged-in users in the community list, and even if you go to them manually no posts are shown.

Fine, but they also aren't shown to logged-in users on other home instances, unless somehow already federated over. If you go to the community's instance, it can't tell you're logged in, and if you go to your home instance you can't see a list of all the communities on the other instance that might be available.

Also, older posts that are marked NSFW can't be read by anyone with an account anywhere other than the instance they were posted to. When you subscribe to a community on another instance it federates over a few posts, but it doesn't request and federate older posts as you try to page back through the archive. The normal solution is to view the old posts on the source instance, but if the community is marked NSFW the source instance won't let you read the archive there without a local account.

4 points, submitted 1 year ago* (last edited 1 year ago) by planish@sh.itjust.works to c/main@sh.itjust.works
 

I managed to federate https://sh.itjust.works/c/dave_tv@dalek.zone/ and it gets the header and avatar but it doesn't seem to actually pick up any videos.

Maybe they're all too old or the wrong type.
