Seirdy

joined 3 years ago
[–] Seirdy@lemmy.ml 6 points 2 years ago (2 children)

The good: familiar UI, nice community

The bad: much worse accessibility.

Conclusion: I'd recommend keeping a Gitea/Codeberg remote but not using it exclusively. That way, you include more people without excluding those who rely on assistive technology.

 

Put together this brief overview of the basics of stylometric fingerprinting resistance. TL;DR: obfuscate your language patterns with a good style guide.

[–] Seirdy@lemmy.ml 1 point 2 years ago

Unfortunately, Gitea (the forge software that powers Codeberg) has major accessibility issues. It's not usable from most assistive technologies (e.g. screen readers). GitLab isn't much better.

Sourcehut is pretty much the only GitHub alternative I know of with good accessibility.

[–] Seirdy@lemmy.ml 5 points 2 years ago

This is their privacy policy: https://h5hosting-dra.dbankcdn.com/cch5/petalsearch/global/agreement/privacy-statement.htm?language=en-us

It includes detailed fingerprinting metrics like mouse behavior and font information.

I should probably link it, thanks for the feedback.

[–] Seirdy@lemmy.ml 5 points 2 years ago

I said that about Petal because readers likely hadn't heard of it and didn't have any expectations. I assume readers already knew Bing, Google, and Yandex were bad for privacy.

[–] Seirdy@lemmy.ml 12 points 2 years ago (4 children)

Not at all; there are tons of newish engines out there, the best of which are trying to carve out a niche for themselves in an area that Google and Bing ignore. I listed 44 English engines with their own indexes, along with some for other languages, which I'm unfortunately unable to review because I don't speak them.

On these engines, you won't get far if you use natural language queries or expect the engine to make inferences. Use broad terms and keywords instead. I recommend giving Mojeek, Marginalia, Teclis, Petal (bad privacy, but usable through Searx), Kagi, and Alexandria a try.

[–] Seirdy@lemmy.ml 2 points 2 years ago (1 child)

In these discussions, it's worth distinguishing between "reducing the size of your fingerprint" (reducing data collection, e.g. by blocking trackers) and "reducing the likelihood of connecting your fingerprint to an identity" (fingerprinting avoidance). Customization, extensions, adblocking, etc. are antithetical to the latter but useful to the former.

[–] Seirdy@lemmy.ml 0 points 2 years ago

The reality is more nuanced than this. Wrote up my thoughts on my blog: A layered approach to content blocking.

Strictly speaking about content filtering: declarativeNetRequest is honestly a good thing for like 80% of websites. But there's that 20% that'll need privileged extensions. Content blocking should use a layered approach that lets users selectively enable a more privileged layer. Chromium will instead be axing the APIs required for that privileged layer; Firefox's permission system is too coarse to support a layered approach.
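To make that concrete, here's a rough sketch of what the declarative layer looks like: a static declarativeNetRequest ruleset, emitted as a rules.json by a small Python script. The rule ID and the tracker.example domain are made up for illustration.

```python
import json

# One static declarativeNetRequest rule: block third-party requests to a
# hypothetical tracking domain. Match-and-block like this is what MV3
# handles well, without granting the extension access to your traffic.
rules = [
    {
        "id": 1,
        "priority": 1,
        "action": {"type": "block"},
        "condition": {
            "urlFilter": "||tracker.example^",
            "domainType": "thirdParty",
            "resourceTypes": ["script", "image", "xmlhttprequest"],
        },
    }
]

# Extensions ship this as a static ruleset referenced from the manifest.
with open("rules.json", "w") as f:
    json.dump(rules, f, indent=2)
```

The browser evaluates rules like this itself, so the extension never sees your traffic; that's the genuine improvement. But anything beyond declarative match-and-block (per-request logic, heuristics, response inspection) needs the privileged layer.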

 

A more nuanced look at where Manifest v3 fits into the content-blocking landscape, and why it can't replace privileged extensions despite bringing important improvements to the table.

[–] Seirdy@lemmy.ml 2 points 2 years ago (1 child)

If you're asking for an open-source option because you want to self-host...well, at that point, you'd already have a web server. Just sftp/rsync the files over into a subdir of your web root.
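To sketch it out (hypothetical host and paths; any SSH-capable server works), the whole "deploy pipeline" can be one rsync call, whether run by hand or from a script like this:

```python
import subprocess

# Sync a local "public/" directory into a subdirectory of the web root
# over SSH; the host and paths are placeholders for your own server.
subprocess.run(
    ["rsync", "--archive", "--verbose", "--delete",
     "public/", "user@example.com:/var/www/html/files/"],
    check=True,
)
```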

[–] Seirdy@lemmy.ml 0 points 2 years ago* (last edited 2 years ago)

The safety of TUI browsers is a bit overrated; most don't do any sandboxing of content whatsoever and are run in an unsandboxed environment. Both of these are important for a piece of software that only exists to parse untrusted content from the most hostile environment known (the Web).

Check a CVE database mirror for your favorite TUI browser; if it has a nontrivial number of users, it'll have some vulns to its name. Especially noteworthy is Elinks, which I absolutely don't recommend using.

Personally: to read webpages from the terminal, I pipe curl or rdrview output into a w3m instance sandboxed with bubblewrap (bwrap(1)); I wrote this script to simplify it. I use that script to preview HTML emails as well. The sandboxed w3m is forbidden from performing a variety of tasks, including connecting to the network; curl handles all networking.
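The core of the approach looks roughly like the sketch below; the real script does more, and the exact bwrap flags here are illustrative rather than the ones I actually use.

```python
#!/usr/bin/env python3
"""Rough sketch of the pipeline: curl fetches the page outside the
sandbox, then a bubblewrap-confined w3m renders it with no network
access. The bwrap flags are illustrative."""
import subprocess
import sys

# All networking happens out here, in curl.
html = subprocess.run(
    ["curl", "--silent", "--location", sys.argv[1]],
    check=True,
    capture_output=True,
).stdout

# w3m only parses and renders. --unshare-all drops the network (and every
# other) namespace; the read-only binds leave no writable filesystem
# beyond a throwaway /tmp.
subprocess.run(
    [
        "bwrap",
        "--ro-bind", "/usr", "/usr",
        "--ro-bind", "/etc", "/etc",
        "--symlink", "usr/bin", "/bin",
        "--symlink", "usr/lib", "/lib",
        "--proc", "/proc",
        "--dev", "/dev",
        "--tmpfs", "/tmp",
        "--unshare-all",
        "--die-with-parent",
        "w3m", "-T", "text/html", "-dump",
    ],
    input=html,
    check=True,
)
```

The key property: w3m never gets a network namespace, so even if a parser bug gets exploited, the compromised process can't phone home.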

[–] Seirdy@lemmy.ml 0 points 2 years ago

The problem is that your offline CA stores won't receive OCSP revocation data or Certificate Transparency logs; both need live updates. The latter is especially important, as without it you're completely dependent on one group of CAs.

[–] Seirdy@lemmy.ml 0 points 2 years ago* (last edited 2 years ago) (1 child)

I compiled a list of search engines that use their own indexes for organic results: https://seirdy.one/2021/03/10/search-engines-with-own-indexes.html

I'll probably post a big update to that article at some point that compares if/how some of the listed engines process structured data (RDFa, microdata, JSON-LD, microformats 1/2, Open Graph metadata, POSH).

I typically use a Searx/SearxNG instance that mixes Google, Bing, and Bing-derivatives (e.g. DDG) with other indexes: Petal, Mojeek, Gigablast, and Qwant (Qwant mixes its own results with Bing's). Petal, Gigablast, and Mojeek have been quite helpful for discovering new content; however, I wouldn't use Petal directly due to privacy concerns. Using it through a Searx proxy you trust more seems alright.

If I know a query will give me an instant answer I want to use, I'll use DDG.

[–] Seirdy@lemmy.ml 1 point 2 years ago* (last edited 2 years ago)

Not just this thread, but the rest of Fedi, IRC, my own email, and Matrix too. My posts get at least 20% longer after I share them.

 

I find people who agree with me for the wrong reasons to be more problematic than people who simply disagree with me. After writing a lot about why free software is important, I needed to clarify that there are good and bad reasons for supporting it.

You can audit the security of proprietary software quite thoroughly; source code isn't a necessary or sufficient precondition for a particular software implementation to be considered secure.

 

cross-posted from: https://lemmy.ml/post/60818

Lots of people have been spreading the often-unnecessary advice to add a Permissions-Policy response header to their sites to opt out of Google's FLoC, and some have gone so far as to ask FLOSS maintainers to patch their software to make this the default. When discussions got heated to the point of accusing webmasters who don't implement these headers of being "complicit" in Google's surveillance, I felt I had to write this.

Everybody: please calm down, take a deep breath, and read the spec before you give such prescriptive advice about it.

FLoC is terrible, but telling everyone to add a magic "opt-out header" in every situation betrays a misunderstanding of how the opt-in/opt-out process actually works.
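For reference, the header in question is `Permissions-Policy: interest-cohort=()`. Here's a minimal sketch of serving it, using Python's built-in http.server as a stand-in for a real web server; whether you should serve it at all is exactly what the article examines.

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

class FlocOptOutHandler(SimpleHTTPRequestHandler):
    """Serve the current directory with the FLoC opt-out header attached."""

    def end_headers(self):
        # An empty allowlist denies interest-cohort computation for this
        # origin and any frames it embeds.
        self.send_header("Permissions-Policy", "interest-cohort=()")
        super().end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), FlocOptOutHandler).serve_forever()
```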

 

Most “alternative” search engines to the big three (Google, Bing, Yandex aka GBY) just proxy their results from GBY. I took a look at 30 non-meta search engines with their own crawlers/indexers to find actual alternatives.

Feedback + additions welcome.
