moonpiedumplings

joined 1 year ago
[–] moonpiedumplings@programming.dev 21 points 1 month ago* (last edited 1 month ago) (2 children)

And before you start whining - again - about how you are fixing bugs, let me remind you about the build failures you had on big-endian machines because your patches had gotten ZERO testing outside your tree.

As far as I know, the Linux Foundation does not provide testing infrastructure to its developers. Instead, corporations are expected to use their massive resources to test patches across a variety of cases before contributing them.

Yes, I think Kent is in the wrong here. Yes, I think Kent should find a sponsor or something to help with testing and with making his development more stable (stable in the sense of fewer changes over time, rather than stable as in reliable).

But I kinda dislike how the Linux Foundation has a sort of... corporate-centric development model. It results in friction with individual developers, as shown here.

Of all the people Linus has chewed out over the years, I always wonder how many were independent developers with few resources, trying to figure things out on their own. I've always considered trying to learn to contribute, but the Linux kernel is massive. Combined with the programming I would have to learn, as well as the infrastructure and ecosystem (mailing list, patch system, etc.), it feels like it would be really infeasible to get into without some kind of mentor or dedicated teacher.

So I don't know how much you know about the shell, but the way the Linux command line works is that there is a set of variables, called environment variables, which dictate some behavior of the shell. For example, the $PATH variable determines which directories are searched when you try to execute a program in your shell.

The documentation you linked wants you to create a custom shell variable, called SCALE_PATH, consisting of a folder path that contains the compiled binaries/programs of SCALE you want to run.

This command: `export PATH="${SCALE_PATH}/bin:$PATH"`

temporarily edits your PATH variable to add the folder with the SCALE programs to your path, enabling you to execute them from your shell.
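To make that concrete, here's a minimal sketch. The `/opt/scale` location is a made-up example; substitute wherever SCALE actually lives on your system:

```shell
# Hypothetical install location; substitute the real path to your SCALE folder.
export SCALE_PATH=/opt/scale

# Prepend its bin directory, so the shell searches it first when resolving
# command names. This only lasts for the current shell session.
export PATH="${SCALE_PATH}/bin:$PATH"

# The shell now checks ${SCALE_PATH}/bin before the usual directories:
echo "$PATH"
```

If you want it to survive opening a new terminal, you'd put those export lines in `~/.bashrc` (or your shell's equivalent), since an `export` only affects the session it runs in.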

Thorium's entire focus is on performance. As another commenter has noted, that means no security updates, and no privacy features.

I wouldn't recommend it for daily use, but if you are playing a browser-based game it's worth testing out. I used to play krunker.io, and I tested Thorium to see if I could get more FPS (FPS equaled faster movement speed back then), but I didn't see any major performance improvements over the major krunker clients or Microsoft Edge (the other most performant browser).

[–] moonpiedumplings@programming.dev 2 points 1 month ago* (last edited 1 month ago) (1 children)

Linux Mint Debian Edition is not based on testing, but rather on stable*.

This misconception may be caused by the fact that the latest Debian stable has newer packages than many of the older-but-not-ancient Ubuntu releases, which were originally based off of Debian sid.

*I cannot find a first-party source for this, only a third party:

Linux Mint Debian Edition 6 hits beta with reassuringly little drama. Think Debian 12 plus Mint's polish and a friendlier UX for non-techies

https://www.theregister.com/2023/09/13/linux_mint_debian_edition_hands_on/

[–] moonpiedumplings@programming.dev 1 points 1 month ago* (last edited 1 month ago)

I'd recommend looking at Twitch streams in the software and game development category. Many of them develop in Unity, which is almost entirely C#.

I really like mercernarymage*. He mostly does gamedev in Unity, but he occasionally explains stuff and answers questions. In addition to that, his code is very clean and easy to read, easy enough for me (a non-C# dev) to understand.

*note the spelling. NOT "mercenary".

Sorry. I meant if you wanted to use only packages from one set of repositories/one distro, for cases where you want lower-level packages like the kernel or desktop environment to be updated.

I cannot find anything related to that in their documentation, their about page, or their whitepaper.

They talk a lot about decentralized computing, but any form of secure enclave or code verification isn't mentioned.

Compare that to this project, which is similar, but incomplete. However, Quilibrium uses its own language, whereas Golem uses Python or JavaScript. The docs for Golem do not explain how I am supposed to verify that a remote server is actually running my Python/JavaScript code.

[–] moonpiedumplings@programming.dev 1 points 1 month ago (3 children)

No, I think if you're using the Nextcloud all-in-one image, then the management image connects to the Docker socket and deploys Nextcloud using that. Then you would be able to update Nextcloud via the web UI.

https://github.com/nextcloud/all-in-one?tab=readme-ov-file#how-to-update-the-containers
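From memory, the launch command in that README looks roughly like this (treat it as a sketch and use the README above as the authoritative version; the ports and volume names here are illustrative). The last `--volume` line, mounting the Docker socket, is what lets the master container deploy and update the other Nextcloud containers:

```shell
# Sketch of launching the AIO master container; check the README for the
# current, complete command before using it.
sudo docker run \
  --init \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  --publish 8080:8080 \
  --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  nextcloud/all-in-one:latest
```

Anything with access to the Docker socket effectively has root on the host, which is worth keeping in mind when deciding whether you're comfortable with this deployment model.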

[–] moonpiedumplings@programming.dev 1 points 1 month ago (2 children)

I read through the docs. I'm not sure how this enables trusted computing.

[–] moonpiedumplings@programming.dev 1 points 1 month ago* (last edited 1 month ago) (4 children)

There is concern amongst critics that it will not always be possible to examine the hardware components on which Trusted Computing relies, the Trusted Platform Module, which is the ultimate hardware system where the core 'root' of trust in the platform has to reside.[10] If not implemented correctly, it presents a security risk to overall platform integrity and protected data

https://en.m.wikipedia.org/wiki/Trusted_Computing

Literally all TPMs are proprietary. It's basically a permanent, unauditable backdoor that has had numerous issues, like this one (software), or this one (hardware).

We should move away from them, and from other proprietary backdoors that deny users control over their own systems, rather than towards them, and instead design apps that don't need to trust the server, using techniques like end-to-end encryption.
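As a toy illustration of the "don't trust the server" idea: if data is encrypted on the client before it's uploaded, the server only ever sees ciphertext, so there's nothing to attest about what software it runs. A minimal sketch with `openssl` (file names and the passphrase are placeholders; a real app would use a proper E2EE protocol with key exchange, not a shared passphrase):

```shell
# Encrypt locally before uploading; the server never sees the plaintext.
echo "secret notes" > notes.txt
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in notes.txt -out notes.txt.enc -pass pass:correcthorse

# ... upload notes.txt.enc to the untrusted server, later download it ...

# Decrypt locally after downloading.
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in notes.txt.enc -out notes_dec.txt -pass pass:correcthorse
```

The point is that the trust boundary stays on hardware the user controls, instead of depending on a proprietary chip to vouch for the remote machine.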

Also: if software is AGPL, then they are legally required to give you the source code behind the server software. Of course, they could just lie, but the problem of ensuring that a server runs certain software also has a legal solution.

[–] moonpiedumplings@programming.dev 4 points 1 month ago (1 children)

So, officially no. But there are ongoing theories in the r/emulationonandroid subreddit that they are.

I think it could go either way, but it's unlikely that they are the same person. In both cases, harassment caused them to shut their projects down, which could be a reasonable coincidence, or could be indicative of a larger harassment campaign.

[–] moonpiedumplings@programming.dev 3 points 1 month ago (1 children)

Crowdstrike didn't target anyone either. Yet a mistake in code running at that privilege level resulted in massive outages. Intel ME runs at even higher privileges, on even more devices.

I am opposed to stuff like kernel level code, exactly for that reason. Mistakes can be just as harmful as malice, but both are parts of human nature. The software we design should protect us from ourselves, not expose us to more risk.

There is no such thing as a backdoor that "good guys" can access but the bad guys cannot. Intel ME is exactly that: a permanent backdoor into basically every system. A hack of ME would take down basically all cyber infrastructure.
