skilltheamps

joined 1 year ago
[–] skilltheamps@feddit.de 9 points 3 months ago (9 children)

That power efficiency is a direct result of the instruction set. Namely, smaller chips thanks to the reduced instruction set, in contrast to x86's (legacy-bearing) complex instruction set.

[–] skilltheamps@feddit.de 2 points 3 months ago (2 children)

With something like this, how do you handle the period of time while copying? I mean you can't really leave the site running, as it wouldn't be in a consistent state. An "under maintenance" page instead? Or copy to a fresh folder and, when done, tell the webserver to serve the new location?
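For the last option, the switch itself can be made atomic by serving through a symlink and flipping it. A minimal sketch in Python (the paths are invented, and it assumes the webserver serves through the `current` symlink):

```python
import os
import shutil
import time

SRC = "/srv/staging/site"     # hypothetical: the new content to deploy
WEBROOT = "/srv/www/current"  # hypothetical: symlink the webserver serves

def deploy() -> None:
    # Copy into a fresh, uniquely named release directory first; the
    # live site keeps being served from the old tree in the meantime.
    release = f"/srv/www/release-{int(time.time())}"
    shutil.copytree(SRC, release)

    # Create the new symlink under a temporary name, then rename it over
    # the old one. rename(2) is atomic on POSIX, so every request sees
    # either the complete old tree or the complete new one, never a mix.
    tmp_link = WEBROOT + ".new"
    os.symlink(release, tmp_link)
    os.replace(tmp_link, WEBROOT)

if __name__ == "__main__":
    deploy()
```

That sidesteps the maintenance page entirely, at the cost of briefly holding two copies on disk.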

[–] skilltheamps@feddit.de 6 points 3 months ago (1 children)

"almost all of the most technical employees in framework are using either ubuntu, fedora or nixos. I'm mostly on Windows because we need actually people that are using Windows because our employee base in framework is all Linux users"

  • Nirav Patel

https://m.youtube.com/watch?v=EIEc43CxIvY

[–] skilltheamps@feddit.de 9 points 3 months ago (7 children)

That is not the case for every country though. In France and Germany, for example, almost 3/4 of Google requests arrive via IPv6.

[–] skilltheamps@feddit.de 1 point 4 months ago

If you have a working config, that's exactly the point. Before you've built your config, you don't know. If you deploy Silverblue, you know beforehand that it will work, because exactly this config, including /etc, has been tested upstream. What you are to your buddy, Fedora Atomic is to me. The difference is that it is not just one person who tested some config they decided on, on their single piece of hardware; it is the effort of a full-blown distro team.

[–] skilltheamps@feddit.de -1 points 4 months ago (2 children)

No, just because it is reproducible doesn't mean you are able to (re)produce something that works. With something like Fedora Silverblue you know that this specific composition of packages and their versions has been tested, and that all the other users run this exact composition as well.

When you roll your own composition, installing whatever stuff you like, you may be the one finding out that there's a conflict between package A version u.v.w and package B version x.y.z.

[–] skilltheamps@feddit.de 24 points 4 months ago (11 children)

I encourage you to go to town with whatever crazy setup you come up with.

I just want to note that the reboot-to-update mechanism also has its positive sides, as ancient as it may seem (and no, we do not succumb to Windows-level backwardness, which requires that many reboots yet fails to reap the benefits). Namely, you get atomic updates, hence the name "Fedora Atomic", for example. That means there are no transient periods where your OS runs in an inconsistent state. When you update a traditional distro, the new files/libraries/binaries/kernel-modules no longer match what is already loaded in RAM, including the currently running kernel. That leads to things like the nvidia driver / CUDA not working until reboot, or running applications failing to load a library they only need later. Most of the time this is no huge problem, but in principle the only way to maintain a system that never runs in an essentially undefined state is atomic updates.
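A toy way to watch that "half old, half new" state happen (invented module, and obviously much simpler than a real package upgrade): a long-running Python process imports part of a package at startup and the rest lazily, while the files change on disk in between.

```python
import pathlib
import sys

# Build a throwaway two-file package to stand in for an installed one.
sys.path.insert(0, ".")
pkg = pathlib.Path("mypkg")
pkg.mkdir(exist_ok=True)
(pkg / "__init__.py").write_text("API_VERSION = 1\n")
(pkg / "extra.py").write_text("EXPECTS = 1\n")

import mypkg                    # "startup": version 1 now lives in RAM

# ... an update replaces the files on disk while the process runs ...
(pkg / "__init__.py").write_text("API_VERSION = 2\n")
(pkg / "extra.py").write_text("EXPECTS = 2\n")

from mypkg import extra         # lazy load: new code meets old code
print(mypkg.API_VERSION, extra.EXPECTS)   # -> "1 2": mismatched halves
```

With an atomic update the running system would keep seeing version 1 of both files until the next boot.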

[–] skilltheamps@feddit.de 21 points 4 months ago

And the firmware inside that RP2040 is stored on plain old flash memory. So while the data may still be on the memory chip, the controller chip dies at just the same pace as every other USB drive, and then you can't access it.

[–] skilltheamps@feddit.de 55 points 5 months ago (1 children)

The problem is not the EU demanding that, it rather is Apple's blatant incompetence at implementing it.

[–] skilltheamps@feddit.de 17 points 6 months ago (1 children)

You do not want Octoprint on a machine that is busy. Otherwise load spikes cause Octoprint to fall behind sending move commands (gcode) as fast as the printer executes the movements, and the printer stutters because it has to take small breaks waiting for the next command. The problem is pronounced with fast printers and with slicers that break arcs up into many small straight lines (which is practically all slicers).
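A toy simulation of why that happens (all numbers invented; real firmware buffers and timings differ): the firmware executes short segments from a small look-ahead buffer, and any host hiccup longer than the buffered time drains it dry.

```python
SEGMENT_TIME = 0.005   # seconds to execute one short straight segment
PLANNER_SLOTS = 16     # depth of the firmware's look-ahead buffer

buffered = PLANNER_SLOTS
stalls = 0
for i in range(1000):
    # the host normally answers quickly, but hits a load spike now and then
    host_delay = 0.200 if i % 100 == 0 else 0.001
    executed = int(host_delay / SEGMENT_TIME)    # segments finished meanwhile
    buffered = max(buffered - executed, 0)
    if buffered == 0:
        stalls += 1    # buffer ran dry: the printer visibly stutters
    buffered = min(buffered + 1, PLANNER_SLOTS)  # host sends one more command

print(f"{stalls} stalls while streaming 1000 commands")
```

The faster the printer (smaller SEGMENT_TIME), the shorter the host hiccup that is enough to empty the buffer.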

[–] skilltheamps@feddit.de -1 points 6 months ago (3 children)

What privacy concerns do you have? I'm all for privacy, but I don't really see where registrars are a delicate topic in that regard. The most that comes to mind is that some (most?) offer a service where they do not give out your name and address for WHOIS requests, but hand out the registrar's details instead (Namecheap has that, for example).

[–] skilltheamps@feddit.de 2 points 6 months ago (1 children)

And they believe all employees actually remember that many wildly different, long passwords, and regularly change them to wildly different ones? All this leads to is a single password that barely clears the minimum requirements, plus a suffix for the stage (like 1 for boot, 2 for BitLocker, etc.) and another suffix for the month it was last changed. All of that then on sticky notes on the screen.

 

Hello,

I moved my home servers to Fedora Silverblue and docker-compose (IPv6 reasons :/). I stumbled upon the problem that I neither wanted to update image tags manually, nor be left with no idea of what ":latest" actually deployed on my server in case I need to roll back.

To alleviate that problem, I made a small update tool. It takes care of writing down the image@sha256... digest every time, so that you can roll back. It also automatically snapshots and restarts the services.

It is written in Python and doesn't need any dependencies, so no catering for a venv either. You only need skopeo and snapper in working order. Maybe you'll find it useful, but please be aware that it is at an early stage. Also, I'm not responsible if it nukes your server 😅
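To give an idea of the core mechanism (a simplified sketch, not the actual tool): pinning ":latest" down to an immutable reference essentially boils down to one `skopeo inspect` call.

```python
import json
import subprocess

def resolve_digest(image: str) -> str:
    """Resolve a tag like 'docker.io/library/nginx:latest' into an
    immutable image@sha256:... reference via `skopeo inspect`."""
    out = subprocess.run(
        ["skopeo", "inspect", f"docker://{image}"],
        capture_output=True, text=True, check=True,
    ).stdout
    digest = json.loads(out)["Digest"]   # e.g. "sha256:4bcd..."
    repo = image.rsplit(":", 1)[0]       # naive tag strip; would need more
                                         # care for registries with a port
    return f"{repo}@{digest}"

if __name__ == "__main__":
    print(resolve_digest("docker.io/library/nginx:latest"))
```

Writing that pinned reference back into the compose file is then what makes a later rollback unambiguous.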

 

I often observe that people who started a small open source project abandon it sooner or later. I'm guilty of this myself in numerous cases. The reasons are probably many, from new obligations in life to shifts in interest and whatnot.

At some point somebody comes by with an issue, or even a merge request, but the maintainer does not take care of it. Usually this ends in forks, though the forks often undergo the same fate. Apart from the immediate jungle of forks, things like software stores may be hard-linked to the original repo, which means places like these end up with a dead original and a number of forks with varying degrees of maintenance.

To me it's just a sad situation overall. And yet I cannot find the time or motivation to maintain some stuff, because circumstances simply changed. I also do not think one is obliged to, just because they were nice enough to share their code back when the project mattered to them.

Is there a better way? Usually these are very niche projects, and there is no circle of regularly active developers who could share administration of a repo, but rather a quiet one-man show with a short span of incredible activity. Some kind of sensible failover mechanism for when the original maintainer vanishes would be cool, or any other way to introduce some redundancy in keeping a repository alive. You know how package maintainers in Linux distributions open their package(s) for adoption by somebody else when they run out of capacity? I think that is nice.

I will publish a small project soon, I think, but I fear that somewhere down the road I will again leave one person or another frustrated when I have moved on to other things...
