jlh

joined 1 year ago
[–] jlh@lemmy.jlh.name 6 points 1 day ago (1 children)

In the past two years I've had horrible issues: Teams deciding I'm not allowed to join a call because I have a Teams account logged into a different organization that it won't let me log out of, Microsoft servers just timing out if you have IPv6 enabled, etc.

Don't get me started on Skype for Business. It's still around.

[–] jlh@lemmy.jlh.name 5 points 2 days ago* (last edited 2 days ago) (3 children)

Much more likely to get run over in a crosswalk in the US than in Europe. American drivers don't stop. The number of overengineered zebra crossings in the US is crazy.

https://youtu.be/d-5A2RxgvOc

[–] jlh@lemmy.jlh.name 13 points 2 days ago

IT folk got so annoyed at being asked what happens if they get run over by a bus that they decided to go out and show everyone.

[–] jlh@lemmy.jlh.name 22 points 2 days ago (5 children)

Now make it open source

[–] jlh@lemmy.jlh.name 26 points 3 days ago* (last edited 3 days ago) (2 children)

"Shell has a responsibility to reduce its emissions," but everyone involved is abdicating their responsibility and handwaving away the idea of emissions reductions?? How else will Shell reduce emissions by 45% if Shell doesn't reduce emissions by 45%?

[–] jlh@lemmy.jlh.name 38 points 3 days ago

Slavery is wrong, period.

[–] jlh@lemmy.jlh.name 3 points 3 days ago* (last edited 3 days ago)

Go for it!

Hetzner currently doesn't have a managed Kubernetes option, so you have to set it up yourself with Terraform, but there are a few Terraform modules out there that have everything you need. The rumor is that they're working on a managed Kubernetes offering, so something simpler should be coming in the future.

Their API is compatible with all the standard Kubernetes automation, so the autoscaling is automatic and bullet-proof once you have it set up. Just use the k8s HPA to start and stop containers based on CPU (or Prometheus metrics if you're feeling fancy), and the Kubernetes cluster autoscaler will create and delete nodes for you automatically based on your containers' CPU/RAM reservations.
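For reference, a minimal HPA sketch (the Deployment name `my-app`, the replica bounds, and the 70% CPU target are placeholders, not anything specific to Hetzner):

```yaml
# Minimal HorizontalPodAutoscaler sketch (placeholder names/thresholds).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # the workload being scaled
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU crosses 70%
```

When new pods can't be scheduled because their reservations exceed the free capacity on existing nodes, the node autoscaler adds a node; when nodes sit mostly empty, it drains and removes them.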

Let me know if you need documentation links for something.

[–] jlh@lemmy.jlh.name 3 points 3 days ago

For the firewall issue, could you keep the cluster on its own VPC, and then use load balancer annotations to do per-service firewalls?

https://docs.digitalocean.com/products/kubernetes/how-to/configure-load-balancers/#firewall-rules
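If the DO annotations don't cover what you need, the stock Kubernetes `loadBalancerSourceRanges` field is the generic way to do a per-service allowlist, assuming the provider's controller honors it. A rough sketch, with the service name, ports, and CIDRs as placeholders:

```yaml
# Per-service firewall sketch using the standard loadBalancerSourceRanges
# field (service name, ports, and CIDR ranges are placeholders).
apiVersion: v1
kind: Service
metadata:
  name: public-api
spec:
  type: LoadBalancer
  selector:
    app: public-api
  ports:
    - port: 443
      targetPort: 8443
  loadBalancerSourceRanges:    # only these source CIDRs may reach the LB
    - 203.0.113.0/24
    - 198.51.100.0/24
```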

[–] jlh@lemmy.jlh.name 39 points 3 days ago* (last edited 3 days ago) (3 children)

I understand why Democrats are falling back on arguments like this in the face of open fascism, but it's fitting that this week is the 86th anniversary of the Kristallnacht pogrom. Somebody tell Hitler and Goebbels not to dehumanize and attack all those Jews, because it would be, like, totally bad for the economy or something.

The point of Trump's rhetoric against immigrants is to dehumanize them and scapegoat them for all of the US's problems. We do the same here in Sweden, when our politicians say we're being "naive", and we call immigrants disloyal to Sweden, welfare fraudsters, and terrorists. We accuse their culture of being anti-democratic and sexist and of promoting child abuse. We scapegoat them for our crime issues, antisemitism issues, and impoverished neighborhoods.

Just the first step in dehumanizing an entire group of 1M+ people.

[–] jlh@lemmy.jlh.name 3 points 3 days ago

Their Terraform support is top notch too, better than AWS.

[–] jlh@lemmy.jlh.name 10 points 3 days ago* (last edited 3 days ago) (4 children)

If your scale is right, both Hetzner and Digital Ocean support the Kubernetes autoscaler.

https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner

https://docs.digitalocean.com/products/kubernetes/how-to/autoscale/

Digital Ocean is super easy for beginners, Hetzner is a bit more technical but like half the cost.

This only outweighs the per-node overhead, though, if you're scaling entire 4 vCPU/8 GiB nodes up and down and/or running multiple applications that can borrow CPU/RAM from each other.

If you're small scale, microVMs like Lambda or fly.io are the only way to get meaningful scaling under 4 vCPU/8 GiB of daily variation. At that scale you can also ask yourself whether you really need autoscaling, since you can get servers that big from Hetzner for like $20/month. Simple static scaling is better at that scale unless you have more dev time than money.

 

@antonioguterres on twitter:

I condemn the broadening of the Middle East conflict with escalation after escalation.

This must stop.

We absolutely need a ceasefire.

7:26 PM · Oct 1, 2024

 

https://web.archive.org/web/20240719155854/https://www.wired.com/story/crowdstrike-outage-update-windows/

"CrowdStrike is far from the only security firm to trigger Windows crashes with a driver update. Updates to Kaspersky and even Windows’ own built-in antivirus software Windows Defender have caused similar Blue Screen of Death crashes in years past."

"'People may now demand changes in this operating model,' says Jake Williams, vice president of research and development at the cybersecurity consultancy Hunter Strategy. 'For better or worse, CrowdStrike has just shown why pushing updates without IT intervention is unsustainable.'"

 

I wanted to share an observation I've made about the way the latest computer systems work. I swear this isn't an AI hype train post 😅

I'm seeing more and more computer systems these days use usage data or internal metrics to automatically adapt how they run, and I get the feeling that this is a sort of new computing paradigm enabled by the increased modularity of modern computer systems.

First off, I would classify us as being in a sort of "second generation" of computing. Computers in the 80s and 90s were fairly basic: user programs were often written in C or assembly, and often ran directly in ring 0 of the CPU. Leading up to the year 2000, there was a lot of progress toward more modular computers: microkernels, MMUs, higher-level languages with memory-managed runtimes, and the rise of modular programming in languages like Java and Python. This let computer systems become much more advanced, as the new abstractions allowed programs to reuse code and be a lot more ambitious. We are well into this era now, with VMs and Docker containers taking over computer infrastructure, and modern programming depending on software packages, like you see with NPM and Cargo.

So we're still in this "modularity" era of computing, where you can reuse code and even have microservices sharing data with each other, but often the amount of data individual computer systems have access to is relatively limited.

More recently, I think we're seeing the beginning of "data-driven" computing, which uses observability and control loops to run better and self-manage.

I see a lot of recent examples of this:

  • Service orchestrators like systemd and Kubernetes, which monitor the status and performance of the services they own and use that data for self-healing and for optimizing how and where those services run (see the sketch after this list).
  • Centralized data collection systems for microservices, which often include automated alerts and control loops. You see a lot of new systems like this, including Splunk, OpenTelemetry, and Pyroscope, as well as internal data collection systems at all of the big cloud vendors. These systems all try to centralize as much data as possible about how services run, including not just logs and metrics but also lower-level data like execution traces and CPU/RAM profiling data.
  • Hardware metrics in a lot of modern hardware. Before 2010, you were lucky if your hardware reported clock speeds and temperature for hardware components. Nowadays, it seems like hardware components are overflowing with data. Every CPU core now not only reports temperature, but also power usage. You see similar things on GPUs too, and tools like nvitop are critical for modern GPGPU operations. Nowadays, even individual RAM DIMMs report temperature data. The most impressive thing is that now CPUs even use their own internal metrics, like temperature, silicon quality, and power usage, in order to run more efficiently, like you see with AMD's CPPC system.
  • Of course, I said this wasn't an AI hype post, but I think the use of neural networks to enhance user interfaces is definitely part of this. The way social media uses neural networks to change what is shown to the user, the upcoming "AI search" in Windows, and the way all this usage data is fed back into neural networks make me think that even user-facing computer systems will start to adapt to changing conditions using data science.
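
As a small, concrete example of the first bullet: a Kubernetes liveness probe is a tiny observe-decide-act loop baked into the orchestrator. The names, image, and thresholds below are just illustrative:

```yaml
# A liveness probe as a minimal control loop (illustrative names only).
apiVersion: v1
kind: Pod
metadata:
  name: example-service
spec:
  containers:
    - name: app
      image: example/app:latest
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 10     # observe: poll the health endpoint every 10s
        failureThreshold: 3   # decide: 3 consecutive failures = unhealthy
      # act: the kubelet restarts the container, closing the loop
```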

I have been kind of thinking about this "trend" for a while, but this announcement that ACPI is now adding hardware health telemetry inspired me to finally write up a bit of a description of this idea.

What do people think? Have other people seen this trend toward self-adapting systems? Is this an oversimplification of computer engineering?

 

The latest patch today, 13.23, makes the game instacrash after champ select, so be warned. Don't start a match on Linux until it's fixed.

https://leagueoflinux.org/

 

Awful to see our personal privacy and social lives being ransomed like this. €10 seems like price gouging for a social media site, and I'm even seeing a price tag of 150 SEK (~€15) in Sweden.
