this post was submitted on 03 Feb 2024
50 points (93.1% liked)

A university near me must be going through a hardware refresh, because they've recently been auctioning off a bunch of ~5 year old desktops at extremely low prices. The only problem is that you can't buy just one or two. All the auction lots are batches of 10-30 units.

It got me wondering if I could buy a bunch of machines and set them up as a distributed computing cluster, sort of a poor man's version of the way modern supercomputers are built. A little research revealed that this is far from a new idea. The first really successful distributed computing cluster of this kind (called Beowulf) was built by a team at NASA in 1994 using off-the-shelf PCs instead of the expensive custom hardware being used by other supercomputing projects at the time. It was also a watershed moment for Linux, then only a few years old, which was used to run Beowulf.

Unfortunately, a cluster like this seems less practical for a homelab than I had hoped. I initially imagined that there would be some kind of abstraction layer allowing any application to run across all computers on the cluster in the same way that it might scale to consume as many threads and cores as are available on a CPU. After some more research I've concluded that this is not the case. The only programs that can really take advantage of distributed computing seem to be ones specifically designed for it. Most of these fall broadly into two categories: expensive enterprise software licensed to large companies, and bespoke programs written by academics for their own research.
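To make that concrete, here's a rough sketch (not from any real project, purely an illustration) of what code written specifically for a cluster looks like. It uses MPI via the Python mpi4py library, which is the sort of message-passing framework Beowulf-style clusters typically run; the file name, term count and process count are made up, and it assumes an MPI runtime such as Open MPI is installed on every node.

```python
# sum_mpi.py - hypothetical example: each process sums a disjoint slice of a
# series, then the partial results are combined with an explicit reduce.
# Nothing is shared automatically; the programmer decides what each node does
# and what crosses the network. Run with e.g. `mpirun -n 4 python sum_mpi.py`.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # this process's ID within the job
size = comm.Get_size()  # total number of processes across all nodes

# Each rank takes every size-th term of the series sum(1/i^2).
total_terms = 10_000_000
local_sum = sum(1.0 / (i * i) for i in range(rank + 1, total_terms + 1, size))

# Explicit communication: combine the partial sums on rank 0.
global_sum = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"Sum over {size} processes: {global_sum:.6f}")
```

An ordinary program gets none of this for free, which is why only software written against a framework like this can actually spread across the machines.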

So I'm curious what everyone else thinks about this. Have any of you built or administered a Beowulf cluster? Are there any useful applications that would make it worth building for the average user?

[–] Kangie@lemmy.srcfiles.zip 16 points 9 months ago (1 children)

Yes. I'm actually doing this right now at work, where I run multiple Beowulf clusters for a research institution. You don't need or want this.

In a real cluster you would use software like Slurm or PBS to submit jobs to the cluster and have them execute on your compute nodes as resources are available to keep utilisation high.
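To give a rough idea (the partition name, resource numbers and file name below are invented), a Slurm job is usually just a script with #SBATCH directives at the top; sbatch honours the shebang line, so the body can be shell or, as sketched here, Python:

```python
#!/usr/bin/env python3
#SBATCH --job-name=demo          # name shown in the queue
#SBATCH --partition=batch        # hypothetical partition name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=01:00:00          # wall-clock limit; overruns get killed
#
# Submitted with `sbatch demo_job.py`. Slurm queues the job and starts it on a
# compute node once the requested resources are free; to Python the #SBATCH
# lines are just comments.
import os
import socket

print(f"Running on {socket.gethostname()} "
      f"as Slurm job {os.environ.get('SLURM_JOB_ID', 'n/a')}")
# ... the actual computation would go here ...
```

Users submit with `sbatch` and watch the queue with `squeue`; the scheduler decides where and when each job actually runs.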

It makes no sense in a home environment unless you're trying to run some serious computations, and if you need to do that for work or study then you probably already have access to a real HPC system.

It might be interesting and fun, but not particularly useful. Maybe a fun HCI (hyper-converged infrastructure) setup would be more appropriate, letting you scale VMs across hosts and get some redundancy.

[–] plenipotentprotogod@lemmy.world 3 points 9 months ago (2 children)

Out of curiosity, what software is normally being run on your clusters? Based on my reading, it seems like some companies run clusters for business purposes. E.g. an engineering company might use it for structural analysis of their designs, or a pharmaceutical company might simulate the interactions of new drugs. I assume in those cases they've bought a license for some kind of high-end software that's been specifically written to run in a distributed environment. I also found references to some software libraries that are meant to support writing programs in this environment. I assume those are used more by academics who have a very specific question they want to answer (and may not have funding for commercial software), so they write their own code that's hyper-focused on their area of study.

Is that basically how it works, or have I misunderstood?

[–] Kangie@lemmy.srcfiles.zip 4 points 9 months ago* (last edited 9 months ago)

Overall you're not too far off, but what you'll tend to find is that it's a lot of doing similar calculations over and over.

For example, for certain experiments climate scientists may read a ton of data from storage (say, for different locations and date/times) across a bunch of jobs, but each job is doing basically the same thing: you might submit 100,000 permutations, or have an updated model that you want to re-run over the existing dataset.

The data from each job is then output and analysed (often with follow-up batch jobs).
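To sketch what those thousands of near-identical jobs look like in practice (the array size, paths and variable meanings here are invented, not from a real experiment), Slurm job arrays let you submit one script that runs once per index:

```python
#!/usr/bin/env python3
#SBATCH --job-name=perm-sweep
#SBATCH --array=0-9999%500   # 10000 tasks, at most 500 running at once
#SBATCH --cpus-per-task=1
#SBATCH --time=00:30:00
#
# Hypothetical array job: each task reads SLURM_ARRAY_TASK_ID and uses it to
# pick which permutation (location, date range, parameter set, ...) to run,
# then writes its own slice of the output for later analysis jobs.
import os

task_id = int(os.environ["SLURM_ARRAY_TASK_ID"])
print(f"Task {task_id}: would load inputs/perm_{task_id}.nc and run the model")
```

Real sweeps can be much larger if the cluster's limits allow it; the point is that one submission fans out into many independent tasks that the scheduler packs onto whatever nodes are free.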

Edit: here's an example of a model that I have some real-world experience building to run on one of my clusters: https://www.nrel.colostate.edu/projects/century/

Swinburne have some decent, public docs. I think mine are pretty good, but they're not public, so...

https://supercomputing.swin.edu.au/docs/2-ozstar/oz-partition.html

There will typically also be some interactive (login) nodes in a cluster that let users log in and perform interactive tasks, like validating that their software runs or, more commonly, submitting jobs to the queue manager.