[–] iggy@lemmy.world 2 points 1 day ago

I have a couple of Aoostar R7s (4x in a hyper-converged ceph + cloud-hypervisor + k0s cluster, but that's overkill for most). They have been rock solid. There's also an N100 version with less storage expansion if you don't need it. My nodes probably idle at about 20 W fully loaded with drives (2x NVMe, 1x SATA SSD, 1x SATA HDD), running ~15 containers and a VM or two. You should be able to easily get one (plus memory and drives) for $1000. Throw Proxmox and/or some NAS OS on it and you're good to go.
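
If you go the cluster route, here's a minimal k0sctl config sketch for a setup like this (addresses, SSH details, and the k0s version are placeholders, not my actual values):

    # k0sctl.yaml -- apply with: k0sctl apply --config k0sctl.yaml
    apiVersion: k0sctl.k0sproject.io/v1beta1
    kind: Cluster
    metadata:
      name: homelab
    spec:
      hosts:
        - role: controller          # control plane node
          ssh:
            address: 10.0.0.11
            user: root
            keyPath: ~/.ssh/id_ed25519
        - role: worker              # one entry per R7
          ssh:
            address: 10.0.0.21
            user: root
            keyPath: ~/.ssh/id_ed25519
      k0s:
        version: v1.30.1+k0s.0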

[–] iggy@lemmy.world 1 point 3 days ago

Caddy can do both. If you're using a wildcard already, stick with it. In fact, I'd say it's more prudent to use wildcards (with DNS challenges) than HTTP challenges. Then you aren't listing all of your subdomains in Let's Encrypt's public certificate transparency logs for everyone to see. Nobody needs to know you've got a site called bulwarksdirtyunderpants.bulwark.ninja.
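
A minimal Caddyfile sketch of that wildcard + DNS challenge setup, assuming a Caddy build with the Cloudflare DNS module compiled in (the provider, token env var, and upstream port are just examples):

    # one cert covers *.bulwark.ninja, issued via the DNS-01 challenge
    *.bulwark.ninja {
        tls {
            dns cloudflare {env.CLOUDFLARE_API_TOKEN}
        }

        # route one hostname under the wildcard to a local service
        @underpants host bulwarksdirtyunderpants.bulwark.ninja
        handle @underpants {
            reverse_proxy localhost:8080
        }
    }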

[–] iggy@lemmy.world 9 points 1 week ago

Good write-up. Thanks for the lessons-learned section.

Tmux is your friend for running stuff disconnected. And I agree with the other post about btrfs send/receive.
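
For anyone who hasn't used tmux that way, the rough flow (session name, host, and paths are made up):

    # start a named session and kick off the long transfer inside it
    tmux new -s migration
    sudo btrfs send /mnt/old/.snapshots/root-20240101 | ssh nas sudo btrfs receive /mnt/pool/backups

    # detach with Ctrl-b d, log out, reconnect later with:
    tmux attach -t migration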

[–] iggy@lemmy.world 1 point 8 months ago

They've been rock solid so far, even through the initial sync from my old file server (pretty intensive network and disk usage for about 5 days straight). I've only been running them for about 3 months, so time will tell. Like most mini PC manufacturers with funny names, though, I doubt I'll ever get any sort of BIOS/UEFI update.

[–] iggy@lemmy.world 8 points 9 months ago* (last edited 9 months ago) (2 children)

Internet:

  • 1G fiber

Router:

  • N100 with dual 2.5G NICs

Lab:

  • 3x N100 mini PCs as k8s control plane + ceph mon/mds/mgr
  • 4x Aoostar R7 "NAS" systems (5700U/32G RAM/20T rust/2T SATA SSD/4T NVMe) as ceph OSDs/k8s workers

Network:

  • Hodgepodge of switches I shouldn't trust nearly as much as I do
  • 3x 8-port 2.5G switches (1 with PoE for APs)
  • 1x 24-port 1G switch
  • 2x Omada APs

Software:

  • All the standard stuff for media archival purposes
  • Ceph for storage (using some manual tiering in cephfs; sketch below)
  • K8s for container orchestration (deployed via k0sctl)
  • A handful of cloud-hypervisor VMs
  • Most of the lab managed by some tooling I've written in Go
  • Alpine Linux for everything

All under 120 W power usage
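
The "manual tiering" above is just CephFS file layouts: add a second data pool to the filesystem, then point hot directories at it with an xattr (pool and mount names here are made up):

    # make the fast pool available to the filesystem
    ceph fs add_data_pool cephfs nvme-pool

    # new files created under this directory land on the fast pool
    setfattr -n ceph.dir.layout.pool -v nvme-pool /mnt/cephfs/hot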