Self-Hosted Alternatives to Popular Services


A place to share, discuss, discover, assist with, gain assistance for, and critique self-hosted alternatives to our favorite web apps, web...

26
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/veteranbv on 2024-11-14 21:03:29+00:00.


Hey y'all. I made some retro terminal themes for Dashy and Uptime Kuma that I wanted to share.

The Dashy theme has that classic green terminal feel. For Uptime Kuma, I made a similar theme with a compact layout and some glowing status indicators. Everything's mobile-friendly and easy to set up.

Check out some screenshots:

  • Dashy Dashboard

I didn't build a full theme here; I just added custom CSS to my config.

  • Uptime Kuma Status Page

Custom CSS added to the Uptime Kuma Status Page

  • Uptime Kuma Dashboard

The greyed-out entries are services that aren't always on, or that I was testing.

Everything's customizable through CSS variables if you want to tweak colors or layouts. I've included setup instructions in the repo.

Grab it here if you want to try it out: Terminal Zero.

27
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/-eschguy- on 2024-11-13 22:36:29+00:00.

28
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/dakazze on 2024-11-14 20:09:17+00:00.


Since I am particularly careful about where I give out my phone number, I don't receive any spam calls, even though I've had the same number for about 10 years now. So you can imagine my surprise when I received a call from Intel today!

The person on the phone had a thick Indian accent even though he introduced himself as "generic English name", and told me that 4 international IP addresses were accessing my PC...

Since I had nothing important to do and was curious, I thought I'd play along and see where this would take me. So I excused myself because "I had to answer the door". I quickly made a new snapshot of my tiny11 VM (debloated Windows 11), reset Firefox, deleted my network shares, and disconnected my Microsoft account.

Back on the phone, I played along as I was instructed, in minute detail, to enter "eventvwr" under Win+R: "You see the Control key on the bottom left? What key is right next to it? Yes, the Windows key! Now press the Windows key and R as in Richard at the same time." The scammer made me navigate to the Windows event log and asked me how many errors I saw. "17,500!!" I answered, in shock at this huge number!

Now that I realized how serious the situation was, I was ready to be forwarded to a support technician... (I am not quite sure if I was actually forwarded to another person or if the scammer just faked a different accent). This new support tech made me visit www.support.me and explained that the security warning displayed when visiting this website was caused by Firefox. I learned that Firefox is not updated as frequently as Google Chrome, which is why these errors are common. After skipping the security warning, I entered a PIN to download some kind of remote desktop client via that site.

Then something weird happened. I was told to right-click the desktop and navigate to the display options (not sure exactly, I am using German Windows). There he told me to click a button to change the theme, but he kept shaking the mouse so I wasn't able to click it. "Ahh, you see the problem?" he asked, and, somewhat confused, I agreed... This was executed so poorly I honestly was at a loss!

The next step to solve my PC's issues was to install some kind of software, but I am not entirely sure what it was. He transferred an installer file to my desktop called something along the lines of "Microsoft support tool". Even though he had full remote access, he made me do all the clicking ("accept", "ok", "allow"), maybe to hide the fact that he was able to control my mouse and keyboard all along. During the install process I had to set and confirm a password he dictated to me. I am still annoyed with myself for not keeping a copy of that installer... During the whole process I had two "disconnects from the internet" to make some coffee, since it was still pretty early for me...

After the software was installed he expected a new service to show up in my taskbar, which obviously was not the case. Since I still don't know what that program was, I honestly have no idea why it did not work, but this obviously worked out in my favor. He instructed me to look for the program in the start menu, and he obviously did not know what Classic Shell is, since he kept telling me that I was using Windows Vista, which might be the reason the support tool wasn't working... After we weren't able to find the newly installed software he was clearly at a loss. I guess his script doesn't have instructions for that case, because he had to call a colleague over to help him. This was when he started breaking character, talking to his colleague in an Indian language. After trying to reinstall the software 3 times he asked me if I was using VirtualBox, and since a whole hour had already passed I told him that I'd had fun and wished him a nice day.

I was very surprised by how chill he acted upon this revelation. He insisted that he knew all along that I was messing with him and claimed that he gets paid anyway. He wished me a nice day too, and this concluded my first interaction with a tech support scammer.

In the end this was a convenient way for me to practice my spoken English, since I hardly ever get a chance to speak it. What I am wondering is why they are calling people in German-speaking countries, since most older people who are likely to fall for their scams don't speak English well enough to get through the whole script.

Does anyone know what the software was that he was trying to install? I sadly already restored the snapshot, so I can't check.

29
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/sexyshingle on 2024-11-14 19:06:00+00:00.


Hello guys, I was really sad and shocked to hear about TTeck. Maybe I was living under a rock these past few weeks but I had no idea he was even sick. RIP and condolences to all who knew him.

His passing brought back to mind something I've been thinking about ever since I registered my first domain to host my own email (and have definitely procrastinated on): how do we put together a proper digital "estate plan" so our family members can sort through (or even take over, if they want) the technical and digital stuff we leave behind?

Estate planning in general is something no one likes to think about, but the deeper we dive into self-hosting, the more we need a plan for when the unthinkable happens. That way, any data we want to "live on" and pass to our relatives isn't lost, and whoever manages our final affairs can carry out instructions to preserve it.

For the longest time, I've thought about setting something like Hereditas up, so that my somewhat technical relatives can get access to my digital stuff and carry out my wishes should I ever kick the bucket... but I haven't.

So I was wondering: what recommendations, tools, or plans do others here have in place for this kind of thing?

PS: This goes without saying, but I'll say it anyway as someone who had to deal with the unexpected death of a close family member: it's never too early to do some estate planning (for you or your relatives). Look up the laws in your jurisdiction and have a plan (a will, healthcare proxy, etc.) for both your tangible assets and your digital assets.

30
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/lawrencesystems on 2024-11-14 11:29:06+00:00.

31
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/WaYyTempest on 2024-11-14 14:22:15+00:00.

32
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/80lm80 on 2024-11-14 08:46:17+00:00.

33
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/ShehbajDhillon on 2024-11-13 06:18:51+00:00.


Hey folks! I built Guard, a free, open-source tool designed for cloud security that’s fully self-hosted. It scans your AWS cloud environment for misconfigurations in services like IAM, EC2, S3, etc., and offers LLM-based remedies for any issues it finds.
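To give a sense of the kind of check a scanner like this performs, here is a minimal, illustrative boto3 sketch (not Guard's actual code; the flagged condition is just one common S3 misconfiguration) that lists buckets missing a public access block:

```python
# Illustrative sketch only (not Guard's actual code): flag S3 buckets that have
# no public-access-block configuration, one classic misconfiguration a scanner
# like this looks for.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_public_access_block(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"[WARN] {name}: no public access block configured")
        else:
            raise
```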

If you're handling cloud security and want something self-hosted, feel free to check it out and let me know if it’s helpful!

Demo video: Here’s a quick walkthrough

GitHub repo:

Website: guard.dev

Would love to hear any feedback, ideas, or feature requests!

34
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Accomplished-Tip-227 on 2024-11-13 19:07:10+00:00.

Original Title: K3s is awesome for your home server(s). I used to run Docker on a single node; last weekend I was bored and thought it was time to upgrade some stuff. K3s is just dope, plus you can use Longhorn in Rancher to make everything persistent without any issues. Awesome stuff!
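To make the Longhorn persistence point concrete, here is a minimal sketch (my own illustration, not from the post; the claim name, namespace, and size are made up) of requesting a Longhorn-backed volume through the official Kubernetes Python client:

```python
# Minimal sketch: request a 1 GiB volume from Longhorn's StorageClass so a pod
# can keep its data across reschedules. Names and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "demo-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "longhorn",  # Longhorn's default StorageClass name
        "resources": {"requests": {"storage": "1Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```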

35
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/completelyreal on 2024-11-13 16:09:42+00:00.

36
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/AmbienJoe on 2024-11-13 03:28:41+00:00.

37
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/ElGatoPanzon on 2024-11-13 15:26:57+00:00.


Scheduling and clustering have been a thing for a long time now. There are solutions like Docker Swarm, Nomad, and the massive Kubernetes for scheduling containers on multiple nodes. Two years ago I was still manually provisioning and setting up services outside of Docker and giving all my servers cute names. But I wanted to up my game and go with a more "cattle, not pets" solution. And oh boy, it sent me down a huge rabbit hole to get there, but I finally did.

So 2 years ago I set out to create a setup which I want to call the "I don't care where it is" setup. It has the following goals:

  1. Provision and extend a server cluster without manual intervention (for this, I used Ansible and wrote about 50+ ansible roles)
  2. Automatic HTTP and HTTPS routing, and SSL (for this, I used Consul and a custom script to generate nginx configs, and another script to generate certs using consul data)
  3. Schedule a docker container job to be run on one or more servers in the cluster (for this I went with Nomad, it's great!)
  4. Nodes need to be 100% ephemeral (essentially, every single node needs to be disposable and re-creatable with a single command and without worry)

Regarding point #2, I know that Traefik exists, and I used Traefik for this solution for a year. However, it had one major flaw: you cannot have multiple instances of Traefik doing ACME certs, for two reasons: Let's Encrypt rate limits, and the fact that Traefik's ACME storage cannot be shared. For a long time Traefik was a bottleneck in the sense that I couldn't scale it out if I wanted to. So I ultimately wrote my own solution with nginx and Consul, generating certs with certbot and feeding them to multiple nginx instances.
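Here is a rough sketch of what that Consul-to-nginx glue can look like (my own illustrative reconstruction, not the author's script; the Consul catalog endpoints are real, but the "public" tag convention, hostname scheme, and output path are assumptions):

```python
# Illustrative reconstruction, not the author's actual script: render one nginx
# upstream + server block per Consul service and reload nginx. Assumes services
# that should be routed carry a hypothetical "public" tag.
import subprocess
import requests

CONSUL = "http://127.0.0.1:8500"

services = requests.get(f"{CONSUL}/v1/catalog/services", timeout=5).json()

blocks = []
for name, tags in services.items():
    if "public" not in tags:  # assumed tag convention
        continue
    instances = requests.get(f"{CONSUL}/v1/catalog/service/{name}", timeout=5).json()
    upstream = "\n".join(
        f"    server {i['ServiceAddress'] or i['Address']}:{i['ServicePort']};"
        for i in instances
    )
    blocks.append(
        f"upstream {name} {{\n{upstream}\n}}\n"
        f"server {{\n"
        f"    listen 443 ssl;\n"
        f"    server_name {name}.example.com;  # assumed naming scheme\n"
        f"    ssl_certificate     /etc/letsencrypt/live/{name}.example.com/fullchain.pem;\n"
        f"    ssl_certificate_key /etc/letsencrypt/live/{name}.example.com/privkey.pem;\n"
        f"    location / {{ proxy_pass http://{name}; }}\n"
        f"}}\n"
    )

# Assumed output path; each nginx instance runs this and reloads itself.
with open("/etc/nginx/conf.d/consul-services.conf", "w") as fh:
    fh.write("\n".join(blocks))

subprocess.run(["nginx", "-s", "reload"], check=True)
```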

Where I was ultimately stuck, however, was #4. Non-persistent workloads are a non-issue: they don't persist data, so they can show up on any node. My first solution (and the one I used for a long time) was essentially to run all my deployments on NFS mounts, with the deployment data living on a bunch of nodes and a web of NFS mounts so that every worker node in the cluster had access to the data. And it worked great! Until I lost a node, deployments couldn't access that storage, and it brought down half my cluster.

I decided it was time to tackle #4 again. Enter the idea of the Distributed File System, or DFS for short. Technically NFS is a DFS, but what I specifically wanted was a replicating DFS where a deployment's data exists on multiple nodes and gets automatically replicated. That way, if a node stops working, the deployment's data exists somewhere else and it can be rescheduled and come back up without data loss.

MooseFS changed the game for me, here's how:

1. I no longer need RAID storage

It took me a while to come to this conclusion, and I'd like to explain it because I believe it could save a lot of money and lower hardware requirements. MooseFS uses "chunkservers": daemons running on a server that offer local storage to the cluster for storing chunks. These chunkservers can use any number of storage devices of any type and any size, and it does not need to be 2 or more. In fact, MooseFS isn't even meant to sit on top of a typical RAID; it expects JBOD (disks passed as-is to the system).

In my eyes, 1 node with a 2-disk ZFS mirror or mdraid offers redundancy against 1 failure. In the MooseFS world, 2 nodes each with 1 disk is the same setup. MooseFS handles replicating chunks to the 2 chunkservers running on the nodes, but it's even better, because you can lose an entire node and the other node still has all the chunks. Compared to RAID: if you lose the node, it doesn't matter whether you had 2 disks or 200 disks, they are all down!

2. I no longer care about disk failures or complete node failures

This is what I recently discovered after migrating my whole cluster to MooseFS. Deployment data exists on the cluster, every node has access to the cluster, and deployments show up on any worker (I don't even care where they are, they just work). I lost a 1 TB NVMe chunkserver yesterday. Nothing happened. MooseFS complained about a missing chunkserver, but quickly rebalanced the chunks to other nodes to maintain the minimum replication level I set (3). Nothing happened!

I still have a dodgy node in rotation. For some reason it randomly ejects NVMe drives (either 0 or 1) or locks up, and that has been driving me insane for the last few months, because until now, whenever it died, half my deployments died and it really put a dent in the cluster. Now, when it dies, nothing happens and I don't care. I can't stress this enough. The node dies, deployments are marked as lost and instantly scheduled on another node in the cluster. They just pick right back up where they left off, because all the data is available.

3. Expanding and shrinking the cluster is so easy

With RAID the requirements are pretty strict. You can put a 4 TB and a 2 TB disk in a RAID, and you only get 2 TB. Or you can make a RAIDZ2 with, say, 4 disks, and you cannot expand the number of disks in the pool, only their sizes, and they have to be the same size across the board.

Well, not with MooseFS. Whatever you have for storage can be used for chunk storage, down to the MB. Here's an example I went through while testing and migrating. I set up chunkservers with some microSD cards and USB sticks and started putting data on them: 3 chunkservers, one with a 128 GB USB stick, one with a 64 GB microSD, and one with 80 GB free. It started filling them up evenly with chunks until the 64 GB one filled, and then it started putting most of the chunks on the other 2. With replication level 2, that was fine. Then I wanted replication level 3. So I picked up 3 cheap 256 GB USB sticks, added one to each node, and marked the previous 3 chunkservers for removal. This triggered migration of the chunks to the 3 new USB sticks. Eventually I added more of my real nodes' storage to the cluster, concluded the USB sticks were too slow (high read/write latency), and marked all 3 for removal. It migrated all chunks to the rest of the storage I had added. I was adding it in TBs: a 1 TB SSD, then one of my 8 TB HDDs, then a 2 TB SSD. Adding and removing is not a problem!

4. With MooseFS I can automatically cache data on SSDs and automatically back up to HDDs

MooseFS offers something called storage classes. They let you define a few properties that apply per path in the cluster, by giving labels to each chunkserver and then specifying how to use them:

  • Creation label: which chunkservers are used while files are being written/created
  • Storage label: which chunkservers the chunks are stored on after they are fully written
  • Trash label: which chunkservers hold the trash when a file is deleted
  • Archive label: which chunkservers hold the chunks once the file's archive period has passed

To get a "hot data" setup where everything is written to an SSD and read from an SSD as long as it's accessed or modified within X time, the storage class is configured to create and keep data on SSDs as a preference, and set to archive to HDDs after a certain time such as 24 hours, 3 days or 7 days.

In addition to this, the storage and archive labels include an additional replication target. I have a couple of USB HDDs connected and set up as chunkservers, but they are not used for the deployments' data; they are specifically for backups, and have labels that are included in the storage class. This ensures that important data to which I apply the storage class gets its chunks replicated to these USB HDDs, but the cluster won't read from them because it's set to prefer the labels of the other active chunkservers. The end result: automatic, local, instant, rsync-style backups!

The problems and caveats of using a DFS

There are some differences and caveats to such a deployment. It's not free, resource-wise: it requires a lot more network activity, and I am lucky most of my worker nodes have 2x 2.5 GbE NICs. Storage access speed is network-bound, so you don't get NVMe speeds even if the cluster were made up entirely of NVMe storage; you get whatever the network can handle, minus overhead.

There is one single point of failure with the GPL version of MooseFS: the master server. Currently I have that running on my main gateway node, which also runs my nginx, accepts incoming network access, and handles routing to the right nodes. So if that node goes down, my entire cluster is inaccessible. Luckily they offer another type of daemon called a metalogger, which logs the metadata from the master and acts as an instant backup. Any metalogger server can easily be turned into a running master for disaster recovery.

Did I mention it's a hybrid cluster made up of both local and remote nodes? I have certain workloads running in the cloud and others running locally, all accessing each other over a zero-trust WireGuard VPN. All of my deployments bind to WireGuard's wg0 interface and cannot be accessed from public (or even local LAN) addresses, and everything travels over WireGuard, even the MooseFS cluster data.

It's been just over 2 years but I finally reached enlightenment with this setup!

38
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/2sXy on 2024-11-12 22:02:52+00:00.


I thought the subreddit might find this interesting. I came across this thread on X:

39
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/pandapajama on 2024-11-13 12:57:07+00:00.

40
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/reddit_lanre on 2024-11-13 09:03:16+00:00.

41
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Tanner234567 on 2024-11-13 01:28:38+00:00.


I host an internet radio station for my local town on my server using AzuraCast. To keep maintenance minimal but stay relevant and useful, I wrote a script that uses the weather.com API to create a weather report for my local area. Feel free to check it out and let me know what you think!
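Conceptually, such a script boils down to: fetch current conditions, format them as prose, and drop the text where the station can pick it up. A minimal sketch might look like this (the endpoint, API key, and JSON field names are hypothetical placeholders, not weather.com's actual API, and this is not the author's script):

```python
# Hedged sketch of the idea: pull current conditions from a weather API and
# turn them into a short spoken-word style report for the radio station.
# The URL and JSON fields below are placeholders, not weather.com's real API.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = "https://api.example-weather.test/v1/current"  # hypothetical endpoint

data = requests.get(
    URL, params={"location": "My Town", "apiKey": API_KEY}, timeout=10
).json()

report = (
    f"Good morning, this is your local weather update. "
    f"It's currently {data['temperature']} degrees with {data['conditions']}. "
    f"Expect a high of {data['high']} and winds around {data['windSpeed']} miles per hour."
)

# Write the text somewhere a TTS step (and then AzuraCast) can pick it up.
with open("weather_report.txt", "w") as fh:
    fh.write(report)
print(report)
```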

Radio Station (fair warning: I love Christmas, so it's Christmas music in the morning and evening right now):

Weather Report Script:

42
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/LifeReboot___ on 2024-11-12 18:43:20+00:00.


For people who care about privacy and self-host as much as possible for that reason, how do you handle offsite backup for important data such as your private files and photos?

From what I understand it's best to keep some offsite backup in case of floods/fire/etc., but I am curious how everyone does that. For example, do you back up your files periodically to zero-knowledge cloud providers like Proton/Mega/Sync/pCloud/etc.?

Or do you encrypt your files (which requires you to safely keep track of a lot of different passphrases/passwords) before backing them up to any remote storage?

(I'm asking this as I'm backing up something to B2 with rclone crypt, but damn, it is so slow; or maybe my CPU is just too old.)
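For the encrypt-before-upload approach mentioned above, a minimal client-side sketch with the Python cryptography library could look like this (just one possible approach, with an example filename; rclone crypt, age, or gocryptfs are equally valid):

```python
# Minimal sketch: encrypt a file locally before handing it to any remote storage.
# The key must be backed up separately (e.g. in a password manager); lose it and
# the backup is unreadable. Filename is just an example.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # store this somewhere safe, NOT next to the backup
fernet = Fernet(key)

with open("photos-2024.tar", "rb") as fh:
    ciphertext = fernet.encrypt(fh.read())

with open("photos-2024.tar.enc", "wb") as fh:
    fh.write(ciphertext)
# photos-2024.tar.enc can now be synced to B2 / Proton / any remote with rclone.
```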

43
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/cvicpp on 2024-11-12 13:56:55+00:00.


Hey all,

For those of you that do not know this project, I had made some previous posts here:

tududi is a task and project management web application that allows users to efficiently manage their tasks and projects, categorize them into different areas, and track due dates. It is designed to be intuitive and easy to use, providing a seamless experience for personal productivity.

This is a big update to the UI layout as well as to the main features. I moved the views from server-rendered ERB files to a full ReactJS frontend with Tailwind CSS support, making it fully responsive.

There are now also management pages for Tasks, Projects, Notes, Areas, and Tags, among others.

And it can still be set up with only one command.

If you're into productivity experimentation or you simply want to create some order in your life, feel free to try it.

Thanks!

Chris

44
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/ranjandatta on 2024-11-12 19:08:43+00:00.

45
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Daniel31X13 on 2024-11-12 17:29:14+00:00.

Original Title: Linkwarden - An open-source collaborative bookmark manager to collect, organize and preserve webpages | November 2024 Update - Browser synchronization, custom icons, custom preview image, and so much more! 🚀


Hello everybody, Daniel here!

We're excited to be back with some new updates that we believe the community will love!

As always before we start, we’d like to express our sincere thanks to all of our Cloud subscription users. Your support is crucial to our growth and allows us to continue improving. Thank you for being such an important part of our journey. 🚀

What's New?

🖼️ Custom Preview Image

Allows users to set a specific preview image for links, making them more visually distinctive and personalized.

🎨 Custom Icons for Links and Collections

Thanks to Phosphor Icons, users can now assign unique icons to both individual Links and Collections, with thousands of unique combinations.

ℹ️ New Link Details Drawer

We added a new drawer to display a full view of Link Details, Preserved Formats, and Additional information.

🛠️ Customizable View and Adjustable Columns

You can now customize what to view and adjust the number of columns.

🔄 Browser Synchronization

Special thanks to Marcel from Floccus, you can now sync your browser bookmarks with Linkwarden using Floccus.

↗️ Open all Links under a Collection

Allows users to open all links under a collection in a new tab.

🌐 Added many more Translations

Thanks to all the contributors, we now support the following languages to make Linkwarden accessible to a broader, global audience:

  • 🇹🇼 Chinese - Taiwan (zh-TW)
  • 🇳🇱 Dutch (nl)
  • 🇩🇪 German (de)
  • 🇯🇵 Japanese (ja)
  • 🇧🇷 Portuguese - Brazil (pt-BR)
  • 🇪🇸 Spanish (es)
  • 🇹🇷 Turkish (tr)
  • 🇺🇦 Ukrainian (uk)

👥 Reserve more Seats

Cloud subscribers can now add more seats and invite users who aren’t on Linkwarden from their billing page. Learn more about managing seats in our documentation.

🔗 Editable Link URLs

Users can now directly edit link addresses without needing to create a new entry.

🐳 Smaller Docker Image

The Docker image size has been reduced by around 50%, optimizing storage usage and making deployment faster.

✅ And more...

Check out the full changelog below.

Full Changelog:

If you like what we're doing, you can support the project by either starring ⭐️ the repo to make it more visible to others or by subscribing to the Cloud plan (which helps the project, a lot).

Feedback is always welcome, so feel free to share your thoughts!

Website:

GitHub:

Read the blog:

46
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/Purdue49OSU20 on 2024-11-12 14:20:23+00:00.


After a ridiculous back-and-forth with the Grocy developer (who called me "stupid AF" after I deleted a post on Reddit [a question with a one-line answer that probably wouldn't have helped anyone else] and continued to insult me), I decided to look back at Tandoor and Mealie instead.

So my question to you all is: What are your developer horror stories in the Self Hosted world?

Conversely, are there any apps that are so good that you are willing to overlook a difficult, vocal developer?

47
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/kaydyday on 2024-11-12 09:49:38+00:00.


I'm aware that these models can be intensive and I'm not sure if my hardware can handle it with a basic setup at home:

  • Laptop: Intel Core i3-10110U, 16GB RAM, Intel UHD Graphics 630, no fancy GPU for me
  • Storage: 256GB SSD, 1TB HDD
  • OS: Ubuntu 20.04 LTS
  • Docker experience: total beginner

I've been hearing about these uncensored AI models like AnonAI or Venice, so I'd love to host one for fun. I've looked into some possible solutions, like using smaller, lower-resource models to make the most of my CPU, or using GPU-sharing platforms. Would love to learn from your experience, thanks!
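On a CPU-only machine like that, the usual route is a small quantized model run through llama.cpp. A minimal sketch with the llama-cpp-python bindings might look like this (the model path is a placeholder; a 7B-class GGUF model at 4-bit quantization fits comfortably in 16 GB of RAM):

```python
# Rough sketch, assuming the llama-cpp-python bindings and a locally downloaded
# GGUF-quantized model (path is a placeholder). Runs entirely on CPU.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-7b-model.Q4_K_M.gguf",  # placeholder filename
    n_ctx=2048,    # context window
    n_threads=4,   # the i3-10110U has 2 cores / 4 threads
)

out = llm("Write a two-sentence summary of what self-hosting means.", max_tokens=128)
print(out["choices"][0]["text"])
```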

48
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/MattiTheGamer on 2024-11-12 07:34:40+00:00.


Not too long ago I made a post with a guide to the arr-stack, specifically on a Synology NAS running DSM 7.2, focused mostly on torrenting with a VPN. You can find it here:

Anyway, I have recently gotten into Usenet and have completely replaced all my torrenting with it, as it's easier, faster, and more reliable. I am therefore wondering if there would be any interest in a guide to setting up the arr-stack for Usenet instead of torrenting. Please comment with any feedback you may have.

TL;DR: Should I make a guide for the arr-stack with a focus on Usenet and how to use it with Radarr, Sonarr, etc.?

49
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/nik282000 on 2024-11-12 02:12:10+00:00.


Warning there are some tall-ass images in this post.

A few years ago I got mad enough at the temperature gradient in my town house that I designed and built a bunch of ESP8266 sensors to feed data into an RRD, so that I could have some pretty graphs to be angry about as well. (As of this week I have also started logging stats from my UPS and server.) Using a minimum of HTML and CSS, I threw those graphs, a map of the previous day's incoming network traffic, and some convenient links onto a homepage that I use on all of my devices. At a glance this tells me if the furnace/AC is working, if my server is having a fit for unknown reasons, and if the local power grid is playing it fast and loose with the voltage and frequency (which I suspect they do).

Clicking the temperature/humidity data leads to a long-term data page covering 2 years of data at varying resolution. The gap last fall was when the garage sensor failed and I was waiting for AliExpress.

There are also long-term trends for the server load and UPS, but they have only been logging for a few days, so there is not much to look at.

Clicking the map on the home page leads to a text file containing a summary of all incoming traffic to Apache and SSH. The SSH server is on a high port number and doesn't see much traffic, but occasionally a persistent bot finds it.

Everything but my landing page (this animation in p5.js with the text "Hey this isn't where I parked my car" overlaid) is behind basic auth or better, and I have push notifications set up for every SSH login (even my own). In 5 years I have never had a successful login from an attacker; this is not an invitation, have mercy.

All the data is gathered with Python scripts and stored in round-robin databases (RRDs) or, in the case of network data, digested down into a CSV. The climate sensors respond to requests on port 80 with the temperature and humidity separated by a comma, to allow for easy polling. The map is generated by looking up the IPs' information on Shodan, then plotting the location data if present.
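As a rough illustration of that polling loop (not the author's actual script; the sensor hostname and RRD layout are assumptions), using the rrdtool Python bindings:

```python
# Illustrative sketch of the polling described above: each ESP8266 answers an
# HTTP GET on port 80 with "temperature,humidity", which gets pushed into an RRD.
# Hostname and RRD filename are assumptions, not from the original post.
import requests
import rrdtool

resp = requests.get("http://livingroom-sensor.lan/", timeout=5)
temperature, humidity = (float(v) for v in resp.text.strip().split(","))

# "N" means "now"; the RRD is assumed to have two data sources (temp, humidity).
rrdtool.update("climate-livingroom.rrd", f"N:{temperature}:{humidity}")
```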

Absolutely none of this is the ideal solution; there are existing projects that cover literally every aspect plus a dozen extra features I could never hope to implement. I wrote as much as I could from scratch just to see if I could. It's more fun to drive a shitty car that you built than one you bought from the dealer.

50
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/selfhosted by /u/ArdaOneUi on 2024-11-11 19:10:53+00:00.


Don't lynch me, but currently I don't have the money to build another system. So, just to learn and try things out, I set up Jellyfin and a few other things on my PC as a temporary test. Honestly it's working great and I haven't experienced any problems, so I was thinking of just leaving it this way for the foreseeable future. My specs are: 7700 XT, 7600X, 32 GB DDR5 RAM. I haven't really noticed any performance loss, even while gaming and streaming 4K media from it (only me and 3 others have access), so are there any other things I should pay attention to? I assume a benefit of a dedicated server would be power efficiency, which my gaming PC obviously isn't built for; would that alone make it worth building a separate system? I also don't have any subscriptions I'm replacing besides OneDrive, which is just 20€ a year, so I can't really justify it that way. I already wasn't paying for Netflix or other cloud services.
