SeaMauFive

joined 1 year ago
[–] SeaMauFive@lemm.ee 16 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I see other commenters saying you should use docker. And I agree. If you are set on doing this without learning docker, feel free to ignore this comment.

Otherwise, I would argue that learning docker and docker compose is a good investment in your linux learning. The TL;DR of docker is that it lets you run software like jellyfin, which is prone to exactly the kind of issues you're seeing when it runs on "bare metal" (here meaning simply: not running in docker), inside an environment created specifically for that software.

Docker makes it so other (usually really knowledgeable) people can set up a server that runs some software properly: all the files in the right place, any recurring jobs scheduled, all the permissions set up, etc. They then create a snapshot of that server and publish it as a docker image. That image is publicly distributed, and others (like you and me) can use it to start a container. The terms are not super intuitive. But an image can be thought of as a "snapshot" of a specific computer at a specific time, usually set up to run one specific task, and a container is an actually running instance built off one of those snapshots.
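If it helps to see that distinction on the command line, here's a bare-bones sketch (deliberately missing all the real arguments, which come up below):

    # Download a snapshot (an image) without running anything
    docker pull lscr.io/linuxserver/jellyfin:latest

    # List the images you have locally
    docker image ls

    # Start a running instance (a container) built from that image
    docker run -d --name jellyfin lscr.io/linuxserver/jellyfin:latest

    # List the containers that are currently running
    docker ps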

To tie this into your use case: the idea is that you take a popular and robust jellyfin image and use docker to create a container from it that runs jellyfin smooth as silk. Docker is popular amongst the self-hosting community because containers don't tend to run into issues like the one you're seeing. Docker and/or the image itself control what environment variables the container receives, what other software runs alongside it, and a whole host of other things that turn into headaches when you run the service "bare metal."

Some comments mention Docker Compose. This can be thought of as an extension to docker's functionality. To run a container in docker, you'll run a command like docker run -d <a whole goddamn host of arguments> lscr.io/linuxserver/jellyfin:latest, where lscr.io/linuxserver/jellyfin is the image you want to use and latest is the tag (essentially the image's version, except that some tags, like latest, change what they point to). This is totally fine and will work, but it makes updates a pain. The "whole goddamn host of arguments" carries important information about how your host server interacts with the container: which ports from your host are forwarded to the container, which files from within the container persist after it has been stopped, which environment variables the container runs with, and so on.

With "base" docker, you need to run a command like this to bring up a container, which gets hugely cumbersome if you're maintaining a lot of services in containers or trying to experiment with different container arguments. Docker Compose lets you specify the containers you want to bring up in a YAML file, so you don't need to type out or remember each service's command every time. I would strongly recommend using Docker Compose as well.
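For example, here's roughly what a compose file for jellyfin could look like (a minimal sketch based on the linuxserver image's documented options as I remember them; the paths, IDs, and timezone are placeholders you'd change for your setup):

    services:
      jellyfin:
        image: lscr.io/linuxserver/jellyfin:latest
        container_name: jellyfin
        environment:
          - PUID=1000                      # user ID the container runs as
          - PGID=1000                      # group ID the container runs as
          - TZ=Etc/UTC                     # your timezone
        volumes:
          - ./jellyfin/config:/config      # jellyfin's config, persisted on the host
          - /path/to/media:/data/media     # your media library (change the host path)
        ports:
          - "8096:8096"                    # jellyfin's web UI
        restart: unless-stopped

With that saved as docker-compose.yml, docker compose up -d brings it up, and docker compose pull followed by docker compose up -d updates it. No giant command to remember.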

So, I'd recommend taking the time to install Docker and Docker Compose, and then climbing the small extra learning curve to run jellyfin in a container with them. It will take more time now, but it will save you time in the long run. You'll know docker and docker compose are installed correctly if docker --version and docker compose version both print version information.
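Something like this (the exact version numbers will differ on your machine; these are just illustrative):

    docker --version
    # e.g. Docker version 27.3.1, build ...

    docker compose version
    # e.g. Docker Compose version v2.29.x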

Appendix:

  • Install Docker - I believe this installation comes with compose too, but it's been a while (a rough sketch of the install commands is below).
  • LSCR Jellyfin Image - I would recommend this image over the official jellyfin one; it's easier to set up. It's the one I use.
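For what it's worth, on Debian and friends the install from that first link boils down to something like this (double-check the official docs rather than trusting my memory):

    # Docker's convenience script; it also installs the compose plugin
    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh

    # Optional: run docker without sudo (log out and back in afterwards)
    sudo usermod -aG docker $USER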

My last note: we all started as noobs. I give this advice because it's the advice I would give my earlier self. I believe the time investment now will save you a lot of pain, both with the debugging you're doing today and with the same kind of debugging in the future. Please feel free to reach out if you have questions.

[–] SeaMauFive@lemm.ee 1 points 9 months ago (2 children)

I'm assuming you mean the jellyfin server and not the Android TV client specifically.

Running the jellyfin server on an SBC is possible; I'm running it on a pi 4 right now. Personally, I'd recommend installing docker and running the service in a container. There are official docs for how to do so: https://jellyfin.org/docs/general/installation/container/
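As a rough sketch of what that looks like with the official image (paths are placeholders; treat the linked docs as the source of truth, not my memory):

    docker run -d \
      --name jellyfin \
      -p 8096:8096 \
      -v /path/to/config:/config \
      -v /path/to/cache:/cache \
      -v /path/to/media:/media \
      --restart unless-stopped \
      jellyfin/jellyfin:latest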

That being said, the Jellyfin docs recommend against running on an SBC. I, too, running it on an SBC, recommend against it. It can handle the "happy path" well, but if the stored media is in a format the server has to transcode for the current client, it cannot keep up. The time it takes to transcode is several times longer than the actual run time of the media, so you will have a bad time. I'm currently looking to change/upgrade the hardware to work around this.

[–] SeaMauFive@lemm.ee 1 points 9 months ago (1 children)

I'm not totally sure I get what you're saying in point 2, but I appreciate your endorsement of Quick Sync.

[–] SeaMauFive@lemm.ee 2 points 9 months ago

Much appreciated. I was worried I'd need a graphics card. It's great to hear that some CPUs are overkill

[–] SeaMauFive@lemm.ee 1 points 9 months ago

Thanks! I feel much more confident hearing what setups people have that work.

[–] SeaMauFive@lemm.ee 1 points 9 months ago

Thank you. That's exactly the kind of answer I was looking for. It's good to know that this is just a struggle for the pi and not something CPUs as a whole have an issue with

8
submitted 9 months ago* (last edited 9 months ago) by SeaMauFive@lemm.ee to c/jellyfin@lemmy.ml
 

Hey all, I started hosting my own media server using jellyfin on a raspberry pi. This was mostly because I was new to the space and didn't want to invest heavily in hardware only to drop the project or find that I couldn't make it work for some reason.

I've now got it all set up and working, but the pi is absolutely not able to handle any sort of transcoding. So I'm now looking to upgrade the hardware.

Currently, I only need something that can handle transcoding two sub-4K streams concurrently, but I don't want to completely rule out the possibility of streaming 4K media. I should have the storage space for it; my current limiting factor is processing power.

Reading the jellyfin docs on recommended hardware, my understanding is that I should be OK if I get a recent Intel i7 CPU as long as it's got integrated graphics?

I am currently planning to build a small form factor PC and run it as a headless Linux (possibly Debian) server with jellyfin and everything else running in docker.

Mostly, I want to ask: does anyone with experience doing this have concerns or advice? In particular:

  1. Is just the CPU processing power sufficient for everything if the CPU is chosen correctly?

  2. If the CPU is not sufficient, is it difficult to set up a dedicated graphics card on a headless server?

[–] SeaMauFive@lemm.ee 2 points 1 year ago (1 children)

Googling off of this response, I think you're right that a NAS is the best solution long term. And in terms of a fully scalable system, I saw that I can create a distributed file system across multiple NAS units to scale even further. So thank you.

[–] SeaMauFive@lemm.ee 2 points 1 year ago (2 children)

Big thanks for this pointer. That seems like the move for me

22
submitted 1 year ago* (last edited 1 year ago) by SeaMauFive@lemm.ee to c/jellyfin@lemmy.ml
 

I'm just getting started on my first setup. I've got radarr, sonarr, prowlarr, jellyfin, etc running in docker and reading/writing their configs to a 4TB external drive.

I followed a guide to ensure that hardlinks would be used to save disk space.

But what happens when the current drive fills up? What is the process to scale and add more storage?

My current thought process is:

  1. Mount a new drive
  2. Recreate the data folder structure on the new drive
  3. Add the path to the new drive to the jellyfin container
  4. Update existing collections to look at the new location too
  5. Switch (not add) the volume for the *arrs data folder to the new drive (a rough compose sketch of steps 3 and 5 is below)
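In compose terms, I think steps 3 and 5 would look roughly like this (mount points and images are just illustrative, and I've trimmed it down to the volume lines that would change):

    services:
      jellyfin:
        image: lscr.io/linuxserver/jellyfin:latest
        volumes:
          - /mnt/disk1/data/media:/data/media    # existing library path
          - /mnt/disk2/data/media:/data/media2   # step 3: the new drive, added as a second path

      sonarr:
        image: lscr.io/linuxserver/sonarr:latest
        volumes:
          - /mnt/disk2/data:/data                # step 5: switched (not added) to the new drive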

Would that work? It would mean the *arrs no longer have access to the actual downloaded files. But does that matter?

Is there an easier, better way? Some way to abstract away the fact that there will eventually be multiple drives? So I could just add on a new drive and have the setup recognize there is more space for storage without messing with volumes or app configs?

[–] SeaMauFive@lemm.ee 11 points 1 year ago* (last edited 1 year ago) (1 children)

The only user in this thread who understood the assignment

[–] SeaMauFive@lemm.ee 20 points 1 year ago (4 children)

For educational purposes, what is a more expected/desired response from a neurotypical person?

[–] SeaMauFive@lemm.ee 2 points 1 year ago (2 children)

I skimmed the guide you sent, and the top says that the portions in brackets are placeholders that need to be replaced with real values. If you change {{ lemmy_docker_image }} to the name of the image you want to use (dessalines/lemmy:0.18.0, for example), do you get further?

[–] SeaMauFive@lemm.ee 44 points 1 year ago* (last edited 1 year ago) (1 children)

Huge respect for what you've built here, but it might be worth reaching out to the lemm.ee admin. I only know enough DevOps and cloud hosting to be dangerous, not helpful. But his instance seems stable and scalable. He might be able to offer some insight into the issues here
