this post was submitted on 03 Nov 2023

Selfhosted


Hi everyone.

I've been researching how to implement SSL on the traffic between my clients and the containers that I host on my server.

Basically, my plan was to use upstream SSL in HAProxy to achieve this, but for that to work, each individual container on my server would need to terminate SSL itself. I don't think that's feasible, since not every container image ships with the necessary libraries or configuration for it. That puts a halt on my idea of encrypting traffic upstream from my reverse proxy to my containers.
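
For illustration, this is roughly the HAProxy setup I had in mind; it's only a sketch, and the cert path, backend name and address are placeholders, not a working config:

```
# haproxy.cfg (sketch): terminate client TLS, then re-encrypt to the container
frontend https_in
    mode http
    bind :443 ssl crt /etc/haproxy/certs/site.pem   # placeholder certificate
    default_backend app_containers

backend app_containers
    mode http
    # "ssl" makes HAProxy speak TLS to the container itself,
    # which only works if the container can terminate TLS
    server app1 10.0.0.11:8443 ssl verify none
```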

With that said, ChatGPT suggested I use Kubernetes with a service mesh like Istio. The idea was intriguing, so I started reading about it; but before I dive head-first into k3s (TBH it's overkill for my setup), is there any way to implement server-side encryption with Podman containers and a reverse proxy?

After writing all of this, I think I'm missing the point about a reverse-proxy being an SSL termination endpoint, but if my question makes sense to you, please let me know your thoughts!

Thanks!

7 comments
ithilelda@lemmy.world 2 points 1 year ago

If I'm understanding your question correctly, you are trying to use TLS with containers that may not have TLS libraries?

There are two ways to do that. One is to rebuild every container yourself, modifying its services to support TLS. The other is to use a pod: put your service container and a reverse proxy into the same pod, set up that reverse proxy as an edge proxy terminating TLS, and expose only the reverse proxy's port. That way, it will just look like a service with TLS enabled.
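
Roughly like this with podman (a sketch only; the images, port and Caddy config are placeholders, not a tested setup):

```
# create a pod; only the proxy's HTTPS port is published to the host
podman pod create --name myapp-pod -p 8443:8443

# the actual service, plain HTTP, reachable only inside the pod
podman run -d --pod myapp-pod --name myapp docker.io/library/nginx:alpine

# TLS-terminating sidecar in the same pod, with its config mounted from the host
podman run -d --pod myapp-pod --name myapp-tls \
    -v ./Caddyfile:/etc/caddy/Caddyfile:ro \
    docker.io/library/caddy:alpine
```

```
# Caddyfile (sketch): terminate TLS, proxy to the app over localhost inside the pod
:8443 {
    tls internal              # self-signed for testing; swap in real certs
    reverse_proxy localhost:80
}
```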

Since you are considering TLS for everything, I assume you don't care much about overhead. Adding a reverse proxy in front of every container costs something like 10-50 MB of additional memory, which won't matter on modern systems.

MigratingtoLemmy@lemmy.world 1 point 1 year ago

Thank you, this is an excellent idea. I probably won't run a pod for every container (technically I could, since Netavark is supported for rootless containers in Podman 4.0), but I will definitely have a few pods on my system, and I can use a reverse proxy for each pod. I just need to figure out how to automate it.
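
One option I'm looking at for the automation is describing each pod in Kubernetes-style YAML and creating it with `podman play kube`. This is only a sketch, with placeholder names and images:

```
# myapp-pod.yaml -- create with: podman play kube myapp-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
    - name: myapp             # the actual service, plain HTTP inside the pod
      image: docker.io/library/nginx:alpine
    - name: myapp-tls         # TLS-terminating sidecar; the only published port
      image: docker.io/library/caddy:alpine
      ports:
        - containerPort: 8443
          hostPort: 8443
      # (Caddyfile volume mount omitted for brevity)
```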

Thanks again

notfromhere@lemmy.one 2 points 1 year ago

Single-node k3s is possible and can do what you're asking, but it has some overhead (hence your acknowledgment that it's overkill). One thing I think it gets right, and that would help here, is the reverse proxy service. It's essentially a single entity holding the configuration for all of your endpoints, and it's managed programmatically, so additions or changes don't need to be done by hand. It sounds like you need a reverse proxy to terminate the TLS, then ingress objects defined to route to individual containers/pods. If you try to run multiple reverse proxies you will have a bad time managing all of that overhead. I strongly recommend going with a single reverse proxy unless you can automate the multi-proxy setup.
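
For a sense of what that looks like (hostnames and names below are made up, just a sketch): in k3s you write one Ingress object per app, and the bundled reverse proxy (Traefik by default) picks it up, terminates TLS and routes to the pod:

```
# ingress.yaml (sketch): route https://myapp.example.lan to the "myapp" Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
    - host: myapp.example.lan
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
  tls:
    - hosts:
        - myapp.example.lan
      secretName: myapp-tls   # TLS certificate stored as a Kubernetes secret
```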

Max_P@lemmy.max-p.me 2 points 1 year ago

The mesh proxy would work, but it's not easy to configure and offers relatively little benefit, especially if everything is running on the same box. The way that'd work is: NGINX talks to its mesh proxy, which encrypts the traffic to the mesh proxy for the target container, which then talks to the container unencrypted again. You go through three containers and the last hop still ends up unencrypted.

Unless you want TLS between nodes and containers, you can skip the intermediate step and have NGINX talk directly to the containers in plaintext. That's why it's said to do TLS termination: the TLS session ends at that reverse proxy.
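
As a sketch of what TLS termination at the proxy looks like in NGINX (server name, cert paths and upstream address are placeholders):

```
# nginx server block (sketch): TLS ends here, the hop to the container is plain HTTP
server {
    listen 443 ssl;
    server_name myapp.example.lan;

    ssl_certificate     /etc/nginx/certs/myapp.crt;
    ssl_certificate_key /etc/nginx/certs/myapp.key;

    location / {
        proxy_pass http://127.0.0.1:8080;   # plaintext to the container on the same box
        proxy_set_header Host $host;
    }
}
```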

MigratingtoLemmy@lemmy.world 1 point 1 year ago

Thanks. As another commenter mentioned, I'm planning to deploy a reverse proxy for every pod. I'm hoping this is OK

Decronym@lemmy.decronym.xyz 1 point 1 year ago

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

| Fewer Letters | More Letters |
|---|---|
| HTTP | Hypertext Transfer Protocol, the Web |
| HTTPS | HTTP over SSL |
| SSL | Secure Sockets Layer, for transparent encryption |
| TLS | Transport Layer Security, supersedes SSL |
| k8s | Kubernetes container management package |

5 acronyms in this thread; the most compressed thread commented on today has 9 acronyms.


themachine@lemmy.world 1 point 1 year ago

Why not just run a reverse proxy container on the server hosting the rest?