schmurnan

joined 1 year ago
[–] schmurnan@lemmy.world 1 points 8 months ago (1 children)

Yeah, I know they’re all based on one of three engines, but they’re all subtly different in what they offer.

So whilst there are three main engines, there are definitely more than three choices.

Bottom of the pile for me is Chrome - I don’t use anything Google knowingly/willingly.

 

I know “best” is subjective, but as someone who’s entrenched in the Apple ecosystem I always used to use the stock apps: Reminders, Calendar, Mail, Podcasts and, of course, Safari.

But over time I’ve moved away from some of those apps, towards things that work better than the stock apps but also still sync with my other Apple devices (iPhone, iPad, Watch): Things and Todoist (because I can’t decide on one over the other), Fantastical, Mail (still), Overcast… but I tend to hover between browsers.

I mainly use Safari, and try to use profiles to separate personal and work stuff. But over the years I’ve also tried Firefox, I’ve tried Brave and more recently I’ve tried Arc. But I just can’t make my mind up.

So I was curious: what’s your browser of choice? And if you have any other views on the best stock app replacements, including alternatives to the ones I listed above for GTD, calendars, email and podcasts (don’t get me started on the “best” search engine!), I’d be interested to get your opinions.

 

This probably isn’t possible, but I traded in an old MacBook Air and chose to get the cash added to my Apple account. However, at the minute it just pays some of my subscriptions automatically, and I was wondering if I can withdraw it somehow?

I was looking to buy a few apps (Swish, Transmit) and figured I could get them off the Mac App Store. But Swish isn’t on there and the App Store version of Transmit doesn’t seem worth it compared to getting it directly from their website.

So I wondered if I could somehow transfer the cash out of my Apple account, but I’m guessing the answer is “no”?

[–] schmurnan@lemmy.world 2 points 1 year ago

Yeah I feel you, it's a tad frustrating. But a first world problem I guess. I just thought it was an obvious continuity feature to have as part of the Apple ecosystem.

[–] schmurnan@lemmy.world 1 points 1 year ago

We can hope, right?

[–] schmurnan@lemmy.world 3 points 1 year ago

Yeah I use that feature all the time; if I've been out running or for a walk listening on my iPhone, I'll often hold my phone next to my HomePod in the kitchen or bedroom and seamlessly carry on playback. And it works great. I just assumed it would be equally easy to implement it from Mac to iPhone.

[–] schmurnan@lemmy.world 2 points 1 year ago (2 children)

That’s a shame. Seems like it’d be reasonably straightforward to implement on iOS 17 given the new stuff they’re doing with AirDrop.

I know Spotify has had this for a while, and it’s probably one of the highest items on people’s wishlists. Maybe one day.

 

Pretty much as the title says, just wondered if it was possible for me to stop listening to a playlist on my Mac and seamlessly pick it up on my iPhone, rather than starting the playlist again?

[–] schmurnan@lemmy.world 3 points 1 year ago

I managed to get mine for launch day, although the Apple Store didn’t come online at 13:00 UK as advertised; there was probably a 4-5 minute delay.

I had configured my pre-order beforehand.

[–] schmurnan@lemmy.world 4 points 1 year ago (2 children)

Pro models actually got a price decrease in the UK, unless I’m missing something.

Now I just need to decide whether to match my phone to my watch and get the natural titanium, or whether to go for the blue.

Oh, and, Pro (like usual) or Pro Max (seems a bit big!).

[–] schmurnan@lemmy.world 2 points 1 year ago

I didn’t know they stored local copies — had a very, VERY quick skim through their privacy policy on their website and couldn’t see any reference to that (sure it’s there but I didn’t see it).

I’m not a Spark user btw, was just following the conversation. I use plain ol’ Apple Mail.

[–] schmurnan@lemmy.world 1 points 1 year ago (11 children)

I could be misinformed, but as I understand it this isn’t limited to Spark; I believe a lot of (maybe all?) third-party clients do the same thing. They act as an intermediary between you and the server so they can deliver push notifications.

However, as I understand it, Spark’s privacy policy outlines that they don’t read/scan the contents of your emails, and the use of app-specific passwords rather than your email password ensures they only have access to emails and nothing else.

Pretty sure others such as Canary, Airmail, Edison, etc. all do/did the same thing, but it was the lack of clarity in Spark’s privacy policy that made them the main target for scrutiny. I think they’ve since cleared that up.

I could be mistaken, though.

[–] schmurnan@lemmy.world 3 points 1 year ago

I see what you did there!

[–] schmurnan@lemmy.world 1 points 1 year ago (1 children)

I replied to another comment on here saying that I'd tried this once before, via a Docker container, but just wasn't getting any results back (kept getting timeouts from all the search engines).

I've just revisited it, and still get the timeouts. Reckon you're able to help me troubleshoot it?

Below are the logs from Portainer:

 File "/usr/local/searxng/searx/network/__init__.py", line 165, in get
    return request('get', url, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/searxng/searx/network/__init__.py", line 98, in request
    raise httpx.TimeoutException('Timeout', request=None) from e
httpx.TimeoutException: Timeout
2023-08-06 09:58:13,651 ERROR:searx.engines.soundcloud: Fail to initialize
Traceback (most recent call last):
  File "/usr/local/searxng/searx/network/__init__.py", line 96, in request
    return future.result(timeout)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/_base.py", line 458, in result
    raise TimeoutError()
TimeoutError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/usr/local/searxng/searx/search/processors/abstract.py", line 75, in initialize
    self.engine.init(get_engine_from_settings(self.engine_name))
  File "/usr/local/searxng/searx/engines/soundcloud.py", line 69, in init
    guest_client_id = get_client_id()
                      ^^^^^^^^^^^^^^^
  File "/usr/local/searxng/searx/engines/soundcloud.py", line 45, in get_client_id
    response = http_get("https://soundcloud.com")
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/searxng/searx/network/__init__.py", line 165, in get
    return request('get', url, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/searxng/searx/network/__init__.py", line 98, in request
    raise httpx.TimeoutException('Timeout', request=None) from e
httpx.TimeoutException: Timeout
2023-08-06 09:58:13,654 ERROR:searx.engines.soundcloud: Fail to initialize
Traceback (most recent call last):
  File "/usr/local/searxng/searx/network/__init__.py", line 96, in request
    return future.result(timeout)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/_base.py", line 458, in result
    raise TimeoutError()
TimeoutError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/usr/local/searxng/searx/search/processors/abstract.py", line 75, in initialize
    self.engine.init(get_engine_from_settings(self.engine_name))
  File "/usr/local/searxng/searx/engines/soundcloud.py", line 69, in init
    guest_client_id = get_client_id()
                      ^^^^^^^^^^^^^^^
  File "/usr/local/searxng/searx/engines/soundcloud.py", line 45, in get_client_id
    response = http_get("https://soundcloud.com")
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/searxng/searx/network/__init__.py", line 165, in get
    return request('get', url, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/searxng/searx/network/__init__.py", line 98, in request
    raise httpx.TimeoutException('Timeout', request=None) from e
httpx.TimeoutException: Timeout
2023-08-06 10:02:05,024 ERROR:searx.engines.wikidata: engine timeout
2023-08-06 10:02:05,024 ERROR:searx.engines.duckduckgo: engine timeout
2023-08-06 10:02:05,024 ERROR:searx.engines.google: engine timeout
2023-08-06 10:02:05,024 ERROR:searx.engines.qwant: engine timeout
2023-08-06 10:02:05,024 ERROR:searx.engines.startpage: engine timeout
2023-08-06 10:02:05,024 ERROR:searx.engines.wikibooks: engine timeout
2023-08-06 10:02:05,024 ERROR:searx.engines.wikiquote: engine timeout
2023-08-06 10:02:05,024 ERROR:searx.engines.wikisource: engine timeout
2023-08-06 10:02:05,025 ERROR:searx.engines.wikipecies: engine timeout
2023-08-06 10:02:05,025 ERROR:searx.engines.wikiversity: engine timeout
2023-08-06 10:02:05,025 ERROR:searx.engines.wikivoyage: engine timeout
2023-08-06 10:02:05,025 ERROR:searx.engines.brave: engine timeout
2023-08-06 10:02:05,481 WARNING:searx.engines.wikidata: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,481 ERROR:searx.engines.wikidata: HTTP requests timeout (search duration : 6.457878380082548 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,482 WARNING:searx.engines.wikisource: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,484 ERROR:searx.engines.wikisource: HTTP requests timeout (search duration : 6.460748491808772 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,485 WARNING:searx.engines.brave: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,485 ERROR:searx.engines.brave: HTTP requests timeout (search duration : 6.461546086706221 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,487 WARNING:searx.engines.google: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,487 ERROR:searx.engines.google: HTTP requests timeout (search duration : 6.463769535068423 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,489 WARNING:searx.engines.wikiversity: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,489 ERROR:searx.engines.wikiversity: HTTP requests timeout (search duration : 6.466003180015832 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,490 WARNING:searx.engines.wikivoyage: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,490 ERROR:searx.engines.wikivoyage: HTTP requests timeout (search duration : 6.466597221791744 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,490 WARNING:searx.engines.qwant: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,490 ERROR:searx.engines.qwant: HTTP requests timeout (search duration : 6.4669976509176195 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,491 WARNING:searx.engines.wikibooks: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,491 ERROR:searx.engines.wikibooks: HTTP requests timeout (search duration : 6.4674198678694665 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,491 WARNING:searx.engines.wikiquote: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,492 WARNING:searx.engines.wikipecies: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,492 ERROR:searx.engines.wikiquote: HTTP requests timeout (search duration : 6.468321242835373 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,492 ERROR:searx.engines.wikipecies: HTTP requests timeout (search duration : 6.468797960784286 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,496 WARNING:searx.engines.duckduckgo: ErrorContext('searx/engines/duckduckgo.py', 98, 'res = get(query_url, headers=headers)', 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,497 ERROR:searx.engines.duckduckgo: HTTP requests timeout (search duration : 6.47349306801334 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:02:05,511 WARNING:searx.engines.startpage: ErrorContext('searx/engines/startpage.py', 214, 'resp = get(get_sc_url, headers=headers)', 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:02:05,511 ERROR:searx.engines.startpage: HTTP requests timeout (search duration : 6.487425099126995 s, timeout: 6.0 s) : TimeoutException
2023-08-06 10:04:27,475 ERROR:searx.engines.duckduckgo: engine timeout
2023-08-06 10:04:27,770 WARNING:searx.engines.duckduckgo: ErrorContext('searx/search/processors/online.py', 118, "response = req(params['url'], **request_args)", 'httpx.TimeoutException', None, (None, None, None)) False
2023-08-06 10:04:27,771 ERROR:searx.engines.duckduckgo: HTTP requests timeout (search duration : 3.2968566291965544 s, timeout: 3.0 s) : TimeoutException
2023-08-06 10:04:50,094 ERROR:searx.engines.duckduckgo: engine timeout
2023-08-06 10:04:50,187 WARNING:searx.engines.duckduckgo: ErrorContext('searx/engines/duckduckgo.py', 98, 'res = get(query_url, headers=headers)', 'httpx.ConnectTimeout', None, (None, None, 'duckduckgo.com')) False
2023-08-06 10:04:50,187 ERROR:searx.engines.duckduckgo: HTTP requests timeout (search duration : 3.0933595369569957 s, timeout: 3.0 s) : ConnectTimeout

The above is a simple search for "best privacy focused search engines 2023", followed by the same search again but using the ddg! bang in front of it.

I can post my docker-compose if it helps?
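In case it helps anyone spot the problem: the 6.0 s and 3.0 s figures in those logs look like SearXNG’s per-engine and default outgoing timeouts. One thing I could try (a guess, not a known fix) is bumping them in settings.yml:

outgoing:
  request_timeout: 6.0       # default per-request timeout in seconds
  max_request_timeout: 15.0  # upper bound an individual engine may use

That said, the soundcloud engine is timing out during initialisation, before any search runs, which makes me suspect the container has no working outbound network/DNS at all, in which case no timeout bump would help.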

[–] schmurnan@lemmy.world 1 points 1 year ago

Would love to know this too; it formed part of my OP, in terms of which search engines can be “hacked” into Safari.

 

TL;DR - which privacy-focused search engine do people recommend, preferably one that can also easily be used as a default option in Safari?

I ditched Google in about 2016ish I would guess, and since then have used DDG as my default search engine.

As someone entrenched in the Apple ecosystem, I’ve always considered it a sound choice, as it’s one of the search engines built into Safari on both iOS and macOS.

After spending a bit more time recently playing around with and updating my Docker containers, I started hosting a Whoogle container, which seemed to work pretty well, but I don’t see many out there talking about it, so not sure how good it actually is. I then tried a SearXNG container, but either had it misconfigured or just wasn’t getting many search results back.
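For anyone unfamiliar with it, Whoogle is a single container; a minimal docker-compose sketch (the image name and port 5000 are the project’s defaults as far as I know, the rest is illustrative):

services:
  whoogle:
    image: benbusby/whoogle-search:latest
    container_name: whoogle
    restart: unless-stopped
    ports:
      - 5000:5000   # Whoogle's default listening port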

At the moment I’m trying out Startpage, but I know there are potential privacy concerns since they were part-bought in 2019 by a US ad-tech company.

I’m also playing around with different browsers at the moment, flicking between Safari, Firefox and Brave. At which point I stumbled across Brave Search, which seems pretty promising.

So, which search engines do you all recommend?

UPDATE: Probably should’ve done a poll! But the latest tally (if I’ve captured everything correctly) is:

  • DuckDuckGo - 10
  • Qwant / SearXNG / Kagi / Brave - 4
  • Startpage / Ecosia - 2
  • Google - 1

As to my other questions around browsers:

  • Majority seem to use Firefox
  • Some mentions of Brave
  • One mention of Arc
24 points, submitted 1 year ago* (last edited 1 year ago) by schmurnan@lemmy.world to c/selfhosted@lemmy.world
 

I'm trying to access my Pi-hole container from pihole.mydomain.com without any ports or /admin, and I swear the multitude of posts on the internet make this seem really straightforward. Perhaps it is and I'm being dumb, but I cannot get it to work.

Below is my current docker-compose for both Traefik and Pi-hole:

version: "3.7"

services:
  traefik:
    container_name: traefik
    image: traefik:latest
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
      - medianet
    ports:
      - 80:80
      - 443:443
    environment:
      - CF_API_EMAIL=${CF_API_EMAIL}
      - CF_DNS_API_TOKEN=${CF_DNS_API_TOKEN}
      - TZ=${TZ}
      - PUID=${PUID}
      - PGID=${PGID}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /path/to/traefik:/etc/traefik
      - /path/to/shared:/shared
      - /path/to/traefik/logs/traefik.log:/etc/traefik/logs/traefik.log
      - /path/to/traefik/logs/access.log:/etc/traefik/logs/access.log
    labels:
      - traefik.enable=true
      - traefik.http.routers.traefik.entrypoints=http
      - traefik.http.routers.traefik.rule=Host(`${TRAEFIK_DASHBOARD_HOST}`)
      - traefik.http.middlewares.traefik-auth.basicauth.users=${TRAEFIK_USER_PASS}
      - traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https
      - traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https
      - traefik.http.routers.traefik.middlewares=traefik-https-redirect
      - traefik.http.routers.traefik-secure.entrypoints=https
      - traefik.http.routers.traefik-secure.rule=Host(`${TRAEFIK_DASHBOARD_HOST}`)
      - traefik.http.routers.traefik-secure.middlewares=traefik-auth
      - traefik.http.routers.traefik-secure.tls=true
      - traefik.http.routers.traefik-secure.tls.certresolver=cloudflare
      - traefik.http.routers.traefik-secure.tls.domains[0].main=${TRAEFIK_BASE_DNS}
      - traefik.http.routers.traefik-secure.tls.domains[0].sans=*.${TRAEFIK_BASE_DNS}
      - traefik.http.routers.traefik-secure.service=api@internal

  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    restart: unless-stopped
    networks:
      - medianet
      - npm_network
    domainname: mydomain.com
    hostname: pihole
    ports:
      - 53:53/tcp
      - 53:53/udp
    environment:
      - TZ=${TZ}
      - WEBPASSWORD=${WEBPASSWORD}
      - FTLCONF_LOCAL_IPV4=192.168.1.116
      - WEBTHEME=default-auto
      - DNSMASQ_LISTENING=ALL
      - VIRTUAL_HOST=pihole.mydomain.com
    volumes:
      - /path/to/pihole:/etc/pihole
      - /path/to/pihole/dnsmasq.d:/etc/dnsmasq.d
    cap_add:
      - NET_ADMIN
    labels:
      - traefik.enable=true
      - traefik.http.routers.pihole.rule=Host(`pihole.mydomain.com`)
      - traefik.http.routers.pihole.entrypoints=https
      - traefik.http.routers.pihole.tls=true
      - traefik.http.routers.pihole.service=pihole
      - traefik.http.services.pihole.loadbalancer.server.port=80

networks:
  medianet:
    external: true   # assumed pre-created, as the snippet omits the top-level networks block
  npm_network:
    external: true   # assumed pre-created, as the snippet omits the top-level networks block

The Pi-hole one will load the login page but, upon entering the password and logging in, it simply brings me back to the login page, so it just keeps looping around.

The Traefik config is working with lots of other containers, all of which are using SSL certificates, so I'm pretty sure my Traefik config is okay.

I've tried a middleware to addprefix=/admin, which just ends up looping round with multiple /admin prefixes and also doesn't work.

Anybody got any ideas?

I'm aware I don't have to put Pi-hole behind SSL as I'm not exposing any of this stuff to the open internet (ports 80 and 443 are not forwarded on my router, and I'm using local DNS records in Pi-hole to access via subdomains).

Happy to post my traefik.yml and config.yml files if needed.

UPDATE: I seem to have figured it out! Below is my final Pi-hole docker-compose - the Traefik one remains unchanged from the original post:

  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    restart: unless-stopped
    networks:
      - medianet
      - npm_network
    domainname: mydomain.com
    hostname: pihole
    ports:
      - 53:53/tcp
      - 53:53/udp
    environment:
      - TZ=${TZ}
      - WEBPASSWORD=${WEBPASSWORD}
      - FTLCONF_LOCAL_IPV4=192.168.1.116
      - WEBTHEME=default-auto
      - DNSMASQ_LISTENING=ALL
      - VIRTUAL_HOST=pihole.mydomain.com
    volumes:
      - /path/to/pihole:/etc/pihole
      - /path/to/pihole/dnsmasq.d:/etc/dnsmasq.d
    cap_add:
      - NET_ADMIN
    labels:
      - traefik.enable=true
      - traefik.http.routers.pihole.entrypoints=http
      - traefik.http.routers.pihole.rule=Host(`pihole.mydomain.com`)
      - traefik.http.middlewares.pihole-https-redirect.redirectscheme.scheme=https
      - traefik.http.routers.pihole.middlewares=pihole-https-redirect
      - traefik.http.routers.pihole.service=pihole
      - traefik.http.routers.pihole-secure.entrypoints=https
      - traefik.http.routers.pihole-secure.rule=Host(`pihole.mydomain.com`)
      - traefik.http.routers.pihole-secure.tls=true
      - traefik.http.routers.pihole-secure.service=pihole
      - traefik.http.services.pihole.loadbalancer.server.port=80
 

I'm sure I'm massively overthinking this, but any help would be greatly appreciated.

I have a domain name that I bought through NameCheap and I've pointed it to Cloudflare (i.e. updated the name servers). I have a Synology NAS on which I run Docker and a few containers. Up until now I've done this using IP addresses and ports to access everything (I have a Homepage container running and just link to everything from there).

But I want to setup SSL and start running Vaultwarden, hence purchasing a domain name to make it all easier.

I tried creating an A record in Cloudflare to point to the internal IP of my NAS (and obviously, this couldn't be orange-clouded through CF because it's internal to my LAN). I'm very reluctant to point the A record to the external IP of my NAS (which, for added headache is dynamic, so I'd need to get some kind of DDNS) because I don't want to expose everything on my NAS to the Internet. In actual fact, I'm not precious about accessing any of this stuff over the internet - if I need remote access I have a Tailscale container running that I can connect to (more on that later in the post). The domain name was purely for ease of setting up SSL and Vaultwarden.

So I guess my questions are:

  • What is the best way to go about this: do I set up DDNS on the NAS, point my domain in Cloudflare at that external IP address, and then use Traefik to expose only the containers I want via subdomains?
  • If so, how do I know that all other ports aren't accessible? (I assume because I'm only publishing ports 80 and 443 via Traefik; see the sketch after this list.)
  • What do other people (i.e. outside my network) see if they go to my domain? How do I ensure they can't access my NAS or see some kind of page?
  • Is there a benefit to using Cloudflare?
  • How would Pi-hole and local DNS fit into this? I guess I could point my router at Pi-hole for DNS and create A records in Pi-hole for all my subdomains, but what do I need to set up initially in Cloudflare?
  • I also have a RPi with a (very basic) website on it: how do I set up an A record to have Cloudflare point a subdomain at the Pi's IP address?
  • Going back to the Tailscale thing: is it possible to point the domain at the IP address of the Tailscale container, so the domain is only accessible when the Tailscale VPN is switched on? Is this a good idea or a bad idea? Is there a better way to do it?
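To make the second question concrete, here's a minimal sketch of the pattern I mean (images and hostnames are illustrative, not my actual stack): only the reverse proxy publishes host ports, so 80 and 443 are the only ones the router could ever forward to anything.

services:
  traefik:
    image: traefik:latest
    ports:
      - 80:80     # the only published host ports...
      - 443:443   # ...so the only candidates for router port-forwarding
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - medianet
  vaultwarden:
    image: vaultwarden/server:latest
    networks:
      - medianet   # no "ports:" entry, so nothing is published on the host
    labels:
      - traefik.enable=true
      - traefik.http.routers.vaultwarden.rule=Host(`vault.mydomain.com`)

networks:
  medianet:
    external: true   # assumed pre-created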

I'm sure these are all noob-type questions, but for the past 6-7 years I've only used this internally via IP:port combinations, so I've never had to worry about domain names, external exposure, etc.

Many thanks in advance!

 

I’ve got my library just as I want it, and have made a couple of changes to the metadata in my movies’ .nfo files.

This is fine for a day or so, and then Jellyfin decides to overwrite my .nfo files.

I have them set to “lock” via tinyMediaManager but it doesn’t seem to make any difference. Every day it’ll reorder some movies in my library.

Pretty sure I’ve also disabled the image plug-ins in the library so it shouldn’t be pulling any metadata from anywhere.

Not a huge deal but incredibly frustrating — I want my library showing movies in a certain order and it’s driving me nuts when they’re rearranged 🤣

Any ideas?

TIA.

 

Just wondered what people are using for their password management.

I’m currently using 1Password on a family subscription for both password management and 2FA (and then Authy for the 1Password 2FA). But I’m seeing a lot more posters — particularly since joining Lemmy — championing BitWarden (either cloud or self hosted) and Raivo OTP as a cheaper, almost-as-functional alternative.

So is it worth the switch? Will I lose out on anything by doing so?

I’m currently running BitWarden with a free account to see if I can live with it. But I must admit, 1Password is a staple app for me and one that I would say is priceless to my workflow and setup.

Just interested in your thoughts and trying to stimulate conversation!

 

As a recent convert from Plex to Jellyfin, I’m going through my library correcting metadata, etc. and wondered what great ideas I could glean from the community.

I don’t use collections (should I?) because, at least on Plex, I could never completely agree with myself on what should be included in a collection and what shouldn’t.

At present I tend to just sort movies alphabetically (and then in order of release, i.e. 47 Meters Down comes before 47 Meters Down: Uncaged), and I use the sort title for that. But what do you suggest for things like the Star Wars or Indiana Jones movies, where the titles aren’t similar (e.g. “Star Wars”, “The Empire Strikes Back”, “Return of the Jedi”)? Would you have them scattered alphabetically around your library, or would you use the sort title to call them “Star Wars 1”, “Star Wars 2”, etc. so they’re all grouped together (albeit breaking the alphabetisation)?

Any other hints and tips would be appreciated!

 

Hey all,

I'm sure I'm massively overlooking something, but wondered if someone could help me out, please?

I'm trying to switch from Traefik to Nginx Proxy Manager on my Synology NAS, and I've opted to run NPM via a bridge network and a macvlan, so as to not have to mess around with ports 80 and 443 on the NAS (usually reserved for Synology services).

I've got the following:

Bridge network (npm_bridge):

  • Subnet = 192.168.10.0/24
  • IP range = 192.168.10.2/32
  • Gateway = 192.168.10.1.

Macvlan network (npm_network):

  • Subnet = 192.168.1.0/24 (same as my LAN)
  • IP range = 192.168.1.216/32
  • Gateway = 192.168.1.1 (same as my LAN).
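For reference, declared in compose form I'd expect those two networks to look roughly like this (the macvlan parent interface is an assumption, since Synology interface names vary):

networks:
  npm_bridge:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.10.0/24
          ip_range: 192.168.10.2/32
          gateway: 192.168.10.1
  npm_network:
    driver: macvlan
    driver_opts:
      parent: eth0   # assumed LAN interface; may be ovs_eth0 or similar on Synology
    ipam:
      config:
        - subnet: 192.168.1.0/24
          ip_range: 192.168.1.216/32
          gateway: 192.168.1.1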

NPM is connected to these two networks, and I have a MariaDB container connected to the host - everything works great with NPM and MariaDB - no issues.

However, I have a third network, medianet:

  • Subnet = 192.168.96.0/24
  • Gateway = 192.168.96.1.

Connected to that network I have a Gluetun container (via docker-compose).

I then have multiple other containers that run through the Gluetun container (several "arrs" and Portainer) using network_mode: service:gluetun.
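The shape of that part of the stack is roughly this (the images are the ones I believe those projects publish; everything else is illustrative):

services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    networks:
      - medianet
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    network_mode: service:gluetun   # shares gluetun's network stack; Sonarr's port 8989 lives on the gluetun container

networks:
  medianet:
    external: true

As I understand it, that's the crux: a reverse proxy can't target sonarr by name, because the container has no network of its own; it has to hit the gluetun container on each service's port (8989 for Sonarr, 9000 for Portainer, and so on).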

What I had with Traefik was a local hostname I created (let's say nas.local for posting's sake), and I could simply add labels to my docker-compose for each service to route paths and ports. I could then access all of these containers via nas.local/portainer, nas.local/sonarr, etc., and they would be reachable through the VPN container.
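From memory the labels looked something like this, so treat it as a sketch rather than my exact config:

labels:
  - traefik.enable=true
  - traefik.http.routers.portainer.rule=Host(`nas.local`) && PathPrefix(`/portainer`)
  - traefik.http.services.portainer.loadbalancer.server.port=9000   # Portainer's HTTP port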

However, I'm completely stuck on how to do this via NPM. I've tried all kinds of combinations via the Proxy Host configuration, but I don't know how to set it up.

  • Do I need an overarching nas.local entry as the top level? If so, what hostname/IP and port combination do I use?
  • Do I then set up Custom Locations behind it, one for each service, i.e. Portainer? If so, what is the hostname/IP and port for those?
  • Or do I create a new Proxy Host per service, i.e. portainer.nas.local?
  • Do I even need to have Portainer behind the VPN as well, or do I add it directly to the medianet network and then somehow link NPM to the medianet network too?

I'm really at a loss, and as it stands all my containers are offline at the moment because I can't figure out how to connect them (except Homebridge and MariaDB - they're both up as they're connected to the host network).

Any help would be very, very much appreciated.
