scsi

joined 7 months ago
[–] scsi@lemm.ee 1 points 2 hours ago* (last edited 2 hours ago) (1 children)

If you have access to some sort of basic Linux system (cloud server, local server, whatever works for you), you can run a program on a timer such as https://isync.sourceforge.io/ (Debian package: isync), which reads email from one source and clones it to another. Be careful to run it in a security context that meets your needs (I use an encrypted local laptop at home that runs headless 24/7, think Raspberry Pi mode).

This covers IMAP (1) -> IMAP (2) as well as IMAP -> Local and so on; as with any app you'll need to spend a bit of time learning how to build the optimal config file for your needs, but once you get it going it's truly a "set and forget" little widget. Use an on-fail service like https://healthchecks.io/ in your wrapper script to get notified on error, then go about your life.

Edit: @mike_wooskey@lemmy.thewooskeys.com I glanced at your comments and see you have a lot of self-hosting chops, so here's a markdown doc of mine that uses isync to clone one IMAP provider (domain1.com) to a subfolder at another IMAP provider (domain2.com) for archiving. (Using a subfolder allows you to go both ways and use both domains normally.)

----

Sync email via IMAP from host1/domain1 to a subfolder on host2/domain2 via a cron/timer. This can be reversed as well; just update Patterns to exclude the subfolders from being cross-replicated (looped).
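For the reverse channel, the Patterns line would look something like this (assuming the HASync subfolder name used in the channel config below), so the archive folder doesn't get replicated back:

```
Patterns * !HASync !HASync/*
```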

  • Install the isync package: apt-get update && apt-get install isync

Passwords for IMAP must be left on disk in plain text

  • Generate "app passwords" at the email providers, host1 can be READ only
  • Keep ${HOME}/.secure contents on encrypted volume unlocked manually
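One-time setup of those password files might look like this; the filenames match the PassCmd lines in the config below, and the password strings are obviously placeholders:

```shell
# Create the directory the PassCmd lines read from, locked to this user.
mkdir -p "${HOME}/.secure"
chmod 700 "${HOME}/.secure"
# printf avoids a trailing newline; paste the real app passwords in here.
printf '%s' 'app-password-for-host1' > "${HOME}/.secure/psrc"
printf '%s' 'app-password-for-host2' > "${HOME}/.secure/pdst"
chmod 600 "${HOME}/.secure/psrc" "${HOME}/.secure/pdst"
```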

The mbsync program keeps its transient index files in ${HOME}/.mbsync/, one per IMAP folder; these are used to keep track of what it has already synced. Should something break, it may be necessary to delete one of these files to force a resync.

By design, mbsync will not delete a destination folder unless it is empty; this means that if you delete a folder and all of its emails on the source in one step, the sync will break with an error/warning. Instead, delete all emails in the folder first, sync those deletions, then delete the now-empty folder on the source and sync again. See: https://sourceforge.net/p/isync/mailman/isync-devel/thread/f278216b-f1db-32be-fef2-ccaeea912524%40ojkastl.de/#msg37237271
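Forcing a per-folder resync is just deleting that folder's state file. A throwaway directory and a made-up state file name stand in for the real ${HOME}/.mbsync/ contents in this sketch:

```shell
#!/bin/sh
# mbsync keeps one state file per IMAP folder; removing one makes the next
# run rebuild that folder's index from scratch (a full resync of the folder).
statedir=$(mktemp -d)                  # stands in for ${HOME}/.mbsync
touch "${statedir}/hasync_INBOX"       # hypothetical per-folder state file
ls "${statedir}"                       # inspect before deleting
rm "${statedir}/hasync_INBOX"          # next mbsync run resyncs this folder
remaining=$(ls "${statedir}" | wc -l)
echo "remaining state files: ${remaining}"
rmdir "${statedir}"
```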

Simple crontab to run the script:

0 */6 * * * /home/USER/bin/hasync.sh
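If the box runs systemd, a user timer is an alternative to the crontab entry; the unit names here are made up to match the script, and you'd enable it with `systemctl --user enable --now hasync.timer`:

```ini
# ~/.config/systemd/user/hasync.service
[Unit]
Description=mbsync IMAP archive sync

[Service]
Type=oneshot
ExecStart=%h/bin/hasync.sh

# ~/.config/systemd/user/hasync.timer
[Unit]
Description=Run hasync every 6 hours

[Timer]
OnCalendar=00/6:00
Persistent=true

[Install]
WantedBy=timers.target
```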

Main config for the mbsync program:

${HOME}/.mbsyncrc

# Source
IMAPAccount imap-src-account
Host imap.host1.com
Port 993
User user1
PassCmd "cat /home/USER/.secure/psrc"
SSLType IMAPS
SystemCertificates yes
PipelineDepth 1
#CertificateFile /etc/ssl/certs/ca-certificates.crt

# Dest
IMAPAccount imap-dest-account
Host imap.host2.com
Port 993
User user2
PassCmd "cat /home/USER/.secure/pdst"
SSLType IMAPS
SystemCertificates yes
PipelineDepth 1
#CertificateFile /etc/ssl/certs/ca-certificates.crt

# Source map
IMAPStore imap-src
Account imap-src-account

# Dest map
IMAPStore imap-dest
Account imap-dest-account

# Transfer options
Channel hasync
Far :imap-src:
Near :imap-dest:HASync/
Sync Pull
Create Near
Remove Near
Expunge Near
Patterns *
CopyArrivalDate yes

This script leverages healthchecks.io to alert on failure; replace XXXXX with the UUID of your monitor URL.

${HOME}/bin/hasync.sh

#!/bin/bash

# vars
LOGDIR="${HOME}/log"
TIMESTAMP=$(date +%Y-%m-%d_%H%M)
LOGFILE="${LOGDIR}/mbsync_${TIMESTAMP}.log"
HCPING="https://hc-ping.com/XXXXXXXXXXXXXXXXXXXXXXXXX"

# preflight
if [[ ! -d "${LOGDIR}" ]]; then
  mkdir -p "${LOGDIR}"
fi

# sync
echo -e "\nBEGIN $(date +%Y-%m-%d_%H%M)\n" >> "${LOGFILE}"
/usr/bin/mbsync -c "${HOME}/.mbsyncrc" -V hasync 1>>"${LOGFILE}" 2>&1
EC=$?
echo -e "\nEC: ${EC}" >> "${LOGFILE}"
echo -e "\nEND $(date +%Y-%m-%d_%H%M)\n" >> "${LOGFILE}"

# report
if [[ $EC -eq 0 ]]; then
  curl -fsS -m 10 --retry 5 -o /dev/null "${HCPING}"
  find "${LOGDIR}" -type f -mtime +30 -delete
fi

exit $EC
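healthchecks.io alerts when the success ping goes missing, but it also accepts an explicit failure signal by appending /fail to the monitor URL. A hypothetical extension of the report step (EC and HCPING as in the script above; the echo stands in for the real curl call so the logic is visible):

```shell
#!/bin/bash
# Sketch: pick the ping endpoint based on mbsync's exit code, so a failure
# alerts immediately instead of waiting for the missed-ping grace period.
HCPING="https://hc-ping.com/XXXXXXXXXXXXXXXXXXXXXXXXX"
EC=1  # pretend mbsync exited nonzero for this demo
if [[ $EC -eq 0 ]]; then
  PING_URL="${HCPING}"
else
  PING_URL="${HCPING}/fail"
fi
echo "would ping: ${PING_URL}"
# real call: curl -fsS -m 10 --retry 5 -o /dev/null "${PING_URL}"
```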
[–] scsi@lemm.ee 37 points 5 days ago (1 children)

To expand on this, there are two settings you can put in user.js / prefs.js (desktop) or via about:config (mobile), documented on the Mozilla Wiki:

user_pref("media.autoplay.default", 5);
user_pref("media.autoplay.blocking_policy", 2);

Two bonus settings if you want to get rid of the "do you want to enable DRM?" pop-in bar when hitting one of those sites:

user_pref("media.gmp-widevinecdm.enabled", false);
user_pref("media.gmp-widevinecdm.visible", false);

hth

[–] scsi@lemm.ee 5 points 1 week ago (1 children)

> I would love to find a Bill Watterson one, if anyone knows.

Here you go, I'll throw in some bonus ones as they're all linked together in the Bloom County sidebar:

[–] scsi@lemm.ee 2 points 2 weeks ago (4 children)

The Arch wiki may have some ideas for you - tl;dr: GDM uses a global dconf database over in /etc/, and this might be the root of your problem (those configs might not get cleaned up by a --purge?). I'm a LightDM user, so this is the best I can do to help: https://wiki.archlinux.org/title/GDM#dconf_configuration

[–] scsi@lemm.ee 1 points 2 weeks ago

Quick update for anyone still reading this thread:

> @fdroidorg@floss.social As with any other app, we flagged Fennec and Mull with KnownVuln until the app is updated. Contributors fixed the issues that delayed versions 130 and later. Stand by for the build.

https://floss.social/@fdroidorg/113384089915217604

[–] scsi@lemm.ee 43 points 2 weeks ago (2 children)

A bit of backstory on how we got here: in June 2024 Mozilla chose to (a) integrate the source tree of Firefox Mobile into their huge monorepo ("gecko-dev"), and (b) move the source off of GitHub onto their own git servers ("Mozilla Central"). You can read about it in the now-archived old repo:

This was then compounded by the core Android build kit ("NDK") removing parts of the toolchain that were used to build Firefox releases (ergo, forcing another change to the build process):

Together these have caused a bit of a kerfuffle in getting new releases compiled and released via the official F-Droid methodology. See the other comment about the Mull version in their private repo; they're having to use a Mozilla pre-built clang (a compiler toolchain) for the time being to make it work.

[–] scsi@lemm.ee 5 points 2 weeks ago (1 children)

The link(s) to add their F-Droid repo if not running DivestOS: https://divestos.org/pages/our_apps.html#repos

[–] scsi@lemm.ee 12 points 3 weeks ago

Along this line of thinking, I use Lemmy and Mastodon as complementary rather than competing, but not in the way people want/use X/Bluesky. Lemmy (reddit) is great for the use case you outline; Mastodon (and Pixelfed) supply a visual experience if you make it work that way and don't expect/want an X-like experience (so think more Instagram). Lemmy lacks multireddits, which could cover some of this Mastodon use case; on reddit I have a multireddit named "Gallery" which combines a dozen picture-only subreddits.

One can follow hashtags like #photography or #catsofmastodon, discover like-minded profiles who post only pictures with minimal talk/chatter (a lot of genuinely skilled photographers are present), and follow those profiles. It provides an experience that rounds out Lemmy, but I do admit I would love a gallery-like view in the apps to streamline hashtag browsing (Pixelfed does this specifically, but people are spread all over the planet; Mastodon proper pulls in federated data more easily, IMHO).

[–] scsi@lemm.ee 4 points 1 month ago

To try and boil down the complex answers: if you are basically familiar with PGP or SSH keys, the concept of a passkey is sort of in the same ballpark. But instead of reusing the same SSH keypair, passkeys create a new keypair for every use (website) and possibly every device (e.g. 2 phones using 1 website may create 2 keypairs, one on each device), and additionally embed the username (making it "one-click login"):

  • creating a passkey is the client and server establishing a ring of trust ("challenge") and then generating a public and private pair of keys (think ssh-keygen ...)
  • embedded in the keypair is the user ID/username and credential ID, which sort of maps to the three fields of a SSH keypair (encryption type, key, userid optional in SSH keys) but not really, think concept not details
  • when using a passkey, the server sends the client a "challenge", the client prompts the user to unlock the private key (device PIN, biometric, Bitwarden master password, etc.)
  • the "challenge" (think crypto math puzzle) is signed with the private key and returned to the server along with the username and credential ID
  • the server, which has stored the public key, looks it up using the username + credential ID, then verifies the signature somewhat like SSH or PGP does
  • like SSH or PGP, this means the private key never leaves the device/etc. being used by the client and is used to only sign the crypto math puzzle challenge

The client's private key is hopefully stored in a secure part of the phone/laptop (an "enclave" or TPM hardware module), which locks it to that device; using a portable password manager such as Bitwarden instead is attractive since the private keys are stored in BW's data (so they can be synced across devices, backed up, etc.)

You'll see the phrase "replay" a lot: sending the same password to a website is vulnerable to it being intercepted and reused by an attacker. In the keypair model this doesn't happen, because each "challenge" is a unique crypto math puzzle generated fresh for every login, like TOTP/2FA but "better" since there's no shared seed (TOTP/2FA relies on a constant seed saved by both sides, which is less robust cryptographically).
