Aarkon

joined 2 years ago
[–] Aarkon@feddit.de 1 points 3 months ago

Which clicks? I haven't found them.

[–] Aarkon@feddit.de 7 points 3 months ago (5 children)

Had a coworker five years ago who wouldn’t let go of it. And he was really productive.

To my understanding, there are still some things it does better than IntelliJ, for instance being able to add all missing imports in one go instead of one by one.
I’ll admit though that this is a rather tiny advantage, and as I haven’t touched Java in quite a while, it may even be outdated.

[–] Aarkon@feddit.de 1 points 5 months ago

Thanks for pointing that out.

 

I was reading GitLab's documentation (see link) on how to write to a repository from within the CI pipeline and noticed something: the described Docker executor is able to authenticate, e.g. against the Git repository, with only a private SSH key, without being told anything about the user name it is associated with.
If I'm reading that correctly, it would mean that, technically, I could authenticate to an SSH server without supplying my user name, as long as I use a private key?

I know that when I don't supply a user explicitly, like ssh user@server or via .ssh/config, the current environment's user name is used automatically; that's not what I'm asking.

I'm aware that the public key contains a user name/email address string; is the same information encoded into the private key as well? If yes, I don't see the need to hand that info to an SSH call. If no, how does the SSH server know which public key it's supposed to use to challenge my ownership of the private key? It would have to iterate over all saved keys, which sounds rather inefficient to me and potentially unsafe (timing attacks etc.).
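To make the question a bit more concrete, this is roughly what the GitLab docs boil down to (host name and key file name are made up by me):

```
# generate a key pair; the "-C" comment is an arbitrary label, not an identity
ssh-keygen -t ed25519 -C "gitlab-ci" -f ~/.ssh/ci_demo

# the CI job only ever sees the private key, yet this works -
# so does the server-side account come purely from the "git@" part?
ssh -i ~/.ssh/ci_demo -o IdentitiesOnly=yes git@gitlab.example.com
```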

I hope I'm being somewhat clear; for some reason I find it really hard to phrase this question.

[–] Aarkon@feddit.de 2 points 5 months ago (1 children)

Automounts as drive V:\

 
[–] Aarkon@feddit.de 1 points 7 months ago

Cursed reply. Love it. 👌

[–] Aarkon@feddit.de 12 points 10 months ago (1 children)

Thanks a lot! But seriously, the article's style only differs from BILD by nuances at this point, doesn't it? I don't really want to disagree much with the content, but you rarely read about "heroes" and the like in reputable media. I'll refrain from further remarks.

Good thing I didn't have to click on it for that!

[–] Aarkon@feddit.de 5 points 10 months ago

A cock with a cock?

[–] Aarkon@feddit.de 1 points 11 months ago

Not that I know of, but I can't recall exactly anymore how I set up ZFS a year and a half back. I'll investigate!

[–] Aarkon@feddit.de 1 points 11 months ago

I use the ZFS mechanism exclusively today, so I'll have a look at the vdev_id.conf file as soon as I find the time, thank you!

Also, sorry for not responding earlier, I've had some busy days.

 

I've got a recurring issue with all of the home servers I've ever had, and because it happened again just today, the pain is now big enough to ask publicly about it.
As of now, I'm running some Intel NUC ripoff with a JBOD attached via USB 3, spinning a ZFS sort-of-RAID. It's nothing that special, tbh. In the past I had several other configurations with external drives, wired via fstab to Raspberry Pis and the like. All of those shared a similar issue: I can't recall exactly when, but I figure mostly after updates to the kernel or Docker, the computer(s) got stuck at boot. I had to unplug the external drives just to get the respective machine up, after which varying issues occurred, with drives not being recognized anymore and such.

With my current setup, I run several Docker containers which have their volumes on subdirectories/datasets under the /tank mountpoint, and when the machine boots without the drives, some of the containers create new directories at that destination, which then lives on my main drive /dev/sda.
Not only is it painful to go through the manual process with the drives, I also only have access to the machine when I'm home, which I'm not all the time. It's also kind of time consuming, as I'm backing up data that I fear might become inconsistent along the way. Every time I see a big kernel update, I fear that the computer will get stuck in such a situation once again, so I'm reluctant to do a proper reboot.

I know that external drives are not best practice when it comes to handling "critical" data, but I don't want to run another machine just in order to provide access to the disks via network. Any ideas where these issues stem from and how to avoid them in the future?
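One thing I've been considering, as a sketch assuming systemd and that the pool is mounted at /tank via zfs-mount.service, is keeping Docker from starting at all while the pool isn't mounted, so the containers can't litter the root disk:

```
# /etc/systemd/system/docker.service.d/wait-for-tank.conf (sketch, paths assumed)
[Unit]
Requires=zfs-mount.service
After=zfs-mount.service
# refuse to start if /tank is just an empty directory on the root disk
ConditionPathIsMountPoint=/tank
```

After a systemctl daemon-reload, Docker would simply stay down on a boot without the drives instead of writing into a /tank that sits on /dev/sda.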

[–] Aarkon@feddit.de 1 points 11 months ago

I honestly read “endofuckers”, and even though I know what a monad is, I think this misreading serves the memes community just as well.

[–] Aarkon@feddit.de 8 points 11 months ago* (last edited 11 months ago)

Then how is it that we oftentimes find the skeletons of our ancestors deep in the soil?

(Don’t want to sound sour though)

[–] Aarkon@feddit.de 1 points 11 months ago (4 children)
 

Hi!

Recently, I came across both EmuDeck and retroachievements.org. Playing a little Silent Hill and checking my achievements for the game on the website, I saw they only counted as "softcore", and reading up on the topic, I learned about something called "Hardcore Mode". In this mode, an emulator won't give you additional features like save states etc., basically emulating the original experience as closely as possible.
I'm by no means a completionist, but I wonder what this would feel like. I haven't found any related setting in EmuDeck so far though, and my googling skills fail me here. Do any of you folks have a hint where to enable that?
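In case it helps to narrow things down: as far as I understand, most EmuDeck cores run through RetroArch, so I'd expect the switch to live somewhere in retroarch.cfg, roughly like this (path and key names are my assumption, I haven't verified them on the Deck):

```
# retroarch.cfg (location and key names are an assumption on my part)
cheevos_enable = "true"
cheevos_username = "<retroachievements.org user>"
cheevos_hardcore_mode_enable = "true"
```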

 
1
submitted 2 years ago* (last edited 2 years ago) by Aarkon@feddit.de to c/linux@lemmy.ml
 

I wrote this around two months ago. Maybe it solves a problem one or two of you might have. :)

Also, I used my method again just the other day and was happy that I had written it down, because I'd had a hard time remembering those commands. :D

 

Today the disks for my new ZFS NAS arrived, rejoice! 😍

Now I ask myself: if some day one of the drives fails, how am I supposed to know which of the physical ones it is? My preliminary plan is to plug them into the disk container one by one, writing down the newly appearing blkids and labeling the corresponding drive. This is somewhat time consuming, so do you folks have a better idea?
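The only alternative I can think of would be something along these lines, assuming I create the pool from the /dev/disk/by-id names (pool name and device node below are just examples):

```
# stable device names that embed vendor, model and serial number
ls -l /dev/disk/by-id/

# a pool built from those names shows the serial of a failed member directly
zpool status tank

# cross-check the serial against the sticker on the physical drive
sudo smartctl -i /dev/sdc | grep -i serial
```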

Cheers!

 

Among other things, I'm running a small Nextcloud instance on my home server, and over time, data somewhat piles up (especially photos). My main storage is sufficiently sized & redundant for now, but I wonder how I am supposed to do a serious backup: the moment will come when the backup's size exceeds any reasonably priced single drive. What then?

Of course I can just buy another disk and distribute the chunks, but that's manual work - or is it not? At least rsync has no built-in option for that.
Using a virtual, larger file system spanning multiple drives looks like the easiest option, but I doubt it's a good idea for a reliable backup - if one disk fails, all of the backup is gone. On the other hand, that's true for the distributed chunks as well.
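For that "virtual, larger file system" route, the one concrete thing I've looked at is mergerfs (mount points below are made up): it stores each file whole on exactly one member disk, so losing a drive would only lose the files that happen to live there rather than the entire backup.

```
# /etc/fstab - pool two backup disks under one mount point (paths are examples)
/mnt/backup1:/mnt/backup2  /mnt/backup  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0
```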

What do you people do? Right now I don't have to bother, as my data easily fits on a single device, but I wonder what the best solution is in theory & practice.
Regards!

 

Lemmy is obviously inspired by Reddit. My map of the fediverse is obviously not complete, so I ask myself: is there some free (as in free software) equivalent of a Q&A platform like stackoverflow?
