This sounds exactly like one of the use cases described by git annex
datahoarder
Who are we?
We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.
We are one. We are legion. And we're trying really hard not to forget.
-- 5-4-3-2-1-bang from this thread
Welcome! Without buying more enclosures and increasing the number of drives you can access at one time, you'll need to partition your files based on your own use case and maintain an index so that you can easily retrieve the right drive when you need the data. Perhaps you use a drive for each year. Perhaps images go to one and video to another. Perhaps you split on file name. The index can be as simple as labeling the drives and putting them on a shelf. As others have mentioned, there are also software solutions for indexing file metadata.
If you buy more enclosures, you can use MergerFS or another union filesystem to pool both disks and present a single view while keeping ext4 on each drive. This lets you easily remove a single drive and plug it into any basic Linux distro, but you get no striping or other protection: if one drive dies, you lose whatever data was stored on that disk. Because of that, I'd still think about how you partition your files even if you union them, so that you understand your failure scope.
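For reference, pooling two already-formatted ext4 drives with mergerfs is just a mount. A sketch, assuming the drives are mounted at /mnt/disk1 and /mnt/disk2 and you want the pool at /mnt/pool (all placeholder paths):

```shell
# one-off mount: present both branches as a single view
sudo mkdir -p /mnt/pool
sudo mergerfs /mnt/disk1:/mnt/disk2 /mnt/pool \
    -o defaults,allow_other,category.create=mfs

# or persist it via /etc/fstab:
# /mnt/disk1:/mnt/disk2  /mnt/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0
```

`category.create=mfs` sends new files to whichever branch has the most free space; each branch stays a plain ext4 disk you can mount on its own if the pool ever goes away.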
Can you not just modify the paths so one dir syncs to one drive and another dir to the other? Maybe there's a sorting system you don't want to break up, such as a Plex library, but I think you can point a system like that at a higher-level dir and it will sort things out. Sounds like you need to silo your data.
You will need a way of connecting both your current 2TB disk and a new one at the same time. A USB hub (if you don't have free USB ports) and a second enclosure, or a 2-bay disk dock (much cheaper than a NAS and no networking required) will do.
You can then combine their storage with mergerfs (available for most distros). Both disks will still work independently, and you can use indexing software like gwhere, cdcat or gcstar to scan each drive so you can tell where a particular file ends up.
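If you'd rather not install a cataloguing app, a plain `find` listing per drive does the same job: dump a file list before you shelve the disk, then grep the listings later. A minimal sketch, using scratch directories as stand-ins for the mounted drive and your catalog folder:

```shell
#!/bin/sh
# Build a plain-text catalog per drive, then grep the catalogs to find
# which disk holds a file. All paths here are scratch stand-ins.
set -e
drive=$(mktemp -d)        # pretend this is the mounted "Books #2" disk
catalogs=$(mktemp -d)     # catalogs live on your always-on machine
mkdir -p "$drive/books"
touch "$drive/books/some-novel.epub"

find "$drive" -type f > "$catalogs/books-2.txt"

# later, with the drive back on the shelf:
grep -il "some-novel" "$catalogs"/*.txt   # prints the catalog file(s) that match
```

The catalog name tells you which physical disk to pull down; the matched line inside it gives you the full path.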
You might also be able to buy yourself some more space by using jdupes or rdfind to hardlink duplicate files.
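What those tools do is find byte-identical files and replace the extras with hardlinks so the data is stored once. A toy sketch of that idea in a scratch directory (this is roughly what `jdupes -r -L` or `rdfind -makehardlinks true` automates, not their actual code):

```shell
#!/bin/sh
# If two files are byte-identical, make the second a hardlink to the
# first; the bytes then exist on disk only once. Scratch paths only.
set -e
dir=$(mktemp -d)
printf 'same bytes\n' > "$dir/copy1"
printf 'same bytes\n' > "$dir/copy2"

if cmp -s "$dir/copy1" "$dir/copy2"; then
    ln -f "$dir/copy1" "$dir/copy2"
fi

stat -c %h "$dir/copy1"   # hardlink count is now 2 (GNU stat)
```

Worth noting: after hardlinking, editing one "copy" changes the other too, so this is only safe for files you treat as read-only.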
storing these files after I am done with them
If you're done with them, why not move them onto a backup disk rather than keeping them live and backed up?
I've been doing this for a long time. I move files locally to a "To-Archive" directory and once in a while, move them to several disks based on content. Films, tv, apps, games, books - that sort of thing.
Once one disk is full, I use another old hdd in a disk caddy and label it "Books #2" and so on.
I use a Windows program called Cathy which indexes the files, making it easy to locate a file on whatever disk it's on. Looks like there's a Linux version available too.
This works okay for me, and gives a use for old spinny hard drives. It's not infallible, but for stuff that I could replace (ie, I downloaded it) then I consider it an acceptable risk. All media has a risk of becoming unreadable, but do be realistic about how much bother it would be to replace stuff.
For data that's unique (ie, stuff I made, plus OS backups) I use an offline grandfather/father/son rotation once a month, and once a year I turn the oldest set into an annual backup. (Full explanation of my setup is here if you're interested.)