For archiving, a single-layer DVD holds only 4.7 GB and who knows how long the medium will last.
LTO-6 drives go for 300-500 EUR refurbished. You need an FC switch or HBA. Each tape holds 2.5 TB of uncompressed data.
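Once the drive shows up as a tape device, writing to it is pleasantly boring. A minimal sketch, assuming the Linux st driver exposes the drive as /dev/nst0 (the non-rewinding device; the path is an assumption, check yours):

  # Stream a directory to LTO tape as a plain tar archive.
  import tarfile

  with open("/dev/nst0", "wb") as tape:
      # mode "w|" is tarfile's streaming mode, needed because
      # a tape device is not seekable
      with tarfile.open(fileobj=tape, mode="w|") as archive:
          archive.add("/srv/backups", arcname="backups")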
As for NVMe: if you do a lot of writes (e.g. databases, Docker), go enterprise, and grab one with PLP (power-loss protection). You can also use it as a cache for ZFS.
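For the ZFS part, the two usual roles for such a device are SLOG (the sync-write log, where PLP really matters) and L2ARC (a read cache). A minimal sketch; the pool name "tank" and the device paths are placeholders:

  # Attach NVMe devices to an existing ZFS pool.
  import subprocess

  def zpool(*args):
      subprocess.run(["zpool", *args], check=True)

  zpool("add", "tank", "log", "/dev/nvme0n1")    # SLOG: sync writes land here first
  zpool("add", "tank", "cache", "/dev/nvme1n1")  # L2ARC: spillover read cache
  zpool("status", "tank")                        # verify both devices show up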
Fibre Channel HBAs really cost peanuts, because there are tons of them on the second-hand market and the only ones who want them are enterprises, who don't buy second-hand shit without a support contract. (They usually dump them when the support runs out, which is why there are so many working ones on the market.)
So I get €300 cards for €20; it's a joke. Really great for me though, because they're amazing for tying storage together and they can do point-to-point just fine. No switch needed.
The one drawback is the convoluted software chain around it. I used to work with SANs, but if you haven't, it might require a little investigation :) The client (initiator) side is pretty easy; the target side is harder (that's the part a SAN normally covers).
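On the client side, Linux exposes FC HBAs under /sys/class/fc_host once the driver (qla2xxx, lpfc, and so on) is loaded. A minimal sketch for checking that the card and link are up, no vendor tools required:

  # List Fibre Channel HBAs and their link state via Linux sysfs.
  from pathlib import Path

  for host in sorted(Path("/sys/class/fc_host").iterdir()):
      wwpn = (host / "port_name").read_text().strip()
      state = (host / "port_state").read_text().strip()
      speed = (host / "speed").read_text().strip()
      print(f"{host.name}: WWPN {wwpn}, {state}, {speed}")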
But for me the biggest problem with LTO drives for home use is the terrible noise they make. If 3D printers sound like robots having sex, this is more like robots being tortured.
I made a mistake in my post: I meant to say FC HBA or SAS HBA. I went with a SAS drive, which I use with a SAS HBA. The FC ones were cheaper (both FC LTO drives and FC HBAs) but would've still required an FC switch. I've already got some fiber through my house, so that would have worked well, but I went with SAS, which was considerably more expensive. 40 dB ain't fun, indeed. I put my LTO drive in the fuse box.
As someone who oversaw backing up data from various optical media to disk: just don't use the optical stuff!
This was at a huge international conglomerate that had been doing CD backups for decades. It turned out those precious backups were only readable 96-98% of the time. Terrible stuff.
Actually, 96-98% is not great but not terrible if you're employing some kind of parity scheme across multiple discs. A 2-4% loss rate just means adding 1 parity disc per 20 (or per 10, to be safe) to the mix; see the sketch below.
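A minimal sketch of the idea: XOR a parity image across a group of disc images, so any single unreadable disc in the group can be rebuilt from the survivors. A real setup would use PAR2 or another Reed-Solomon scheme to survive multiple failures per group; all the file names here are made up:

  # Build an XOR parity image over a group of disc images.
  # Any ONE lost image can be recovered by XOR-ing the parity
  # image with all the surviving images.
  from functools import reduce

  def xor_bytes(a: bytes, b: bytes) -> bytes:
      return bytes(x ^ y for x, y in zip(a, b))

  def make_parity(image_paths, parity_path, size):
      # Pad every image to the same size so they XOR cleanly.
      blobs = [open(p, "rb").read().ljust(size, b"\0")[:size]
               for p in image_paths]
      parity = reduce(xor_bytes, blobs)
      open(parity_path, "wb").write(parity)

  # 20 data discs + 1 parity disc, single-layer DVD size:
  # make_parity([f"disc{i:02}.iso" for i in range(1, 21)],
  #             "parity.img", 4_700_000_000)

Recovery is the same operation: XOR the parity image with the 19 readable images and you get the missing one back.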
The oldest NVMe SSD I have at home is a Samsung 950 Pro (the 256 GB version!) which I bought in late 2015 IIRC (and put on an ASUS Z170-A mobo, which already had an NVMe slot) and which has been in use that whole time (but mostly light desktop use):
Percentage Used: 27%
Data Units Read: 48,801,760 [24.9 TB]
Data Units Written: 84,590,914 [43.3 TB]
Power Cycles: 228 <-- only 228 power cycles in 11 years, i.e. one cycle every ~17 days on average
Power On Hours: 37,153 <-- not sure about this one, but it comes out at about 9 hours/day of uptime
And after 11 years it's still going strong!
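Those numbers come straight from the NVMe health log. A minimal sketch for pulling them yourself with smartmontools (needs version 7.0+ for JSON output; the device path is an assumption):

  # Read the NVMe health log via smartctl's JSON output.
  import json, subprocess

  proc = subprocess.run(["smartctl", "-j", "-a", "/dev/nvme0"],
                        capture_output=True, text=True)
  log = json.loads(proc.stdout)["nvme_smart_health_information_log"]

  # One "data unit" is 1000 * 512 bytes per the NVMe spec.
  tb_written = log["data_units_written"] * 512_000 / 1e12
  print(f"Percentage used: {log['percentage_used']}%")
  print(f"Data written: {tb_written:.1f} TB")
  print(f"Power cycles: {log['power_cycles']}")
  print(f"Power-on hours: {log['power_on_hours']}")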
Now it's not in my main computer anymore: I'm rocking a WD SN850X (recommended here on HN when it came out), but the old Samsung 950 Pro is in the desktop computer my wife uses daily (and she works from home).
> I think SSDs can take quite the beating nowadays
For regular use, definitely. In my servers I run ZFS mirrors though: you never really know when a drive is going to RIP (a minimal setup sketch follows the numbers below).
Percentage Used: 0%
Data Units Read: 15,235,390 [7.80 TB]
Data Units Written: 33,573,616 [17.1 TB]
Host Read Commands: 107,051,408
Host Write Commands: 496,391,879
Controller Busy Time: 455
Power Cycles: 938
Power On Hours: 13,189
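For reference, the mirror setup is a one-liner; a minimal sketch, with the pool name and device paths as placeholders:

  # Create a two-way ZFS mirror and check its health.
  import subprocess

  subprocess.run(["zpool", "create", "tank", "mirror",
                  "/dev/sda", "/dev/sdb"], check=True)
  # "-x" only reports pools with problems; silence means healthy.
  subprocess.run(["zpool", "status", "-x", "tank"], check=True)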
To be honest, it hurts every time I write to an SSD, which is all of the time these days.