I recently decided to replace the SD card in my Raspberry Pi and reinstall the system. Without any special backups in place, I turned to rsync to duplicate /var/lib/docker with all my containers, including Nextcloud.

Step #1: I mounted an external hard drive to /mnt/temp.

Step #2: I used rsync to copy the data to /mnt/tmp. See the difference?

Step #3: I reformatted the SD card.

Step #4: I realized my mistake.

Moral: no one is immune to their own stupidity 😂
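In hindsight, a tiny guardrail would have caught the temp/tmp slip: refuse to sync unless the destination is an actual mount point. A minimal sketch (paths hypothetical; `mountpoint` ships with util-linux):

```shell
# safe_sync SRC DEST: run rsync only if DEST is really a mounted
# filesystem, so a typo like /mnt/tmp fails loudly instead of
# silently filling a directory on the SD card's own filesystem.
safe_sync() {
    if ! mountpoint -q "$2" 2>/dev/null; then
        echo "refusing to sync: $2 is not a mounted filesystem" >&2
        return 1
    fi
    rsync -aAX "$1" "$2"
}

safe_sync /var/lib/docker/ /mnt/temp/docker/ || echo "sync aborted"
```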

  • @[email protected]
    link
    fedilink
    English
    459 months ago

    If you have one backup, you have no backup. That’s a hard lesson to learn, but if you care about those photos, it’s possible to recover them if you haven’t written anything to that SD card yet.

    • TWeaK · 11 points · 9 months ago

      At least 3 backups, 2 different media, 1 offsite location.

      • @[email protected]
        link
        fedilink
        English
        8
        edit-2
        9 months ago

        I like 3-2-1-1-0 better. Like yours, but:

        • the additional 1 is for “offline” (so you have one offsite and offline backup copy).
        • 0 for zero errors. Backups must be tested and verified.
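        The “0” can be made mechanical with checksums. A self-contained sketch (a temp dir stands in here for a real backup mount):

        ```shell
        # Demo backup set; in practice BACKUP points at your backup mount.
        BACKUP=$(mktemp -d)
        echo "pretend photo bytes" > "$BACKUP/photo1.jpg"

        # At backup time: record a checksum for every file in the set.
        (cd "$BACKUP" && find . -type f ! -name SHA256SUMS -print0 \
            | xargs -0 sha256sum > SHA256SUMS)

        # At verify time: re-hash and compare. A non-zero exit means at
        # least one file no longer matches and the backup can't be trusted.
        (cd "$BACKUP" && sha256sum --check --quiet SHA256SUMS) \
            && echo "backup verified" || echo "backup corrupt"
        ```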
  • @[email protected]
    link
    fedilink
    English
    32
    edit-2
    9 months ago

    Fuck up #1: no backups

    Fuck up #2: using SD cards for data storage. SD cards and USB drives are ephemeral storage devices and shouldn’t be relied on. Most of the time they use file systems like FAT32, which are far less safe than NTFS or ext4. Use reliable storage media, like hard drives.

    Fuck up #3: no backups.

      • @[email protected]
        link
        fedilink
        English
        9
        edit-2
        9 months ago

        In my experience, flash drives are way more reliable than SD cards and I’d put SSD and HDD above both of those.

        I wish they’d just ditch the SD card on the Pi already, as it’s always the most likely reason your stuff stops working. For my Pi running Home Assistant, I’ve swapped to an SSD as the boot drive. For the others, I still use SD cards, but they’re just doing basic stuff like running Klipper on my 3D printer or a (WIP) live photo frame that can easily be swapped to a replacement SD card later.

        • @[email protected]
          link
          fedilink
          English
          19 months ago

          It really depends on how you define reliability. SD cards are physically nigh indestructible, but can fail when overwritten often. Hence for one-off backups they’re actually a good alternative. They start showing problems when used as a medium where the same data is written and overwritten frequently.

          I would recommend backups on SD cards in an A/B fashion when you want to give a backup to someone else to store safely.

          • @[email protected]
            link
            fedilink
            English
            19 months ago

            Reliability in the sense that I’ve used flash drives and SD cards for years, but I’ve only ever had issues with corrupt SD cards (probably at least half a dozen times) while I’ve never had any with flash drives.

            Constant writes are an issue with them, which is why I think it’s stupid that the Raspberry Pi Foundation continues to use them as the default storage/OS drive. Then again, they continue to make insane choices with power supplies as well, so it shouldn’t be a big surprise.

      • @[email protected]
        link
        fedilink
        English
        79 months ago

        The best way to ensure your data lasts a long time is to use a laser to beam it to the darkest part of the sky. Read speed is abysmal though

      • @[email protected]
        link
        fedilink
        English
        69 months ago

        Much better. SSDs and HDDs do monitor their own health (and you can see many parameters through SMART), while pen drives and SD cards don’t.

        Of course, they have their limits, which is why RAID exists. File systems like ZFS are built on the premise that drives are unreliable. It’s up to you whether you want that redundancy. The most important thing for not losing data is to have backups: ideally at least 3 copies, 1 off-site (e.g. in the cloud, or on a disk somewhere other than your home).
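        For anyone who wants to actually look at those SMART parameters: a minimal sketch using smartctl from the smartmontools package (/dev/sda is a placeholder for your drive; real queries usually need root):

        ```shell
        # Query a drive's self-reported health. Pen drives and SD card
        # readers will typically just report that SMART is unavailable,
        # which is exactly the commenter's point.
        check_smart() {
            dev=$1
            if ! command -v smartctl >/dev/null 2>&1; then
                echo "smartctl not found; install smartmontools" >&2
                return 1
            fi
            smartctl -H "$dev"   # overall PASSED/FAILED verdict
            smartctl -A "$dev"   # attributes: reallocated sectors, wear, etc.
        }

        check_smart /dev/sda || echo "SMART data unavailable here"
        ```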

        • @[email protected]
          link
          fedilink
          English
          19 months ago

          Though not every fail state is going to show up. If you start seeing weird intermittent behaviour from a drive, for goodness sake find a way to back it up immediately.

          My mum’s new NUC started having some issues; SMART showed perfect drive health. After trying a few things to diagnose it, I rebooted to run memtest and check for bad RAM, and that was the last time it ever booted into Windows. The controller or something on the NVMe SSD died. Far too expensive to repair for data recovery. Thankfully we had a… somewhat recent backup. Not as recent as we would have liked.

      • The Overlord · 2 points · 9 months ago

        SD cards and pen drives are (usually) made from lower-quality, cheaper NAND (the little memory chips that store the data) and also lack health monitoring. That being said, SSDs can and do die, so it’s important to have backups.

  • Outcide · 21 points · 9 months ago

    There’s an old saying: “Unix is user friendly, it’s just fussy about its friends.”

    • @[email protected]
      link
      fedilink
      English
      79 months ago

      Unix is the kind of friend who won’t bat an eye about holding your beer while you go and do something incredibly stupid

  • Bobby Turkalino · 21 points · 9 months ago

    Everyone else is gonna be like “if you don’t have at least 3 backups of something blahblah”, but you know, not everyone has the finances for that. So, advice from a cheapskate computer nerd: when going through critical transfers/reformats/deletions like you were doing, ALWAYS try actually recovering stuff from the backup before you cross the point of no return. E.g. if the backup is a .zip, extract a few individual files from it and open them in their respective programs.
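    One cheap way to do that spot-check on a .zip, sketched with hypothetical file names (needs the zip/unzip tools; the sketch builds its own throwaway archive so it’s safe to run):

    ```shell
    # Throwaway archive; in real life backup.zip is the backup
    # you're about to trust with your only copy.
    workdir=$(mktemp -d); cd "$workdir"
    mkdir -p photos; echo "pretend jpeg" > photos/img_0001.jpg

    if command -v zip >/dev/null && command -v unzip >/dev/null; then
        zip -qr backup.zip photos

        # Cheapest check: verify every member's CRC without extracting.
        unzip -tq backup.zip

        # Stronger check: restore one file somewhere else and open it.
        unzip -q backup.zip photos/img_0001.jpg -d restore-test
    fi
    ```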

  • @[email protected]
    link
    fedilink
    English
    16
    edit-2
    9 months ago

    If you haven’t done much writing to the SD card, you may be able to recover the data. Data isn’t really “deleted”; it is just labeled as deleted. There is software that can comb through the raw data and try to make sense of what files were there. I don’t know of any specific software, so if anyone does, please reply.

    Edit: Another commenter mentioned some success with DMDE

    Edit 2: Worth mentioning that this is true of formats too. As long as the format doesn’t zero out the entire medium, it just edits the file system metadata to say there are no files.
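    If you do try recovery, the safest first move is to image the card and work on the copy. A sketch; the device name is a placeholder, and a small file stands in for the card here so the commands are harmless to run:

    ```shell
    # On real hardware you would image the card itself, e.g.:
    #   dd if=/dev/mmcblk0 of=sdcard.img bs=4M conv=noerror,sync
    # (conv=noerror,sync keeps going past unreadable sectors).
    # Below, a 4 MiB zero-filled file stands in for the device:
    img="$(mktemp -d)/sdcard.img"
    dd if=/dev/zero of="$img" bs=1M count=4 conv=noerror,sync 2>/dev/null

    # Recovery tools then work on the image, never the original card:
    #   photorec "$img"   # carves files by signature, ignores metadata
    #   testdisk "$img"   # tries to restore partitions / deleted entries
    ls -l "$img"
    ```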

  • @[email protected]
    link
    fedilink
    English
    159 months ago

    Unless you’ve used something secure for formatting or wrote data to the SD after, consider attempting data recovery.

    • @[email protected]OP
      link
      fedilink
      English
      39 months ago

      No luck with extundelete (segfault) or testdisk (sees some deleted files, but not /var/lib/docker). At least I can always throw it away and not worry about the safety of my data! :)

      • @[email protected]
        link
        fedilink
        English
        39 months ago

        You can always try professional data recovery services. It just depends on how much the data is worth to you.

        • Atemu · 3 points · 9 months ago

          And how much time you want to put into not getting scammed.

  • @[email protected]
    link
    fedilink
    English
    12
    edit-2
    9 months ago

    I’m just impressed an SD card in a Pi lasted since 2017 without losing all your data on its own.

    For the future the general guideline is 3 copies of your data at minimum, so definitely set up some backups.

  • @[email protected]
    link
    fedilink
    English
    11
    edit-2
    9 months ago

    Sorry to hear, I feel you:

    I wanted to delete all .m3u files in my music collection when I learned:

    find ./ -name "*.m3u" -delete -> this would have been the right way; all .m3u files under the current folder would have been deleted.

    find ./ -delete -name "*.m3u" -> WRONG, this just deletes the current folder and everything in it.

    Who would have known that the position of -delete actually matters.
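    The safe pattern, since find evaluates its expression strictly left to right: preview with -print first, then swap in -delete. A self-contained sketch on a scratch directory:

    ```shell
    # Scratch tree so this is harmless to run anywhere.
    dir=$(mktemp -d)
    touch "$dir/keep.flac" "$dir/a.m3u" "$dir/b.m3u"

    # Dry run: -name filters BEFORE -print fires. With the order
    # reversed, -delete acts on every path before -name is ever tested.
    find "$dir" -name '*.m3u' -print

    # Once the preview lists only what you expect, swap -print for -delete.
    find "$dir" -name '*.m3u' -delete
    ```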

    • @iknowitwheniseeit · 6 points · 9 months ago

      I didn’t know there was a -delete option to find! I’ve been piping to xargs -0 for decades!

    • Synapse · 5 points · 9 months ago

      I’ve made this sort of mistake too; luckily BTRFS snapshots are always there to save the day!

    • @[email protected]
      link
      fedilink
      English
      29 months ago

      The first one would have deleted nothing, as -name needs to match the whole name. I recommend running find with an explicit -print before replacing it in place with -delete or -exec. It’s good to remember that find has a complex, order-dependent expression language with -or and -and, but it’s maybe not the best idea to try to use those features.

    • @[email protected]
      link
      fedilink
      English
      29 months ago

      I can recommend fd to everyone frustrated with find, it has a much more intuitive interface imo, and it’s also significantly faster.

    • @[email protected]
      link
      fedilink
      English
      29 months ago

      I use GNU find every day and still have to Google the details. I only learnt about -delete the other day; good to know the position matters.

  • @[email protected]
    link
    fedilink
    English
    119 months ago

    I know I’m going to get downvoted for this, but this would be almost impossible to fuck up with a GUI. Yet people insist that writing commands manually is superior. I’m sorry for your loss.

    • @[email protected]
      link
      fedilink
      English
      59 months ago

      Guardrails are absolutely not a reason why people prefer the CLI. We want the guardrails off so we can go faster.

      • @[email protected]
        link
        fedilink
        English
        49 months ago

        That may be on me, but I’ve never seen anyone be faster using a CLI than a GUI, especially for the basic operations that most of us do 95% of the time. I know there are specific cases where a command just does it better/easier, but for me that’s not the case for everyday stuff.

    • @[email protected]
      link
      fedilink
      English
      39 months ago

      There is something to be said about CLI applications being risky by default (“rm” doesn’t prompt to ask, rsync --delete will do just that). But I’ve definitely slipped on the mouse button while “drag & dropping” files in a GUI before. And it can be a right mess if you move a bunch of individual files rather than a sub-folder…
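      rsync does ship a guardrail for exactly this: -n/--dry-run with --itemize-changes previews every action, including what --delete would remove. A sketch on scratch directories (skipped if rsync isn’t installed):

      ```shell
      src=$(mktemp -d); dest=$(mktemp -d)
      touch "$src/new.txt" "$dest/stale.txt"

      # -n (--dry-run) reports what would happen without doing it;
      # -i (--itemize-changes) prints a per-file change summary, and
      # the "*deleting" lines show what --delete would remove.
      if command -v rsync >/dev/null; then
          rsync -ani --delete "$src/" "$dest/"
      fi
      ```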

      • @[email protected]
        link
        fedilink
        English
        19 months ago

        At least for windows, you can ctrl-z that away and it’ll handle your mouse fumble. Explorer also highlights the files after a copy so if that doesn’t work (and it was a copy action), just delete them immediately.

        I haven’t used *nix for daily stuff in years, but I’m sure the same abilities are there.

    • @[email protected]
      link
      fedilink
      English
      19 months ago

      To play devil’s advocate, tab completion would have also likely caught this. OP could have typed /mnt/t<Tab> and it would autofill temp, or <Tab><Tab> would show the matching options if it’s ambiguous.

  • bruhduh · 10 points · 9 months ago

    TestDisk and PhotoRec: use them. They even saved my data from a bricked Chinese USB flash drive, so they’ll save yours unless you wrote dd if=/dev/zero of=<your microSD>. Also, a tip: don’t attempt to rebuild the partition first. As a first step, try to copy all files from the microSD to another device with these programs, and only after that try other approaches. Edit: I’ve seen from your other comments that your data was already overwritten; my condolences.

  • shadowbert · 6 points · 9 months ago

    My condolences :'(

    I once lost a bunch of data because I accidentally left a / at the end of a path… rsync can be dangerous lol
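    For anyone bitten by the same thing, the rule is: without a trailing slash rsync copies the directory itself; with one it copies the directory’s contents. A scratch-dir sketch (skipped if rsync isn’t installed):

    ```shell
    src=$(mktemp -d); dst=$(mktemp -d)
    mkdir "$src/music" "$dst/with" "$dst/without"
    touch "$src/music/a.flac"

    if command -v rsync >/dev/null; then
        rsync -a "$src/music"  "$dst/with"     # copies the dir itself
        rsync -a "$src/music/" "$dst/without"  # copies only the contents
    fi
    ```

    After running, with/ holds music/a.flac while without/ holds a.flac directly.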

    • @[email protected]
      link
      fedilink
      English
      29 months ago

      Rclone is superior IMHO; you have to explicitly name the output folder. I used to think that was a hassle, but in hindsight being explicit about the destination reduces mistakes.

      • shadowbert · 2 points · 9 months ago

        Sometimes your hands are tied by the tools already on the server, but I’ll try to remember to check whether that’s available next time.

  • @[email protected]
    link
    fedilink
    English
    49 months ago

    The bells of the Gion monastery in India echo with the warning that all things are impermanent.

  • @[email protected]B
    link
    fedilink
    English
    3
    edit-2
    9 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Acronym: Expansion
    RAID: Redundant Array of Independent Disks (mass storage)
    SSD: Solid State Drive (mass storage)
    ZFS: Solaris/Linux filesystem focusing on data integrity

    3 acronyms in this thread; the most compressed thread commented on today has 7 acronyms.

    [Thread #537 for this sub, first seen 23rd Feb 2024, 01:55] [FAQ] [Full list] [Contact] [Source code]