Tuesday, February 22 2005 @ 08:47 PM EST Contributed by: glosser
Linux.com presents a tutorial on making backups with tar.
When my production Web server, running Red Hat Enterprise Linux (RHEL), began generating filesystem errors, I found my backup system put to the ultimate test. My bare-metal restore saved the day. Here's how you can put a similar scheme to work.
On the surface, my Web server seemed to be running fine, serving up Web pages normally. While performing some routine maintenance, I happened to run an ls command in the root directory, and it returned an empty listing -- no directories or files. Nothing. The ls command still showed files in the /boot directory, so it appeared there was some file system damage. Checking the system message log showed hundreds of these alarming entries:
Dec 21 01:05:01 linux01 kernel: EXT3-fs error (device cciss0(104,1)):
ext3_new_block: Allocating block in system zone - block = 96
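If you run into similar messages, a quick grep of the system log gives a sense of how widespread the problem is. This is only a sketch; it assumes the default RHEL syslog location of /var/log/messages and the exact error string shown above:

# Count EXT3 error entries in the system log
grep -c 'EXT3-fs error' /var/log/messages
# Show the most recent ones
grep 'EXT3-fs error' /var/log/messages | tail -n 20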
Errors in the ext3 file system are never a good sign. I decided to boot into rescue mode and run a file system check (fsck) to clear up any problems with the file system. The fsck turned up a lot of bad directory entries and offered to repair them. Since there were so many, I ran the command again with the automatic repair switch. After some time and a lot of scary messages, the repair finished. I held my breath and rebooted. The boot loader failed to find the kernel. Another reboot into rescue mode showed that the file system was now clean -- a little too clean. There were no files left. Only the /lost+found directory survived the repair, and it was empty.
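For reference, the sequence looked roughly like this. Treat it as a sketch only: the device path /dev/cciss/c0d0p2 is an assumption based on the HP Smart Array (cciss) naming in the log entries, and your root partition will almost certainly differ.

# Boot from the RHEL installation CD and enter rescue mode:
#   boot: linux rescue
# Decline to mount the damaged file system, then do a read-only check first:
fsck -n /dev/cciss/c0d0p2    # device path is an example; adjust for your system
# The automatic repair run, answering yes to every prompt:
fsck -y /dev/cciss/c0d0p2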