LVM failure and recovery, submitted by Russell, Thu 09 Apr 09
Edited Fri 02 Oct 15
I had a server (actually a secondary backup server) with a large LVM array in it that was full. My plan was to shut down the machine after 898 days (nearly 2.5 years) of uninterrupted, problem-free operation. (However, a clever sysadmin will realize that a really long uptime means they weren't keeping the kernel up to date, which is true, because the machine in question was running an out-of-date version of Fedora.)

So the original config was like this:
Motherboard IDE adapter:
       /dev/hda1 (boot)           (hda is an old 70Gb drive)
       /dev/hda2 (lvm) VolGroup00
                           /LogVol00 (/ root drive) ~ 60Gb
                           /LogVol01 (swap) 1GB

      /dev/hdb (cdrom)

Add-on IDE adapter (has the faster I/O cables)
      /dev/hde   (250Gb drive)  (recycled old boot disk )
      /dev/hde1 (ext3 unused boot probably 100Mb )
      /dev/hde2 (lvm) Backup    (not sure of the size probably 200Gb)
      /dev/hde3-8   some junk partitions I used to experiment with raid
      /dev/hdf1 (lvm) Backup    (111Gb drive)
      /dev/hdg  (no drive)
      /dev/hdh  (lvm) Backup    (250Gb drive) (notice no partition here)


     the Backup volume group had two volumes:
       home (100Gb)
       baclup (~450Gb)  (don't know if this was a typo or my reaction
                          to learning the LV name can't match the VG name)

OK, it was a bit messy, but I believed I did a couple of things right.
  1. The Backup array was different both physically and logically from the boot disk. This was intentional, because I had planned for the possibility of a boot disk failure.
  2. The structure of the large array was simple: data was on the first disk, then the second, then the third. I didn't use any fancy striped data options that I believed would make data recovery impossible. (Also, this is an alternate server; 99.9% of the time these disks sit idle, they just copy changes made to the primary.)
  3. I had even placed sticky labels on the disks in the case noting which drive they were (hde etc.) and which volume group they were in.
So I shut down the system (after 898 days of uptime) and installed a new 500Gb drive as /dev/hdg. Before I shut it down, I took notes as to which drive was being used for what, knowing I wanted to be sure to format the correct new drive.

But on power-up... the system won't boot. At first I think it's a master/slave conflict with the new drive, but no, that's not it. The old boot disk is totally dead. It spins up, but the BIOS never recognizes it.

This annoys me, but I take it in stride, because I know I prepared for just this event.

(skip this part if you don't like horror stories)
  • I grab a 250Gb drive from a workstation that recently died (dead CPU) and install this drive as the new boot disk.
  • The first install disk I grab is an FC10 DVD, but that's 64-bit and this is a Pentium 4 (32-bit), so it won't work.
  • I figure I should install a new OS and not the old one that was in place, so I download an FC10 i686 live CD.
  • The live CD won't boot to X, just a text login, probably because the video card is so ancient, and I don't know how to do the install from the text prompt.
  • So I download ANOTHER CD image (over the slow DSL line this takes a little over an hour).
  • I expect to need disk #2, so I start downloading it as soon as disk #1 finishes.
  • I start the install with disk #1. As I suspected, it fails to start the graphical install, but I'm not afraid of the text install. One problem is that the partition editor is missing features (like renaming LVM groups) in text mode.
  • Another problem (and I do consider this a problem) is that since about FC8 or so, all the drive devices are now mapped to SCSI device names, and the names no longer correspond to the physical (wired) location in the computer. The drives are now sda, sdb (cdrom), sdc, sdd, sde and sdf.
  • Now I can't prove this, but I believe that I unselected every drive except sda and chose to install there only.
  • I unselect "office and productivity" and it tells me I will need only disk #1 and disk #2.
  • The install runs for a while until it asks for disk #2.
  • About 30 minutes later I have disk #2, burn it, and use that to finish the install.
  • At the end I'm told to select "reboot" to reboot, and I do.
  • .... It won't boot... still... It hangs on "Verifying DMI Pool Data".
  • I've seen this before, but I do some googling, and I come to believe that what this means is "drive with no boot sector" in the boot sequence. I wasn't standing at the computer when it wrote grub to the boot sector, so I figure it probably had an error. Booting from the rescue mode of the install CD, it can't mount /.
  • I mount /dev/sda1 (boot) and see several FC3 kernels, which confuses me.
  • I'm thinking that somehow the old disk label "/boot" on hde1 confused it and it wrote grub in the wrong place... so I decide to re-install.
  • This time (SHOULD HAVE DONE THIS THE FIRST TIME) I install with only the first disk drive connected.
  • The install completes, the machine boots, I get no X, but this is a headless machine, so I really don't care about that.
  • I connect the other drives and try to mount the old LVM... this fails. It doesn't see the Backup volume group, but it does see two duplicate VolGroup00 listings.
  • I'm thinking (or hoping) that this is a conflict with the old LVM on the previous boot disk hde (now sdc)... but I am wrong.
  • So I figure the thing to do is reinstall again, this time using a different name for the root volume group.
  • Only you can't rename the group in the text-mode partition editor, so I dump the LVM partitions on the boot disk and use good old primary partitions (1-4) and ext3 filesystems.
  • Of course, when I do this, something changes and now I need disk #4, so I start downloading that.
  • It only needed one package from disk #4, and it went by so fast I didn't see what it was.
  • This does get me wondering: why don't they just have a simple install-from-web option? When I did yum update later that evening it had to download *ANOTHER* 500MB of packages. Why not just install from the updated repository and skip the step of downloading the install CD images? (I guess the live CD install kinda is this approach, and install problems would be harder to track from a variable install set, not to mention that if you're installing more than one system, it saves bandwidth to download CD images up front.)


So finally, about 8 hours after I first issued the shutdown command, the machine is back up, but the LVM array still isn't mounting. And to my horror, I realize that the initial install didn't go onto hda (called sda) but went onto hde (now sdc, the first drive on the second IDE controller).

Now I have two theories as to how this happened.
  1. I'm an idiot and I selected the wrong drive for the install. Both drives are 250Gb drives, so it's possible that I confused them.
  2. The install program selected a drive order that put the drives from the onboard motherboard controller *AFTER* the ones on the secondary IDE controller. I'm going to test this, but currently the machine is busy.


But, it turns out, all is not lost. It is possible to get some data back from such a colossal fsck-up.

I knew I had seen stuff on this, and after some quick googling, as soon as I got past the people who want to sell me drive-recovery services or programs, I found the article I wanted: http://www.linuxjournal.com/article/8874 has some stuff on how to manually edit the LVM header files and then import them. The problem the author has is much less severe: he just needs to rename a volume group. I needed to recover mine. I pulled the header off one of the drives that shouldn't have been screwed:
dd if=/dev/sdd1 bs=512 count=255 skip=1 of=/tmp/sdd-header
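The metadata in that region is plain text (surrounded by binary), so you can page through the dump directly, for example:
 strings /tmp/sdd-header | less
 # or jump straight to the sequence numbers
 grep -a 'seqno' /tmp/sdd-header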
Sure enough, in there were many copies of the LVM configuration. Each has a "seqno = ##" line, so I could tell which was the newest. The newest one was dated today and was all wrong (didn't describe enough drives, etc.). Perhaps the installer tried to correct for the fact that it was orphaning this volume group. I pulled one out from over a year ago that looked correct to me, then edited it so it would work. This is the fixed one:
backup {
id = "j92zOv-yCDB-cOSA-WYB9-0kew-mHYZ-P9N2MH"
seqno = 17
status = ["RESIZEABLE", "READ", "WRITE"]
extent_size = 8192
max_lv = 256
max_pv = 256

physical_volumes {

pv0 {
##id = "DXnllk-VjIj-pYJs-eeZG-qs6u-3qjh-Jk9VSq"
id="MNzJK6-uv5e-GseL-GgLZ-B0dN-fkom-VXNViQ"
device = "/dev/sde"

status = ["ALLOCATABLE"]
pe_start = 384
pe_count = 59618
}

pv1 {
##id = "s4vGn8-chQz-lpRI-tBck-shSz-AYEC-i3T3N6"
id ="xlt3Qk-o03E-4D31-vmyI-Ht1Y-8gkm-Y0NklS"
device = "/dev/sdb2"

status = ["ALLOCATABLE"]
pe_start = 384
pe_count = 59618
}

pv2 {
##id = "1POvat-7pSE-c1BY-63kT-hMyF-kGRb-QRmAyd"
id="vuZsfj-zhIu-OCFx-jPgn-oMI7-sqN3-sDzIv7"
device = "/dev/sdc1"

status = ["ALLOCATABLE"]
pe_start = 384
pe_count = 28618
}
}

logical_volumes {

baclup {
id = "WiX1ds-Z0gW-8hZO-SYF5-Oyj6-PW3U-4U04ix"
status = ["READ", "WRITE", "VISIBLE"]
....... (there's more here)
The UUIDs didn't match... (were they also changed by the install?) so I tried commenting them out and making it use the device names (/dev/sdc2 etc.). That didn't work; I think the device names are there just for show. So I got the current UUIDs using pvdisplay and put those into the file. Here it's really good to be ssh'ed in from a computer with a graphical shell: you can select and copy those values, then paste them in. I can't imagine trying to retype any of those ugly things correctly.
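For reference, something like this lists the current PV UUIDs so they can be copied straight into the edited config (exact output formatting varies between LVM versions):
 pvdisplay | grep -E 'PV Name|PV UUID'
 # or, more compactly:
 pvs -o pv_name,pv_uuid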

So I restore the config with vgcfgrestore, and besides complaining about the partition on sdc being too large... it works. I can't mount home, but I can mount baclup. Both are screwed up, but since 80% of the data is stored on drives that were not really messed with, most of it seems to be there. I can copy files out, but find spits out a bunch of garbage and seems to hang.
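For the record, the restore step looks roughly like this; the config file name below is just a made-up example for wherever you saved your edited copy, and "backup" is the volume group name from above:
 # dry-run first to make sure the edited config parses
 vgcfgrestore --test -f /tmp/backup-fixed.conf backup
 # then for real, and re-scan so the group gets picked up
 vgcfgrestore -f /tmp/backup-fixed.conf backup
 vgscan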

It's a limited success. If I needed to get my thesis or password keys out of there, I think I would have a pretty good chance of getting them out.

But, since this is a backup copy of another file server, I figured the best thing to do would be to fsck the damaged logical volume and then re-sync it with the master.

This was a big mistake.

That process ran for MANY hours correcting bad inodes, etc., then crashed. After it crashed, baclup couldn't be mounted and fsck wouldn't even recognize it. It's possible that the process of scanning the drive so hard for errors caused another old drive to fail. (I have seen this happen when a RAID array tries to rebuild after the first failed drive is replaced.) Nonetheless, I couldn't keep playing with it. I set up a new LVM on the new drive and that one is currently syncing with the master file server. (It will take all day.)

I would like to see if I can't mount home to get some data out of it. There are about a half dozen scripts that I wrote a long time ago that manage the backup process and purge old data from the database (which are backed up nowhere else). I'm half tempted to do a raw search of the old drive for keywords that would be in those scripts to see if I can find them.
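If I try it, the plan is roughly: grep the raw device for a string I know is in one of the scripts, then carve out a chunk around the hit and look at it. The keyword and device below are only placeholders:
 # -a treats the binary device as text, -b prints the byte offset of each hit
 grep -a -b 'purge_old' /dev/sdc2 | head
 # carve out a couple of MB around a hit (OFFSET is the byte offset grep printed)
 dd if=/dev/sdc2 bs=1M skip=$((OFFSET / 1048576)) count=2 | strings -n 8 | less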

Lessons

  • BACKUP EVERYTHING, EVEN CONFIG FILES
  • Unplug drives you don't want to install onto
  • Backup the LVM configuration (on another computer) before you start screwing with it (see the sketch after this list).
  • Backing up the size of the partitions would have also been useful
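A minimal sketch of those last two lessons, assuming the volume group is still called "backup" (the file names and remote host are placeholders):
 # save the LVM metadata for the group to a text file
 vgcfgbackup -f /tmp/backup-vg.conf backup
 # record the partition tables too
 fdisk -l > /tmp/partition-tables.txt
 # and copy both somewhere that is not this machine
 scp /tmp/backup-vg.conf /tmp/partition-tables.txt user@otherhost:lvm-notes/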

Question

Until now, I've been a big believer in LVM volumes. I like that you can relocate and scale them easily. Now my faith is a little damaged. But I'm thinking that I really wouldn't have been any better off had the installer overwritten plain ext3 partitions.

This is a work in progress; I'm still working on it. If something confuses you or you want more details, please post below and I will update.
If a group or volume isn't showing up, it may simply be inactive.
 lvscan 
  inactive          '/dev/vg_pent4/lv_swap' [2.47 GiB] inherit
  inactive          '/dev/vg_pent4/lv_home' [58.81 GiB] inherit
  inactive          '/dev/vg_pent4/lv_root' [50.00 GiB] inherit
  ACTIVE            '/dev/vg_amd2/lv_root' [98.84 GiB] inherit
  ACTIVE            '/dev/vg_amd2/lv_swap' [2.09 GiB] inherit
  ACTIVE            '/dev/vg_amd2/lv_home' [9.78 GiB] inherit
This is common with removable drives that were not available during system boot. Correct this issue with the following command:
 vgchange -a y <volume_group_name>
Don't include the path, just the group name, and the missing volumes should now show up in /dev/mapper and be mountable.
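For example, with the group from this article (the mount point is arbitrary):
 vgchange -a y backup
 # the logical volumes show up as /dev/mapper/<vg>-<lv>
 mkdir -p /mnt/baclup
 mount /dev/mapper/backup-baclup /mnt/baclup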

