Tuesday, October 23, 2018

How to Rebuild Corrupted RPM Database in CentOS


The RPM database is made up of files under the /var/lib/rpm/ directory in CentOS and other enterprise Linux distributions such as RHEL, openSUSE, Oracle Linux and more.
If the RPM database is corrupted, RPM will not work correctly: updates cannot be applied to your system, and you will encounter errors while updating packages via the YUM package manager. The worst-case scenario is being unable to run any rpm or yum commands at all.
There are a number of factors that can lead to the RPM database corruption, such as incomplete previous transactions, installation of certain third-party software, removing specific packages, and many others.
In this article, we will show how to rebuild a corrupted RPM database, so you can recover from RPM database corruption in CentOS. This requires root privileges; otherwise, use the sudo command to gain them.

Rebuild Corrupted RPM Database in CentOS

Start by backing up your current RPM database before proceeding (you might need it in the future), using the following commands.
# mkdir /backups/
# tar -zcvf /backups/rpmdb-$(date +"%d%m%Y").tar.gz  /var/lib/rpm
Next, verify the integrity of the master package metadata file /var/lib/rpm/Packages; this is the file that needs rebuilding. But first, remove the /var/lib/rpm/__db* files to prevent stale locks, using the following commands.
# rm -f /var/lib/rpm/__db*  
# /usr/lib/rpm/rpmdb_verify /var/lib/rpm/Packages
If the above operation fails, meaning you still encounter errors, you should dump the old database and load a fresh one. Also verify the integrity of the freshly loaded Packages file, as follows.
# cd /var/lib/rpm/
# mv Packages Packages.back
# /usr/lib/rpm/rpmdb_dump Packages.back | /usr/lib/rpm/rpmdb_load Packages
# /usr/lib/rpm/rpmdb_verify Packages
Now, to check the database headers, query all installed packages using the -q and -a flags, and watch carefully for any errors printed to stderr.
# rpm -qa > /dev/null    # stdout is discarded so that only errors are printed
Last but not least, rebuild the RPM database using the following command; the -vv option displays a lot of debugging information.
# rpm -vv --rebuilddb

Use dcrpm Tool to Detect and Correct RPM Database

There is also dcrpm (detect and correct rpm), a command-line tool that identifies and corrects well-known issues related to RPM database corruption. It is a simple and easy-to-use tool that you can run without any options. For effective and reliable usage, you should run it regularly via cron; an example entry is shown further below.
You can install it from source: download the source tree and install it using setup.py (which should also pull in the psutil dependency from PyPI), as shown.
# git clone https://github.com/facebookincubator/dcrpm.git
# cd dcrpm
# python setup.py install
Once you have installed dcrpm, run it as shown.
# dcrpm
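For regular runs via cron, a crontab entry like the following would run dcrpm nightly at 03:00 (the path /usr/local/bin/dcrpm is an assumption; adjust it to wherever setup.py installed the script):
# crontab -e
0 3 * * * /usr/local/bin/dcrpm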
Finally, try to run your failed rpm or yum command again to see if everything is working fine.
dcrpm GitHub repository: https://github.com/facebookincubator/dcrpm
You can find more information on the RPM database recovery page.
That’s all! In this article, we have explained how to rebuild a corrupted RPM database in CentOS. To ask any questions or share your thoughts about this guide, use the feedback form below.

How to rescan disk in Linux after extending disk


Learn how to rescan a disk in a Linux VM when its backing vdisk in VMware is extended. This method requires no downtime and causes no data loss.


Sometimes a disk fills up and we need to increase its space. In a VMware environment, this can be done on the fly at the VMware level: a disk assigned to a VM can be grown in size without any downtime. But you still need to take care of claiming the new space at the OS level within the VM. In such a scenario we often ask: how do I increase disk size in Linux when the VMware disk size is increased? How do I grow a mount point when the vdisk grows? What are the steps for expanding LVM partitions in a VMware Linux guest? How do I rescan a disk after the vdisk is expanded? We are going to walk through the steps to achieve this without any downtime.
In our example, the VM has one 1GB disk, /dev/sdd. It is part of the volume group vg01, and the mount point /mydrive is carved out of it. We will increase the disk size to 2GB at the VMware level and then add this space to the /mydrive mount point.

Step 1:

The fdisk -l output snippet below shows the 1GB disk /dev/sdd. We have created a single primary partition on it, /dev/sdd1, which in turn forms vg01 as stated earlier. Always make sure you have a backup of the data on the disk you are working on.
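The original screenshot is not reproduced here; a representative snippet (the sizes and sector counts are illustrative, not an actual capture) would look like this:
# fdisk -l /dev/sdd

Disk /dev/sdd: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048     2097151     1047552   8e  Linux LVM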

Step 2:

Now, change the disk size at the VMware level. We are increasing it by one more GB, so the final size is now 2GB. At this stage the disk needs to be re-scanned in Linux so that the kernel recognizes the size change. Re-scan the disk using the command below.
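With our example disk /dev/sdd, the rescan looks like this (substitute your own device name):
# echo 1 > /sys/class/block/sdd/device/rescan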
Make sure you use the correct disk name in the command (before rescanning). You can match your SCSI number (X:X:X:X) with the VMware disk using this method.
Note: Writing "- - -" to /sys/class/scsi_host/hostX/scan makes the SCSI host adapter scan every channel (first -), every target (second -), and every device, i.e. disk/LUN (third -), which is the CTD format. This only helps discover newly attached devices; it does not re-scan devices the kernel has already identified.
That is why we write "1" to /sys/class/block/XYZ/device/rescan for the respective SCSI block device instead: it refreshes device information such as size. This is exactly what we need here, since the kernel already knows the device but must re-read its new size and update itself accordingly.
The kernel now re-scans the disk and fetches its new size; you can see the new size in your fdisk -l output.

Step 3:

At this stage the kernel knows the new size of the disk, but our partition (/dev/sdd1) still has the old 1GB size. This leaves us no choice but to delete the partition and re-create it at the full size. Note that your data remains safe; just make sure both the old and new partitions are marked as Linux LVM using hex code 8e, or you will mess up the whole configuration.
Delete and re-create the partition from the fdisk console as below:
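The original screenshot is not reproduced here; the sketch below shows the fdisk session, with defaults accepted for the first and last sectors so the new partition spans the whole disk (prompts are abbreviated):
# fdisk /dev/sdd
Command (m for help): d          <- delete the old 1GB partition (the LVM data on it is untouched)
Selected partition 1
Command (m for help): n          <- re-create the partition across the full 2GB disk
Partition type: p
Partition number (1-4): 1
First sector: <accept default>
Last sector: <accept default>
Command (m for help): t          <- set the type back to Linux LVM
Hex code (type L to list all codes): 8e
Command (m for help): p          <- print the table to confirm /dev/sdd1 is now 2GB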
All fdisk prompt commands are shown in the session above. You can see the new partition /dev/sdd1 is now 2GB in size, but the partition table has not yet been written to disk. Use the w command at the fdisk prompt to write the table.
You may see a warning or error when writing the table, saying the kernel is still using the old partition table. If so, run partprobe -s and you should be good. If you still see the error after partprobe, you need to reboot your system (which is sad).
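For instance, with our example disk:
# partprobe -s /dev/sdd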

Step 4:

Now the rest should be handled by LVM. You need to resize the PV so that LVM recognizes this new space. This can be done with the pvresize command.
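Assuming the partition from our example, the command and its typical output look like this:
# pvresize /dev/sdd1
  Physical volume "/dev/sdd1" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized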
Once the new PV size is learned by LVM, you should see free/extra space available in the VG.
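For example, with the vgs command (the exact figures here are illustrative):
# vgs vg01
  VG   #PV #LV #SN Attr   VSize VFree
  vg01   1   1   0 wz--n- 2.00g 1.00g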
You can see our VG now has 2GB of space, i.e. the size we resized our disk to! Now you can use this space to create a new lvol in this VG or extend an existing lvol using LVM commands, and then grow the filesystem sitting on the logical volume online.
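For example, to grow an existing logical volume and the filesystem on it in one step (the logical volume name mylv is hypothetical; the -r flag makes lvextend resize the filesystem as well):
# lvextend -r -L +1G /dev/vg01/mylv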
You can observe that all lvols in this VG are unaffected by this activity, and the data is still there as it was previously, as the listing of /mydrive below shows.
# ll /mydrive
total 24
drwx------.  2 root root 16384 Jun 23 11:00 lost+found
-rw-r--r--.  1 root root     0 Jun 23 11:01 shri
drwxr-xr-x.  3 root root  4096 Jun 23 11:01 .
dr-xr-xr-x. 28 root root  4096 Jun 23 11:04 ..