How to Configure RAID-1 in Red Hat Linux

Configuring RAID-1 (Mirroring)

RAID-1 (mirroring) means keeping the same data on both hard disks, i.e. an exact clone of the data is maintained on each disk.

Minimum Requirements to configure RAID-1:

1. A minimum of 2 hard disks is required (you can also use more than two disks, e.g. 4, 6, or 8, but in that case the server should have a physical RAID adapter installed)

Advantages of RAID-1

1. Read performance is better than write performance, since reads can be served from either disk

2. If one disk fails, there is no data loss, since the same data is present on both disks.

3. 50% of the space is lost, i.e. if we have two 250 GB disks (500 GB in total), mirroring presents only 250 GB of usable space.

Configure RAID-1

Step:1 Check whether a RAID is already configured on the drives by using the below command

#mdadm  -E  /dev/sd[b-c]
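If no RAID has ever been configured on these disks, mdadm typically prints a message like the following for each drive (the exact wording can vary between versions):

mdadm: No md superblock detected on /dev/sdb.
mdadm: No md superblock detected on /dev/sdc.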

As you can see from the above output, there is no RAID superblock detected, i.e. no RAID has been defined yet.

Step:2 Partition the drives for RAID

As I have mentioned already, a minimum of 2 hard disks is required to configure RAID-1. I have attached two disks, /dev/sdb and /dev/sdc, for configuring RAID-1. Let us create partitions on these two hard disks and change the partition type to RAID during creation.

Procedure to create RAID type partitions on the drives (a sketch of the interactive session is shown after the fdisk command in Step:3 below)

1. Choose ‘n’ to create a new partition

2. Then choose ‘p’ for a primary partition

3. Now select the partition number ‘1’

4. Accept the default first and last sectors to use the full disk size and press Enter

5. Now press ‘p’ to print and check the partition you created

6. Type ‘t’ to change the partition type

7. Press ‘L’ to list all the available partition type codes

8. Now choose ‘fd’ (Linux raid autodetect) and press Enter

9. Choose ‘p’ once again to confirm the changes we made

10. Type ‘w’ to save the changes and exit

Step:3 Create the RAID partition on /dev/sdb

#fdisk /dev/sdb
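For reference, a rough sketch of the interactive fdisk session that the above steps describe is shown below; the exact prompt wording varies between fdisk versions, and the sector values are just the defaults.

Command (m for help): n
Partition type (p for primary): p
Partition number (1-4, default 1): 1
First sector: <press Enter to accept the default>
Last sector: <press Enter to use the full disk>
Command (m for help): p
Command (m for help): t
Hex code (type L to list all codes): fd
Command (m for help): p
Command (m for help): w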

 

Follow the same procedure above to create the partition on the /dev/sdc drive.

Step:4 Create the RAID partition on /dev/sdc

#fdisk /dev/sdc

Step:5 Verify the partitions

We have successfully created the partitions on both drives. Verify the changes on both drives, /dev/sdb and /dev/sdc, using the following command

#mdadm  -E /dev/sd[b-c]1

From the above output, you will see that no md superblock is detected, because no RAID has been defined yet on /dev/sdb1 and /dev/sdc1.

Step:6 Create the RAID device /dev/md0 using the following syntax

#mdadm  --create  /dev/md0  --level=raid1  --raid-devices=2  /dev/sdb1  /dev/sdc1

After executing the above command, check the RAID-1 status by using the below syntax

#cat  /proc/mdstat
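Typical /proc/mdstat output for a healthy two-disk mirror looks roughly like the sketch below; the block count shown here is only an assumed example and will differ on your system.

Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      10484736 blocks [2/2] [UU]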

 

Step:7 Check the RAID array and device details by using the following commands

#mdadm  -E /dev/sd[b-c]1

#mdadm  --detail   /dev/md0

From the above output, the RAID device has been successfully created using the /dev/sdb1 and /dev/sdc1 partitions.
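To have the array assembled automatically at boot, it is common practice to record its details in the mdadm configuration file. A minimal sketch, assuming the configuration file is /etc/mdadm.conf as on Red Hat based systems:

#mdadm --detail --scan >> /etc/mdadm.conf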

Step:8 Create a file system on the RAID device

#mkfs  /dev/md0
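Note that mkfs without a type defaults to ext2. If you want a specific filesystem such as ext4, you can name it explicitly; a sketch, assuming ext4 is the desired type:

#mkfs.ext4  /dev/md0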

Step:9 Mount the file system on a mount point directory

#mkdir  /RAID

#mount  /dev/md0    /RAID

To check the mounted filesystem status

#df -h

From the above output, the RAID device is successfully mounted under the /RAID directory.
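To make the mount persistent across reboots, an entry is normally added to /etc/fstab. A minimal sketch, assuming an ext4 filesystem was created on /dev/md0 (adjust the type to match whatever mkfs actually created):

/dev/md0    /RAID    ext4    defaults    0 0

You can then run #mount -a to verify that the entry mounts cleanly.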


 

 

Basic concepts of RAID

RAID (Redundant Array of Independent Disks):

RAID is a way of linking several hard disks together so that if any one of them fails, the others can take over the load.

Types of RAID:

1) Hardware RAID

2) Software RAID

Hardware RAID: It has its own independent disk subsystem and resources. It does not use system resources such as RAM and CPU. Since it has its own dedicated resources, it does not put any additional load on the system, and it also provides very high performance.

Software RAID: Compared to hardware RAID, software RAID delivers lower performance, since it uses the system's own resources (CPU and RAM).

Concepts of RAID:

1. Hot spare: This is an additional disk in the RAID array; if any disk fails, the data from the faulty disk is automatically rebuilt on the spare disk.

2. Mirroring: A copy of the same data is kept on the mirror disk, like maintaining an additional copy of the data.

3. Striping: If this feature is enabled, data is split into chunks and written across all the available disks; it is like sharing the data between all disks, so they all fill up evenly.

4. Parity: Parity allows lost data to be regenerated from the saved parity information.

There are different RAID levels available, depending on how mirroring and striping are combined. Among these levels, LEVEL 0, LEVEL 1, and LEVEL 5 are the most commonly used in Red Hat Linux.

Let us have a look at these different levels of RAID

RAID 0 - Striping:

It provides striping without parity. Since it does not store any parity data, read and write operations are spread evenly across the disks, and speed is much faster than with the other levels. A minimum of two hard disks is required for this level, and all the hard disks in this level are filled equally. You should use this level only if read and write speed is the main concern. When you decide to use this level, always have a backup plan for your data, because a single disk failure in the array will result in total data loss.

RAID 1 - Mirroring:

This level provides mirroring without striping or parity: the same data is written to both hard disks, so if either of them fails or is removed, we can still access the data. This level requires a minimum of two hard disks. The usable capacity is half of the total, i.e. to get the capacity of two disks you have to deploy four, and to get the capacity of one disk you have to deploy two. The first disk stores the original data while the other disk keeps an exact copy of it. Write performance is reduced since every block is written twice. You should use this level when the data must be protected at any cost.

RAID 5 - Parity with Striping:

This level provides parity and striping. It requires a minimum of 3 hard disks. Parity data is distributed equally across all the disks. If any one disk fails, the data can be reconstructed from the parity information available on the remaining disks.
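For reference, creating these levels with the software RAID tool mdadm looks roughly like the sketch below; the member partitions /dev/sdb1, /dev/sdc1, and /dev/sdd1 are only assumed examples.

#mdadm --create /dev/md0 --level=raid0 --raid-devices=2 /dev/sdb1 /dev/sdc1

#mdadm --create /dev/md0 --level=raid5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1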

NOTE: When you are using a hardware RAID device, use a hot-swappable RAID device with spare disks; if any disk fails, the data will be reconstructed on the first available spare disk without any downtime.

In our next article, I will show you how to configure RAID in Red Hat Linux.


 

How to Extend the Size of a Volume Group and Logical Volumes (LVM)

Extending a Volume Group/Logical Volume (LVM)

In our previous article we covered the basics of LVM and how to configure PVs, VGs, and LVs. Here we are going to see how to increase the size of an existing volume group and logical volume. As stated earlier, the biggest advantage of the Logical Volume Manager is that it allows us to increase the size of logical volumes at any time when we are running out of space.

If you missed my previous  Basic LVM article you can visit here  Understanding LVM

Now, in our case we have three PVs, one VG, and four LVs. Check the details by using the following commands

#pvs
#vgs
#lvs

As you can see from the above output, we do not have enough space available in the physical volumes and the volume group. For example, if one of the logical volumes needs an additional 10 GB, would it be possible to add that extra 10 GB to it? No, we could not extend it, as we do not have enough space in the VG.

To extend it, we need to add one physical volume (PV) and then extend the volume group (VG); by extending the VG we will get enough space to increase the logical volume size. So first we will add one physical volume.

To add the PV, we first need to create an LVM partition with the "fdisk" command.

NOTE: YOU CAN ALSO ADD A NEW PHYSICAL HARD DISK TO EXTEND THE SIZE OF THE VG AND LV

1. To create a new partition, press n

2. To choose a primary partition, press p

3. Choose the partition number for the new primary partition

4. Press 3 (because I have already created two partitions)

5. Change the partition type by pressing t

6. Type 8e to change the partition type to Linux LVM

7. Press w to write the changes

Reboot the system once this is completed so that the new partition table is read.
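If the partprobe utility (part of the parted package) is installed, you can often ask the kernel to re-read the partition table instead of rebooting; a sketch, assuming the new partition was created on /dev/sda:

#partprobe /dev/sda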

Now check the partition we have created with fdisk

#fdisk   -l   /dev/sda

Now create a PV(Physical volume) using the following command

#pvcreate  /dev/sda3

Check the PV details

#pvs or pvdisplay

Extending the size of the Volume Group(VG)

Now add this newly created PV to the volume group VG1 to grow the volume group and get more space for expanding the logical volume (LV).

Syntax:

#vgextend   <Volume group name>    <Physical volume name>
#vgextend   VG1   /dev/sda3

Now let us check the size of the volume group by using the following command

#vgs

As you can see from the above output, the volume group VG1 has now been extended from 3.99 GB to 19.09 GB.

If you want to check which PVs are used by a particular volume group, run the following command

#pvscan

As you can see from the above screenshot, each PV and its associated VG name are listed; the PV we have just added is still totally free.

Extending the size of the Logical Volume (LV)

Before we expand the size, let us check the size of each logical volume

#lvdisplay or lvs

For a better view, the output has been truncated.

In this example I am going to expand the size of the logical volume lv1 (the current lv1 size is 2 GB).

I will add an additional 10 GB to the logical volume lv1.

syntax:

#lvextend  -L <+size>  <logical volume path>
#lvextend -L +10G  /dev/VG1/lv1

As you can see from the above screenshot, the logical volume has now been extended from 2 GB to 12 GB.

After extending we need to resize the filesystem by using the following command

Before you run the resize2fs command, you must run the e2fsck command to check the filesystem.

#e2fsck -f  /dev/VG1/lv1

e2fsck is used to check the integrity of ext2/ext3/ext4 filesystem types.

Note: resize2fs will not run unless you execute e2fsck first.

#resize2fs  /dev/VG1/lv1
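On recent LVM versions, the extend and the filesystem resize can also be done in one step with the -r (--resizefs) option of lvextend, which runs the appropriate filesystem resize for you; a sketch using the same sizes as above:

#lvextend -r -L +10G /dev/VG1/lv1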

Now let us see the size of the resized logical volume by using lvdisplay

#lvdisplay

As you can see from the above output, after extending, the logical volume size is now 12.00 GB.

Now, if we check the available VG size

#vgs

The above output shows that the available VG free space has changed from 19.09 GB to 9.09 GB.

I hope you now have a good idea of LVM concepts and of resizing LVs and VGs.

Never miss an article from the Vasanth blog: follow my official FB page for updates 👉 Learn Linux in an easier way
If you found this article useful, kindly subscribe here 👉 Click this link to Subscribe