How to Configure RAID in Linux Step by Step Guide
This tutorial explains how to view, list, create, add, remove, delete, resize, format, mount and configure RAID levels (0, 1 and 5) in Linux step by step with practical examples. Learn the basic concepts of software RAID (chunk, mirroring, striping and parity) and the essential RAID management commands in detail.



RAID stands for Redundant Array of Independent Disks. There are two types of RAID: Hardware RAID and Software RAID.

Hardware RAID
Hardware RAID is a physical storage device built from multiple hard disks. When connected to the system, all of its disks appear as a single SCSI disk. From the system's point of view there is no difference between a regular SCSI disk and a hardware RAID device; the system can use a hardware RAID device as a single SCSI disk.

Hardware RAID has its own independent disk subsystem and resources. It does not use any resources from the host system such as power, RAM and CPU, so it puts no extra load on the system. Since it has its own dedicated resources, it provides high performance.

Software RAID
Software RAID is a logical storage device built from the disks attached to the system. It uses the resources of the host system, so it provides slower performance but costs nothing. In this tutorial we will learn how to create and manage software RAID in detail.

This tutorial is the last part of our article "Linux Disk Management Explained in Easy Language with Examples". You can read the other parts of this article here.

Linux Disk Management Tutorial
This is the first part of this article. This part explains basic concepts of Linux disk management such as BIOS, UEFI, MBR, GPT, SWAP, LVM, RAID, primary partitions, extended partitions and Linux file system types.

Manage Linux Disk Partition with fdisk Command

This is the second part of this article. This part explains how to create primary, extended and logical partitions with the fdisk command in Linux step by step with examples.

Manage Linux Disk Partition with gdisk Command

This is the third part of this article. This part explains how to create GPT (GUID partition table) partitions with the gdisk command in Linux step by step with examples.

Linux Disk Management with parted command

This is the fourth part of this article. This part explains how to create primary, extended, logical and GPT partitions with the parted command in Linux step by step with examples.

How to create SWAP partition in Linux

This is the fifth part of this article. This part explains how to create a swap partition in Linux with examples, including basic swap management tasks such as how to increase, mount or clear swap memory.

Learn how to configure LVM in Linux step by step

This is the sixth part of this article. This part explains basic concepts of LVM in detail with examples, including how to configure and manage LVM in Linux step by step.

Basic concepts of RAID


A RAID device can be configured in multiple ways. Depending on the configuration, it is classified into one of several levels. Before we discuss RAID levels in more detail, let's have a quick look at some important terminology used in RAID configuration.

Chunk: - This is the size of the data block used in a RAID configuration. If the chunk size is 64KB then there would be 16 chunks in a 1MB (1024KB/64KB) RAID array.

Hot Spare: - This is an additional disk in the RAID array. If any disk fails, the data from the faulty disk is migrated to this spare disk automatically.

Mirroring: - If this option is enabled, a copy of the same data is also saved on another disk. It is similar to making an extra copy of the data for backup purposes.

Striping: - If this option is enabled, data is split and written across all available disks. It is similar to sharing data between all disks, so all of them fill equally.

Parity: - This is the method of regenerating lost data from saved parity information.

Different RAID levels are defined based on how mirroring and striping are combined. Among these levels, only Level 0, Level 1 and Level 5 are mostly used in Red Hat Linux.

RAID Level 0
This level provides striping without parity. Since it does not store any parity data and performs read and write operations on all disks in parallel, it is much faster than the other levels. This level requires at least two hard disks. All hard disks in this level are filled equally. You should use this level only if read and write speed is the concern. If you decide to use this level, always deploy an alternative data backup plan, as any single disk failure in the array will result in total data loss.

RAID Level 1
This level provides mirroring without striping. It writes all data to two disks: if one disk fails or is removed, we still have all the data on the other disk. This level requires twice the number of hard disks: to use the capacity of two hard disks you have to install four, and to use the capacity of one hard disk you have to install two. The first hard disk stores the original data while the other disk stores an exact copy of the first. Since data is written twice, performance is reduced. You should use this level only if data safety matters at any cost.

RAID Level 5
This level provides both parity and striping. It requires at least three disks. It writes parity data equally across all disks. If one disk fails, the data can be reconstructed from the parity data available on the remaining disks. This provides a combination of integrity and performance. Wherever possible you should use this level.

If you want to use a hardware RAID device, use a hot-swappable hardware RAID device with spare disks. If any disk fails, the data will be reconstructed on the first available spare disk without any downtime, and since the device is hot-swappable, you can replace the failed disk while the server is still running.

If the RAID device is properly configured, there will be no difference between software RAID and hardware RAID from the operating system's point of view. The operating system accesses a RAID device as a regular hard disk, no matter whether it is software RAID or hardware RAID.

Linux provides the md kernel module for software RAID configuration. In order to use software RAID we have to configure an md RAID device, which is a composite of two or more storage devices.
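
Before starting, you can confirm that md support is present on your system. This is a quick check, assuming a stock RHEL/CentOS kernel; the exact module names will depend on which RAID levels have been used.

#cat /proc/mdstat
#lsmod | grep raid

If md support is available, /proc/mdstat begins with a "Personalities :" line, and lsmod lists modules such as raid0, raid1 or raid456 once the corresponding level has been activated.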

How to configure software RAID step by step
For this tutorial I assume that you have un-partitioned disk space or additional hard disks for practice. If you are following this tutorial on virtualization software such as VMware Workstation, add three additional hard disks to the system. To learn how to add an additional hard disk to a virtual machine, please see the first part of this tutorial. If you are following this tutorial on a physical machine, attach an additional hard disk. You can use a USB stick or pen drive for practice. For demonstration purposes I have attached three additional hard disks to my lab system.

Each disk is 2GB in size. We can list all attached hard disks with the fdisk -l command.

[Figure: fdisk -l command output]

We can also use the lsblk command to view a structured overview of all attached storage devices.

[Figure: lsblk command output]

As we can see in the above output, there are three un-partitioned disks available, each 2GB in size.
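
To reproduce this listing on your own system, the plain commands below are sufficient; the -o option of lsblk (a convenience, not a requirement) limits the output to the named columns.

#fdisk -l
#lsblk -o NAME,SIZE,TYPE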

The mdadm package is used to create and manage software RAID. Make sure it is installed before we start working with software RAID. To learn how to install and manage packages in Linux, see the following tutorials:

How to configure YUM Repository in RHEL
RPM Command Explained with Example

For this tutorial I assume that the mdadm package is installed.

[Figure: rpm -qa mdadm output]
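
If the package turns out to be missing, it can be checked and installed with the standard RHEL package tools; a minimal sketch:

#rpm -q mdadm
#yum install mdadm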

Creating RAID 0 Array
We can create a RAID 0 array with disks or partitions. To understand both options we will create two separate RAID 0 arrays: one with disks and the other with partitions. A RAID 0 array requires at least two disks or partitions. We will use the /dev/sdc and /dev/sdd disks to create a RAID 0 array from disks. We will create two partitions in /dev/sdb and later use them to create another RAID 0 array from partitions.

To create a RAID 0 array with disks, use the following command:

#mdadm --create --verbose /dev/[RAID array name or number] --level=[RAID level] --raid-devices=[number of storage devices] [storage device] [storage device]
Let's understand this command in detail.

mdadm:- This is the main command.

--create:- This option is used to create a new md (RAID) device.

--verbose:- This option is used to view the real-time progress of the process.

/dev/[RAID array name or number]:- This argument provides the name and location of the RAID array. The md device should be created under the /dev/ directory.

--level=[RAID level]:- This option and its argument define the RAID level to create.

--raid-devices=[number of storage devices]:- This option and its argument specify the number of storage devices or partitions we want to use in this device.

[Storage device]:- This argument specifies the name and location of a storage device.

The following command creates a RAID 0 array named md0 from the disks /dev/sdc and /dev/sdd.

[Figure: mdadm create RAID array]
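
For reference, the exact command behind this figure, rebuilt from the syntax above:

#mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdc /dev/sdd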

To verify the array we can use the cat /proc/mdstat command.

[Figure: cat /proc/mdstat output]

The above output confirms that the RAID array md0 has been successfully created from two disks (sdd and sdc) with a RAID level 0 configuration.

Creating RAID 0 Array with partitions
Create a 1GiB partition with the fdisk command.

[Figure: fdisk creating a new partition]

By default all partitions are created as type Linux. Change the partition type to RAID and save the partition. Exit from fdisk and run the partprobe command to update the running kernel's partition table.

[Figure: fdisk changing the partition type]
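
For readers following along without the figures, the same fdisk session looks roughly like this; the dialogue is abbreviated, and fd is the fdisk type code for Linux raid autodetect.

#fdisk /dev/sdb
  n    (new partition; accept the defaults and enter +1G as the size)
  t    (change the partition type)
  fd   (type code for Linux raid autodetect)
  w    (write the changes and exit)
#partprobe /dev/sdb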

To learn the fdisk command and its sub-commands in detail, please see the second part of this tutorial, which explains how to create and manage partitions with the fdisk command step by step.

Let's create one more partition, but this time with the parted command.

[Figure: creating a new partition with parted]

To learn the parted command in detail, please see the fourth part of this tutorial, which explains how to manage disks with the parted command step by step.

We have created two partitions. Let's build another RAID (level 0) array, but this time from partitions rather than disks.

The same command is used to create a RAID array from partitions.

[Figure: mdadm create command]
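
Assuming the two new partitions came up as /dev/sdb1 and /dev/sdb2, as in my lab, the equivalent command for this figure would be:

#mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdb2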

When we use the mdadm command to create a new RAID array, it puts its signature on the provided device or partition. This means we can create a RAID array from any partition type, or even from a disk which does not contain any partition at all. So which partition type we use here is not important; the important point to always keep in mind is that the partition should not contain any valuable data, because during this process all data on the partition will be wiped out.

Creating File System in RAID Array
We cannot use a RAID array for data storage until it contains a valid file system. The following command is used to create a file system in an array:

#mkfs -t [File system type] [RAID Device]
Let's format md0 with the ext4 file system and md1 with the xfs file system.

[Figure: formatting the md devices]
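
The two format commands from the figure, written out; the mkfs.ext4 and mkfs.xfs front-ends would work equally well:

#mkfs -t ext4 /dev/md0
#mkfs -t xfs /dev/md1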

Both RAID 0 arrays are ready to use. In order to use them we have to mount them somewhere in the Linux file system. The Linux file system (the primary directory structure) starts with the root (/) directory, and everything else goes under it or its subdirectories. We have to mount the arrays somewhere under this directory tree. We can mount them temporarily or permanently.

Mounting RAID 0 Array temporarily
The following command is used to mount an array temporarily:

#mount [what to mount] [where to mount]
The mount command accepts several options and arguments which I will explain separately in another tutorial. For this tutorial this basic syntax is sufficient.

What to mount:- This is the array.

Where to mount:- This is the directory which will be used to access the mounted resource.

Once mounted, whatever action we perform in the mount directory is actually performed on the mounted resource. Let's understand it practically:

Create a mount directory in the / directory
Mount the /dev/md0 array
List the content
Create a test directory and file
List the content again
Un-mount the /dev/md0 array and list the content again
Now mount the /dev/md1 array and list the content
Again create a test directory and file, using different names for the file and directory
List the content
Un-mount the /dev/md1 array and list the content again
The following figure illustrates this exercise step by step.

[Figure: temporary mount exercise]
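
A condensed transcript of the same exercise; the mount point /testing and the test file and directory names are illustrative choices, not requirements.

#mkdir /testing
#mount /dev/md0 /testing
#ls /testing
#mkdir /testing/test-dir-md0
#touch /testing/test-file-md0
#ls /testing
#umount /testing
#ls /testing
#mount /dev/md1 /testing
#mkdir /testing/test-dir-md1
#touch /testing/test-file-md1
#ls /testing
#umount /testing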

As the above figure shows, whatever action we performed in the mount directory was actually performed on the respective array.

The temporary mount option is good for an array which we access only occasionally. If we access the array on a regular basis, this approach will not help: each time we reboot the system, all temporarily mounted resources are un-mounted automatically. So if we have an array which is going to be used regularly, we should mount it permanently.

Mounting RAID Array permanently
Every resource in the file system has a unique ID called a UUID. When mounting an array permanently we should use its UUID instead of its name. From version 7, RHEL also uses UUIDs instead of device names.

UUID stands for Universally Unique Identifier. It is a 128-bit number, expressed in hexadecimal (base 16) format.

If you have a static environment, you may use device names. But if you have a dynamic environment, you should always use UUIDs, because in a dynamic environment a device name may change on every boot. For example, suppose we attached an additional SCSI disk to the system and it was named /dev/sdb, and we mounted this disk permanently by its device name. Now suppose someone removed this disk and attached a new SCSI disk in the same slot. The new disk will also be named /dev/sdb. Since the old and new disks have the same name, the new disk will be mounted in place of the old one. In this way, device names can create serious problems in a dynamic environment. UUIDs solve this problem: no matter how we attach a resource to the system, its UUID always stays the same.

To find the UUIDs of all partitions we use the blkid command. To find the UUID of a specific partition we pass its name as an argument to this command.

[Figure: blkid command output]

Once we know the UUID, we can use it instead of the device name. We can also copy and paste the UUID instead of typing it.

Use the blkid command to print the UUID of the array.
Copy the UUID of the array.
Use the mount command to mount the array, pasting the UUID instead of typing it.
The following figure illustrates these steps.

[Figure: temporary mount with UUID]
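
In command form the steps look like this; the placeholder must be replaced with the actual UUID that blkid prints on your system.

#blkid /dev/md0
#mount UUID="<uuid-printed-by-blkid>" /testing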

When the system boots, it looks in the /etc/fstab file to find the devices (partitions, LVs, swap or arrays) which need to be mounted in the file system automatically. By default this file has entries for the partitions, logical volumes and swap space which were created during installation. To mount any additional device (array) automatically, we have to make an entry for that device in this file. Each entry in this file has six fields.

[Figure: default fstab file]

Number  Field            Description
1       What to mount    The device we want to mount. We can use the device name, UUID or label in this field to represent the device.
2       Where to mount   The directory in the main Linux file system where we want to mount the device.
3       File system      The file system type of the device.
4       Options          Just like the mount command, we can use the supported options here to control the mount process. For this tutorial we will use the default options.
5       Dump support     To enable dump on this device use 1. Use 0 to disable dump.
6       Automatic check  Whether this device should be checked while mounting or not. To disable use 0, to enable use 1 (for the root partition) or 2 (for all partitions except the root partition).
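
As an illustration, entries for our two arrays could look like the following. The mount directories /raid0 and /raid1 are example names, and the UUID placeholder must be replaced with the real value from blkid.

/dev/md0             /raid0   ext4   defaults   0 0
UUID=<uuid-of-md1>   /raid1   xfs    defaults   0 0
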
Let's make some directories to mount the arrays which we created earlier.

[Figure: mkdir command]

Take a backup of the fstab file and open it for editing.

[Figure: /etc/fstab backup]

Make entries for the arrays and save the file.

[Figure: fstab entries]

For demonstration purposes I used both the device name and the UUID to mount the partitions. After saving, always check the entries with the mount -a command. This command mounts everything listed in the /etc/fstab file, so if we made any mistake while updating this file, we will get an error as the output of this command.

If you get any error as the output of the mount -a command, correct it before rebooting the system. If there is no error, reboot the system.

[Figure: mount -a command]

The df -h command is used to check the available space in all mounted partitions. We can use this command to verify that all partitions are mounted correctly.

[Figure: df -h command output]

The above output confirms that all partitions are mounted correctly. Let's list both RAID devices.

[Figure: listing md devices]

How to delete RAID Array
We cannot delete a mounted array. Un-mount all the arrays which we created in this exercise.

[Figure: umount command]

Use the following command to stop the RAID array:

#mdadm --stop /dev/[Array Name]
[Figure: mdadm --stop command]
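
Put together, tearing down the first array looks like this (using the example mount point from above):

#umount /raid0
#mdadm --stop /dev/md0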

Remove the mount directories and copy the original fstab file back.

If you haven't taken a backup of the original fstab file, remove all the entries you made in this file.

[Figure: restore fstab file]

Finally, reset all disks used in this exercise.

[Figure: dd command]

The dd command is the easiest way to reset a disk. Disk utilities store their configuration parameters in a superblock. Usually the superblock size is defined in KB, so we simply overwrote the first 10MB of space with null bytes on each disk. To learn the dd command in detail, see the fifth part of this tutorial, which explains this command in detail.
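
The reset command used here, written out for one disk; repeat it for each disk used in the exercise, and double-check the target device name, since dd overwrites data without asking.

#dd if=/dev/zero of=/dev/sdb bs=1M count=10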

Now reboot the system and use the df -h command again to verify that all the RAID devices which we created in this exercise are gone.

[Figure: df -h command output]

How to create RAID 1 and RAID 5 arrays
We can create a RAID 1 or RAID 5 array by following the same process. All steps and commands are the same except the mdadm --create command, in which you have to change the RAID level, the number of disks and the location of the associated disks.

To create a RAID 1 array from the /dev/sdd and /dev/sdb disks, use the following command.

[Figure: RAID 1 array created from disks]

To create a RAID 1 array from the /dev/sdb1 and /dev/sdb2 partitions, use the following command.

[Figure: RAID 1 array created from partitions]
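
The corresponding commands, reconstructed from the general syntax shown earlier; the array name md0 is an assumption, and the two commands belong to separate exercises with the disks reset in between.

#mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdd /dev/sdb
#mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdb2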

You may get a metadata warning if you have used the same disks and partitions to create a RAID array previously and those disks or partitions still contain metadata information. Remember, we cleaned only the first 10MB, leaving the remaining space untouched. You can safely ignore this message or clean the whole disk before using it again.

To create a RAID 5 array from the /dev/sdb, /dev/sdc and /dev/sdd disks, use the following command.

[Figure: RAID 5 array created from disks]

A RAID 5 configuration requires at least three disks or partitions. That's why we used three disks here.

To create a RAID 5 array from the /dev/sdb1, /dev/sdb2 and /dev/sdb3 partitions, use the following command.

[Figure: RAID 5 array created from partitions]
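
And the RAID 5 equivalents, again with the array name as an assumption and the two commands run as separate exercises:

#mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
#mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdb2 /dev/sdb3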

To avoid unnecessary errors, always reset disks before using them in a new practice exercise.

So far in this tutorial we have learned how to create, mount and remove a RAID array. In the following section we will learn how to manage and troubleshoot a RAID array. For this section I assume that you have at least one array configured. For demonstration purposes I will use the last configured example (RAID 5 with three partitions). Let's create a file system in this array and mount it.

[Figure: temporarily mounting the md device]

Let's put some dummy data in this directory.

[Figure: dummy data]

I redirected the manual page of the ls command into the /testingdata/manual-of-ls-command file. Later, to verify that the file contains actual data, I used the wc command, which counts the lines, words and characters of a file.

How to view the details of a RAID device
The following command is used to view detailed information about a RAID device:

#mdadm --detail /dev/[RAID Device Name]
This information includes the RAID level, the array size, the used size out of the total available size, the devices used in creating this array, the devices currently in use, spare devices, failed devices, chunk size, the UUID of the array and much more.

[Figure: mdadm --detail output]
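
For the running example this would be the following, assuming the RAID 5 array was created as /dev/md0:

#mdadm --detail /dev/md0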

How to add an additional disk or partition in RAID
There are several situations where we have to increase the size of a RAID device, for example when a RAID device is filled up with data or when a disk in the array has failed. To increase the space of a RAID device we have to add an additional disk or partition to the existing array.

In the running example we used the /dev/sdb disk to create three partitions. The /dev/sdc and /dev/sdd disks are still available to use. Before we add them to this array, make sure they are cleaned. Last time we used the dd command to clean the disks. We can use that command again, or use the following command:

#mdadm --zero-superblock /dev/[Disk name]
To check whether a disk contains a superblock or not, we can use the following command:

#mdadm --examine /dev/[Disk name]
The following figure illustrates the use of both commands on both disks.

[Figure: mdadm --examine output]
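
Written out for one of the two disks (the other is handled identically); after --zero-superblock, --examine should report that no md superblock is detected.

#mdadm --zero-superblock /dev/sdc
#mdadm --examine /dev/sdc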

Now both disks are ready for the RAID array. The following command is used to add an additional disk to an existing array:

#mdadm --manage /dev/[RAID Device] --add /dev/[disk or partition]
Let's add the /dev/sdc disk to this array and verify the same.

[Figure: mdadm --add output]
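
The add command for our example, with the array name assumed to be /dev/md0 as before; --detail then shows the new disk listed as a spare.

#mdadm --manage /dev/md0 --add /dev/sdc
#mdadm --detail /dev/md0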

Right now this disk has been added as a spare disk. It will not be used until another disk fails in the existing array or we manually force RAID to use it.

If any disk fails and spare disks are available, RAID will automatically select the first available spare disk to replace the faulty disk. Spare disks are the best backup plan in a RAID device.

We will add another disk to the array for backup later; for now, let's use this disk to increase the size of the array. The following command is used to grow the size of a RAID device:

#mdadm --grow --raid-devices=[Number of devices] /dev/[RAID Device]
RAID arranges all devices in a sequence, built from the order in which the disks were added to the array. When we use this command, RAID adds the next available device to the active devices.

The following figure illustrates this command.

[Figure: mdadm --grow output]
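
In our example the array had three active devices, so growing it to use the newly added disk looks like this (array name assumed):

#mdadm --grow --raid-devices=4 /dev/md0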

As we can see in the above output, the disk has been added to the array and the size of the array has been successfully increased.

Removing faulty device
If a spare device is available, RAID automatically replaces a faulty device with the spare device. The end user will not see any change and will be able to access the data as usual. Let's understand it practically.

Right now there is no spare disk available in the array, so let's add one spare disk.

[Figure: adding a spare disk with mdadm]

When a disk fails, RAID marks that disk as a failed device. Once marked, it can be removed safely. If we want to remove a working device from the array for maintenance or troubleshooting purposes, we should always mark it as a failed device before removing it. When a device is marked as failed, all data from the failed device is reconstructed on the working devices.

To mark a disk as a failed device, the following command is used:

#mdadm --manage --set-faulty /dev/[Array Name] /dev/[Faulty Disk]
We recently increased the size of this array. So before doing this practice, let's verify once more that the array still contains valid data.

[Figure: wc command output]

The above output confirms that the array still contains valid data. Now let's mark the device /dev/sdc as a faulty device in the array and confirm the operation.

[Figure: mdadm set faulty disk]
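
The commands behind this figure, plus the removal step that becomes safe once the device is marked as failed and the rebuild onto the spare has finished; the array name is assumed, as before.

#mdadm --manage --set-faulty /dev/md0 /dev/sdc
#mdadm --detail /dev/md0
#mdadm --manage /dev/md0 --remove /dev/sdc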

The above output confirms that the device sdc, which is number four in the array sequence, has been marked as a failed [F] device.

As we know, if a spare disk is available, it will be used as the replacement for the faulty device automatically. No manual action is required in this process. Let's confirm that the spare disk has been used as the replacement for the faulty disk.

[Figure: spare disk replacing the faulty device]

Finally, let's verify that the data is still present in the array.

[Figure: verifying data with wc]

The above output confirms that the array still contains valid data.