How to check current RAID configuration in Linux

RAID is an acronym for Redundant Array of Independent Disks. It is nothing but a single virtual device built out of disk drives or partitions. Some RAID levels include redundancy and so can survive some degree of device failure. Linux supports the following RAID devices:

RAID0 (striping)
RAID1 (mirroring)
RAID4
RAID5
RAID6
RAID10
MULTIPATH
FAULTY
CONTAINER
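
Whether a given level is usable at run time depends on the kernel having the matching md "personality" loaded. One quick way to check (a small aside, assuming the md driver is in use at all) is to read the first line of /proc/mdstat, which lists the loaded personalities:

head -1 /proc/mdstat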
Check RAID configuration in Linux
The /proc/mdstat file is a special file that stores essential information about all currently active RAID devices. Type the following cat command:
cat /etc/mdadm.conf

Or
cat /proc/mdstat

Linux: check your current RAID configuration
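
Here is an illustrative reconstruction of what that output looks like for the five-partition RAID 10 array described below (/dev/md125); the block count, chunk size, and layout are taken from the details later in this article, and will differ on your system:

Personalities : [raid10]
md125 : active raid10 sde3[3] sdb3[2] sdc3[1] sdd3[4] sda3[0]
      1214095360 blocks super 1.2 512K chunks 2 near-copies [5/5] [UUUUU]

unused devices: <none>

Any additional arrays (such as md126 and md127 below) are listed in the same format.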
From the above output, it is clear that I have a RAID 10 virtual device made of five disk partitions as follows:

md125 – RAID device file name
active raid10 – RAID type
sde3[3] sdb3[2] sdc3[1] sdd3[4] sda3[0] – RAID 10 device named /dev/md125 made of 5 partitions (also known as “component devices”)
[UUUUU] – Shows the status of each RAID member disk/partition. The “U” means the device is healthy and up/running. The “_” means the device is down or broken
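
Because /proc/mdstat is updated in real time, it is also a convenient way to keep an eye on a resync or rebuild as it runs. A minimal sketch (the two-second refresh interval is arbitrary):

watch -n 2 cat /proc/mdstat

Press Ctrl+C to exit watch.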
Reviewing RAID configuration in Linux
To determine whether a specific device is a RAID device or a component device, run:
# mdadm --query /dev/DEVICE
# mdadm --query /dev/md125
# mdadm --query /dev/md125 /dev/md126 /dev/md127

/dev/md125: 1157.85GiB raid10 5 devices, 0 spares. Use mdadm --detail for more detail.
/dev/md126: 4.98GiB raid10 5 devices, 0 spares. Use mdadm --detail for more detail.
/dev/md127: 1281.00MiB raid10 5 devices, 0 spares. Use mdadm --detail for more detail.
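To get a one-line summary of every active array at once (handy when regenerating /etc/mdadm.conf), you can also run a scan. This is a sketch; the exact fields printed depend on your mdadm version:

# mdadm --detail --scan

The output is one ARRAY line per device, along the lines of ARRAY /dev/md125 metadata=1.2 name=localhost.localdomain:root UUID=4afdd8e1:a827d278:b1613938:cdc0a6ef.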
Let us look at the RAID device /dev/md125 in more detail by executing the following command:
# mdadm --detail /dev/md125

How to check RAID configuration in Red Hat Linux
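
The --detail output is fairly long. If you only want the array state and device counters (for example in a monitoring script), filtering it with grep is enough; a minimal sketch, assuming these are the lines you care about:

# mdadm --detail /dev/md125 | grep -E 'State :|Active Devices|Working Devices|Failed Devices|Spare Devices'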
Finally, to see information about the component device named /dev/sdd3, run:
# mdadm --examine /dev/sdd3

Sample outputs:

/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 4afdd8e1:a827d278:b1613938:cdc0a6ef
           Name : localhost.localdomain:root
  Creation Time : Sun Jun 25 19:07:43 2017
     Raid Level : raid10
   Raid Devices : 5

 Avail Dev Size : 971276288 (463.14 GiB 497.29 GB)
     Array Size : 1214095360 (1157.85 GiB 1243.23 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=0 sectors
          State : clean
    Device UUID : b6d9043e:fc1c8b6e:e82f970f:edf597e9

Internal Bitmap : 8 sectors from superblock
    Update Time : Sat Dec 15 00:44:25 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 7c314cad - correct
         Events : 21001

         Layout : near=2
     Chunk Size : 512K

   Device Role : Active device 4
   Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
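
To repeat this check across every member of the array in one go, you can pass all component partitions to mdadm --examine and filter the interesting lines (the sd[a-e]3 glob matches the five partitions listed earlier; adjust it for your own disks):

# mdadm --examine /dev/sd[a-e]3 | grep -E '^/dev/|Device Role|Array State'

A healthy array reports 'A' for every member in the Array State line.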