RAID 10, also called RAID 1+0 and sometimes RAID 1&0, is similar to RAID 01, with the exception that the two standard RAID levels it combines are layered in the opposite order; thus, RAID 10 is a stripe of mirrors.
RAID 10, as recognized by the Storage Networking Industry Association (SNIA) and as generally implemented by RAID controllers, is a RAID 0 array of mirrors, which may be two- or three-way mirrors, and requires a minimum of four drives. However, a nonstandard definition of "RAID 10" was created for the Linux MD driver; Linux "RAID 10" can be implemented with as few as two disks. Implementations supporting two disks, such as Linux MD RAID 10, offer a choice of layouts. Arrays of more than four disks are also possible.
According to manufacturer specifications and independent benchmarks, in most cases RAID 10 provides better throughput and lower latency than all other RAID levels except RAID 0 (which wins in throughput). It is therefore the preferred RAID level for I/O-intensive applications such as database, email, and web servers, as well as for any other use requiring high disk performance.
Make note of the names of the disks in your system. Linux names them /dev/sda through /dev/sdz, then /dev/sdaa through /dev/sdaz, and so on. You can use the command "ls -l /dev" to list all the devices in the system, or use a tool such as lsblk or the Disk Utility to make note of the drive names.
For example, in a 45-drive machine with two redundant boot drives, the data disks present in the system will be /dev/sdc to /dev/sdz and /dev/sdaa to /dev/sdau.
In a 60-drive machine with two redundant boot drives, the disks present will be /dev/sdc to /dev/sdz, /dev/sdaa to /dev/sdaz, and /dev/sdba to /dev/sdbj.
In a 30-drive machine with two redundant boot drives, the disks present will be /dev/sdc to /dev/sdz and /dev/sdaa to /dev/sdaf.
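As a quick sanity check, shell brace expansion can generate these name sequences without the devices having to exist. The examples below reproduce the three chassis sizes above, assuming (as in those examples) that /dev/sda and /dev/sdb are the boot drives:

```shell
# Data-disk name ranges for the chassis sizes above, generated with
# brace expansion (data disks start at sdc; sda and sdb are boot drives).
echo /dev/sd{c..z} /dev/sda{a..u} | wc -w                  # 45-drive: prints 45
echo /dev/sd{c..z} /dev/sda{a..z} /dev/sdb{a..j} | wc -w   # 60-drive: prints 60
echo /dev/sd{c..z} /dev/sda{a..f} | wc -w                  # 30-drive: prints 30
```

Counting the expansion this way is an easy check that a drive range you are about to pass to mdadm covers exactly the disks you expect.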
To build your RAID 10 you are going to need an even number of disks (at least four).
The following is how to construct a 44-disk RAID 10.
A 44-disk RAID 10 is 22 two-disk RAID 1 mirrors striped together.
1) Build your RAID10
root@Proto:~# mdadm --create /dev/md0 --level=10 --raid-devices=44 /dev/sd[cdefghijklmnopqrstuvwxyz] /dev/sda[abcdefghijklmnopqrst]
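Before running the create command, it is worth confirming that the two device patterns really cover 44 disks. The bracket globs above expand only against device nodes that actually exist; the equivalent brace expansion below reproduces the same list without needing the hardware:

```shell
# The mdadm command above names sdc..sdz (24 disks) plus sdaa..sdat
# (20 disks); brace expansion confirms the total without the devices.
echo /dev/sd{c..z} /dev/sda{a..t} | wc -w   # prints 44
```

If the count is wrong, mdadm will refuse to create the array (or worse, build it on the wrong disks), so this one-liner is cheap insurance.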
2) Wait for your RAID 10 to finish syncing. The following command will give you real-time updates:
root@Proto:~# watch cat /proc/mdstat
You will notice the resync speed is around 200,000 KB/s. Since this process does not involve any parity calculation, the speed at which it resyncs is not CPU-bound and can go much faster than the 200,000 KB/s limit the md driver defaults to.
3) By using the following command you can significantly speed up the syncing of a mirrored array:
root@Proto:~# echo 1000000 > /proc/sys/dev/raid/speed_limit_max
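Before raising the limit you can inspect the current values. On most kernels the defaults are a 1,000 KB/s floor and a 200,000 KB/s ceiling; these files exist only when the md driver is loaded:

```shell
# Show the kernel's current md resync speed limits (in KB/s per device).
cat /proc/sys/dev/raid/speed_limit_min   # default is typically 1000
cat /proc/sys/dev/raid/speed_limit_max   # default is typically 200000
```

Note that values written here with echo do not survive a reboot; make the change persistent via sysctl configuration if you want it permanent.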
4) Add the new RAID array to /etc/mdadm.conf (/etc/mdadm/mdadm.conf on Debian):
root@Proto:~# mdadm --detail --scan >> /etc/mdadm.conf
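The line mdadm appends will look roughly like the following; the name and UUID shown here are placeholders, and yours will differ:

```
ARRAY /dev/md0 metadata=1.2 name=Proto:0 UUID=00000000:00000000:00000000:00000000
```

It is worth opening the file afterwards to confirm the array was appended and that no duplicate ARRAY lines have accumulated from earlier runs.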
5) Create a filesystem on top of the array. This example uses XFS.
root@Proto:~# mkfs.xfs -L Backup /dev/md0
6) Create a mount point for your array. Note you can call it whatever you would like.
root@Proto:~# mkdir /mnt/data0
7) Add the following lines to /etc/fstab:
/dev/md0 /mnt/data0 xfs auto,rw 0 0
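Since md device numbering can change between boots, one alternative (an option, not part of the original procedure) is to mount by the filesystem label that was set with mkfs.xfs -L Backup in step 5:

```
LABEL=Backup /mnt/data0 xfs auto,rw 0 0
```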
8) Mount all the volumes in the system
root@Proto:~# mount -av
NOTE: You will need to set the read and write permissions for your newly mounted filesystem. There are a number of ways to set permissions using chown and chmod, and the method you choose will depend heavily on your particular security needs. You are encouraged to read the links provided to help with this process.
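As one illustration only: the group name "backupusers" and the 775 mode below are assumptions, not a recommendation from this guide. The runnable part uses a scratch directory standing in for /mnt/data0 so it works even without the array present:

```shell
# Illustrative permissions setup; "backupusers" is a hypothetical group
# and 775 (owner/group rwx, others r-x) is just one reasonable choice.
#   chown root:backupusers /mnt/data0
#   chmod 775 /mnt/data0
# Demonstrated on a scratch directory standing in for the mount point:
dir=$(mktemp -d)
chmod 775 "$dir"
stat -c %a "$dir"   # prints 775
```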