Creating RAID Arrays on Ubuntu 22.04 using mdadm
The mdadm utility uses Linux's software RAID capabilities to create and manage storage arrays. Administrators can combine individual storage devices into logical storage devices that offer greater performance or redundancy.
This guide will help you configure various RAID setups on an Ubuntu 22.04 server.
Requirements
To follow the steps in this guide, you will need:
- A non-root user with sudo privileges on an Ubuntu 22.04 server. To learn how to set up an account with these privileges, follow our Ubuntu 22.04 initial server setup guide.
- A basic understanding of RAID terminology and concepts. To learn more about RAID and what RAID level is right for you, read our introduction to RAID article.
- Multiple raw storage devices available on your server. The examples in this tutorial demonstrate how to configure various types of arrays on the server. As such, you will need some drives to configure.
- Depending on the array type, you will need two to four storage devices. These drives do not need to be formatted prior to following this guide.
Resetting Existing RAID Devices (Optional)

Note: If you have not set up any arrays yet, you can skip this section for now. This guide introduces a number of different RAID levels. If you want to follow along and complete each RAID level for your devices, you will likely want to reuse your storage devices after each section. Refer back to this section whenever you need to reset your component storage devices before testing a new RAID level.
Warning: This process will completely destroy the array and any data written to it. Make sure that you are operating on the correct array and that you have copied off any data you need to retain before destroying the array.

Start by finding the active arrays in the /proc/mdstat file:
- cat /proc/mdstat
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid0 sdc[1] sdd[0]
      209584128 blocks super 1.2 512k chunks

unused devices: <none>
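The mdstat format can be awkward to read by eye. As a quick illustration (not part of the original guide), the following sketch pulls the names of active arrays out of mdstat-style text with awk. The sample string and the active_arrays helper are ours; on a real system you would read /proc/mdstat directly:

```shell
# Hypothetical helper: list active md arrays from mdstat-style text.
# The sample below mirrors the output shown above.
mdstat_sample='Personalities : [raid0] [linear] [multipath] [raid1]
md0 : active raid0 sdc[1] sdd[0]
      209584128 blocks super 1.2 512k chunks

unused devices: <none>'

active_arrays() {
  # Array status lines look like "md0 : active raid0 ..."
  printf '%s\n' "$1" | awk '$2 == ":" && $3 == "active" { print $1 }'
}

active_arrays "$mdstat_sample"   # prints: md0
```

On a live server you would call it as active_arrays "$(cat /proc/mdstat)".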
Then, unmount the array from the filesystem:
- sudo umount /dev/md0
Then, stop and remove the array:
- sudo mdadm --stop /dev/md0
Find the devices that were used to build the array with the following command:
Warning: Keep in mind that the /dev/sd* names can change any time you reboot. Check them every time to make sure you are operating on the correct devices.
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME      SIZE FSTYPE            TYPE MOUNTPOINT
sda       100G linux_raid_member disk
sdb       100G linux_raid_member disk
sdc       100G                   disk
sdd       100G                   disk
vda        25G                   disk
├─vda1   24.9G ext4              part /
├─vda14     4M                   part
└─vda15   106M vfat              part /boot/efi
vdb       466K iso9660           disk
After discovering the devices used to create an array, zero their superblocks, which hold metadata for the RAID setup. Zeroing the superblocks removes the RAID metadata and resets the devices to a normal state:
- sudo mdadm --zero-superblock /dev/sda
- sudo mdadm --zero-superblock /dev/sdb
It is also a good idea to remove any persistent references to the array. Edit the /etc/fstab file and comment out or remove the reference to your array. To comment out the line, insert a pound symbol (#) at the beginning of the line using nano or your preferred text editor:
- sudo nano /etc/fstab
. . .
# /dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0
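If you prefer a one-liner over an interactive editor, a sed command can do the commenting for you. This sketch operates on a temporary copy of an fstab-style file so it is safe to run anywhere; on a real system you would target /etc/fstab with sudo, after making a backup:

```shell
# Work on a throwaway copy of an fstab-style file, not the real one.
fstab=$(mktemp)
printf '%s\n' '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' > "$fstab"

# Prefix the /dev/md0 line with "# " to comment it out.
sed -i 's|^/dev/md0|# /dev/md0|' "$fstab"

cat "$fstab"   # the line now starts with "# /dev/md0"
rm -f "$fstab"
```

On the real file, the equivalent would be sudo sed -i 's|^/dev/md0|# /dev/md0|' /etc/fstab.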
Also, comment out or remove the array definition from the /etc/mdadm/mdadm.conf file:
- sudo nano /etc/mdadm/mdadm.conf
. . .
# ARRAY /dev/md0 metadata=1.2 name=mdadmwrite:0 UUID=7261fb9c:976d0d97:30bc63ce:85e76e91
Finally, update the initramfs again so that the early boot process does not try to bring the now-unavailable array online:
- sudo update-initramfs -u
At this point, you are ready to reuse the storage devices individually or as components of a new array.
Setting up a RAID 0 configuration
The RAID 0 configuration functions by dividing data into segments and distributing them across the available disks. As a result, each disk holds a fraction of the data, and multiple disks are accessed when retrieving information.
- Requirements: Minimum of 2 storage devices.
- Primary benefit: Performance in terms of read/write and capacity.
- Things to keep in mind: Make sure that you have functional backups. A single device failure will destroy all data in the array.
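To make the capacity trade-off concrete: mdadm stripes across the smallest common device size, so a RAID 0 array's capacity is roughly the number of disks multiplied by the smallest disk. The following helper is an illustrative sketch (the function name is ours, sizes are plain integers such as GiB, and metadata overhead is ignored):

```shell
# Estimate RAID 0 capacity: (number of disks) x (smallest disk).
raid0_capacity() {
  n=0
  min=''
  for size in "$@"; do
    n=$((n + 1))
    # Track the smallest member, since striping is limited to it.
    if [ -z "$min" ] || [ "$size" -lt "$min" ]; then
      min=$size
    fi
  done
  echo $((n * min))
}

raid0_capacity 100 100   # two 100G disks -> 200
```

A mismatched set such as raid0_capacity 100 50 yields 100, which shows why equally sized disks are preferred.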
Determining the Component Devices
Firstly, locate the identifiers for the raw disks you will use.
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME      SIZE FSTYPE  TYPE MOUNTPOINT
sda       100G         disk
sdb       100G         disk
vda        25G         disk
├─vda1   24.9G ext4    part /
├─vda14     4M         part
└─vda15   106M vfat    part /boot/efi
vdb       466K iso9660 disk
In this example, you have two 100G disks without a filesystem, /dev/sda and /dev/sdb. These devices will be the raw components you use to build the array.
Making the Array
To create a RAID 0 array with these components, pass them to the mdadm --create command. You will have to specify the device name you wish to create, the RAID level, and the number of devices. In this example, you will name the device /dev/md0 and include the two disks that will build the array:
- sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
Ensure that the RAID was successfully created by checking the /proc/mdstat file:
- cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid0 sdb[1] sda[0]
      209584128 blocks super 1.2 512k chunks

unused devices: <none>
This result indicates that the RAID 0 setup generated the /dev/md0 device by utilizing /dev/sda and /dev/sdb devices.
Creating and Mounting the Filesystem
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Then, create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
To mount the filesystem, use the following command.
- sudo mount /dev/md0 /mnt/md0
Afterwards, check whether the new space is available:
- df -h -x devtmpfs -x tmpfs
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        25G  1.4G   23G   6% /
/dev/vda15      105M  3.4M  102M   4% /boot/efi
/dev/md0        196G   61M  186G   1% /mnt/md0
The new filesystem is mounted and accessible.
Preserving the Structure of the Array
To make sure that the array is reassembled automatically at boot, you will have to adjust the /etc/mdadm/mdadm.conf file. You can automatically scan the active array and append the file with:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 0 array will now be automatically assembled and mounted at each boot.
You have now finished setting up your RAID 0 array. If you want to try a different RAID configuration, follow the resetting instructions at the beginning of this tutorial before creating a new array type.
Setting up a RAID 1 configuration
RAID 1 utilizes disk mirroring to duplicate data across all disks present in the array. Each disk within the RAID 1 array holds an exact replica of the data, ensuring backup in case of any device malfunctions.
- Requirements: Minimum of 2 storage devices.
- Primary benefit: Redundancy between two storage devices.
- Things to keep in mind: Because two copies of the data are maintained, only half of the total disk space will be usable.
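The `[UU]` status string in /proc/mdstat is the quickest way to check mirror health: each U is a working member and each underscore a failed one. As a small illustration (not from the original guide; the sample strings stand in for /proc/mdstat), this sketch flags a degraded array:

```shell
# Return success (0) if an mdstat-style status string shows a failed
# member, i.e. an underscore inside the [UU] health brackets.
degraded() {
  printf '%s\n' "$1" | grep -q '\[U*_[U_]*\]'
}

healthy='md0 : active raid1 sdb[1] sda[0]
      104792064 blocks super 1.2 [2/2] [UU]'
failed='md0 : active raid1 sdb[1] sda[0](F)
      104792064 blocks super 1.2 [2/1] [U_]'

if degraded "$failed"; then echo "degraded"; fi
if ! degraded "$healthy"; then echo "healthy"; fi
```

The grep pattern only matches the bracketed health field, not the member indices like sda[0] or the [2/1] counter.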
Recognizing the Component Devices
First, locate the identifiers of the raw disks you plan to use.
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME      SIZE FSTYPE  TYPE MOUNTPOINT
sda       100G         disk
sdb       100G         disk
vda        25G         disk
├─vda1   24.9G ext4    part /
├─vda14     4M         part
└─vda15   106M vfat    part /boot/efi
vdb       466K iso9660 disk
In this example, you have two 100G disks without a filesystem, /dev/sda and /dev/sdb. These devices will be the raw components you use to build the array.
Making the Array
To create a RAID 1 array with these components, pass them to the mdadm --create command. You will have to specify the device name you wish to create, the RAID level, and the number of devices. In this example, you will name the device /dev/md0 and include the disks that will build the array:
- sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
If the component devices you are using are not partitions with the boot flag enabled, you will likely receive the following warning. It is safe to respond y and continue:
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 104792064K
Continue creating array? y
The mdadm tool will start to mirror the drives. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the mirroring by checking the /proc/mdstat file:
- cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb[1] sda[0]
      104792064 blocks super 1.2 [2/2] [UU]
      [====>................]  resync = 20.2% (21233216/104792064) finish=6.9min speed=199507K/sec

unused devices: <none>
The initial line indicates that the /dev/md0 device was formed in RAID 1 style utilizing the /dev/sda and /dev/sdb devices. The subsequent line highlights the ongoing progress of mirroring. You are able to proceed with the next task as this procedure continues.
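If you want to script around the resync, the completion percentage can be pulled out of the mdstat text with sed. This is an illustrative sketch (the resync_pct helper is ours, and the sample string stands in for /proc/mdstat):

```shell
# Extract the resync percentage from mdstat-style text.
resync_pct() {
  printf '%s\n' "$1" | sed -n 's/.*resync = *\([0-9.]*\)%.*/\1/p'
}

sample='      [====>................]  resync = 20.2% (21233216/104792064) finish=6.9min'
resync_pct "$sample"   # prints: 20.2
```

A cron job or monitoring agent could call this against /proc/mdstat and alert while the value is below 100.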
Creating and Mounting the Filesystem
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Then, create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
To mount the filesystem, execute the following command.
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available:
- df -h -x devtmpfs -x tmpfs
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        25G  1.4G   23G   6% /
/dev/vda15      105M  3.4M  102M   4% /boot/efi
/dev/md0         99G   60M   94G   1% /mnt/md0
The new filesystem is mounted and accessible.
Preserving the Structure of the Array
To make sure that the array is reassembled automatically at boot, you will have to adjust the /etc/mdadm/mdadm.conf file. You can automatically scan the active array and append the file with:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 1 array will be automatically assembled and mounted during each boot process from now on.
You have now finished setting up your RAID 1 array. If you want to try a different RAID configuration, follow the resetting instructions at the beginning of this tutorial before creating a new array type.
Setting up a RAID 5 Array
The RAID 5 array utilizes data striping across devices with one calculated parity block in each stripe. In the event of a device failure, the remaining blocks and the parity block can be used to compute the missing data. The device that stores the parity block is rotated to ensure a balanced distribution of parity information among all devices.
- Requirements: Minimum of 3 storage devices.
- Primary benefit: Redundancy with more usable capacity.
- Things to keep in mind: While the parity information is distributed, one disk’s worth of capacity will be used for parity. RAID 5 can suffer from very poor performance when in a degraded state.
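As with RAID 0, a little arithmetic makes the trade-off concrete: one smallest-disk's worth of space goes to parity, so usable capacity is roughly (number of disks − 1) × smallest disk. This helper is an illustrative sketch (the function name is ours, sizes are plain integers, overhead ignored):

```shell
# Estimate RAID 5 capacity: (number of disks - 1) x (smallest disk).
raid5_capacity() {
  n=0
  min=''
  for size in "$@"; do
    n=$((n + 1))
    # Track the smallest member; parity consumes one disk's worth of it.
    if [ -z "$min" ] || [ "$size" -lt "$min" ]; then
      min=$size
    fi
  done
  echo $(((n - 1) * min))
}

raid5_capacity 100 100 100   # three 100G disks -> 200
```

Adding a fourth 100G disk raises the result to 300, which matches the intuition that parity costs a fixed one-disk share.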
Determining the Component Devices
To start, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME      SIZE FSTYPE  TYPE MOUNTPOINT
sda       100G         disk
sdb       100G         disk
sdc       100G         disk
vda        25G         disk
├─vda1   24.9G ext4    part /
├─vda14     4M         part
└─vda15   106M vfat    part /boot/efi
vdb       466K iso9660 disk
In this example, you have three 100G disks without a filesystem: /dev/sda, /dev/sdb, and /dev/sdc. These devices will be the raw components you use to build the array.
Making the Array
To create a RAID 5 array with these components, pass them to the mdadm --create command. You will have to specify the device name you wish to create, the RAID level, and the number of devices. In this example, you will name the device /dev/md0 and include the disks that will build the array:
- sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
The mdadm tool will start to configure the array. It uses the recovery process to build the array for performance reasons. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the build by checking the /proc/mdstat file:
- cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[3] sdb[1] sda[0]
      209582080 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [>....................]  recovery =  0.9% (957244/104791040) finish=18.0min speed=95724K/sec

unused devices: <none>
The initial line displays the creation of the /dev/md0 device in the RAID 5 setup, utilizing the /dev/sda, /dev/sdb, and /dev/sdc devices. The subsequent line indicates the ongoing progress of the build.
Warning: Due to the way that mdadm builds RAID 5 arrays, while the array is still building, the number of spares in the array will be inaccurately reported. You must wait for the array to finish assembling before updating the /etc/mdadm/mdadm.conf file. If you update the configuration file while the array is still building, the system will have incorrect information about the array state and will be unable to assemble it automatically at boot with the correct name.

You can continue the guide while this process completes.
Creating and Mounting the Filesystem
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
Mount the filesystem with the following command:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available:
- df -h -x devtmpfs -x tmpfs
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        25G  1.4G   23G   6% /
/dev/vda15      105M  3.4M  102M   4% /boot/efi
/dev/md0        197G   60M  187G   1% /mnt/md0
The new filesystem is mounted and accessible.
Preserving the Structure of the Array
To make sure that the array is reassembled automatically at boot, you will have to adjust the /etc/mdadm/mdadm.conf file.
Warning: As mentioned above, before you adjust the configuration, check again that the array has finished assembling. Completing the following steps before the array is built will prevent the system from assembling the array correctly on reboot.
You can monitor the progress of the build by checking the /proc/mdstat file:
- cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[3] sdb[1] sda[0]
      209584128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
This output shows that the rebuild is complete. Now, you can automatically scan the active array and append the file:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Each time your system boots up, the RAID 5 array will be assembled and mounted automatically.
You have now finished setting up your RAID 5 array. If you want to try a different RAID configuration, follow the resetting instructions at the beginning of this tutorial before creating a new array type.
Setting up a RAID 6 Array
The RAID 6 array utilizes data striping across devices and incorporates two parity blocks within each stripe. In case of failure of one or two devices, the missing data can be calculated using the remaining blocks and the parity blocks. To ensure an equal distribution of parity information, the devices receiving the parity blocks are rotated. This approach resembles a RAID 5 array but provides additional tolerance for the failure of two drives.
- Requirements: Minimum of 4 storage devices.
- Primary benefit: Double redundancy with more usable capacity.
- Things to keep in mind: While the parity information is distributed, two disks worth of capacity will be used for parity. RAID 6 can suffer from very poor performance when in a degraded state.
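The same capacity arithmetic applies, with two disks' worth of the smallest member reserved for the double parity: usable capacity is roughly (number of disks − 2) × smallest disk. An illustrative sketch (function name is ours, integer sizes, overhead ignored):

```shell
# Estimate RAID 6 capacity: (number of disks - 2) x (smallest disk).
raid6_capacity() {
  n=0
  min=''
  for size in "$@"; do
    n=$((n + 1))
    # Track the smallest member; double parity costs two disks' worth.
    if [ -z "$min" ] || [ "$size" -lt "$min" ]; then
      min=$size
    fi
  done
  echo $(((n - 2) * min))
}

raid6_capacity 100 100 100 100   # four 100G disks -> 200
```

With the minimum four disks, RAID 6 yields the same usable space as a four-disk RAID 10 but tolerates any two failures rather than one per mirror.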
Determining the Component Devices
Begin by identifying the raw disk identifiers that you intend to use.
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME      SIZE FSTYPE  TYPE MOUNTPOINT
sda       100G         disk
sdb       100G         disk
sdc       100G         disk
sdd       100G         disk
vda        25G         disk
├─vda1   24.9G ext4    part /
├─vda14     4M         part
└─vda15   106M vfat    part /boot/efi
vdb       466K iso9660 disk
In this example, you have four 100G disks without a filesystem: /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd. These devices will be the raw components you use to build the array.
Making the Array
To create a RAID 6 array with these components, pass them to the mdadm --create command. You will have to specify the device name you wish to create, the RAID level, and the number of devices. In this example, you will name the device /dev/md0 and include the disks that will build the array:
- sudo mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
The mdadm tool will start to configure the array. It uses the recovery process to build the array for performance reasons. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the build by checking the /proc/mdstat file:
- cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdd[3] sdc[2] sdb[1] sda[0]
      209584128 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  resync =  0.6% (668572/104792064) finish=10.3min speed=167143K/sec

unused devices: <none>
The first line shows that the /dev/md0 device was created in the RAID 6 configuration using the /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd devices. The second line shows the progress of the build. You can continue the guide while this process completes.
Creating and Mounting the Filesystem
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
To mount the filesystem, use the following command.
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available:
- df -h -x devtmpfs -x tmpfs
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        25G  1.4G   23G   6% /
/dev/vda15      105M  3.4M  102M   4% /boot/efi
/dev/md0        197G   60M  187G   1% /mnt/md0
The new filesystem is mounted and accessible.
Preserving the Structure of the Array
In order to ensure that the array is reassembled automatically upon booting, you will need to modify the /etc/mdadm/mdadm.conf file. You can accomplish this by scanning the active array and adding it to the file by typing:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 6 array will be assembled and mounted automatically upon each boot.
You have now finished setting up your RAID 6 array. If you want to try a different RAID configuration, follow the resetting instructions at the beginning of this tutorial before creating a new array type.
Setting up a complex RAID 10 configuration
The mdadm RAID 10 type offers similar benefits to the traditional version, which is built by striping RAID 0 arrays composed of RAID 1 mirrors. The mdadm variant provides the same characteristics and guarantees without nesting arrays, and it offers greater flexibility.
- Requirements: Minimum of 3 storage devices.
- Primary benefit: Performance and redundancy.
- Things to keep in mind: The amount of capacity reduction for the array is defined by the number of data copies you choose to keep. The number of copies that are stored with mdadm style RAID 10 is configurable.
By default, the near layout stores two duplicates of every data block. The different layouts that determine the storage of each data block are:
- near: The default arrangement. Copies of each chunk are written consecutively when striping, meaning that the copies of the data blocks will be written around the same part of multiple disks.
- far: The first and subsequent copies are written to different parts of the storage devices in the array. For instance, the first chunk might be written near the beginning of a disk, while the second chunk would be written halfway down on a different disk. This can give some read performance gains for traditional spinning disks at the expense of write performance.
- offset: Each stripe is copied, and offset by one drive. This means that the copies are offset from one another, but still close together on the disk. This helps minimize excessive seeking during some workloads.
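Whatever the layout, usable capacity divides by the number of copies: roughly (number of disks × smallest disk) / copies. This helper is an illustrative sketch (the function name is ours, sizes are plain integers, overhead ignored, and the result uses integer division):

```shell
# Estimate mdadm RAID 10 capacity: (disks x smallest size) / copies.
raid10_capacity() {
  copies=$1
  shift
  n=0
  min=''
  for size in "$@"; do
    n=$((n + 1))
    # Striping is limited to the smallest member.
    if [ -z "$min" ] || [ "$size" -lt "$min" ]; then
      min=$size
    fi
  done
  echo $((n * min / copies))
}

raid10_capacity 2 100 100 100 100   # default two copies -> 200
raid10_capacity 3 100 100 100 100   # three copies -> 133 (integer division)
```

This is why choosing three copies on four disks costs you a third more capacity than the default near layout with two copies.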
To find out more about these layouts, refer to the RAID10 section of this man page:
- man 4 md
You can also find this man page online.
Determining the Component Devices
To begin, locate the raw disks’ identifiers that you intend to utilize.
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME      SIZE FSTYPE  TYPE MOUNTPOINT
sda       100G         disk
sdb       100G         disk
sdc       100G         disk
sdd       100G         disk
vda        25G         disk
├─vda1   24.9G ext4    part /
├─vda14     4M         part
└─vda15   106M vfat    part /boot/efi
vdb       466K iso9660 disk
In this example, you have four 100G disks without a filesystem: /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd. These devices will be the raw components you use to build the array.
Making the Array
To create a RAID 10 array with these components, pass them to the mdadm --create command. You will have to specify the device name you wish to create, the RAID level, and the number of devices. In this example, you will name the device /dev/md0 and include the disks that will build the array.
To set up two copies using the near layout, which is the default, you can simply leave out the layout and number of copies:
- sudo mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
To change the layout or the number of copies, use the --layout= option, which takes a layout and copy identifier. The layouts are near (n), far (f), and offset (o), and the number of copies to store is appended to the identifier.
For instance, to create an array that has three copies in the offset layout, run the following command:
- sudo mdadm --create --verbose /dev/md0 --level=10 --layout=o3 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
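The --layout codes pack the layout letter and copy count together (n2, f2, o3, and so on). As a small illustration of how such a code breaks apart, here is a sketch; the parse_layout helper is ours, not part of mdadm:

```shell
# Split an mdadm RAID 10 layout code like "o3" into its layout name
# and copy count.
parse_layout() {
  code=$1
  case "$code" in
    n*) name=near ;;
    f*) name=far ;;
    o*) name=offset ;;
    *)  name=unknown ;;
  esac
  copies=${code#?}   # strip the leading layout letter
  echo "$name $copies"
}

parse_layout o3   # prints: offset 3
parse_layout n2   # prints: near 2
```

So --layout=o3 above asks for three copies of each block in the offset arrangement.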
The mdadm tool will start to configure the array. It uses the recovery process to build the array for performance reasons. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the build by checking the /proc/mdstat file:
- cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid10 sdd[3] sdc[2] sdb[1] sda[0]
      209584128 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      [===>.................]  resync = 18.1% (37959424/209584128) finish=13.8min speed=206120K/sec

unused devices: <none>
The first line shows that the /dev/md0 device was created in the RAID 10 configuration using the /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd devices. The second area shows the layout that was used for this example (two copies in the near configuration). The third area shows the progress of the build. You can continue the guide while this process completes.
Creating and Mounting the Filesystem
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
Mount the filesystem with the following command:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available:
- df -h -x devtmpfs -x tmpfs
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        25G  1.4G   23G   6% /
/dev/vda15      105M  3.4M  102M   4% /boot/efi
/dev/md0        197G   60M  187G   1% /mnt/md0
The new filesystem is mounted and accessible.
Preserving the Structure of the Array
In order to ensure that the array is automatically reassembled during boot, it is necessary to make changes to the /etc/mdadm/mdadm.conf file. By executing the following command, you can automatically scan the active array and add it to the file:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 10 array will be assembled and mounted automatically with each boot.
Conclusion
In this tutorial, you gained knowledge about creating different arrays using Linux’s mdadm software RAID tool. RAID arrays provide significant improvements in redundancy and performance compared to using separate disks.
After choosing the array type best suited for your environment and creating the device, you can learn how to perform day-to-day management with mdadm. Our guide on managing RAID arrays with mdadm on Ubuntu can help get you started.