We are going to do things slightly differently this time with the OCZ Vertex 450. Normally, we compare the performance of a single SSD. However, we already know that the OCZ Vertex 450 is going to be fast compared to a HDD, and that it would be comparable to other SSDs of similar capacity. So in addition to showing the performance of the OCZ Vertex 450 on its own, we are also going to test the drive in a RAID configuration. OCZ sent us two Vertex 450 128GB models for this purpose.
RAID has long been one way to improve performance with hard drives: by pairing multiple drives together, you gain additional performance as data is read and written across multiple drives. When it comes to SSDs, the biggest performance gain from pairing up two disks in RAID 0 is in sequential read and write. Random read and write do not benefit as much, because SSDs are already fast at these tasks. That does not mean there is no performance gain, however. In fact, at higher queue depths, having multiple drives does help.
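To make the striping behavior concrete, here is a minimal Python sketch of how a RAID 0 array maps logical offsets onto its member drives. The `raid0_map` helper is hypothetical, for illustration only, and is not tied to any particular RAID implementation:

```python
KB = 1024

def raid0_map(byte_offset, stripe_size, n_drives):
    """Map a logical byte offset to (drive index, offset on that drive)
    for a RAID 0 array that round-robins stripes across its members."""
    stripe_index = byte_offset // stripe_size        # which stripe, array-wide
    drive = stripe_index % n_drives                  # round-robin drive choice
    local_stripe = stripe_index // n_drives          # stripe number on that drive
    offset_on_drive = local_stripe * stripe_size + (byte_offset % stripe_size)
    return drive, offset_on_drive

# A large sequential transfer crosses stripe boundaries and lands on
# both members, so each drive serves part of it in parallel:
print(raid0_map(0 * KB, 128 * KB, 2))    # (0, 0)    -> first drive
print(raid0_map(128 * KB, 128 * KB, 2))  # (1, 0)    -> second drive

# A small 4KB random read fits inside a single stripe and hits only one
# drive, which is why RAID 0 barely helps random I/O at queue depth 1:
print(raid0_map(4 * KB, 128 * KB, 2))    # (0, 4096)
```

Only at higher queue depths do enough independent random requests pile up for both drives to stay busy at once, which matches the scaling we see in the results below.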
We must mention that if you are going to be using SSDs in RAID 0, we highly recommend a system with an Intel 7 series or newer chipset. Intel finally brought TRIM support for RAID 0 to the 7 series and newer chipsets. TRIM helps to reduce write amplification and prolongs the drive's endurance. Without TRIM, the drive's endurance may suffer and the performance may not be optimal.
When it comes to RAID, picking the right stripe size is critical for the type of data you are dealing with. We ran the Vertex 450 across different stripe sizes in order to get an idea of how well the drives scale across stripe sizes and queue depths. Iometer is a great tool for showing how the performance scales across different configurations.
For random read, we can see that picking a small stripe size is preferred. Notice, though, that at low queue depths the performance difference with and without RAID is pretty small. It is not until about a queue depth of 4 that there is a clear benefit in IOPS (~30%). Higher queue depths show the improvement more clearly: we see about 58% higher IOPS at a queue depth of 32.
Since the 128GB Vertex 450 does not populate all of the NAND dies the controller is capable of addressing, adding a second drive in RAID helps its random write. The performance is still not going to be as good as a single larger-capacity drive, but it remedies some of the smaller drive's shortcomings. With two disks in RAID 0, random write speed improves by about 14% at a queue depth of 1 and 240% at a queue depth of 32 (8KB stripe). The biggest improvement comes at a queue depth of 2, at 325%; after that, the performance plateaus. An 8KB stripe size is preferred if you want the absolute best performance, but even at a 128KB stripe size we still see roughly double the IOPS.
While the Vertex 450's sequential read performs poorly at low queue depths, its performance scales nicely and almost linearly with queue depth. Immediately we can notice that two disks in RAID 0 double the sequential read at an 8KB stripe size. The performance improves further as we increase the stripe size to 16KB, but stripe sizes larger than 16KB do not have a major impact on performance until we hit a queue depth of 16 or higher.
Sequential write of a single Vertex 450 once again increases almost linearly with queue depth, though the gains are not as dramatic as what we observed with sequential read. With two disks in RAID 0, the performance is once again doubled. Here, the difference between stripe sizes is not as dramatic as what we observed with sequential read; ultimately, we are at the mercy of just how much data can be written to the NAND. Interestingly, the 8KB stripe size seemed to offer the best performance, followed by the 128KB stripe size.
As with hard drive RAID, picking the right stripe size is crucial if you want the best performance. Stripe sizes of 8KB and 16KB work best if the majority of the data you are dealing with is random read and write, but 32KB or higher would be best if you are primarily dealing with sequential read and write. With our Intel Z77 chipset board, the default stripe size of 16KB seems to offer a good balance if you are dealing with a mixture of random and sequential data. While 8KB works reasonably well across the board, it suffered significantly at sequential read; the 16KB stripe size offers better sequential read than 8KB while still retaining good random read and write performance.
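As a rough summary of this guidance, here is a hypothetical rule-of-thumb helper in Python. The `suggest_stripe_kb` function and its thresholds are our own assumptions drawn from these Z77 results, not hard rules, so treat it as a sketch rather than a definitive recommendation:

```python
def suggest_stripe_kb(random_fraction):
    """Suggest a RAID 0 stripe size in KB, given the fraction (0.0-1.0)
    of the workload that is small random reads/writes.

    Thresholds are assumptions based on our Intel Z77 test results:
    random-heavy workloads favored 8KB, mixed workloads balanced well
    at the 16KB default, and sequential-heavy workloads liked 32KB+.
    """
    if not 0.0 <= random_fraction <= 1.0:
        raise ValueError("random_fraction must be between 0.0 and 1.0")
    if random_fraction >= 0.75:   # dominated by random I/O
        return 8
    if random_fraction >= 0.35:   # mixed random/sequential
        return 16
    return 32                     # mostly sequential transfers

# Example: a database-like workload (90% random) vs. a media-streaming
# workload (10% random):
print(suggest_stripe_kb(0.9))   # 8
print(suggest_stripe_kb(0.5))   # 16
print(suggest_stripe_kb(0.1))   # 32
```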