Advancements in flash technology have come as a boon to data centers. Increasing layer counts, coupled with growing vendor confidence in triple-level cell (TLC) and quad-level cell (QLC) flash, have contributed to top-line SSD capacities essentially doubling every few years. Data centers looking to maximize storage capacity on a per-rack basis are finding these top-tier SSDs to be a prudent investment from a total cost of ownership (TCO) perspective.
Solidigm was one of the first vendors to introduce a 32 TB-class enterprise SSD a few years back. That drive, the D5-P5316, utilized Solidigm's 144L 3D QLC NAND, and the company has been extremely bullish on QLC SSDs in the data center ever since. Unlike other flash vendors, which have moved on to charge trap configurations, Solidigm has continued to use a floating gate cell architecture. Floating gate cells retain programmed voltage levels for a longer duration than charge trap cells, ensuring a much longer read window without having to 'refresh' the cell. This tighter voltage retention has served Solidigm well in bringing QLC SSDs to the enterprise market.
Source: The Advantages of Floating Gate Technology (YouTube)
Solidigm claims that its 192L 3D QLC is extremely competitive against the TLC NAND its competitors currently have in the market (read: Samsung's 136L sixth-generation V-NAND and Micron's 176L 3D TLC).
Solidigm segments its QLC data center SSDs into 'Essential Endurance' and 'Value Endurance' lines. Back in May, the company introduced the D5-P5430 as a drop-in replacement for TLC workloads, and at that time it hinted at a new 'Value Endurance' SSD based on its fourth-generation QLC flash arriving in the second half of the year. The recently announced D5-P5336 is the company's latest and greatest in the 'Value Endurance' line.
Solidigm's 2023 Data Center SSD Flagships by Market Segment
The D5-P5316 used a 64KB indirection unit (IU), compared to the 4KB used in typical TLC data center SSDs. Writes smaller than (or misaligned with) the IU force the drive into read-modify-write cycles, hurting both performance and endurance. While endurance and speeds were acceptable for specific workloads that could avoid sub-64KB writes, Solidigm has decided to improve matters by opting for a 16KB IU in the D5-P5336.
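To make the tradeoff concrete, consider the worst case for a small random write: the drive has to read, modify, and rewrite every IU the write touches. The Python sketch below models only that effect (a deliberate simplification, since real firmware buffers and coalesces writes) and compares a 4KB write against the three IU sizes discussed above.

```python
import math

def iu_write_amplification(write_size: int, iu_size: int) -> float:
    """Worst-case NAND bytes written per host byte for a single write.

    Assumes the drive rewrites every touched IU in full, with no
    buffering or coalescing -- a simplified model for illustration.
    """
    ius_touched = math.ceil(write_size / iu_size)
    return (ius_touched * iu_size) / write_size

for iu_kib in (4, 16, 64):
    wa = iu_write_amplification(write_size=4 * 1024, iu_size=iu_kib * 1024)
    print(f"4 KiB write with {iu_kib:>2} KiB IU -> {wa:.0f}x write amplification")
# 4 KiB IU -> 1x, 16 KiB IU -> 4x, 64 KiB IU -> 16x
```

By this crude measure, moving from a 64KB to a 16KB IU cuts the worst-case amplification of a 4KB write from 16x to 4x.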
Thanks to the increased layer count, Solidigm is able to offer the D5-P5336 in capacities of up to 61.44 TB. This takes the crown for the highest capacity in a single NVMe drive and allows a single 1U server with 32 E1.L drives to hit 2 PB. For a 100 PB solution, Solidigm claims up to 17% lower TCO against the best capacity play from its competition (after considering drive and server counts as well as total power consumption).
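Those headline numbers are easy to sanity-check. The back-of-the-envelope calculation below (using decimal units) reproduces the roughly 2 PB-per-1U figure and the drive and server counts implied by a 100 PB build; Solidigm's 17% TCO claim additionally factors in power consumption, which is not modeled here.

```python
import math

DRIVE_TB = 61.44       # top D5-P5336 capacity (decimal TB)
DRIVES_PER_1U = 32     # E1.L slots in the 1U server mentioned above

server_pb = DRIVE_TB * DRIVES_PER_1U / 1000
print(f"Per 1U server: {server_pb:.2f} PB")  # ~1.97 PB, i.e. the '2 PB' claim

TARGET_PB = 100
servers = math.ceil(TARGET_PB * 1000 / (DRIVE_TB * DRIVES_PER_1U))
print(f"{TARGET_PB} PB build: {servers} servers, {servers * DRIVES_PER_1U} drives")
# 51 servers and 1,632 drives
```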
**Solidigm D5-P5336 NVMe SSD Specifications**

| Aspect | | Solidigm D5-P5336 |
|---|---|---|
| Form Factor | | 2.5" 15mm U.2 / 7.5mm E3.S / 9.5mm E1.L |
| Interface, Protocol | | PCIe 4.0 x4, NVMe 1.4c |
| Capacities | U.2 | 7.68 TB, 15.36 TB, 30.72 TB, 61.44 TB |
| | E3.S | 7.68 TB, 15.36 TB, 30.72 TB |
| | E1.L | 15.36 TB, 30.72 TB, 61.44 TB |
| 3D NAND Flash | | Solidigm 192L 3D QLC |
| Sequential Performance (GB/s) | 128KB Reads @ QD 128 | 7.0 |
| | 128KB Writes @ QD 128 | 3.3 |
| Random Access (IOPS) | 4KB Reads @ QD 256 | 1005K |
| | 16KB Writes @ QD 256 | 43K |
| Latency (Typical) (µs) | 4KB Reads @ QD 1 | ?? |
| | 4KB Writes @ QD 1 | ?? |
| Power Draw (Watts) | 128KB Sequential Read | ?? |
| | 128KB Sequential Write | 25.0 |
| | 4KB Random Read | ?? |
| | 4KB Random Write | ?? |
| | Idle | 5.0 |
| Endurance (DWPD) | 100% 128KB Sequential Writes | ?? |
| | 100% 16KB Random Writes | 0.42 (7.68 TB) to 0.58 (61.44 TB) |
| Warranty | | 5 years |
Note that Solidigm is stressing that the quoted endurance and performance numbers apply to IU-aligned workloads. Many of the other interesting aspects (marked '??' above) are not yet known, as the product brief is still being finalized.
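The published DWPD ratings can still be put into absolute terms. Assuming the usual convention of rating endurance over the 5-year warranty period, total bytes written work out as below (a rough estimate based on the 16KB random-write figures in the table; the official TBW ratings may differ once the brief is finalized).

```python
WARRANTY_DAYS = 5 * 365  # DWPD assumed rated over the 5-year warranty

# (capacity in TB, rated DWPD for 100% 16KB random writes)
for capacity_tb, dwpd in ((7.68, 0.42), (61.44, 0.58)):
    tbw = dwpd * capacity_tb * WARRANTY_DAYS
    print(f"{capacity_tb:6.2f} TB @ {dwpd} DWPD -> "
          f"~{tbw / 1000:.1f} PB written over 5 years")
# ~5.9 PB for the 7.68 TB model, ~65 PB for the 61.44 TB model
```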
Ultimately, the race for the capacity crown comes with tradeoffs. Much as hard drives adopted shingled magnetic recording (SMR) to eke out extra capacity at the cost of performance in many workload types, Solidigm is adopting a 16KB IU with QLC NAND optimized for read-intensive applications. Given the massive capacity per SSD, we suspect many data centers may find it perfectly acceptable (at least, endurance-wise) to use the drive in other workloads where storage density requirements matter more than write performance.