
6500 ION TLC and XTR SLC

Micron is expanding its data center SSD portfolio today with the introduction of two new products: the 6500 ION and the XTR NVMe SSDs. These two products do not fit into any of the existing enterprise SSD lineups; they are intended to fill holes in the high-capacity and high-endurance product stacks.

Micron’s competitors offer high-density QLC products in various form factors for data centers, aimed at maximizing storage capacity per rack. Meanwhile, after Optane’s retirement, Micron needed a storage-class memory (SCM) alternative for write-intensive workloads. Micron’s current product stack covers the mainstream and performance NVMe categories in the form of the 7400 series and 9400 series respectively. The two SSDs launching today serve the high-capacity and high-endurance market segments.

The Micron 6500 ION NVMe SSD addresses the high-capacity side: it is a TLC drive at QLC prices. The Micron XTR NVMe SSD addresses the high-endurance side. Despite not being a low-latency play, its feature set makes it competitive against SCM-class offerings on many metrics. Let’s take a closer look at the specs and market positioning of the two SSDs and discuss the competitive landscape.

Micron 6500 ION TLC NVMe SSD

NAND flash technology has evolved rapidly over the last decade or so, moving from low-density planar SLC to MLC to TLC. The advent of 3D NAND enabled quad-level cells (QLC), which use 16 different voltage levels to encode up to 4 bits of information in a single cell. This can significantly increase the capacity of a given die area. QLC has its own set of challenges and falls short of TLC on almost every metric: performance, endurance, and power consumption. QLC flash is primarily a cost play for both consumers and vendors in the client SSD market. In the enterprise space, cost is a factor, but rack density is more interesting: QLC SSDs have broken the 30 TB capacity point at affordable prices in the last few years, allowing a single rack to hold more than 1 PB.
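The relationship between voltage levels and bits per cell mentioned above can be sketched with simple arithmetic (an illustrative calculation, not vendor data):

```python
import math

def bits_per_cell(voltage_levels: int) -> int:
    """A cell that distinguishes N voltage levels encodes log2(N) bits."""
    return int(math.log2(voltage_levels))

# SLC/MLC/TLC/QLC distinguish 2/4/8/16 voltage levels respectively.
for name, levels in [("SLC", 2), ("MLC", 4), ("TLC", 8), ("QLC", 16)]:
    print(f"{name}: {levels} levels -> {bits_per_cell(levels)} bits/cell")

# Moving from TLC (3 bits) to QLC (4 bits) raises capacity per cell by 1/3.
qlc_gain = bits_per_cell(16) / bits_per_cell(8)
print(f"QLC vs TLC capacity gain: {qlc_gain:.2f}x")  # ~1.33x
```

Note the diminishing returns: each extra bit per cell halves the voltage margin between levels while adding only a fractionally larger capacity gain, which is why QLC's endurance and performance penalties loom large.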

Scaling up the competition…

Micron’s QLC efforts in the enterprise market have been quiet. Competitor Solidigm, on the other hand, is very bullish on QLC for data centers. In particular, the Solidigm D5-P5316 has proven popular from this perspective: 30 TB-class drives at less than $100/TB in a 2.5-inch (U.2) form factor are attractive for data centers looking to increase capacity per rack. As Solidigm’s own marketing materials acknowledge, there is no free lunch. QLC suffers from slow sustained sequential writes (compared to TLC). The D5-P5316 also increases the indirection unit (IU) from 4K to 64K. This simplifies flash management, but it penalizes 4K random writes with increased latency from repeated accesses to the indirection table and with excessive write amplification; the latter translates into very low endurance in terms of random drive writes per day (RDWPD).



Source: Enhancing Real-Time Decision Making for Large Datasets with SSD-Like Economics, SNIA Persistent Memory + Computational Storage Summit, 2022

The Solidigm D5-P5316 can sustain sequential writes (direct to QLC) at just 3.6 GBps, and its 4K random write performance is just 7800 IOPS. Thanks to the 64K granularity of the indirection table, the corresponding number for 64K random writes is far more reasonable. Solidigm actually suggests adapting software to avoid 4K random writes, or using the drive only for workloads that are read-intensive or involve sequential/large-block random writes. These types of workloads significantly reduce I/O amplification, which addresses the endurance aspect. Despite these shortcomings, the rack density opportunity has been compelling for data centers. Unfortunately, Micron didn’t have a similarly priced alternative at this capacity. With today’s 6500 ION launch, that changes.
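The penalty a 64K indirection unit imposes on small writes follows from simple arithmetic. A minimal sketch, assuming the idealized case where every sub-IU host write forces a read-modify-write of one full indirection unit (real write amplification also depends on controller caching and garbage collection):

```python
def write_amplification(io_size_kib: int, iu_size_kib: int) -> float:
    """Idealized write amplification when each host write smaller than
    the indirection unit (IU) forces a rewrite of the whole IU."""
    if io_size_kib >= iu_size_kib:
        return 1.0  # IU-aligned writes incur no extra rewrite
    return iu_size_kib / io_size_kib

# A 4K random write on a 64K-IU drive rewrites 16x the host data...
print(write_amplification(4, 64))   # 16.0
# ...while a drive with a 4K IU stays at 1x for the same workload.
print(write_amplification(4, 4))    # 1.0
```

That 16x factor is why Solidigm steers the D5-P5316 toward read-heavy and large-block workloads, and why its random-write endurance rating is so low.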

Avoiding QLC

Micron was the first to ship 3D NAND with more than 200 layers, and its 232L generation reached volume production ahead of its competitors. The company has been using QLC in the client SSD market for some time. However, it has been reluctant to use it in enterprise SSDs, and it is not hard to see why. After the dissolution of the IMFT joint development program, Micron and Intel/Solidigm adopted completely different flash cell architectures: while Intel/Solidigm continued with floating gate technology, Micron moved to the charge trap method.

Generally speaking, Micron’s charge trap flash cell architecture is better suited to TLC. Using it for QLC is more challenging than with Solidigm’s floating gate approach.



Source: Advantages of Floating Gate Technology (YouTube)

With floating gates, there are fewer charge distribution problems and more electrons stored per cell, which improves data accuracy over time, especially when 16 distinct voltage levels need to be tracked.

The die cost of flash dominates the final product cost of a high-capacity SSD. Based on the analysis here, Solidigm’s 144L QLC had a bit density of 12.86 Gb/mm², while Micron’s 232L TLC has a comparable figure of 14.60 Gb/mm². With QLC, Micron’s number could be even higher. Micron could have used 232L QLC to create an even cheaper 30 TB SSD than the Solidigm D5-P5316, but given the nature of its flash cells, the endurance and speed specs would likely have been on par with (or worse than) the Solidigm offering.
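The cited bit densities make the cost argument concrete. A quick sketch using the two figures above (user capacity only, ignoring overprovisioning and ECC overhead):

```python
# Bit densities cited above, in Gb per mm^2 of die area.
solidigm_144l_qlc_gb_mm2 = 12.86
micron_232l_tlc_gb_mm2 = 14.60

# Density ratio: Micron's 232L TLC packs ~13.5% more bits into the same
# silicon, which is how a TLC part can approach QLC cost per TB.
density_ratio = micron_232l_tlc_gb_mm2 / solidigm_144l_qlc_gb_mm2
print(f"232L TLC vs 144L QLC bit density: {density_ratio:.3f}x")

# Rough die area needed for a 30.72 TB drive's user capacity.
bits_needed_gb = 30.72e12 * 8 / 1e9   # 30.72 TB expressed in gigabits
area_mm2 = bits_needed_gb / micron_232l_tlc_gb_mm2
print(f"~{area_mm2:,.0f} mm^2 of 232L TLC NAND")
```

In other words, despite storing one fewer bit per cell, Micron's layer-count lead leaves its TLC ahead of Solidigm's older QLC on silicon cost per bit.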

Micron determined that a high-capacity 232L TLC SSD could be offered at a price competitive with the 144L QLC-based Solidigm D5-P5316. The result is the Micron 6500 ION NVMe SSD.

Find out more

NAND manufacturing leadership is key to the Micron 6500 ION’s promise of TLC performance at a QLC price. Micron is focusing the 6500 ION solely on the 30.72 TB capacity point; other capacity points are served by other products in the stack. Thanks to TLC, sequential writes can reach up to 5 GBps, and a 4KB indirection unit allows random write performance of around 200K IOPS. Lower write amplification means the 6500 ION’s 4K RDWPD rating of 0.3 is more than 10x better than the Solidigm D5-P5316’s corresponding number. Micron claims that TLC SSDs are inherently more power efficient than QLC SSDs, which is backed up by a datasheet comparison of the D5-P5316 and the new 6500 ION.

Micron 6500 ION NVMe SSD Specifications

Aspect                                     6500 ION
Form Factor                                2.5-inch 15mm U.3 or 9.5mm E1.L
Interface, Protocol                        PCIe 4.0 x4, NVMe 2.0
Capacity                                   30.72 TB
3D NAND Flash                              Micron 232L TLC
Sequential Performance (GB/s)
  128KB reads @ QD 128                     6.8
  128KB writes @ QD 128                    5.0
Random Access (IOPS)
  4KB reads @ QD 128                       1M
  4KB writes @ QD 128                      200K
  4KB 70%R / 30%W @ QD 128                 400K
Latency (typical) (µs)
  4KB reads @ QD 1                         70
  4KB writes @ QD 1                        15
Power Consumption (W)
  128KB sequential reads                   15.0
  128KB sequential writes                  20.0
  4KB random reads                         14.0
  4KB random writes                        15.0
  Idle                                     5.0
Endurance (DWPD)
  100% 128KB sequential writes             1.0
  90% 128KB seq. / 10% 4KB random writes   0.9
  80% 128KB seq. / 20% 4KB random writes   0.85
  70% 128KB seq. / 30% 4KB random writes   0.75
  50% 128KB seq. / 50% 4KB random writes   0.55
  100% 4KB random writes                   0.3
Warranty                                   5 years
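The DWPD ratings above translate directly into total write volume over the warranty period. A quick sketch of that arithmetic:

```python
def total_writes_pb(dwpd: float, capacity_tb: float,
                    warranty_years: int = 5) -> float:
    """Total host write volume (in PB) implied by a DWPD rating:
    dwpd full-drive writes per day, every day of the warranty."""
    days = 365 * warranty_years
    return dwpd * capacity_tb * days / 1000  # TB -> PB

# 6500 ION, 30.72 TB, 5-year warranty:
print(total_writes_pb(1.0, 30.72))   # 100% sequential rating: ~56 PB
print(total_writes_pb(0.3, 30.72))   # 100% 4K random rating: ~16.8 PB
```

Even the worst-case 0.3 RDWPD rating thus permits roughly 16.8 PB of 4K random writes over five years, an order of magnitude beyond what the D5-P5316's random-write rating allows.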

On the business side, the Micron 6500 ION is TAA-compliant and FIPS 140-3 L2 certified. Similar to other enterprise SSDs, the drive delivers an MTTF of 2.5 million hours and an uncorrectable bit error rate of 1 in 10^17 bits read.

The Micron 6500 ION SSD offers an attractive option for data center customers who have adopted or are actively considering the Solidigm D5-P5316. But Solidigm isn’t standing still either: at last year’s Tech Field Day, the company discussed creating 30 TB-class SSDs with a 4KB IU using its 192L QLC technology. The 6500 ION’s challengers in the market will therefore be those drives, not the Solidigm D5-P5316.

Before we dive into the XTR NVMe SCM competitors, we should note the RAID rebuild times of a 30 TB SSD. One of the thornier issues with large HDDs is RAID rebuilds involving parity calculations and writes, a process bottlenecked by the SATA interface. Thankfully, the 6500 ION’s PCIe 4.0 x4 NVMe interface delivers sequential speeds nearing 6.8 GBps read and 5 GBps write, which it can sustain even while serving other workloads. With a 50 GbE or faster network backbone, resilvering can proceed with minimal performance impact, even in a cluster spanning multiple physical machines.
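The rebuild-time advantage can be put into rough numbers. A minimal sketch, assuming a ~550 MB/s SATA ceiling for the HDD comparison and ignoring parity computation and network overhead (real rebuilds will take longer than this lower bound):

```python
def rebuild_hours(capacity_tb: float, write_gbps: float) -> float:
    """Lower bound on rebuild time: hours to rewrite a full drive
    at a given sustained write rate (GB/s)."""
    seconds = capacity_tb * 1000 / write_gbps  # TB -> GB, then seconds
    return seconds / 3600

# 30.72 TB at the 6500 ION's 5 GBps sustained sequential write:
print(f"{rebuild_hours(30.72, 5.0):.2f} h")   # ~1.7 hours
# The same capacity through a ~0.55 GBps SATA ceiling (assumption):
print(f"{rebuild_hours(30.72, 0.55):.2f} h")  # ~15.5 hours
```

An hours-scale rebuild window, rather than a day-scale one, substantially shrinks the period during which a degraded array is exposed to a second failure.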
