One last blog on the 2019 Flash Memory Summit. This one covers some of the SSD keynote announcements as well as software-defined NVMe applications, and ends with a ReRAM neural network technology.
Shigeo (Jeff) Ohshima and Jeremy Werner represented Toshiba Memory (soon to be renamed Kioxia) in the kickoff keynote at the 2019 Flash Memory Summit. The Information and Communications Technology (ICT) industry, and particularly the growth of edge processing to deal with locally generated data, will drive NAND flash demand as shown below.
Toshiba Memory sees this demand driving several more generations of 3D BiCS flash, its higher-performance XL-Flash (using 1-bit-per-cell, or SLC, technology), and very high-density flash memory (up to 5 bits per cell). XL-Flash was on display at several SSD companies, providing non-volatile speeds approaching those of Intel's Optane memory.
The company also talked about virtual multi-LUN flash memory that divides the NAND die into planes that can be combined in parallel to give higher overall SSD performance. In addition to having up to 5 bits per cell, Toshiba Memory was showing a structure that effectively creates two cells where one would ordinarily be. The company also introduced a new very compact NVMe storage form factor, XFMEXPRESS (about the size of an SD card), as shown below. The XFMEXPRESS slides into a connector for now but could also be inserted into a slot, like an SD card. This new form factor is targeted at very mobile applications as well as automobiles and gaming consoles.
Toshiba Memory, like most of the SSD companies, was also promoting the performance boost from PCIe Gen 4 products (the first Gen 4 devices are just coming on line, with major adoption in servers projected by 2021). They pointed out that four lanes of PCIe Gen 4 offer about 12X the performance of SATA. The figure below shows Toshiba Memory's view of the evolution of NVMe SSD form factors through 2025. Another interesting Toshiba announcement was a new native Ethernet interface SSD for direct NVMe-oF storage systems at the lowest cost.
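The rough arithmetic behind that comparison can be sketched as follows. The per-lane figure below is a nominal assumption (roughly 2 GB/s per PCIe Gen 4 lane after encoding overhead), not a vendor number; with real-world protocol overheads the ratio lands near the ~12X Toshiba cited.

```python
# Back-of-envelope bandwidth comparison (assumed nominal figures).
SATA3_GBPS = 0.6            # SATA 3.0: ~600 MB/s effective
PCIE_GEN4_LANE_GBPS = 2.0   # ~2 GB/s per Gen 4 lane after encoding overhead

gen4_x4 = 4 * PCIE_GEN4_LANE_GBPS          # four-lane aggregate
speedup = gen4_x4 / SATA3_GBPS
print(f"PCIe Gen 4 x4: {gen4_x4:.1f} GB/s, ~{speedup:.0f}x SATA")
```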
Western Digital and other companies were focused on zoned storage for more efficient utilization of flash memory. In Western Digital's case this pairs SMR HDDs, whose overlapping tracks are organized in zones, with zoned namespace NVMe SSDs. Siva Sivaram and Christopher Bergey from WD gave a keynote talk on this topic.
They indicated that the capital costs for 1% bit growth are increasing as the number of 3D NAND layers increases, so the bit cost improvements from adding layers are declining. They said that one approach to reduce costs further is to increase the density of the memory holes in which the cells are created. As shown in the chart below, combined with a higher number of layers this can increase cell areal density considerably, and with TLC (3 bits per cell) or QLC (4 bits per cell) this density is increased even further.
Western Digital also talked about going to 5 bits per cell (PLC), but the scaling benefit of going from 4 to 5 bits is only 25% (it was 33% going from TLC to QLC). WD projected that 50% of SSD bits would be QLC flash by 2025, assuming the same adoption rate as the TLC transition, as shown below.
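The diminishing returns are simple arithmetic: each added bit per cell grows capacity by a smaller fraction of the previous total.

```python
# Relative capacity gain from adding one more bit per cell.
def bits_gain(from_bits, to_bits):
    """Fractional density gain, e.g. TLC (3) -> QLC (4)."""
    return (to_bits - from_bits) / from_bits

print(f"MLC->TLC: {bits_gain(2, 3):.0%}")   # 50%
print(f"TLC->QLC: {bits_gain(3, 4):.0%}")   # 33%
print(f"QLC->PLC: {bits_gain(4, 5):.0%}")   # 25%
```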
This assumes that consumers are willing to live with a considerable performance degradation going from TLC to QLC flash, as well as considerably lower erase/write cycle endurance. Western Digital thinks that QLC will play an important role in the data center, with zoned namespace SSDs combined with SMR HDDs. They call this approach Zoned Storage. Western Digital was one of several SSD companies at the FMS with zoned namespace SSDs.
Western Digital said that zoned namespace SSDs combined with system-level intelligence for data placement can reduce SSD DRAM by 8X, and that overprovisioning in the SSD can be reduced by up to 10X. Zones can be used for different purposes and can even use different NAND (e.g. MLC and QLC), or virtual machines can each have their own dedicated zone. The company said Zoned Storage provides advantages for sensor data analysis, artificial intelligence and machine learning, and serverless applications. The company also set up a website, zonedstorage.io, where people can learn more about zoned storage.
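The essence of the zoned model can be sketched in a few lines. The toy class below is hypothetical, not the NVMe ZNS command set: each zone accepts only sequential appends at a write pointer and must be reset as a whole before being rewritten, which maps cleanly onto NAND block erases and is why the SSD needs much less DRAM for mapping tables and less overprovisioning for garbage collection.

```python
# Toy model of the zoned write rule: append-only at a write pointer,
# whole-zone reset before rewrite. Illustrative sketch, not NVMe ZNS.
class Zone:
    def __init__(self, size_blocks):
        self.size = size_blocks
        self.write_pointer = 0     # next writable block in the zone
        self.blocks = []

    def append(self, data):
        if self.write_pointer >= self.size:
            raise IOError("zone full: must reset before rewriting")
        self.blocks.append(data)
        self.write_pointer += 1

    def reset(self):
        # A whole-zone reset corresponds to erasing NAND blocks, so no
        # per-block remapping or garbage collection is needed.
        self.write_pointer = 0
        self.blocks.clear()

zone = Zone(size_blocks=4)
for data in (b"a", b"b", b"c"):
    zone.append(data)
print(zone.write_pointer)   # 3
zone.reset()
print(zone.write_pointer)   # 0
```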
The SK hynix keynote featured Hongsok Choi and Andrew Chong. They talked about their 4D NAND, which places the logic and other peripheral circuits under the memory cells (similar to Micron's CMOS-under-the-array technology). They also mentioned the June announcement of SK hynix's 128-layer 1Tb TLC 4D NAND with up to 1.2 Gbps/pin IO speed. The photo below shows the company's expectations for how its 4D NAND will support higher-density chips out to 2030.
SK Hynix also owns its own in-house controller and firmware solutions and is launching mobile, client and enterprise SSD products using its 96-layer V5 4D NAND as shown below. SK Hynix was also introducing Zoned Namespace SSDs.
NGD's Scott Shadley and Dr. Vladimir Alves gave a keynote on computational storage for AI applications. They pointed out that EDSFF 1U short (E1.S) SSDs are well suited for edge applications. These could be combined with a computational storage capability in the SSD to do some processing of the data on the SSD itself; one such device from NGD is shown below. A computational storage device requires much less power than a CPU and is easier to cool.
NGD spoke about weightless neural networks, which carry a lot less overhead using computational storage than conventional approaches. They showed some demonstrations of object tracking using such a neural network in operation. Future improvements in AI using parallel and distributed training in computational storage could include federated and transfer learning and sending sparse model updates to reduce data transfers. A technique called WiSARD provides a memory-based framework for online object tracking. They showed slides with this approach running on NGD computational storage SSDs. They also used Keras plus TensorFlow running on an NGD computational storage drive for machine learning training, as shown below. The major point is that in-situ (computational storage) processing brings AI learning techniques closer to the data.
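To give a feel for why weightless networks suit storage-side processing, here is a minimal WiSARD-style classifier: instead of trained weights, each class has a set of RAM lookup tables indexed by tuples of input bits, so training is just writing 1s into memory. This is an illustrative toy, not NGD's implementation, and the class names and data are made up.

```python
import random

# Toy WiSARD weightless classifier: the binary input is split into fixed
# tuples, and each class "discriminator" is a set of RAM nodes. Training
# marks the addresses seen; classification counts matching RAM nodes.
class WiSARD:
    def __init__(self, input_bits, tuple_size, classes, seed=0):
        rng = random.Random(seed)
        idx = list(range(input_bits))
        rng.shuffle(idx)            # random but fixed input-to-tuple mapping
        self.tuples = [idx[i:i + tuple_size]
                       for i in range(0, input_bits, tuple_size)]
        self.rams = {c: [set() for _ in self.tuples] for c in classes}

    def _addresses(self, x):
        for t in self.tuples:
            yield tuple(x[i] for i in t)

    def train(self, x, label):
        for ram, addr in zip(self.rams[label], self._addresses(x)):
            ram.add(addr)           # "write a 1" at this RAM address

    def classify(self, x):
        scores = {c: sum(addr in ram
                         for ram, addr in zip(rams, self._addresses(x)))
                  for c, rams in self.rams.items()}
        return max(scores, key=scores.get)

net = WiSARD(input_bits=8, tuple_size=2, classes=["dark", "light"])
net.train([1, 1, 1, 1, 1, 1, 1, 1], "dark")
net.train([0, 0, 0, 0, 0, 0, 0, 0], "light")
print(net.classify([1, 1, 1, 1, 1, 1, 1, 0]))   # "dark"
```

Because training and inference are just memory writes and lookups, this kind of model fits naturally on a drive's modest compute resources.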
Intel's keynote talk was on its Optane products. They said that the Optane DC DIMM-based product, launched on April 2, is now shipping in volume (and I heard at the show that several companies are looking at integrating Optane DIMMs into their storage systems). Although it is making these products available to other companies, Intel has tied Optane DIMMs to its own next-generation server products, as shown below.
Intel pointed out that Optane DC Persistent Memory (DIMM) provides up to 3.7X higher bandwidth than an NVMe SSD, even an Intel Optane NVMe SSD. Intel was showing performance improvements for Optane DC in the persistent memory mode with several applications, including Cassandra, data replication, SAP HANA and Oracle Exadata. They showed a large operational region (IOPS) for their persistent memory at a lower cost than other approaches with cached architectures. Intel sees its Optane DC PM, along with its Intel Ruler QLC SSDs and infrastructure nodes, as part of its Data Management Platform. The image below shows the entire vision.
Samsung wasn't at the FMS but did make announcements about its 6th-generation V-NAND SSDs for client computing with 100+ layer single-tier NAND flash. It said a 250GB SATA SSD with this technology is entering mass production. The company also announced that its PM1733 PCIe Gen 4 SSDs and RDIMM and LRDIMM DRAM will be used with the new AMD EPYC 7002 generation processors.
Excelero introduced a software-only block storage solution for building disaggregated NVMe all-flash arrays, called Excelero NVEdge. The software works on traditional controller-based high-availability storage architectures. The product demonstrated 2.7M 4K IOPS on a 100Gb Ethernet link while thin-provisioned at up to a 1000X oversubscription rate. Western Digital announced its Ultrastar Serv24-4N, which incorporates Excelero's NVMesh software for applications such as AI, ML, analytics and databases.
Lightbits, another software-defined storage company, offers a product that supports disaggregated storage with a remote low-latency pool of NVMe SSDs over a TCP/IP network (without RDMA). The figure below shows the building blocks for the Lightbits solution.
Lightbits runs on a server that has a global flash translation layer (FTL) and an optional hardware accelerator for SSD management and data services. The company says it can increase SSD endurance and thus allow the use of QLC SSDs for some applications. The hardware accelerator does 100 Gbps compression/decompression at wire speed, helps with NVMe/TCP acceleration, performs erasure coding for data protection and recovery, and takes care of SSD garbage collection.
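The idea of a global FTL can be sketched as a single map from client logical block addresses to physical locations across the whole SSD pool. The toy model below is a hypothetical illustration, not LightOS: it lays new writes down sequentially round-robin across drives, which is one way a pool-level FTL can keep each SSD writing sequentially and thereby improve QLC endurance.

```python
# Toy global FTL: one pool-wide map from logical block address (LBA)
# to (ssd, physical block). Hypothetical sketch, not LightOS.
class GlobalFTL:
    def __init__(self, num_ssds, blocks_per_ssd):
        self.num_ssds = num_ssds
        self.blocks_per_ssd = blocks_per_ssd
        self.next_phys = 0          # pool-wide sequential append point
        self.map = {}               # lba -> (ssd, block)

    def write(self, lba):
        phys = self.next_phys
        self.next_phys += 1
        # Round-robin striping keeps every drive writing sequentially,
        # avoiding the random-write patterns that wear out QLC flash.
        ssd, block = phys % self.num_ssds, phys // self.num_ssds
        self.map[lba] = (ssd, block)
        return ssd, block

    def read(self, lba):
        return self.map[lba]

ftl = GlobalFTL(num_ssds=4, blocks_per_ssd=1024)
for lba in range(6):
    ftl.write(lba)
print(ftl.read(5))   # (1, 1)
```

Because the mapping lives in the server rather than in each SSD, the drives themselves can be simpler and the pool can place data with global knowledge.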
It can also do encryption and deduplication. Strata Storage Solutions announced a partnership at the 2019 FMS for turnkey NVMe/TCP using Lightbits LightOS, with capacities from 32-96 TB. Lightbits also says it supports common operational features such as clustering and Kubernetes integration.
Israel-based Weebit Nano and CEA-Leti were showing a neuromorphic processing demo at the FMS. The demo used Weebit Nano's SiOx-based ReRAM technology. Weebit Nano says that by using SiOx ReRAMs it can avoid manufacturing issues found with other ReRAM technologies.
Solid state storage is filling multiple niches in the storage and memory hierarchy. New approaches such as zoned namespaces will bring even more diversity to NAND flash use. Emerging memories such as phase change memory and ReRAM are joining NAND flash to create a solid future.