Facing reliability and supply chain problems, NVIDIA delays SOCAMM memory technology

 4:43pm, 19 May 2025

Market news indicates that NVIDIA has delayed SOCAMM to its next generation of products due to reliability and supply chain issues. According to foreign media ZDNet, NVIDIA originally planned to introduce the new SOCAMM technology with its upcoming Blackwell enterprise-grade GPUs, but it has now been postponed to the next-generation "Rubin" GPU series.

It is reported that SOCAMM was originally scheduled to debut in the GB300, but changes to the GB300 motherboard design led NVIDIA to delay SOCAMM. The GB300 was originally planned to use a motherboard design codenamed "Cordelia", which supports SOCAMM memory and integrates two Grace CPUs and four Blackwell GPUs, but it was later switched to the existing "Bianca" design. That design supports only one Grace CPU and two Blackwell GPUs, and it does not support SOCAMM, using existing LPDDR memory instead.

According to reports, Cordelia was abandoned mainly because of reliability issues and poor stability, which could lead to data loss. SOCAMM itself also has reliability problems and faces heat dissipation challenges, resulting in poor stability.

In addition, NVIDIA is facing supply chain problems, which are another major factor in the delay of SOCAMM. After running into yield management difficulties, NVIDIA has turned to existing technologies, including older motherboard designs that use traditional LPDDR memory, to relieve supply chain pressure.

SOCAMM is a new memory module specification developed by NVIDIA, SK Hynix and Micron, and is inspired by CAMM2; compared with more traditional memory modules (such as mainstream DDR5 DIMMs and RDIMMs), it provides higher memory performance and capacity in a single planar module. A SOCAMM module measures 14×90 mm, is equipped with four 16-die LPDDR5 memory stacks, offers a capacity of up to 128GB, and supports a data transfer rate of up to 7.5Gbps.
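To relate the 7.5Gbps figure to overall module throughput, the Python sketch below gives a rough back-of-the-envelope estimate. It assumes the 7.5Gbps figure is a per-pin data rate and assumes a hypothetical 128-bit data bus per module (four stacks at x32 each); neither assumption is stated in the article.

```python
# Back-of-the-envelope estimate of a SOCAMM module's peak bandwidth.
# The 7.5 Gbps figure and the four-stack layout come from the article;
# the x32 interface per stack (128-bit bus total) is an assumption
# made here for illustration only.

PER_PIN_GBPS = 7.5          # per-pin data rate (from the article)
STACKS_PER_MODULE = 4       # four 16-die LPDDR5 stacks (from the article)
BITS_PER_STACK = 32         # assumed x32 interface per stack (hypothetical)

bus_width_bits = STACKS_PER_MODULE * BITS_PER_STACK   # 128 bits (assumed)
peak_gbps = PER_PIN_GBPS * bus_width_bits              # gigabits per second
peak_gbytes = peak_gbps / 8                             # gigabytes per second

print(f"Assumed bus width: {bus_width_bits}-bit")
print(f"Estimated peak bandwidth per module: {peak_gbytes:.0f} GB/s")
```

Under those assumptions, a single module would peak at roughly 120 GB/s; the real figure depends on the actual bus width defined by the specification.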

SOCAMM will officially debut with NVIDIA's next-generation data center GPU architecture, "Rubin". Rubin is expected to support 12-Hi stacked HBM4E in 2027, deliver up to 13TB/s of bandwidth, and use a 5.5-reticle-size CoWoS interposer with 100mm x 100mm substrates. In addition, Rubin will also be compatible with the existing Blackwell NVL72 architecture, enabling a seamless upgrade.

Source: Nvidia postpones disruptive new SOCAMM memory tech — originally planned for Blackwell Ultra GB300, now scheduled for Rubin/Rubin Ultra
