
Marvell CXL roadmap goes all-in on composable infrastructure


Hot on the heels of Marvell Technology's Tanzanite acquisition, executives speaking at a JP Morgan event this week offered a glimpse of the company's Compute Express Link (CXL) roadmap.

"This is the next growth factor, not only for Marvell storage, but Marvell as a whole," Dan Christman, EVP of Marvell's storage products group, said.

Introduced in early 2019, CXL is an open interface that piggybacks on PCIe to provide a common, cache-coherent means of connecting CPUs, memory, accelerators, and other peripherals. The technology is seen by many, including Marvell, as the holy grail of composable infrastructure, as it enables memory to be disaggregated from the processor.
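
To software, that cache coherence means CXL-attached memory looks like plain memory. On a current Linux host, for example, a CXL Type 3 memory expander is typically onlined as a CPU-less NUMA node, so ordinary allocation APIs can target it. A minimal sketch using libnuma, assuming the expander shows up as node 1 (the node number is an assumption for illustration):

```c
/* Minimal sketch: treating CXL-attached memory as ordinary memory.
 * Assumes a Linux host where a CXL Type 3 expander has been onlined
 * as CPU-less NUMA node 1 -- the node number is an assumption.
 * Build with: cc cxl_mem.c -lnuma */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA support unavailable\n");
        return 1;
    }
    size_t len = 1UL << 30;                 /* 1 GiB */
    void *buf = numa_alloc_onnode(len, 1);  /* allocate on the CXL-backed node */
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return 1;
    }
    memset(buf, 0, len);  /* plain loads/stores -- no block I/O, no driver calls */
    numa_free(buf, len);
    return 0;
}
```

Once the capacity is onlined, there is no driver in the data path: loads and stores reach the module directly over the link.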

The rough product roadmap presented by Marvell outlined a sweeping range of CXL products, including memory expansion modules and pooling tech, switching, CXL accelerators, and copper and electro-optical CXL fabrics for rack-level and datacenter-scale systems.

Those aren't SSDs

With the first generation of CXL-compatible CPUs from Intel and AMD slated for release this year, one of the first products on Marvell's roadmap is a line of memory expansion modules. These modules will supplement traditional DDR DIMMs and feature an integrated CXL controller rather than relying on the CPU's onboard memory controller.

"DRAM is the largest component spend in the entire datacenter. It's more than NAND flash. It's more than CPUs," Thad Omura, VP of marketing for Marvell's flash business unit, said, adding that, traditionally, achieving the high-memory densities necessary for memory-intensive workloads has required high-end CPUs with multiple memory controllers onboard.

With CXL, now "you can plug in as many modules as you need," he said.
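
The capacity arithmetic behind that pitch is straightforward. A back-of-envelope sketch, where every figure (channel count, DIMM and module capacities, module count) is an illustrative assumption rather than a Marvell spec:

```c
/* Back-of-envelope capacity math for CXL memory expansion.
 * Every figure below is an illustrative assumption. */
#include <stdio.h>

int main(void) {
    int channels = 8, dimms_per_channel = 1, dimm_gib = 64;
    int direct = channels * dimms_per_channel * dimm_gib;  /* 512 GiB direct-attached */

    int cxl_modules = 4, module_gib = 256;                 /* assumed expander modules */
    int expanded = direct + cxl_modules * module_gib;      /* 1,536 GiB total */

    printf("direct-attached: %d GiB\n", direct);
    printf("with %d CXL modules: %d GiB\n", cxl_modules, expanded);
    return 0;
}
```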

Marvell plans to offer these CXL memory modules in a form factor similar to that used by NVMe SSDs today. In fact, because both the SSDs and CXL memory modules share a common PCIe electrical interface, they could be mixed and matched to achieve the desired ratio of memory and storage within a system.

Additionally, because the CXL controller functions as a standalone memory controller, systems builders and datacenter operators aren't tied to the memory generations supported by the host CPU.

"Maybe you want to use DDR4 because it's a cheaper memory technology, but your server's CPU only supports the latest DDR5 controller," Omura said. "Now you can plug those DDR4 modules directly into the front" of the system.

The modules' onboard controllers also carry performance benefits, Omura claims, by letting customers reach the desired memory density without resorting to a two-DIMM-per-channel (2DPC) configuration.
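
That matters because populating a second DIMM on a channel typically forces the platform to run the memory at a lower speed bin. A rough sketch of the trade-off, in which the channel count and DDR5 speed bins are illustrative assumptions rather than figures from Marvell:

```c
/* Rough sketch of the 1DPC vs 2DPC bandwidth trade-off.
 * Channel count and DDR5 speed bins are illustrative assumptions. */
#include <stdio.h>

int main(void) {
    const int channels = 8;
    const double bytes_per_transfer = 8.0;  /* 64-bit data channel */
    const double mts_1dpc = 4800.0;         /* assumed 1DPC speed bin */
    const double mts_2dpc = 4400.0;         /* assumed 2DPC derating */

    double gbps_1dpc = channels * mts_1dpc * bytes_per_transfer / 1000.0;
    double gbps_2dpc = channels * mts_2dpc * bytes_per_transfer / 1000.0;

    printf("1DPC: %.1f GB/s peak\n", gbps_1dpc);  /* 307.2 GB/s */
    printf("2DPC: %.1f GB/s peak\n", gbps_2dpc);  /* 281.6 GB/s */
    return 0;
}
```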


While Marvell didn't commit to a specific timeline for bringing its first generation of CXL products to market, it did say it was aligning with the major server platform launches, including Intel's Sapphire Rapids and AMD's Genoa Epyc processor families later this year.

"We're really just at the beginning stages of CXL solutions going to market. Server platforms that support CXL are just starting to emerge, and the CXL solutions that follow will need to prove the value proposition and also be qualified in the systems," Omura said.

A true composable future remains years off

In fact, many of the products on Marvell's CXL roadmap are dependent on the availability of compatible microprocessors.

While the CXL 2.0 spec required for many of the technology's more advanced use cases — including composable infrastructure — has been around for more than a year, compatible CPUs from Intel and AMD aren't expected to launch until 2023 at the earliest.

These technologies include memory pooling and switching, which will enable datacenter operators to consolidate large quantities of memory into a single, centralized appliance that can be accessed by multiple servers simultaneously. "This is a tremendous value for hyperscalers looking to really optimize DRAM utilization," Omura argued.
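
The utilization argument is easiest to see with a toy model: provisioning every server for peak demand strands the gap between peak and average in each box, while a shared pool lets provisioning track the fleet-wide average. All figures below are assumptions for illustration, not vendor data:

```c
/* Toy model of stranded DRAM vs a shared CXL memory pool.
 * All figures are illustrative assumptions, not vendor data. */
#include <stdio.h>

int main(void) {
    const int servers = 1000;
    const int per_server_gib = 512;   /* static provisioning for peak demand */
    const int avg_demand_gib = 320;   /* assumed fleet-wide average demand */

    int static_total = servers * per_server_gib;
    int stranded = servers * (per_server_gib - avg_demand_gib);

    /* Pooled: a smaller local floor per server, plus a shared pool
     * sized to the average demand above that floor, with 10% headroom. */
    const int local_floor_gib = 256;
    int pool_gib = servers * (avg_demand_gib - local_floor_gib) * 110 / 100;
    int pooled_total = servers * local_floor_gib + pool_gib;

    printf("static: %d GiB provisioned, %d GiB stranded\n", static_total, stranded);
    printf("pooled: %d GiB provisioned\n", pooled_total);
    return 0;
}
```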

At this stage, Marvell believes chipmakers may begin offering CPUs that forgo onboard memory controllers and instead interface directly with a CXL switch for memory, storage, and connectivity to accelerators like DPUs and GPUs.

"The resources will be able to scale completely independently," Omura said.

With CXL 2.0, Marvell also plans to integrate its portfolio of general-purpose compute and domain-specific engines directly into the CXL controller.

For example, these CXL accelerators could be used to operate on data directly on a memory expansion module to accelerate analytics, machine learning, and complex search workloads, Omura said.
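
Marvell didn't detail a programming model for these engines, so the following is purely a hypothetical sketch of the near-memory pattern Omura describes: the host issues a small command and gets back a small result instead of streaming the whole buffer across the link for a CPU-side scan. The cxl_accel_filter function is invented for illustration; here it is a host-side stand-in for work that would run on the module:

```c
/* HYPOTHETICAL sketch of near-memory filtering. cxl_accel_filter()
 * is an invented stand-in for work that would execute on a CXL
 * module's engine; no such API has been announced. The point is the
 * traffic pattern: only match offsets cross the link, not the data. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static size_t cxl_accel_filter(const uint64_t *records, size_t n,
                               uint64_t key, size_t *hits, size_t cap) {
    size_t found = 0;
    for (size_t i = 0; i < n && found < cap; i++)  /* scan stays "on-module" */
        if (records[i] == key)
            hits[found++] = i;
    return found;
}

int main(void) {
    uint64_t records[] = {7, 3, 7, 9, 7, 1};  /* imagine module-resident data */
    size_t hits[8];
    size_t n = cxl_accel_filter(records, 6, 7, hits, 8);
    printf("%zu matches; only offsets crossed the link\n", n);
    return 0;
}
```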

Beyond the rack

For now, much of the chipmaker's CXL roadmap is limited to node- and rack-level communications. But with the introduction of the CXL 3.0 spec later this year, Marvell expects this to change.

Last year, Gen-Z donated its coherent-memory fabric assets to the CXL Consortium. This kind of fabric connectivity will be key to extending the technology beyond the rack level to the rest of the datacenter.

"The rack architecture of the future will fully utilize CXL as a low-latency fabric," Omura said. "You'll have completely disaggregated resources that you can instantly compose to meet your workload needs at the click of a button."

To achieve this goal, Marvell plans to use its investments in copper serializer/deserializers (SerDes) and in the electro-optical interface technology acquired with Inphi in 2020 to extend CXL fabrics across longer distances.

"We're in a great position to leverage our electro-optics leadership technology to ensure CXL has the best possible distance, latency, cost, and performance over fiber connectivity," he said. "We absolutely believe this represents a multi-billion dollar opportunity."

Marvell says that, eventually, all compute, storage, and memory resources will be disaggregated and composed on the fly across multiple racks over a CXL fabric. ®
