A 9.2 Gbps HBM3E subsystem (PHY + controller IP) silicon platform, based on Alphawave Semi's HBM3E IP, takes chiplet-enabled memory bandwidth to new heights of 1.2 terabytes per second (TBps), addressing the demand for ultra-high-speed connectivity in high-performance computing (HPC) and accelerated computing for generative artificial intelligence (AI) applications.
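As a back-of-the-envelope check on the headline figure, the per-stack bandwidth follows from the per-pin data rate and the interface width. The sketch below assumes the standard JEDEC 1024-bit HBM interface width per stack; the width is a general HBM parameter, not a figure taken from this announcement.

```python
# Rough per-stack HBM3E bandwidth estimate, assuming the standard
# JEDEC 1024-bit-wide interface per stack (an assumption, not a
# figure stated in the announcement).
per_pin_gbps = 9.2            # demonstrated per-pin data rate (Gbps)
interface_width_bits = 1024   # assumed HBM interface width per stack

bandwidth_gbps = per_pin_gbps * interface_width_bits  # aggregate Gb/s
bandwidth_gb_per_s = bandwidth_gbps / 8               # convert to GB/s

print(f"{bandwidth_gb_per_s:.1f} GB/s")  # ~1177.6 GB/s, i.e. roughly 1.2 TB/s
```

Under these assumptions, 9.2 Gbps per pin across a 1024-bit interface yields about 1.18 TB/s, consistent with the quoted 1.2 TBps.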
Alphawave Semi has demonstrated its HBM3E IP subsystem at recent trade shows. In collaboration with Micron, channel simulations of a complete HBM3E system in an advanced 2.5D package, comprising the HBM3E IP subsystem, an Alphawave Semi silicon interposer, and Micron's HBM3E memory, achieved a data rate of 9.2 Gbps. These results demonstrate that the platform significantly reduces time-to-market while delivering best-in-class industry performance and exceptional power efficiency for data center and HPC AI infrastructure.
HBM3E offers high bandwidth, low latency, a compact footprint, and power efficiency. Alphawave Semi customers are deploying complete HBM subsystem solutions that integrate the company's HBM PHY with a JEDEC-compliant, highly configurable HBM controller that can be tuned to maximize efficiency for application-specific AI and HPC workloads.
Alphawave Semi has also created an optimized silicon interposer design to achieve best-in-class results for signal integrity, power integrity, and thermal performance at 9.2 Gbps.