Powering the Next Wave of AI Infrastructure with Cloud-Native MegaDC Systems

The pace of innovation in AI infrastructure is accelerating—and with it, the demands placed on modern data centers. Organizations are no longer just scaling compute; they’re rethinking how infrastructure is designed, deployed, and optimized for a new generation of workloads.

At the center of this shift is a simple reality: AI inference, edge applications, and cloud-native services require a fundamentally different approach to performance, efficiency, and scalability.

That’s exactly where Supermicro’s latest MegaDC systems come in.

With 12 memory channels and the resulting increase in memory bandwidth, the new platform can deliver up to 50% performance improvements for AI inference workloads, enabling our customers to deploy high-performance infrastructure while reducing operational costs. By combining the efficiency of AmpereOne M processors with Supermicro's system architecture, we are delivering the linear scalability, along with GPU and DPU support, that modern AI acceleration workloads require.
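As a back-of-the-envelope illustration of where a gain of that order can come from, the sketch below compares the theoretical peak bandwidth of a 12-channel memory configuration against a common 8-channel one. The DDR5-5600 transfer rate and 64-bit channel width are assumptions chosen for illustration, not published MegaDC specifications.

```python
# Rough theoretical peak memory bandwidth for a 12-channel platform.
# DDR5-5600 and a 64-bit (8-byte) channel are illustrative assumptions,
# not stated specs of the MegaDC systems.

def peak_bandwidth_gbs(channels: int, transfers_mt_s: int,
                       bytes_per_transfer: int = 8) -> float:
    """Theoretical peak bandwidth in GB/s (decimal)."""
    return channels * transfers_mt_s * 1e6 * bytes_per_transfer / 1e9

twelve_ch = peak_bandwidth_gbs(12, 5600)   # 537.6 GB/s
eight_ch = peak_bandwidth_gbs(8, 5600)     # 358.4 GB/s
print(f"12-channel: {twelve_ch:.1f} GB/s vs 8-channel: {eight_ch:.1f} GB/s "
      f"(+{(twelve_ch / eight_ch - 1) * 100:.0f}%)")
```

Going from 8 to 12 channels raises theoretical peak bandwidth by exactly 50%, which is consistent with the scale of the inference gains quoted above for bandwidth-bound workloads.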

Built for Cloud-Native, Designed for Scale

Supermicro has expanded its MegaDC server portfolio with new systems powered by AmpereOne® M processors—purpose-built to meet the needs of modern, distributed, and performance-intensive environments.

These systems are designed from the ground up to enable organizations to deploy commercial off-the-shelf (COTS) infrastructure that delivers both performance and cost efficiency. The result is a platform that aligns with how today’s cloud-native applications are built and scaled.

Whether supporting AI inference, edge deployments, or telco workloads, the architecture prioritizes flexibility without sacrificing performance.

Unlocking Performance and Efficiency

Performance is only part of the equation. As infrastructure scales, power efficiency and thermal design become critical constraints. The new MegaDC systems address this with an efficient, modular air-cooling architecture optimized for high-density deployments. This approach enables organizations to:

  • Maximize compute density without excessive cooling overhead
  • Reduce power consumption and operational costs
  • Deploy at scale in environments where liquid cooling may not be practical
This balance of performance and efficiency is especially important for workloads like AI inference, edge computing, and content delivery—where scale and cost sensitivity go hand in hand.

Key capabilities of the latest MegaDC systems include:

  • First Supermicro system to support a total of 5 PCIe expansion slots, including a TSFF OCP 3.0 slot for up to a 400G OSFP NIC, in a 1U form factor
  • Support for two NVIDIA RTX 6000 GPUs at 35°C ambient temperature
  • OCP-inspired modular storage support, including E1.S and E3.S/L Non-Volatile Memory Express (NVMe) configurations
  • 2U systems supporting multiple modules for storage and expansion, including:
    • Up to 9 PCIe expansion slots and up to 24 NVMe all-flash bays
    • Up to 32× 25GbE ports for high-density Open Radio Access Network (ORAN)
  • PCIe switch module architecture enabling:
    • Up to 4 AI accelerators per switch module
    • Up to 8 AI accelerators using cascaded PCIe switch modules

This level of modularity allows organizations to tailor infrastructure to specific workloads—from AI and EDA to telco and edge deployments—without redesigning their entire architecture.
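As one sanity check on the slot sizing described above, the short sketch below compares a 400G NIC's line rate against the usable bandwidth of a PCIe 5.0 x16 slot. The Gen5 generation and x16 width of the slot are our assumptions for illustration; the published figures are the 32 GT/s per-lane rate and 128b/130b encoding of the PCIe 5.0 specification.

```python
# Back-of-the-envelope check that a PCIe Gen5 x16 slot can feed a
# 400G NIC at line rate. PCIe 5.0 runs at 32 GT/s per lane with
# 128b/130b encoding; the Gen5 x16 slot here is an assumption,
# not a stated MegaDC specification.

def pcie_gbs(lanes: int, gt_per_s: float = 32.0) -> float:
    """Usable unidirectional PCIe bandwidth in GB/s after 128b/130b encoding."""
    return lanes * gt_per_s * (128 / 130) / 8

nic_gbs = 400 / 8            # 400 Gb/s Ethernet -> 50 GB/s of payload
slot_gbs = pcie_gbs(16)      # ~63 GB/s usable on a Gen5 x16 link
print(f"Gen5 x16 slot: {slot_gbs:.1f} GB/s; 400G NIC needs {nic_gbs:.1f} GB/s")
```

Under those assumptions the slot retains roughly 25% headroom over the NIC's line rate, which is why a single x16 link is plausible for a 400G OSFP adapter.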

A Platform for the AI-Driven Future

The expansion of MegaDC systems reflects a broader shift in the industry: infrastructure is no longer static. It must be adaptive, efficient, and purpose-built for AI-driven workloads.

Through its continued collaboration with Ampere, Supermicro is enabling organizations to scale efficiently while managing power consumption and total cost of ownership—two of the most critical challenges in modern data centers.

From AI inference and autonomous systems to cloud services and edge computing, the next generation of applications demands infrastructure that can keep up.

For more information on the next generation of AI infrastructure, visit www.supermicro.com.