We recently hosted a webinar discussing the latest MLPerf inference benchmark results for Supermicro systems with the NVIDIA HGX B200 8-GPU. With leading performance in many categories, we covered how you can use these results to make informed decisions about your AI infrastructure.
Webinar: MLPerf Inferencing Benchmarks: Showcasing Supermicro NVIDIA HGX™ B200 Systems
We featured Supermicro experts Alok Srivastav, Director, Solutions Management AI; Advait Italia, Systems Engineer; and Michael Schulman, Sr. Corporate Communications Manager.
Let’s look at some highlights and key takeaways from the webinar.
Advait: “When you use these benchmarks and when you have these certifications from MLCommons, which is industry standard, you future-proof your infrastructure as well.”
Alok: “Data centers are power limited. To get the best use of their power, the best option is the liquid cooling route.”
Advait: “There has to be a level of transparency and a common ground where we all can agree and we all can conclude that this is a common level of benchmark.”
These are just a few highlights. Watch the whole webinar for a more in-depth look at the benchmarks and a deeper dive into the systems.
Watch On-Demand: MLPerf Inferencing Benchmarks: Showcasing Supermicro NVIDIA HGX™ B200 Systems
Additional Resources: