Recent Results Show HBM Can Make CPUs a Desirable Platform for AI and HPC

Third-party performance benchmarks show that CPUs with HBM2e memory now have sufficient memory bandwidth and computational capability to match GPU performance on many HPC and AI workloads. Recent Intel and independent benchmarks provide hard evidence that the upcoming Intel® Xeon® processors codenamed Sapphire Rapids, equipped with high-bandwidth HBM2e memory and Intel® Advanced Matrix Extensions, can match GPU performance on many of these workloads.

How Aerospace/Defense Can Harness Data with a Well-Designed AI Infrastructure

In this sponsored post, our friends over at Silicon Mechanics discuss how solving mission-critical problems with AI in the aerospace and defense industry is becoming a reality. Every day, new technologies emerge that simplify the deployment, management, and scaling of AI infrastructure to ensure long-term ROI. Asking yourself a few key questions up front can make deploying AI workloads, and harnessing the full potential of data, in aerospace/defense far more practical and efficient.

Accelerating the Modern Data Center – Gear Up for AI

Modern applications are transforming every business. From AI for better customer engagement, to data analytics for forecasting, to advanced visualization for product innovation, the need for accelerated computing is rapidly increasing. But enterprises face challenges powering these applications with existing infrastructure.

Improving AI Inference Performance with GPU Acceleration in Aerospace and Defense

The aerospace/defense industry often must solve mission-critical problems as they arise while also planning and designing for the rigors of future workloads. Technology advancements let aerospace/defense agencies gain the benefits of AI, but it’s essential to understand these advancements and the infrastructure requirements for AI training and inference.

Overcome Form Factor and Field Limitations with AI/HPC Workloads on the Edge

In this sponsored post, our friends over at Silicon Mechanics discuss how form factor, latency, and power can all be key limitations at the edge, and how recent advances in technology allow higher performance there. For this discussion, the edge means any compute workload taking place outside of both cloud and traditional on-prem data centers.

More ‘EPYC’ Options for HPC

[SPONSORED CONTENT]  High performance computing (HPC) often sets trends for the data center: driving innovation, adding new functionality, and enabling simulations that deliver more accuracy, finer detail, and deeper insight. Recently AMD released four new AMD EPYC™ 7003 Series processors with AMD 3D V-Cache™ technology. Socket compatible with existing EPYC 7003 processors, the AMD 3D V-Cache […]

An Approach to Democratizing HPC-Style Computing

In this sponsored post, Ehsan Totoni, CTO of Bodo.ai, discusses opportunities to integrate a parallelization approach – capable of scaling to 10k cores or more – into popular cloud-based Data Warehouses to help speed large-scale analytics and ELT computing. And, because the model can be engineered for various special-purpose hardware and accelerators, there are opportunities to apply this to GPUs and FPGAs for media processing and encoding as well.

Calling All HPC Programmers! Remove Code Barriers Using One Complete Software Suite

[SPONSORED POST] HPC systems are growing larger and more complex, featuring heterogeneous hardware components to accommodate a variety of workloads. Find out how the HPE Cray Programming Environment helps ease application development.

Double-precision CPUs vs. Single-precision GPUs; HPL vs. HPL-AI HPC Benchmarks; Traditional vs. AI Supercomputers

If you’ve wondered why GPUs are faster than CPUs, it’s partly because GPUs are asked to do less – or, to be more precise, to be less precise. Next question: if GPUs are faster than CPUs, why aren’t GPUs the mainstream, baseline processor used in HPC server clusters? Again, in part it gets […]
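The precision gap the teaser alludes to is concrete: traditional HPL is measured in double precision (FP64), while HPL-AI permits lower-precision arithmetic of the kind GPUs accelerate. As an illustrative sketch (not from the article), the snippet below computes the machine epsilon – the smallest increment distinguishable from 1.0 – for FP64 and FP32 using only Python's standard library; the `machine_epsilon` helper is a name invented here for the example.

```python
import struct

def machine_epsilon(fmt: str) -> float:
    """Smallest eps such that 1.0 + eps != 1.0 when stored at the
    precision of the given struct format ('d' = FP64, 'f' = FP32)."""
    eps = 1.0
    # Round-trip 1.0 + eps/2 through the target precision; halve eps
    # until the increment is no longer representable.
    while struct.unpack(fmt, struct.pack(fmt, 1.0 + eps / 2))[0] != 1.0:
        eps /= 2
    return eps

fp64_eps = machine_epsilon("d")  # double precision (traditional HPL)
fp32_eps = machine_epsilon("f")  # single precision (mixed-precision HPL-AI)

print(f"FP64 epsilon: {fp64_eps:.1e}")  # ~2.2e-16, about 16 decimal digits
print(f"FP32 epsilon: {fp32_eps:.1e}")  # ~1.2e-07, about 7 decimal digits
```

The roughly nine-decimal-digit difference is why "being less precise" translates into speed: FP32 values are half the width of FP64, doubling effective memory bandwidth and arithmetic throughput on hardware tuned for it.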

Spend Less on HPC/AI Storage (and more on CPU/GPU compute)

[SPONSORED POST] In this whitepaper courtesy of HPE, you’ll learn about three approaches that can help you feed your CPU- and GPU-accelerated compute nodes without I/O bottlenecks while creating efficiencies in Gartner’s Run category. As the market share leader in HPC servers, HPE saw the convergence of classic modeling and simulation with AI methods such as machine learning and deep learning coming, and now offers a new portfolio of parallel HPC/AI storage systems purpose-engineered to address all of these challenges in a cost-effective way.