HPC and AI Workloads Drive Storage System Design

Many organizations are tied to outdated storage systems that cannot meet HPC and AI workload needs. Designing high‑throughput, highly scalable HPC storage systems requires expert planning and configuration. The Dell Validated Designs for HPC Storage solution offers a way to quickly upgrade antiquated storage…

Reduce Costs while Accelerating Data-intensive HPC Workloads

Access virtually unlimited infrastructure with HPC-optimized instances and fast interconnects to run more complex finite element analysis (FEA) simulations faster. Reduce product development costs, improve product quality, and shorten time-to-market.

Open Source or Enterprise-grade Containers? How SingularityPRO Adds Value for Mission-critical HPC Workloads

Sylabs developed SingularityPRO and Singularity Enterprise to deliver an array of important features and capabilities, along with enterprise-grade support, for organizations that need greater stability, security, and support. Organizations running AI, data science, and compute-driven analytics applications often have deeper needs for ensuring the performance and security of mission-critical workloads running in containers.
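
As a minimal illustration of the container workflow these products build on, the sketch below launches a Python job inside a Singularity image via the standard "singularity exec" command; the image name and script path are hypothetical placeholders, not part of Sylabs' documentation.

    import subprocess

    # Minimal sketch: run a containerized training job with the
    # Singularity CLI ("singularity exec <image> <command>").
    # "workload.sif" and "/opt/app/train.py" are hypothetical names.
    subprocess.run(
        ["singularity", "exec", "workload.sif",
         "python3", "/opt/app/train.py"],
        check=True,  # raise if the containerized job exits non-zero
    )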

It’s Time to Resolve the Root Cause of Congestion

Today, every high-performance computing (HPC) workload running globally faces the same crippling issue: congestion in the network.

Congestion can delay workload completion times for crucial scientific and enterprise workloads, making HPC systems unpredictable and leaving high-cost cluster resources waiting for delayed data to arrive. Despite various brute-force attempts to resolve the congestion issue, the problem has persisted. Until now.

In this paper, Matthew Williams, CTO at Rockport Networks, explains how recent innovations in networking technologies have led to a new network architecture that targets the root causes of HPC network congestion, specifically:

– Why today’s network architectures are not a sustainable approach to HPC workloads
– How HPC workload congestion and latency issues are directly tied to the network architecture
– Why a direct interconnect network architecture minimizes congestion and tail latency (a toy model of this effect follows the list)
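
To see why tail latency, not average latency, governs completion time, consider a toy model (our illustration, not from the paper): in a bulk-synchronous step, every node waits for the slowest message, so even a small probability of hitting a congested link stalls almost every step at the congested latency.

    import random

    # Toy model: one bulk-synchronous step completes only when the
    # slowest message arrives, so a single congested link sets the
    # pace. All numbers are illustrative assumptions, not measurements.
    def step_completion_us(n_nodes, p_congested=0.05,
                           base_us=2.0, congested_us=200.0):
        latencies = [congested_us if random.random() < p_congested
                     else base_us
                     for _ in range(n_nodes)]
        return max(latencies)

    random.seed(0)
    steps = sorted(step_completion_us(512) for _ in range(1000))
    print(f"median step time: {steps[500]:.0f} us (uncongested: 2 us)")

With 512 nodes, the chance that every message avoids congestion in a given step is 0.95^512, which is effectively zero, so the median step runs at the 200 µs congested latency rather than the 2 µs baseline: the tail dominates the workload.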

Atos HPC Software Suites

This whitepaper, “Atos HPC Software Suites,” from our friends over at Atos explains the main features and functionalities of the Atos HPC Software Suites and shares the company’s vision for the significant evolutions coming next in the HPC software arena.

HPC Cost Modeling

This ebook from our friends over at Rescale, “HPC Cost Modeling: Defining the Modern HPC Experience,” focuses on why the historical methodology for determining cost, price per core-hour, is at best incapable of providing effective cost optimization for HPC workloads. At worst, it can increase your overall cost of ownership and slow innovation and productivity.
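
A quick back-of-the-envelope comparison (hypothetical figures of our own, not Rescale's) shows why a lower price per core-hour does not imply a lower cost per completed workload:

    # Hypothetical figures for illustration only: the cheaper
    # core-hour loses once time-to-solution is accounted for.
    clusters = {
        "low-rate cluster":  {"usd_per_core_hour": 0.05, "core_hours_per_job": 1000},
        "optimized cluster": {"usd_per_core_hour": 0.08, "core_hours_per_job": 400},
    }
    for name, c in clusters.items():
        cost = c["usd_per_core_hour"] * c["core_hours_per_job"]
        print(f"{name}: ${cost:.2f} per completed job")
    # low-rate cluster: $50.00 per completed job
    # optimized cluster: $32.00 per completed job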

Workload Portability Enabled by a Modern Storage Platform

In this sponsored post, Shailesh Manjrekar, Head of AI and Strategic Alliances at WekaIO, explores what is meant by “data portability” and why it’s important. Looking at a customer pipeline, the customer context could be a software-defined car, an IoT edge point, a drone, a smart home, a 5G tower, and so on. In essence, we’re describing an AI pipeline that runs over an edge, a core, and a cloud; the pipeline therefore has three high-level components.
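
To make the three-tier shape concrete, here is a minimal sketch of such a pipeline; the tiers come from the post, but the stage names and task descriptions are our illustrative assumptions, not WekaIO's API.

    from dataclasses import dataclass

    # Illustrative three-tier AI pipeline: edge, core, cloud.
    # Task descriptions are assumed examples, not product features.
    @dataclass
    class Stage:
        tier: str   # "edge", "core", or "cloud"
        task: str

    pipeline = [
        Stage("edge",  "capture and preprocess data (car, drone, 5G tower)"),
        Stage("core",  "train and retrain models against the full dataset"),
        Stage("cloud", "burst compute and long-term retention"),
    ]
    for stage in pipeline:
        print(f"{stage.tier:>5}: {stage.task}")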