HBv2 sizes series

Important

HBv2-series VMs are scheduled for retirement on May 31, 2027. This applies to all HBv2 sizes: Standard_HB120rs_v2, Standard_HB120-96rs_v2, Standard_HB120-64rs_v2, Standard_HB120-32rs_v2, and Standard_HB120-16rs_v2. After this date, HBv2 VMs will be deallocated; they will stop working, stop incurring billing charges, and no longer be covered by an SLA or support. Plan your migration to current-generation HPC alternatives before the retirement date. For migration recommendations, see the Migration guidance section below.

HBv2-series VMs are optimized for applications driven by memory bandwidth, such as fluid dynamics, finite element analysis, and reservoir simulation. HBv2 VMs feature 120 AMD EPYC 7V12 processor cores, 4 GB of RAM per CPU core, and no simultaneous multithreading. Each HBv2 VM provides up to 350 GB/s of memory bandwidth and up to 4 teraFLOPS of FP64 compute. HBv2-series VMs feature 200 Gb/s Mellanox HDR InfiniBand, connected in a non-blocking fat tree for optimized and consistent RDMA performance. These VMs support Adaptive Routing and Dynamic Connected Transport (DCT), in addition to the standard RC and UD transports. These features enhance application performance, scalability, and consistency, and their use is recommended.

Note

HBv2-series is scheduled for retirement on May 31, 2027. For new HPC architecture planning, consider current-generation alternatives such as HBv5-series, HX-series, HBv4-series, or HBv3-series VMs.

Host specifications

| Part | Quantity (Count/Units) | Specs |
|---|---|---|
| Processor | 16 - 120 vCPUs | AMD EPYC 7V12 (Rome) [x86-64] |
| L3 Cache | 512 MB | |
| Memory | 456 GB | 350 GB/s |
| Local Storage | 1 Temp Disk | 480 GiB |
| Local Storage | 1 NVMe Disk | 960 GiB |
| Remote Storage | 8 Disks | |
| Network | 8 vNICs | 40 Gb/s |
| Network | 1 InfiniBand HDR NIC | 200 Gb/s |
| Accelerators | None | |

For features supported by this series, see the Feature support section.

Sizes in series

vCPUs (Qty.) and Memory for each size

| Size Name | vCPUs (Qty.) | Memory (GB) | L3 Cache (MB) | Memory Bandwidth (GB/s) | Base CPU Frequency (GHz) | Single-core Frequency Peak (GHz) | All-core Frequency Peak (GHz) |
|---|---|---|---|---|---|---|---|
| Standard_HB120rs_v2 | 120 | 456 | 512 | 350 | 2.45 | 3.3 | 3.1 |
| Standard_HB120-96rs_v2 | 96 | 456 | 512 | 350 | 2.45 | 3.3 | 3.1 |
| Standard_HB120-64rs_v2 | 64 | 456 | 512 | 350 | 2.45 | 3.3 | 3.1 |
| Standard_HB120-32rs_v2 | 32 | 456 | 512 | 350 | 2.45 | 3.3 | 3.1 |
| Standard_HB120-16rs_v2 | 16 | 456 | 512 | 350 | 2.45 | 3.3 | 3.1 |
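The constrained-core sizes (Standard_HB120-96rs_v2 and smaller) expose fewer vCPUs while keeping the full 456 GB of memory and 350 GB/s of memory bandwidth, which raises the per-core figures for applications licensed or bottlenecked per core. A quick sketch of the per-core arithmetic, using only the numbers from the table above:

```python
# Per-vCPU memory and bandwidth for each HBv2 size. All sizes share the
# same 456 GB of RAM and 350 GB/s of memory bandwidth; only the vCPU
# count changes (figures taken from the sizes table above).
SIZES = {
    "Standard_HB120rs_v2": 120,
    "Standard_HB120-96rs_v2": 96,
    "Standard_HB120-64rs_v2": 64,
    "Standard_HB120-32rs_v2": 32,
    "Standard_HB120-16rs_v2": 16,
}
MEMORY_GB = 456
BANDWIDTH_GBS = 350

def per_core_figures(size_name: str) -> tuple[float, float]:
    """Return (GB of RAM per vCPU, GB/s of bandwidth per vCPU)."""
    vcpus = SIZES[size_name]
    return MEMORY_GB / vcpus, BANDWIDTH_GBS / vcpus

for name in SIZES:
    mem, bw = per_core_figures(name)
    print(f"{name}: {mem:.2f} GB/core, {bw:.2f} GB/s per core")
```

For example, Standard_HB120-32rs_v2 provides 14.25 GB of RAM and roughly 10.9 GB/s of bandwidth per vCPU, versus 3.8 GB and roughly 2.9 GB/s on the full 120-core size.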

Lifecycle and procurement

Important

Reserved Instance purchase end date: 1-year and 3-year Reserved Instance purchases for HBv2-series VMs ended on April 2, 2026. New long-term RI commitments are no longer available for this series. Existing RIs remain valid until their expiration.

  • Pay-as-you-go: HBv2-series VMs remain available on a pay-as-you-go basis until the retirement date of May 31, 2027.
  • Plan migration: Begin migrating workloads to current-generation HPC alternatives before May 31, 2027. After this date, HBv2 VMs will be deallocated automatically.

For migration recommendations, see the Migration guidance section.

Feature support

| Feature name | Support status |
|---|---|
| Premium Storage | Supported |
| Premium Storage caching | Supported |
| Live Migration | Not Supported |
| Memory Preserving Updates | Not Supported |
| Generation 2 VMs | Supported |
| Generation 1 VMs | Supported |
| Accelerated Networking | Supported |
| Ephemeral OS Disk | Supported |
| Nested Virtualization | Not Supported |
| Backend Network | InfiniBand HDR |

Migration guidance

HBv2-series VMs will be retired on May 31, 2027. Microsoft recommends migrating to the following current-generation HPC VM series before the retirement date:

| Recommended series | Key characteristics |
|---|---|
| HBv5-series | 4th Gen AMD EPYC (Zen4), up to 368 cores, HBM3 memory, NDR InfiniBand – best for memory bandwidth-intensive workloads |
| HX-series | AMD EPYC 9004, up to 176 cores, large L3 cache – suited for memory-capacity-intensive HPC workloads |
| HBv4-series | 4th Gen AMD EPYC (Genoa), up to 176 cores, NDR InfiniBand – strong all-round HPC replacement |
| HBv3-series | 3rd Gen AMD EPYC (Milan), up to 120 cores, HDR InfiniBand – the closest generational step up from HBv2 |

When selecting a replacement size, consider the following:

  • MPI and RDMA requirements: Verify that your MPI library and InfiniBand fabric (HDR vs. NDR) are supported on the target series.
  • Memory bandwidth: Benchmark your workload on the target series to confirm performance parity or improvement.
  • Workload compatibility: Test application correctness and performance on the target series before production migration.
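For teams scripting an inventory of HBv2 deployments, the recommendations above can be expressed as a simple lookup keyed on the migration priority. The mapping below is an illustrative sketch based on the table in this section, not a fixed rule; the right target still depends on benchmarking your workload:

```python
# Illustrative mapping from HBv2 sizes to candidate replacement series,
# based on the migration recommendations above. Benchmark the target
# series before committing to a migration.
HBV2_SIZES = [
    "Standard_HB120rs_v2",
    "Standard_HB120-96rs_v2",
    "Standard_HB120-64rs_v2",
    "Standard_HB120-32rs_v2",
    "Standard_HB120-16rs_v2",
]

CANDIDATE_SERIES = {
    "memory_bandwidth": "HBv5-series",  # HBM3 memory, highest bandwidth
    "memory_capacity": "HX-series",     # large memory and L3 cache
    "general_hpc": "HBv4-series",       # strong all-round replacement
    "minimal_change": "HBv3-series",    # closest generational step up
}

def suggest_replacement(size: str, priority: str = "general_hpc") -> str:
    """Suggest a candidate series for a retiring HBv2 size (illustrative)."""
    if size not in HBV2_SIZES:
        raise ValueError(f"{size} is not an HBv2 size")
    return CANDIDATE_SERIES[priority]

print(suggest_replacement("Standard_HB120rs_v2", "memory_bandwidth"))
```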

Other size information

  • List of all available sizes: Sizes
  • Pricing Calculator: Pricing Calculator
  • Information on Disk Types: Disk Types

Next steps

Read about the latest announcements, HPC workload examples, and performance results at the Azure Compute Tech Community Blogs.

For a higher level architectural view of running HPC workloads, see High Performance Computing (HPC) on Azure.