
Performance Benchmarks for High Availability


Sonatype IQ Server High Availability (HA) installations vary with your applications and your organization's needs. The following sections provide performance metrics for IQ Server HA installations in different environments, to help you choose the installation that best fits your performance requirements, runtimes, and cost.

We have thoroughly tested and verified the functionality and performance of the Sonatype IQ Server with the third-party tools, technologies, and platforms named in this section. Using other equivalent technologies and platforms may not produce the same outcomes and is not supported by Sonatype.

On this page

Environment specifications, reference architecture, and corresponding performance benchmarks for:

  • Simulation Approach and Steps

  • Performance Benchmarks for a Sample Environment: 3 Nodes in the EKS Cluster with Java Optimization

Simulation Approach and Steps

Scan application used: webgoat (binary scan).

Simulation approach: multiple policy evaluation requests per minute were simulated against multiple IQ applications over a 20-minute period.


  1. SubmitScan: Submits the scan.xml.gz (of the webgoat app) to the performance environment using the endpoint /rest/integration/applications/{applicationName}/evaluations/cli/stages/build

  2. CheckEvaluationStatus: Checks the status of each submitted scan's evaluation every 1 second
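The two steps above can be sketched as follows. This is a minimal illustration, not the harness Sonatype used: the endpoint path comes from step 1, but the host, credentials, content type, and response handling (a Location header pointing at a status URL, and a 200 status signaling completion) are assumptions for the sake of the example.

```python
import time
import urllib.request

BASE_URL = "http://iq-server.example.com"  # placeholder host (assumption)

def submit_scan_url(app_name: str) -> str:
    # Endpoint from step 1 (SubmitScan)
    return f"{BASE_URL}/rest/integration/applications/{app_name}/evaluations/cli/stages/build"

def submit_scan(app_name: str, scan_file: str = "scan.xml.gz") -> str:
    """SubmitScan: POST the scan.xml.gz payload for the given application.

    Returns a status URL; the Location-header response handling is hypothetical.
    """
    with open(scan_file, "rb") as f:
        req = urllib.request.Request(
            submit_scan_url(app_name), data=f.read(), method="POST"
        )
        req.add_header("Content-Type", "application/gzip")  # assumed content type
        with urllib.request.urlopen(req) as resp:
            return resp.headers.get("Location", "")

def check_evaluation_status(status_url: str, timeout_s: int = 1200) -> bool:
    """CheckEvaluationStatus: poll every 1 second, as in step 2."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        with urllib.request.urlopen(status_url) as resp:
            if resp.status == 200:  # assumed: 200 signals a finished evaluation
                return True
        time.sleep(1)
    return False
```

In a load test, many such submit/poll pairs would run concurrently to reach the target requests-per-minute rate.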

Performance Benchmarks for a Sample Environment: 3 Nodes in the EKS Cluster with Java optimization

Environment Specifications

EKS Cluster

  • Instance class: m5d.2xlarge

  • Number of instances: 3

  • AMI type: AL2_x86_64

  • Kubernetes version: 1.23

RDS Database

  • Instance class: db.m5.4xlarge

  • Allocated storage: 50 GB

  • Engine: PostgreSQL

  • Version: 13.7

File Storage

  • 1 EFS drive

Other options

  • ALB configured

  • SSL enabled

  • External DNS configured

  • Java optimization used: iq_server.javaOpts="-Xms24g -Xmx24g"

Reference Architecture


Policy Evaluation Performance Benchmarks

Policy Evaluations: Requests per Minute (RPM) | Scans Performed (within 20 minutes) | Failed Scans | Average Duration (in seconds) | Maximum Duration (in seconds)

60 (8x* mode) (86,400 per day / 604,800 for 7 days) | | | |

120 (16x* mode) (172,800 per day / 1,209,600 for 7 days) | | | |

* x refers to 7.5 policy evaluations per minute (10,800 per day / 75,600 for 7 days)
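The footnote's scaling arithmetic can be checked directly: with x = 7.5 policy evaluations per minute, 8x mode is 60 RPM and 16x mode is 120 RPM, and the per-day and per-week totals follow from multiplying by 1,440 minutes per day and 7 days.

```python
X = 7.5  # baseline policy evaluations per minute, per the footnote

def daily_and_weekly(rpm: float) -> tuple[int, int]:
    """Scale a requests-per-minute rate to per-day and per-7-day totals."""
    per_day = int(rpm * 60 * 24)  # 1,440 minutes per day
    return per_day, per_day * 7

# Baseline x: 10,800 per day / 75,600 for 7 days
assert daily_and_weekly(X) == (10_800, 75_600)
# 8x mode (60 RPM): 86,400 per day / 604,800 for 7 days
assert daily_and_weekly(8 * X) == (86_400, 604_800)
# 16x mode (120 RPM): 172,800 per day / 1,209,600 for 7 days
assert daily_and_weekly(16 * X) == (172_800, 1_209_600)
```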