Option 3 - High Availability Deployment in Amazon Web Services (AWS)
Only available in Sonatype Nexus Repository Pro. Interested in a free trial? Start here.
NEW IN 3.50.0
Sonatype Nexus Repository High Availability deployments should be fully deployed and tested in a development environment before attempting to deploy in production. Improper deployment in a production environment can result in critical data loss.
Use Cases
This reference architecture is designed to protect against the following scenarios:
- An AWS Availability Zone (AZ) outage within a single AWS region
- A node/server (i.e., EC2) failure
- A Nexus Repository service failure
You would use this architecture if you fit the following profiles:
- You are a Sonatype Nexus Repository Pro user looking for a highly available Nexus Repository deployment option in AWS in order to reduce downtime.
- You would like to achieve automatic failover and fault tolerance as part of your deployment goals.
- You already have an Elastic Kubernetes Service (EKS) cluster set up as part of your deployment pipeline for your other in-house applications and would like to leverage the same cluster for your Nexus Repository deployment.
- You have migrated or set up Nexus Repository with an external PostgreSQL database and want to fully reap the benefits of an externalized database setup.
Requirements
- Each Sonatype Nexus Repository instance must meet our Sonatype Nexus Repository System Requirements.
- You must also meet all System Requirements for High Availability Deployments, including using shared blob storage (see Migrating to Shared Blob Storage if necessary).
- Before proceeding, you must have adjusted the max_connections parameter on your external PostgreSQL database.
- If you are using EKS version 1.23+, you must first install the AWS EBS CSI driver before running the current HA Helm Chart (GitHub, ArtifactHub). We recommend using the EKS add-on option as described in AWS’s installation instructions.
- You must also have an AWS account with permissions for accessing the AWS services described in the sections below.
Limitations
- All active Sonatype Nexus Repository instances will have to be shut down in order to upgrade the Sonatype Nexus Repository version.
- We do not recommend deploying HA clusters across regions due to the large number of database interactions involved in HA.
- In the absence of a distributed solution for logging, AWS EBS volumes are required to persist the log files that are needed for support zip creation.
Setting Up the Infrastructure
Step 1 - AWS EKS Cluster
Nexus Repository runs on an AWS EKS cluster spread across two or more AZs within a single AWS region. You can control the number of nodes by setting the min, max, and desired node parameters. EKS ensures that the desired number of Nexus Repository instances runs within the cluster. If an instance or node goes down, AWS will spin up a replacement. If an AZ becomes unavailable, AWS spins up new node(s) in the secondary AZ.
Begin by setting up the EKS cluster in the AWS web console, ensuring your nodes are spread across two or more AZs in one region. The EKS cluster must be created in a virtual private cloud (VPC) with two or more public or private subnets in different AZs. AWS provides instructions for managed nodes (i.e., EC2) in their documentation.
Step 1 Validation (Optional)
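One quick check is to list the cluster's nodes with their AZ labels (this assumes kubectl is already configured to talk to your new cluster):

kubectl get nodes -L topology.kubernetes.io/zone

The ZONE column should show nodes in at least two different AZs.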
Step 2 - Service Account
If you plan to use the HA Helm Chart (GitHub, ArtifactHub), you do not need to perform the step below; the Helm chart will handle this for you.
A Kubernetes service account is associated with an IAM role containing the IAM policies needed for S3 and AWS Secrets Manager access. The Nexus Repository containers spun up by the StatefulSet will use this service account. Run the Service Account YAML to establish the service account.
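For orientation, a service account bound to an IAM role via IRSA looks roughly like the sketch below; the name, namespace, and role ARN are placeholders, and the Service Account YAML in our sample files is the authoritative version:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: <nexus-repository-sa>
  namespace: <nexusrepo namespace>
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<account id>:role/<nexus-role>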
Step 2 Validation (Optional)
Step 3 - AWS Aurora PostgreSQL Cluster
An Aurora PostgreSQL cluster containing three database instances (one writer node and two replicas) spread across three AZs in the region where you've deployed your EKS cluster provides an external database for Nexus Repository configurations and component metadata. We recommend creating these nodes in the same AZs as the EKS cluster.
AWS provides instructions on creating an Aurora database cluster in their documentation.
Step 3 Validation (Optional)
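One way to check the cluster layout from the CLI (assumes the AWS CLI is configured and <cluster id> is your Aurora cluster identifier):

aws rds describe-db-clusters --db-cluster-identifier <cluster id> --query "DBClusters[0].DBClusterMembers"

The output should list one member with IsClusterWriter set to true and two readers; the RDS console shows the AZ of each instance.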
Step 4 - AWS Load Balancer Controller
The AWS Load Balancer Controller allows you to provision an AWS ALB via an Ingress type specified in your Kubernetes deployment YAML file. This load balancer, which is provisioned in one of the public subnets specified when you create the cluster, allows you to reach the Nexus Repository pod from outside the EKS cluster. This is necessary because the nodes on which EKS runs the Nexus Repository pod are in private subnets and are otherwise unreachable from outside the EKS cluster.
Follow the AWS documentation to deploy the AWS LBC to your EKS cluster. If you are unfamiliar with the AWS Load Balancer Controller and Ingress, read the AWS blog on AWS Load Balancer Controller.
If you encounter any errors when creating a load balancer using the AWS Load Balancer Controller, follow the steps in AWS's troubleshooting knowledgebase article.
Step 4 Validation (Optional)
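A minimal check that the controller is installed and running (assumes the default kube-system installation from the AWS documentation):

kubectl get deployment -n kube-system aws-load-balancer-controller

The deployment should report its replicas as ready.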
Step 5 - Kubernetes Namespace
A namespace allows you to isolate groups of resources in a single cluster. Resource names must be unique within a namespace, but not across namespaces. See the Kubernetes documentation about namespaces for more information.
To create a namespace, use a command like the one below with the kubectl command-line tool:
kubectl create namespace <namespace>
Step 5 Validation (Optional)
Step 6 - AWS Secrets Manager
AWS Secrets Manager stores your Nexus Repository Pro license as well as the database username, password, and host address. In the event of a failover, the new Nexus Repository container retrieves the license from Secrets Manager when it starts. This way, your Nexus Repository always starts in Pro mode.
Follow the AWS documentation for Secrets Store CSI Drivers to mount the license secret, which is stored in AWS Secrets Manager, as a volume in the pod running Nexus Repository.
Additional Instructions for Installing Secret Store CSI Driver
When you reach the command for installing the Secret Store CSI Driver, include the --set syncSecret.enabled=true flag. This ensures that secrets are automatically synced from AWS Secrets Manager into the Kubernetes secrets specified in the secrets YAML.
Note that only the AWS CLI supports storing a binary license file. AWS provides documentation for using the --secret-binary argument in the CLI.
The command will look as follows:
aws secretsmanager create-secret --name supersecretlicense --secret-binary fileb://super-secret-license-file.lic --region <region>
This will return a response such as this:
{ "VersionId": "4cd22597-f0a9-481c-8ccd-683a5210eb2b", "Name": "supersecretlicense", "ARN": "arn:aws:secretsmanager:<region>:<account id>:secret:supersecretlicense-abcdef" }
You will put the ARN value in the secrets YAML.
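For orientation, in the Secrets Store CSI driver's SecretProviderClass, the ARN appears roughly as in the sketch below; the names and alias are placeholders, and the secrets YAML in our sample files is the authoritative version:

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: <secret-provider-class-name>
  namespace: <nexusrepo namespace>
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "arn:aws:secretsmanager:<region>:<account id>:secret:supersecretlicense-abcdef"
        objectAlias: "<license file alias>"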
If you need to update the stored license later (e.g., after a license renewal), use the AWS CLI's put-secret-value command with the same --secret-binary argument.
Additional Instructions for Creating IAM Service Account
When you reach the command for creating an IAM service account, follow these additional instructions:
- You must include two additional parameters when running the command: --role-only and --namespace <nexusrepo namespace>.
- It is important to include the --role-only option in the eksctl create iamserviceaccount command so that the Helm chart manages the Kubernetes service account.
- The namespace you specify to the eksctl create iamserviceaccount command must be the same namespace into which you will deploy the Nexus Repository pod. Although the namespace does not exist at this point, you must specify it as part of the command. Do not create that namespace manually beforehand; the Helm chart will create and manage it.
- You should specify this same namespace as the value of nexusNs in your values.yaml.
Your command should look similar to the following, where $POLICY_ARN is the access policy created in a previous step when following the AWS documentation:
eksctl create iamserviceaccount --name <sa-name> --region="$REGION" --cluster "$CLUSTERNAME" --attach-policy-arn "$POLICY_ARN" --approve --override-existing-serviceaccounts --role-only
Note that this command will not create a Kubernetes service account; it will only create an IAM role.
Step 6 Validation (Optional)
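You can confirm that the role exists and is registered with the cluster (assumes the variables from the command above):

eksctl get iamserviceaccount --cluster "$CLUSTERNAME" --namespace <nexusrepo namespace>

The output should show the role ARN; remember that no Kubernetes service account exists yet at this point.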
Step 7 - AWS CloudWatch (Optional, but Recommended)
When running Nexus Repository on Kubernetes, it may run on different nodes in different AZs over the course of the same day. To be able to access Nexus Repository's logs from nodes in all AZs, you must externalize your logs. We recommend externalizing your logs to CloudWatch for this architecture. If you choose not to use CloudWatch, you must externalize your logs to another log aggregator so that the Nexus Repository logs survive node crashes and pods being scheduled on different nodes. Follow the AWS documentation to set up Fluent Bit for gathering logs from your EKS cluster.
When first installed, Nexus Repository sends task logs to separate files. Those files do not exist at startup and only exist as the tasks are being run. In order to facilitate sending logs to CloudWatch, you need the log file to exist when Nexus Repository starts up. The nxrm-logback-tasklogfile-override.yaml available in our sample files GitHub repository sets this up.
Once Fluent Bit is set up and running on your EKS cluster, apply the fluent-bit.yaml to configure it to stream Nexus Repository's logs to CloudWatch. The specified Fluent Bit YAML sends the logs to CloudWatch log streams within a nexus-logs log group.
AWS also provides documentation for setting up and using CloudWatch.
Step 7 Validation (Optional)
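Once logs are flowing, a quick CLI check is to look for the log group created by the Fluent Bit configuration (assumes the AWS CLI is configured for the same region):

aws logs describe-log-groups --log-group-name-prefix nexus-logs

You can also browse the individual log streams under the nexus-logs group in the CloudWatch console.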
Step 8 - External DNS (Optional)
If you are using or wish to use our Docker Subdomain Connector feature, you will need to use external-dns to create 'A' records in AWS Route 53.
You must meet all Docker Subdomain Connector feature requirements, and you must specify an HTTPS certificate ARN in the Ingress YAML.
You must also add your Docker subdomains to your values.yaml.
Permissions
You must first ensure you have appropriate permissions. To grant these permissions, open a terminal that has connectivity to your EKS cluster and run the following commands:
The commands below do not register a domain for you; you must have a registered domain before completing this step.
1. Use the following to create the policy JSON file.
cat <<'EOF' >> external-dns-r53-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
EOF
2. Use the following to set up permissions to allow external DNS to create route 53 records.
aws iam create-policy --policy-name "AllowExternalDNSUpdates" --policy-document file://external-dns-r53-policy.json
POLICY_ARN=$(aws iam list-policies --query 'Policies[?PolicyName==`AllowExternalDNSUpdates`].Arn' --output text)
EKS_CLUSTER_NAME=<Your EKS Cluster Name>
eksctl utils associate-iam-oidc-provider --cluster $EKS_CLUSTER_NAME --approve
ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
OIDC_PROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text | sed -e 's|^https://||')
3. The EXTERNALDNS_NS variable below should be the same as the one you specify in your values.yaml for namespaces.externaldnsNs.

EXTERNALDNS_NS=nexus-externaldns
cat <<-EOF > externaldns-trust.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::$ACCOUNT_ID:oidc-provider/$OIDC_PROVIDER"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "$OIDC_PROVIDER:sub": "system:serviceaccount:${EXTERNALDNS_NS}:external-dns",
          "$OIDC_PROVIDER:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
EOF
IRSA_ROLE="nexusrepo-external-dns-irsa-role"
aws iam create-role --role-name $IRSA_ROLE --assume-role-policy-document file://externaldns-trust.json
aws iam attach-role-policy --role-name $IRSA_ROLE --policy-arn $POLICY_ARN
ROLE_ARN=$(aws iam get-role --role-name $IRSA_ROLE --query Role.Arn --output text)
echo $ROLE_ARN
Take note of the ROLE_ARN output last above and specify it in your values.yaml for serviceAccount.externaldns.role.
External DNS YAML
After running the permissions above, run the external-dns.yaml.
Then, in the ingress.yaml, specify the Docker subdomains you want to use for your Docker repositories.
Step 8 Validation (Optional)
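To confirm that external-dns is creating records, check its logs and the hosted zone; the deployment name and namespace below assume the values from the external-dns.yaml, so adjust them if yours differ:

kubectl logs -n nexus-externaldns deployment/external-dns
aws route53 list-resource-record-sets --hosted-zone-id <your hosted zone id>

Look for 'A' records matching the Docker subdomains you specified.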
Step 9 - Dynamic Provisioning Storage Class
Run the storage class YAML. This storage class will dynamically provision EBS volumes for use by each of your Nexus Repository pods. You must make sure that you have an AWS EBS Provisioner installed in your EKS cluster.
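As a rough sketch, a storage class that dynamically provisions EBS volumes through the EBS CSI driver looks like the following; the name and volume type are illustrative, and the storage class YAML in our sample files is the authoritative version:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <nexus-storage-class>
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3

WaitForFirstConsumer delays volume creation until a pod is scheduled, which keeps each EBS volume in the same AZ as the pod that uses it.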
Step 9 Validation (Optional)
Step 10 - AWS S3
Located in the same region as your EKS deployment, AWS S3 provides your object (blob) storage. AWS provides detailed documentation for S3 on their website.
Step 10 Validation (Optional)
Starting Your HA Deployment
Step 1 - StatefulSet
Run the StatefulSet YAML to start your Nexus Repository Pods.
Step 1 Validation (Optional)
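To watch the pods come up (assumes the namespace from Step 5 and the StatefulSet name from the StatefulSet YAML):

kubectl get pods -n <namespace> -o wide
kubectl rollout status statefulset/<statefulset name> -n <namespace>

All replicas should reach the Running state, spread across your AZs.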
Step 2 - Ingress YAML
Run the Ingress YAML to expose the service externally; this is required to allow you to communicate with the pods.
Step 2 Validation (Optional)
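Once the Ingress is created, the ALB's DNS name appears in the ADDRESS column (assumes the namespace from Step 5):

kubectl get ingress -n <namespace>

Note that it can take a few minutes for the ALB to finish provisioning before that address responds.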
HA Helm Chart
Unless otherwise specified, all steps detailed above are still required if you are planning to use the HA Helm chart.
To use the HA Helm Chart (GitHub, ArtifactHub), after completing the steps above, use git to check out the HA Helm Chart repository. Follow the instructions in the associated Readme file.
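As an illustrative sketch only (the Readme is authoritative, and the release name and chart path below are placeholders):

helm install <release name> <path to checked-out chart> -f values.yaml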
The HA Helm chart is also set up to disable the default blob stores and repositories on all instances.
YAML Order List
For those not using a Helm chart, you must run your YAML files in the order below after creating a namespace:
- Namespaces YAML
- Service Accounts YAML
- Secrets YAML
- Logback Tasklogfile Override YAML
- Fluent-bit YAML (only required if using CloudWatch)
- External DNS YAML (Optional)
- Storage Class YAML
- Services YAML
- Ingress for Docker YAML (Optional)
- Docker Services YAML (Optional)
- StatefulSet YAML
- Ingress YAML
- The resources that these YAMLs create are not in the default namespace.
- The sample YAMLs are set up to disable the default blob stores and repositories on all instances.