Single-Node Cloud Resilient Deployment Example Using AWS
Only available in Sonatype Nexus Repository Pro. Interested in a free trial? Start here.
Helm charts are available for this deployment example. Be sure to read the deployment instructions in the associated README file before using these charts.
Already have a Nexus Repository instance and want to migrate to a resilient architecture? See our migration documentation.
We recognize that Nexus Repository is mission-critical to your business. An Amazon Web Services (AWS)-based Nexus Repository deployment ensures that your Nexus Repository instance remains available even if disaster strikes: whether a single service or an entire data center goes down, you still have access to Nexus Repository.
This section provides instructions and explanations for setting up a resilient AWS-based Nexus Repository deployment like the one illustrated below.
A similar architecture could be used for other cloud or on-premises deployments with Kubernetes and file-based or other supported blob storage. If you would like to manage your own deployment, see Single Data Center On-Premises Deployment Example Using Kubernetes.
Use Cases
This reference architecture is designed to protect against the following scenarios:
- An AWS Availability Zone (AZ) outage within a single AWS region
- A node/server (i.e., EC2) failure
- A Nexus Repository service failure
You would use this architecture if you fit the following profiles:
- You are a Nexus Repository Pro user looking for a resilient Nexus Repository deployment option in AWS in order to reduce downtime
- You would like to achieve automatic failover and fault tolerance as part of your deployment goals
- You already have an Elastic Kubernetes Service (EKS) cluster set up as part of your deployment pipeline for your other in-house applications and would like to leverage the same for your Nexus Repository deployment
- You have migrated or set up Nexus Repository with an external PostgreSQL database and want to fully reap the benefits of an externalized database setup
- You do not need High Availability (HA) active-active mode
Requirements
In order to set up an environment like the one illustrated above and described in this section, you will need the following:
- A Nexus Repository Pro license
- Nexus Repository 3.33.0 or later
- An AWS account with permissions for accessing the following AWS services:
- Elastic Kubernetes Service (EKS)
- Relational Database Service (RDS) for PostgreSQL
- Application Load Balancer (ALB)
- CloudWatch
- Simple Storage Service (S3)
- Secrets Manager
If you require your clients to access more than one Docker Repository, you must use one of the following:
- An external load balancer (e.g., NGINX) as a reverse proxy instead of the provided ingress for Docker YAML
or
- A Docker Subdomain Connector with external DNS to automatically create 'A' records for each Docker subdomain
Limitations
In this reference architecture, a maximum of one Nexus Repository instance is running at a time. Having more than one Nexus Repository instance replica will not work.
Setting Up the Architecture
Step 1 - AWS EKS Cluster
Nexus Repository runs on a single-node AWS EKS cluster spread across two AZs within a single AWS region. After you configure EKS to run only one instance (by setting the min, max, and desired nodes to one), EKS ensures that only one instance of Nexus Repository runs at any one time in the entire cluster. If something causes the instance or the node to go down, another will be spun up. If an AZ becomes unavailable, AWS spins up a new node in the secondary AZ with a new pod running Nexus Repository.
Begin by setting up the EKS cluster in the AWS web console. AWS provides instructions for managed nodes (i.e., EC2) in their documentation.
Your EKS cluster should have a max node count of one spread across two AZs in an AWS region.
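For illustration, an eksctl cluster configuration like the sketch below satisfies these constraints; the cluster name, node group name, region, AZs, instance type, and volume size are placeholders to replace with values appropriate for your environment.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: nxrm-resilient-cluster      # placeholder cluster name
  region: us-east-1                 # your AWS region
availabilityZones: ["us-east-1a", "us-east-1b"]   # two AZs in that region
managedNodeGroups:
  - name: nxrm-nodes                # placeholder node group name
    instanceType: m5.2xlarge        # size for your workload
    minSize: 1
    desiredCapacity: 1
    maxSize: 1                      # never more than one node in the cluster
    volumeSize: 120                 # root EBS volume size in GB (see Step 8 for sizing guidance)
You would then create the cluster with eksctl create cluster -f <config file>.yaml.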
Step 2 - AWS Aurora PostgreSQL Cluster
An Aurora PostgreSQL cluster containing three database instances (one writer node and two read replicas) spread across three AZs in the region where you've deployed your EKS provides an external database for Nexus Repository configurations and component metadata.
AWS provides instructions on creating an Aurora database cluster in their documentation.
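If you prefer the AWS CLI to the web console, the general shape is sketched below; the identifiers, instance class, and subnet group are assumptions to adapt to your environment, and the password should come from a secure source rather than your shell history.
aws rds create-db-cluster \
  --db-cluster-identifier nxrm-db-cluster \
  --engine aurora-postgresql \
  --master-username nexus \
  --master-user-password <password> \
  --db-subnet-group-name <subnet group spanning three AZs> \
  --region <region>

# Create three instances in the cluster; Aurora makes one the writer and the others read replicas.
for i in 1 2 3; do
  aws rds create-db-instance \
    --db-instance-identifier nxrm-db-instance-$i \
    --db-cluster-identifier nxrm-db-cluster \
    --engine aurora-postgresql \
    --db-instance-class db.r5.large \
    --region <region>
done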
Step 3 - AWS Load Balancer Controller
The AWS Load Balancer Controller allows you to provision an AWS ALB via an Ingress type specified in your Kubernetes deployment YAML file. This load balancer, which is provisioned in one of the public subnets specified when you create the cluster, allows you to reach the Nexus Repository pod from outside the EKS cluster. This is necessary because the nodes on which EKS runs the Nexus Repository pod are in private subnets and are otherwise unreachable from outside the EKS cluster.
Follow the AWS documentation to deploy the AWS LBC to your EKS cluster.
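If you install the controller with Helm, the commands typically look like the sketch below; the IAM role and service account setup are environment-specific, so follow the AWS documentation for those steps.
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=<Your EKS Cluster Name> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller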
Step 4 - Kubernetes Namespace
A namespace allows you to isolate groups of resources in a single cluster. Resource names must be unique within a namespace, but not across namespaces. See the Kubernetes documentation about namespaces for more information.
To create a namespace, use a command like the one below with the kubectl command-line tool:
kubectl create namespace <namespace>
Step 5 - AWS Secrets Manager
AWS Secrets Manager stores your Nexus Repository Pro license as well as the database username, password, and host address. In the event of a failover, Secrets Manager can retrieve the license when the new Nexus Repository container starts. This way, your Nexus Repository always starts in Pro mode.
Use the AWS Secrets Store CSI drivers to mount the license secret, which is stored in AWS Secrets Manager, as a volume in the pod running Nexus Repository.
Include the --syncSecret.enabled=true flag when running the helm command for installing the Secret Store CSI Driver. This will ensure that secrets are automatically synced from AWS Secrets Manager into the Kubernetes secrets specified in the secrets YAML.
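For reference, installing the driver and the AWS provider generally looks like the sketch below; the release name and namespace are assumptions.
helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
  --namespace kube-system \
  --set syncSecret.enabled=true
kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml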
Note that only the AWS CLI supports storing a binary license file. AWS provides documentation for using a --secret-binary argument in the CLI.
The command will look as follows:
aws secretsmanager create-secret --name supersecretlicense --secret-binary fileb://super-secret-license-file.lic --region <region>
This will return a response such as this:
{ "VersionId": "4cd22597-f0a9-481c-8ccd-683a5210eb2b", "Name": "supersecretlicense", "ARN": "arn:aws:secretsmanager:<region>:<account id>:secret:supersecretlicense-abcdef" }
You will put the ARN value in the secrets YAML.
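As an illustration of where the ARN goes, the SecretProviderClass portion of the secrets YAML might look like the sketch below; the resource name and object alias are hypothetical, and the secrets YAML in our sample files GitHub repository is the authoritative reference.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: nexusrepo-secrets           # hypothetical name
  namespace: <namespace>
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "arn:aws:secretsmanager:<region>:<account id>:secret:supersecretlicense-abcdef"
        objectAlias: "nxrm-license.lic"   # file name the license is mounted as in the pod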
If updating the license (e.g., when renewing your license and receiving a new license binary), you'll need to restart the Nexus Repository pod after uploading the new license to AWS Secrets Manager. The AWS CLI command for updating a secret is put-secret-value.
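For example, a license renewal could look like the sketch below; the deployment name is a placeholder for your Nexus Repository deployment.
aws secretsmanager put-secret-value --secret-id supersecretlicense --secret-binary fileb://renewed-license-file.lic --region <region>
kubectl -n <namespace> rollout restart deployment <nexus repository deployment name>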
Step 6 - AWS CloudWatch (Optional, but Recommended)
When running Nexus Repository on Kubernetes, it may run on different nodes in different AZs over the course of the same day. To be able to access Nexus Repository's logs from nodes in all AZs, you must externalize your logs. We recommend externalizing logs to CloudWatch for this architecture; if you choose not to use CloudWatch, you must externalize your logs to another log aggregator so that they survive node crashes and pods being scheduled onto different nodes. Follow the AWS documentation to set up Fluent Bit for gathering logs from your EKS cluster.
When first installed, Nexus Repository sends task logs to separate files. Those files do not exist at startup and only exist as the tasks are being run. In order to facilitate sending logs to CloudWatch, you need the log file to exist when Nexus Repository starts up. The nxrm-logback-tasklogfile-override.yaml available in our sample files GitHub repository sets this up.
Once Fluent Bit is set up and running on your EKS cluster, apply the fluent-bit.yaml (example available in our sample files GitHub) to configure it to stream Nexus Repository's logs to CloudWatch. The specified Fluent Bit YAML sends the logs to CloudWatch log streams within a nexus-logs log group.
AWS also provides documentation for setting up and using CloudWatch.
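Once logs are flowing, you can verify the log streams from the CLI; the log group name below matches the nexus-logs group mentioned above.
aws logs describe-log-streams --log-group-name nexus-logs --region <region>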
Step 7 - External DNS (Optional)
If you are using or wish to use our Docker Subdomain Connector feature, you will need to use external-dns to create 'A' records in AWS Route 53.
You must meet all Docker Subdomain Connector feature requirements, and you must specify an HTTPS certificate ARN in Ingress YAML.
Permissions
You must first ensure you have appropriate permissions. To grant these permissions, open a terminal that has connectivity to your EKS cluster and run the following commands:
cat <<'EOF' >> external-dns-r53-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
EOF

aws iam create-policy --policy-name "AllowExternalDNSUpdates" --policy-document file://external-dns-r53-policy.json

POLICY_ARN=$(aws iam list-policies --query 'Policies[?PolicyName==`AllowExternalDNSUpdates`].Arn' --output text)

EKS_CLUSTER_NAME=<Your EKS Cluster Name>

aws eks describe-cluster --name $EKS_CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text

eksctl utils associate-iam-oidc-provider --cluster $EKS_CLUSTER_NAME --approve

ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)

OIDC_PROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text | sed -e 's|^https://||')
The EXTERNALDNS_NS variable below should be the same as the one you specify in your values.yaml for namespaces.externaldnsNs.
EXTERNALDNS_NS=nexus-externaldns

cat <<-EOF > externaldns-trust.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::$ACCOUNT_ID:oidc-provider/$OIDC_PROVIDER"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "$OIDC_PROVIDER:sub": "system:serviceaccount:${EXTERNALDNS_NS}:external-dns",
          "$OIDC_PROVIDER:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
EOF

IRSA_ROLE="nexusrepo-external-dns-irsa-role"

aws iam create-role --role-name $IRSA_ROLE --assume-role-policy-document file://externaldns-trust.json

aws iam attach-role-policy --role-name $IRSA_ROLE --policy-arn $POLICY_ARN

ROLE_ARN=$(aws iam get-role --role-name $IRSA_ROLE --query Role.Arn --output text)

echo $ROLE_ARN
Take note of the ROLE_ARN output by the last command above and specify it in your values.yaml for serviceAccount.externaldns.role.
External DNS YAML
After running the permissions above, run the external-dns.yaml.
Then, in the ingress.yaml, specify the Docker subdomains you want to use for your Docker repositories.
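As a rough illustration, a Docker rule in the ingress YAML takes the shape sketched below; the host name, service name, and certificate ARN are placeholders, and the ingress.yaml in our sample files GitHub repository is the authoritative reference.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nexus-docker-ingress                  # hypothetical name
  namespace: <namespace>
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/certificate-arn: <HTTPS certificate ARN>
spec:
  rules:
    - host: docker.example.com                # one rule per Docker subdomain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: <nexus repository service name>
                port:
                  number: 8081                # Nexus Repository's HTTP port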
Step 8 - Local Persistent Volume and Local Persistent Volume Claim
Using a local persistent volume allows Elasticsearch indexes to survive pod restarts on a particular node. When used in conjunction with the NEXUS_SEARCH_INDEX_REBUILD_ON_STARTUP flag, this ensures that the Elasticsearch index is rebuilt only if it is empty when the Nexus Repository pod starts. This means that the only time the Elasticsearch index is rebuilt is the first time a Nexus Repository pod starts on a node.
Storage space will be allocated to a local persistent volume from your root EBS volume (i.e., the EBS volume attached to the provisioned node). Therefore, you must ensure that the size you specify for your EKS node's root EBS volume is sufficient for your usage. Make the root EBS volume larger than the size specified in the local persistent volume claim so that there is spare storage capacity on the node. For example, for a local persistent volume claim size of 100 GB, you could make the root EBS volume 120 GB.
See the sample storage class YAML, local persistent volume YAML, and local persistent volume claim YAML examples in our sample files GitHub repository.
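For orientation, a local persistent volume setup generally follows the shape sketched below; the storage class name, path, node selector, and sizes are assumptions, and the sample YAMLs in our GitHub repository are the authoritative versions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner     # local volumes are statically provisioned
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nexus-local-pv                        # hypothetical name
spec:
  capacity:
    storage: 100Gi                            # must fit within the node's root EBS volume
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/nexus-data                     # directory on the node's root EBS volume
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/os
              operator: In
              values: ["linux"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexus-local-pvc                       # hypothetical name
  namespace: <namespace>
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi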
Step 9 - AWS S3
Located in the same region as your EKS deployment, AWS S3 provides your object (blob) storage. AWS provides detailed documentation for S3 on their website.
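If you create the bucket from the CLI, it can be as simple as the sketch below; the bucket name is a placeholder. You then configure an S3 blob store in Nexus Repository that points at this bucket.
aws s3api create-bucket --bucket <your blob store bucket> --region <region> --create-bucket-configuration LocationConstraint=<region>
# Omit --create-bucket-configuration when creating the bucket in us-east-1.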
Sample Kubernetes YAML Files
You can use the sample AWS resiliency YAML files from our sample files GitHub repository to help set up the YAMLs you will need for a resilient deployment. Note that these are different from the AWS resiliency Helm charts.
Before creating and running the YAML files linked below, you must create a namespace. To create a namespace, use a command like the one below:
kubectl create namespace <namespace>
Then, you must run your YAML files in the order below:
- Storage Class YAML
- Secrets YAML as mentioned in Secrets Manager setup
- Fluent-bit Setup as mentioned in the CloudWatch section
- nxrm-logback-tasklogfile-override YAML
- External DNS YAML
- Local Persistent Volume YAML
- Local Persistent Volume Claim YAML
- Deployment YAML
- Services YAML
The resources created by these YAMLs are not in the default namespace; remember to pass your namespace (e.g., kubectl -n <namespace>) when inspecting them.
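Assuming file names like those in the sample files GitHub repository (the names below are illustrative), applying them in order looks like this:
kubectl apply -f storage-class.yaml
kubectl apply -f secrets.yaml
kubectl apply -f fluent-bit.yaml
kubectl apply -f nxrm-logback-tasklogfile-override.yaml
kubectl apply -f external-dns.yaml
kubectl apply -f local-persistent-volume.yaml
kubectl apply -f local-persistent-volume-claim.yaml
kubectl apply -f deployment.yaml
kubectl apply -f services.yaml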