Deployment Pattern Library
Defining the Problem
Sonatype Nexus Repository is mission-critical infrastructure, and organizations face an ever-growing need to balance availability with infrastructure costs. Many Sonatype Nexus Repository customers also have deployment teams across the globe, making effective and efficient distribution essential. In evaluating industry struggles and our customers' needs, we've identified three primary concerns that can be addressed with appropriate deployment patterns: resiliency, scalability, and distribution.
Establish Resiliency Needs
Resiliency refers to the ability to recover from disruptions to critical processes and the technology systems that support them. Regardless of your distribution and scalability needs, you must have some level of resiliency to ensure a successful deployment.
Resiliency always starts with effective backup and restore procedures as described in Backup/Same-Site Restore. If you do nothing else, ensure you are effectively using this pattern for every Sonatype Nexus Repository instance that is a system of record in your deployment.
A system of record is the authoritative Sonatype Nexus Repository instance for a given artifact.
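The core of the backup pattern is capturing the database and the blob store together so they represent a single point in time. A minimal sketch, assuming a deployment with an external PostgreSQL database and a file-based blob store; the database name, paths, and tooling choices below are illustrative assumptions, not the product's prescribed procedure:

```python
from datetime import datetime, timezone

def backup_commands(db_name: str, blob_dir: str, dest_dir: str) -> list[list[str]]:
    """Build the commands for one point-in-time backup: a PostgreSQL dump
    plus a blob store copy, stamped identically so they restore together."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return [
        # Dump the repository database with pg_dump (custom format is compressed
        # and restorable with pg_restore).
        ["pg_dump", "--format=custom",
         f"--file={dest_dir}/{db_name}-{stamp}.dump", db_name],
        # Copy the blob store from the same point in time.
        ["rsync", "-a", f"{blob_dir}/", f"{dest_dir}/blobs-{stamp}/"],
    ]

# Inspect the commands without running them.
for cmd in backup_commands("nexus", "/opt/sonatype-work/blobs", "/backups"):
    print(" ".join(cmd))
```

The shared timestamp is the point: restoring a database dump from one moment against blobs copied at another can leave the instance referencing blobs that don't exist (or vice versa).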
Beyond backup and restore, you can increase Sonatype Nexus Repository resiliency in a multitude of ways. Which of the following types of disruptions do you need to protect against?
- Data integrity disasters where data stored in Sonatype Nexus Repository becomes accidentally corrupted, leading to data loss
- Node failure when a VM or server crashes leading to downtime
- Data center outages for on-premises deployments
- Availability zone outages for cloud deployments
Our most resilient deployments, including high availability, will require Sonatype Nexus Repository Pro and an external PostgreSQL database.
Establish Scalability Needs
Enterprise deployments often experience large spikes in requests or must handle sustained, intense workloads. Scalability refers to a deployment's ability to adjust to these changing demands. Sonatype Nexus Repository can be scaled in several ways:
- Adding capacity (compute power, system memory, and faster storage) to the server on which Sonatype Nexus Repository is running.
- Separating distinct workloads across multiple environments; for example, by development team, by product distribution, or by specific ecosystem (Docker, Yum, Maven, npm, etc.).
- Distributing requests across multiple nodes in a highly available cluster or among independent servers proxying from a central source.
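To picture the workload-separation option above: each ecosystem's traffic can be routed to its own Sonatype Nexus Repository environment. A minimal sketch; the environment URLs are hypothetical placeholders:

```python
# Hypothetical mapping of package ecosystems to dedicated
# Sonatype Nexus Repository environments (workload separation).
ENVIRONMENTS = {
    "docker": "https://nexus-docker.example.com",
    "maven": "https://nexus-maven.example.com",
    "npm": "https://nexus-npm.example.com",
}
DEFAULT_ENV = "https://nexus.example.com"

def environment_for(ecosystem: str) -> str:
    """Pick the environment that serves a given ecosystem's workload."""
    return ENVIRONMENTS.get(ecosystem.lower(), DEFAULT_ENV)

print(environment_for("Docker"))  # dedicated Docker environment
print(environment_for("yum"))     # falls back to the shared environment
```

Splitting high-churn formats like Docker onto their own environment keeps a spike in image pulls from degrading, say, Maven builds.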
Define Distribution Needs
A solid Sonatype Nexus Repository deployment requires establishing both your current and future distribution needs. You can define these by answering a few important questions:
How many Sonatype Nexus Repository systems of record do you plan to have?
You can think of a system of record as an authoritative Sonatype Nexus Repository instance. You may need several or only one; even if your teams are geographically dispersed, you may not need multiple systems of record. Deciding how many authoritative vs. transient instances you need will help determine a base deployment pattern from which to design your full deployment.
For example, a star pattern will allow centralized administration with transient servers as local proxies. Meanwhile, a federated deployment will involve distinct regional hubs sharing replicated content.
How much traffic will each system of record need to handle?
Even if you only have one system of record, you may need to distribute its traffic. One common scenario our customers encounter is very high read traffic against a single system of record. You can lighten this load by distributing reads across additional transient proxy nodes, as described in the Scaling with Proxies pattern.
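The read-distribution idea can be sketched as a simple router: writes go to the system of record, while reads are spread round-robin across transient proxy nodes (the host names below are hypothetical, and in practice a load balancer plays this role):

```python
import itertools

SYSTEM_OF_RECORD = "https://nexus-sor.example.com"
PROXY_NODES = [
    "https://nexus-proxy-1.example.com",
    "https://nexus-proxy-2.example.com",
    "https://nexus-proxy-3.example.com",
]
_reads = itertools.cycle(PROXY_NODES)

def route(method: str) -> str:
    """Send reads to the proxy pool round-robin; everything else
    (publishes, deletes) goes to the system of record."""
    if method.upper() in ("GET", "HEAD"):
        return next(_reads)
    return SYSTEM_OF_RECORD

print(route("GET"))   # a proxy node
print(route("GET"))   # the next proxy node
print(route("PUT"))   # the system of record
```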
Do you need on-demand artifact sharing or proactive artifact availability?
There are multiple ways to make content available across distributed teams. For example, bi-directional proxying will allow users to pull artifacts on demand from a distant repository via a proxy repository; meanwhile, content replication will proactively pull all content from a hosted repository to a proxy repository so that all content is already available on the proxy.
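The difference between the two approaches can be sketched with a toy in-memory model (not the product's implementation): an on-demand proxy pulls from the remote hosted repository only on a cache miss, while content replication pre-populates the proxy ahead of requests:

```python
class HostedRepo:
    """Toy stand-in for a hosted repository in a remote region."""
    def __init__(self, artifacts):
        self.artifacts = artifacts
        self.remote_fetches = 0  # count of cross-region pulls

    def fetch(self, name):
        self.remote_fetches += 1
        return self.artifacts[name]

class ProxyRepo:
    """Toy proxy: serves from local cache, pulls from upstream on a miss."""
    def __init__(self, upstream):
        self.upstream = upstream
        self.cache = {}

    def get(self, name):
        # On-demand (bi-directional proxying): pull only when requested.
        if name not in self.cache:
            self.cache[name] = self.upstream.fetch(name)
        return self.cache[name]

    def replicate_all(self):
        # Proactive (content replication): pre-fetch everything up front.
        for name in self.upstream.artifacts:
            self.get(name)

hosted = HostedRepo({"app-1.0.jar": b"aa", "app-1.1.jar": b"bb"})
proxy = ProxyRepo(hosted)
proxy.replicate_all()         # proactive: both artifacts cached ahead of time
print(hosted.remote_fetches)  # 2 cross-region pulls happened up front
proxy.get("app-1.0.jar")      # served locally...
print(hosted.remote_fetches)  # ...still 2: no new cross-region pull
```

The trade-off: replication spends bandwidth up front so every later read is local; on-demand proxying spends nothing until a team actually asks for an artifact, at the cost of a slower first fetch.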
Once you've fully defined your problem areas by establishing your resiliency, scalability, and distribution needs, it's time to start building your deployment pattern.
The patterns in this library represent proven ways of deploying Sonatype Nexus Repository; you will likely need to combine multiple patterns to meet all of your needs.
You can combine the patterns in this section with any distribution patterns to add scalability and/or resiliency to your deployment.
Resiliency Patterns

| Pattern | Needs Addressed | Description | Requires Pro? | Requires PostgreSQL? |
| --- | --- | --- | --- | --- |
| Backup/Same-Site Restore | Resiliency: Data | Backup/same-site restore is an essential foundational pattern: back up your Sonatype Nexus Repository system of record so that you can restore from a point in time if necessary. If you do nothing else, use this pattern to protect against data corruption. | No | No |
| Disaster Recovery Site | Resiliency: Data and Region | Adding a disaster recovery (DR) site means keeping a cold standby available so that you can spin up a Sonatype Nexus Repository instance and switch to it if your primary system of record fails. | | |
| On-Prem Active/Passive Resiliency | Resiliency: Node | Our resiliency patterns make recovery less manual by using an automatic recovery mechanism (e.g., Kubernetes) to fail over automatically if a system of record goes down. | Yes | Yes |
| Cloud Active/Passive Resiliency | Resiliency: Node and Availability Zone | Our resiliency patterns make recovery less manual by using an automatic recovery mechanism (e.g., Kubernetes) to fail over automatically if a system of record goes down. The cloud version also protects against availability zone (AZ) outages. | Yes | Yes |
| On-Prem Active/Active High Availability | Resiliency: Node | Advanced high availability (HA) deployment options use multiple active Sonatype Nexus Repository instances to maximize uptime and minimize data loss in the event of node failure. | Yes | Yes |
| Cloud Active/Active High Availability | Resiliency: Node and Availability Zone | Advanced HA deployment options use multiple active Sonatype Nexus Repository instances to maximize uptime and minimize data loss in the event of node failure. | Yes | Yes |
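For the active/passive patterns, the automatic recovery mechanism typically triggers failover only after a run of consecutive failed health checks, so one dropped probe doesn't cause an unnecessary switch. A minimal sketch of that decision, assuming the orchestrator polls the instance's health endpoint and records pass/fail results:

```python
def should_fail_over(recent_checks: list[bool], threshold: int = 3) -> bool:
    """Fail over only after `threshold` consecutive failed health checks
    (False = failed probe), so a single blip doesn't trigger a switch."""
    if len(recent_checks) < threshold:
        return False
    return not any(recent_checks[-threshold:])

# One dropped probe: stay on the primary.
print(should_fail_over([True, True, False]))          # False
# Three consecutive failures: promote the passive node.
print(should_fail_over([True, False, False, False]))  # True
```

In Kubernetes terms, this is what liveness/readiness probe `failureThreshold` settings express; the sketch just makes the logic explicit.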
Base Distribution Patterns
| Pattern | Needs Addressed | Description | Requires Pro? | Requires PostgreSQL? |
| --- | --- | --- | --- | --- |
| Star Pattern (aka hub-and-spoke pattern) | Distribution: Single system of record | The star pattern is a base pattern that uses the Scaling with Proxies pattern to add more transient satellites to a single system of record. | No | No |
| Federated Repositories | Distribution: Multiple systems of record | Federated repositories involve multiple systems of record that also use the Bi-Directional Proxying pattern and/or Content Replication pattern to distribute artifacts between them. | No | |
Extensions to Base Distribution Patterns
| Extension Pattern | Builds Off Of | Needs Addressed | Description | Requires Pro? | Requires PostgreSQL? |
| --- | --- | --- | --- | --- | --- |
| Scaling with Proxies | Star Pattern | Scalability | Scaling with proxy nodes takes read load off a primary Sonatype Nexus Repository system of record (node) by using proxy nodes behind a load balancer to split read traffic. | No | No |
| Bi-Directional Proxying | Federated Repositories | On-Demand Distribution | Bi-directional proxying provides on-demand distribution: writes are published to a hosted repository in a given region, and when a team in another region needs those components, it requests them via a proxy repository in its own region. The proxy then pulls the requested components from the hosted repository in the first region. | No | No |
| Content Replication | Federated Repositories | Proactive Distribution | Content replication lets teams publish artifacts to a hosted repository in a Sonatype Nexus Repository system of record. A proxy repository in another Sonatype Nexus Repository instance then pre-emptively fetches these artifacts via HTTP to provide faster artifact availability across distributed teams. | Yes | Yes |
Example Combination Patterns
The examples below combine some of the patterns described above to meet multiple needs.
| Combination Pattern | Needs Addressed | Description | Requires Pro? | Requires PostgreSQL? |
| --- | --- | --- | --- | --- |
| Active/Active High Availability + Disaster Recovery Site | Resiliency: Node, Availability Zone, and Region | This pattern provides two layers of resiliency: active/active HA protects against node or AZ outages, and a DR site protects against a regional outage. | Yes | Yes |
| Active/Active High Availability + Disaster Recovery Site + Federated Repositories | Resiliency: Node, Availability Zone, and Region; Distribution: Multiple systems of record | This pattern is the same as the Active/Active HA + DR Site combination above but adds a layer of redundancy by using federated repositories to distribute artifacts between two regions. | Yes | Yes |