Configuring Blob Stores
The files of a repository are stored in blob stores. Configure new blob stores by navigating to Administration → Repository → Blob Stores in Nexus Repository. The nx-all or nx-blobstore privileges are required to access this configuration in Nexus Repository.
Check out Storage Planning for information on blob store types and planning storage requirements.
Click on a specific row to see the Path to the file system storage (for file system blob stores) and the Soft Quota of the selected blob store.
We recommend that your blob store location be outside of the $data-dir directory and read/write accessible by the node.
Blob Store display
The following fields appear in the blob store listing:
Name - The blob store's name displayed in repository administration.
Type - The type of the blob store backend. The following options are available:
Azure Cloud Storage - Stores blobs in Azure Cloud storage. PRO
File - Stores blobs in file system-based storage.
Google Cloud Storage - Stores blobs in Google Cloud storage. PRO
Group - Combines multiple blob stores into one. PRO
S3 - Stores blobs in AWS S3 Cloud storage.
State - The state of the blob store.
Started - indicates the blob store is running as expected.
Failed - indicates a configuration issue and, as a consequence, the blob store failed to initialize.
Blob Count - The number of blobs currently stored in a blob store.
Total Size - For most blob stores, this is the approximate size on disk in bytes. For S3, this number will not include blobs (objects) marked for expiration but not yet removed.
Available Space - A blob store's remaining storage capacity.
Creating a New Blob Store
Select the Create Blob Store button
Select the Type and provide a Name for your blob store
For file system blob stores, the Path field provides the path to the desired file system location.
This path must be fully accessible by the operating system user account that is running Nexus Repository.
You cannot modify a blob store once it has been created. You also cannot delete a blob store that a repository or repository group uses.
Blobs deleted in a repository are only marked for deletion (i.e., a soft delete). You can use the Compact blob store task to permanently delete these soft-deleted blobs and therefore free up the used storage space.
The following sections cover specific information for each supported file system or blob store type.
NFS v4 File System
The recommended settings for an NFS v4 server in /etc/exports
are as follows:
/srv/blobstores 10.0.0.0/8(rw,no_subtree_check)
The recommended settings for mounting the NFS filesystem on the node:
defaults,noatime,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,vers=4.1
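As a sketch, these mount options could be applied via an /etc/fstab entry such as the following; the server name and mount point are placeholders, not values from this documentation:

```text
# Hypothetical example: mount the NFS export used for blob storage
nfs-server.example.com:/srv/blobstores  /mnt/blobstores  nfs4  defaults,noatime,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,vers=4.1  0 0
```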
Note
Versions of NFS older than v4 are not supported.
AWS Elastic File System (EFS)
EFS acts as a limitless NFS v4 server and is an option if Nexus Repository is running in AWS. Mount the EFS volume with the same settings as NFS v4.
Note
EFS performance will be lower than a dedicated NFS server.
AWS Simple Storage Service (S3)
Configure blob stores to use AWS Simple Storage Service instead of a local file system. We recommend only using S3 as a blob store when Nexus Repository is running on EC2 instances within AWS. Files are stored in an S3 bucket by using the AWS REST APIs over HTTP. Accessing S3 blob stores over HTTP from other networks introduces significant I/O latency.
Nexus Repository uses the IAM role assigned to the current EC2 instance to access the S3 buckets.
The bucket may use server-side encryption with KMS key management transparently or S3-managed encryption.
Nexus Repository applies a lifecycle rule to expire deleted content.
Required AWS Permission
The AWS user needs permission for the following actions to access the S3 bucket resource. s3:CreateBucket and s3:DeleteBucket are required to automatically add new buckets but are not needed to use an existing one.
s3:PutObject
s3:GetObject
s3:DeleteObject
s3:ListBucket
s3:GetLifecycleConfiguration
s3:PutLifecycleConfiguration
s3:GetObjectTagging
s3:PutObjectTagging
s3:DeleteObjectTagging
s3:GetBucketPolicy
s3:CreateBucket
s3:DeleteBucket
Replace the following parameters for your environment:
<user-arn> - the ARN of the AWS user
<s3-bucket-name> - the S3 bucket name
{
  "Version": "2012-10-17",
  "Id": "NexusS3BlobStorePolicy",
  "Statement": [
    {
      "Sid": "NexusS3BlobStoreAccess",
      "Effect": "Allow",
      "Principal": { "AWS": "<user-arn>" },
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket",
        "s3:GetLifecycleConfiguration",
        "s3:PutLifecycleConfiguration",
        "s3:GetObjectTagging",
        "s3:PutObjectTagging",
        "s3:DeleteObjectTagging",
        "s3:GetBucketPolicy"
      ],
      "Resource": [
        "arn:aws:s3:::<s3-bucket-name>",
        "arn:aws:s3:::<s3-bucket-name>/*"
      ]
    }
  ]
}
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:DeleteBucket",
        "s3:CreateBucket"
      ],
      "Resource": [ "arn:aws:s3:::*" ]
    }
  ]
}
The following sections explain the S3-specific fields when selecting S3 as the blob store.
Name Field
Give your blob store a unique name.
Region Field
A drop-down list populated with a list of AWS regions. Select the appropriate region for your blob store; for optimum performance, it should be the same region in which Nexus Repository is run.
See the Custom S3 Regions section below to add other regions.
Bucket Field
Provide an existing or new AWS S3 bucket name. Nexus Repository automatically creates the S3 bucket when the provided name does not yet exist.
Prefix Field
The prefix is the complete path in front of the object name, which includes the bucket name.
For instance, if ObjectA.txt is stored as BucketA/ProjectA/FilesA/ObjectA.txt, the prefix is BucketA/ProjectA/FilesA/.
Expiration Days Field
Use the Expiration Days field to configure the number of days until deleted blobs are removed from the S3 bucket.
See the AWS documentation on deleting Amazon S3 objects for details.
1 or greater - files are soft deleted in the blob store, and the S3 lifecycle policy hard deletes them after the set number of days.
0 - files are hard deleted right away; this does not require the S3 policy to delete them.
-1 - Nexus Repository soft deletes blobs, and the S3 policy never hard deletes them. This is not recommended, as files are never deleted; the Compact blob store task applies only to file-based blob stores and does not delete files in an S3 blob store.
For more information on soft and hard deletion, see the Storage Guide.
Authentication
In this optional section, provide AWS Identity and Access Management (IAM) authentication information.
See Temporary security credentials in the IAM section of the AWS documentation for more information on AWS IAM.
Encryption Type
Setting the encryption type is not required to use S3 buckets. When desired, select either S3 Managed Encryption or KMS Managed Encryption as the encryption type used for objects in the S3 blob store.
Nexus Repository has been tested with SSE-S3 and SSE-KMS. See the AWS server-side encryption documentation for more information.
Providing a KMS key ID
When left blank, Nexus Repository uses a default key.
When providing a KMS key ID, enter the full KMS key ID rather than the human-readable key alias.
Enabling Encryption on an Existing Blob Store
To avoid errors when encrypting an existing blob store, we recommend the following steps before enabling encryption:
Upgrade Nexus Repository to the latest available version
Schedule a Nexus Repository shutdown
Add the KMS key in the blob store user interface immediately after encryption is enabled
While access to encrypted S3 objects is transparent, it is best to start Nexus Repository after encryption is enabled and then immediately add the KMS key in the blob store user interface to enable bucket decryption.
Advanced Connection Settings
The following configuration is useful when configuring third-party storage devices that implement the S3 API. These settings are not needed when using AWS directly.
Endpoint URL
Use this field to provide the storage device's URL.
Max Connection Pool Size
The maximum number of connections that can be created to meet client requests. When set, this value overrides the default connection pool size defined by Nexus Repository or the AWS Client. For better performance or to avoid timeouts, large deployments with a lot of traffic may want to increase the connection pool that the S3 client uses.
Signature Version
Identifies the AWS Signature version that you want to support for authenticated requests. For authenticated requests, Amazon S3 supports both Signature Version 4 and Signature Version 2. You can add this condition to your bucket policy to require a specific signature version.
Use path-style access
By default, AWS uses the bucket name as a subdomain of the URL. Enabling this setting will result in using the older style that appends the bucket name to the path of the URL.
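For illustration, the two URL styles look like the following; the bucket name and region are placeholders:

```text
# Virtual-hosted style (default): bucket name as a subdomain
https://my-bucket.s3.us-east-1.amazonaws.com/path/to/object

# Path-style (this setting enabled): bucket name in the path
https://s3.us-east-1.amazonaws.com/my-bucket/path/to/object
```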
AWS S3 Replication Buckets
Note
Only available in Sonatype Nexus Repository Pro. Interested in a free trial? Start here.
For highly available deployments, failover buckets in alternate regions may be configured in case of failure in the primary region. At startup, Nexus Repository chooses the bucket based on the detected AWS region. The primary bucket is used when the operating region does not match the configured failovers.
The Region Status shown when editing the blob store reports which region is available and in use.
Bi-directional Replication
Nexus Repository does not upload or remove content in both buckets. Content is only added to the currently active bucket.
Bi-directional replication should be configured between the primary bucket and the replication bucket to copy artifacts added to either bucket. Set this configuration via the AWS console or API. Include deletes as well as additions and redeployments of existing artifacts.
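As a rough sketch, a replication rule on the primary bucket could look like the following JSON; the role ARN and bucket name are placeholders, and delete marker replication is enabled so that deletes propagate. Verify the exact configuration shape against the AWS S3 replication documentation for your account:

```json
{
  "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Enabled" },
      "Destination": { "Bucket": "arn:aws:s3:::failover-bucket" }
    }
  ]
}
```

A configuration like this can be applied with the AWS CLI's aws s3api put-bucket-replication command; a mirror-image rule on the failover bucket pointing back at the primary completes the bi-directional setup.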
Select an alternate region to use in case the primary region is not available.
Provide the alternate bucket name. This bucket must be created in the alternate region manually. See the Bi-Directional Replication note above.
Multiple failover buckets may be added; however, S3 replication should be configured so that content added to any one of them is copied to all of the others.
AWS S3 Replication Buckets are only intended for regional failover support when the Nexus instance is started in the failover region. Failover buckets should not be used by multiple Nexus Repository clusters connected to different databases. This functionality is not intended for replication across different repository instances and attempting this use case is not supported.
Soft Quota
This section allows you to enable a soft quota for the blob store, which will raise an alert when a blob store exceeds a constraint.
See Adding a Soft Quota for more information.
Custom S3 Regions
To use a custom S3 region, create the capability by performing these steps:
Navigate to Administration → System → Capabilities
Select the Create capability button
Select the Custom S3 Regions capability type
Enter a list of custom S3 region names separated by commas, with no spaces:
region-1,region-2,region-3
Select Create capability
The list of custom regions is available in the Regions drop-down menu when configuring an S3 blob store.
Performance Notes
For optimum performance, take the following measures:
Run Nexus Repository on AWS on EC2 instances.
Ensure that the S3 connection is using the region in which Nexus Repository is run.
S3 has a hard limit of 10,000 parts for multipart uploads. The chunk size when uploading to S3 can be adjusted by setting the following property in the nexus.properties file:
nexus.s3.multipartupload.chunksize = 5242880
The default value is 5242880 bytes (5 MB). This may be tuned to reduce the number of parts uploaded to S3 and avoid errors on large uploads.
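The part limit and chunk size together bound the maximum object size you can upload. A small sketch of the arithmetic (the 10,000-part limit and 5 MB default come from the text above; the 100 GiB target is an invented example):

```python
import math

PART_LIMIT = 10_000        # S3 hard limit on multipart upload parts
DEFAULT_CHUNK = 5_242_880  # default chunk size in bytes (5 MB)

# Maximum upload size at the default chunk size
max_upload = PART_LIMIT * DEFAULT_CHUNK
print(max_upload)  # 52428800000 bytes (~48.8 GiB)

# Minimum chunk size needed for a hypothetical 100 GiB artifact
target = 100 * 1024**3
min_chunk = math.ceil(target / PART_LIMIT)
print(min_chunk)   # 10737419 bytes (~10.24 MB)
```

Uploads larger than roughly 48.8 GiB therefore require raising nexus.s3.multipartupload.chunksize above the default.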
Access Denied Errors
Access denied errors are an indication that the user configured to connect to the bucket has insufficient permissions. AWS does not provide information specific to which permission is required to perform the failed action. For this reason, Nexus Repository will generically report access-denied errors.
An error occurred saving data. ValidationErrorXO{id='*', message='Unable to initialize blob store bucket: name, Cause: Failed to find encrypter for id:'}
Azure Blob Store
PRO
You must create the Azure storage account in Azure before using Nexus Repository to create an Azure blob store. Below are the recommended storage account settings:
Location: the location hosting Nexus Repository
Performance: Standard general-purpose v2 or Premium block blobs
Account kind: StorageV2 if using Standard general-purpose v2 or BlockBlobStorage if using Premium block blobs
Replication: Any
Nexus Repository will automatically create an Azure container when a blob store is created if one does not already exist.
Warning
The Azure storage container name must be a valid DNS name that follows the rules that Microsoft states in its documentation.
Changing the Blob Store Server
If you need to change the server that is contacted for Azure blob storage from "blob.core.windows.net" to something else, edit the existing <data-dir>/etc/nexus.properties file or set a Java system property as demonstrated below:
nexus.azure.server=<your.desired.blob.storage.server>
You will then need to restart Nexus Repository for the change to take effect.
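Assuming a standard JVM launch, the equivalent system property form passed on the Java command line would be:

```text
-Dnexus.azure.server=<your.desired.blob.storage.server>
```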
Accessing the Azure Storage Account
There are three methods of gaining access to the Azure storage account from Nexus Repository:
Use a secret access key supplied by the Azure storage account.
If you're running Nexus Repository on an Azure VM, you can use System Managed Identity access.
Use environment variables.
System Managed Identity Access
System Managed Identity allows Azure to manage the access via roles assigned to the VM in which you are running Nexus Repository. See the Microsoft documentation for details.
To properly use the System Managed Identity, the Azure VM will need the following roles assigned to the Azure storage container:
Storage Account Contributor
Storage Blob Data Contributor
Warning
Nexus Repository does not validate that the proper roles are assigned before storing the configuration. If the aforementioned roles are not properly granted to the VM, you will need to delete the blob store and then add it again after the roles have been set up in the Azure storage instance.
Environment Variables
There are three environment variables for Azure blob stores:
AZURE_CLIENT_ID
AZURE_CLIENT_SECRET
AZURE_TENANT_ID
To use environment variables, you will need to register an Azure AD application and give it access to the blob storage.
Following Microsoft's documentation, complete the following steps:
Create an application.
Grant permission to Azure storage.
Create a client secret.
Copy the secret value (Not Secret Id) to use as AZURE_CLIENT_SECRET.
Note
You must copy this immediately as you will not be able to access it later.
From the app registration overview screen, retrieve the other values for your environment variables:
The Directory (tenant) ID provides the value for AZURE_TENANT_ID.
The Application (client) ID provides the value for AZURE_CLIENT_ID.
You must then navigate to the storage container and grant the Storage Blob Data Contributor role to the application:
Select Storage Accounts and then the storage account to which you want to grant access.
Select Access Control (IAM); then, add a role assignment.
Select Storage Blob Data Contributor.
Select Next and then Add Member.
Search for your application and add it as the member.
Now, set the environment variables in the terminal before launching Nexus Repository.
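For example, in a POSIX shell the variables can be exported before starting the service; the bracketed values are placeholders for the values collected from your app registration:

```shell
export AZURE_CLIENT_ID="<application-client-id>"
export AZURE_CLIENT_SECRET="<client-secret-value>"
export AZURE_TENANT_ID="<directory-tenant-id>"
# start Nexus Repository from this same shell so it inherits the variables
```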
Optimizing Performance
For optimum performance, you'll want to take the following steps:
Run Nexus Repository on Azure on virtual machines
Ensure that the Azure connection is using the location where Nexus Repository is being run
The chunk size when uploading to Azure can be adjusted by setting the property nexus.azure.blocksize in the nexus.properties file (e.g., nexus.azure.blocksize=1000000). By default, this is set to 5242880 bytes (5 MB). You can tune this for optimal performance on your network.
Google Cloud Blob Store
Note
Only available in Sonatype Nexus Repository Pro. Interested in a free trial? Start here.
When creating a new Google Cloud Storage blob store in Sonatype Nexus Repository, you will configure the fields described in the sections below.
Name Field
Give your blob store a unique name.
Project ID Field
In Google Cloud, a project is a container containing related resources for a Google Cloud solution. In this case, you should enter the unique Google Cloud Project ID for the project that owns the bucket where blobs will be stored.
You can provide the projectId value in the UI or let Nexus Repository automatically retrieve it from your configuration: the properties file (nexus.gcloud.projectId) or the credentials JSON file.
Bucket Field
Provide a globally unique name for the Google Cloud bucket where blobs will be stored. As a best practice, follow Google Cloud Storage naming conventions.
Prefix Field
Provide the path within your Cloud Storage bucket where blob data should be stored. For example, enter "blob-data/" as the path prefix to store data in a folder named "blob-data."
Region Field
Enter the region where the Google Cloud bucket is hosted; this should be the same region in which Nexus Repository is running.
Authentication
To access your Google Cloud Storage bucket, Nexus Repository needs to authenticate with Google Cloud. There are two options for authentication:
Use Google Application Default Credentials - This option uses the credentials already configured in your environment, simplifying setup and enhancing security. You can also directly provide a service account key file. See Google's documentation about application default credentials.
Use a separate credential JSON file - This option provides more granular control over authentication. You can use a dedicated service account with specific permissions for accessing your Google Cloud Storage bucket. Select this option to upload a service account key file (JSON) through the user interface. This service account must have permission to access your Google Cloud Storage bucket. See Google's documentation on creating and deleting service account keys.
Soft Quota
This section allows you to enable a soft quota for the blob store, which will raise an alert when a blob store exceeds a constraint. See Adding a Soft Quota for more information.
Promoting a Blob Store to a Group
To promote a blob store to a group, select the Convert to group button. This launches the promotion process to add your blob store to a group for more flexibility. Follow the on-screen prompts to create the blob store group, which will contain the previously concrete blob store and to which you can add other blob stores.
Warning
You cannot undo promoting a blob store to a group.
What is a Fill Policy?
When configuring a blob store group, you will be asked to select a fill policy (i.e., a write policy). A fill policy is the method that the blob store group uses to choose a member for writing blobs. You can change the fill policy at any time.
Available fill policy choices include the following:
Round Robin - Incoming writes alternate between all blob stores in the group. This is useful when you have a number of blob stores available and you want each of them to receive a roughly equal number of writes. This does not balance based upon any other metric.
Write to First - All incoming writes are given to the first writeable blob store (skipping blob stores in a read-only state). If you need to direct all blobs to a specific blob store (e.g., you have a blob store for a new empty disk), then this fill policy will ensure that the new blob store gets all the writes.
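The two fill policies can be sketched as follows. This is only an illustration of the selection logic described above, not Nexus Repository's internal implementation; all names are invented:

```python
from itertools import cycle

class Member:
    """A stand-in for a blob store group member."""
    def __init__(self, name, writable=True):
        self.name = name
        self.writable = writable

def write_to_first(members):
    """Return the first writable member, skipping read-only ones."""
    for m in members:
        if m.writable:
            return m
    raise RuntimeError("no writable member in group")

def round_robin(members):
    """Yield writable members in rotation for successive writes."""
    for m in cycle(members):
        if m.writable:
            yield m

members = [Member("old-disk", writable=False), Member("disk-a"), Member("disk-b")]
print(write_to_first(members).name)       # disk-a
rr = round_robin(members)
print([next(rr).name for _ in range(4)])  # ['disk-a', 'disk-b', 'disk-a', 'disk-b']
```

Note how the read-only member is skipped by both policies, matching the "skipping blob stores in a read-only state" behavior described above.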
Removing a Blob Store from a Group
To remove a blob store from a group, use the Admin - Remove a member from a blob store group task. Groups allow you to add members dynamically, but removing a member requires a task to ensure that repositories still have access to their blobs.
Moving a Blob Store
What's the Difference Between the Change Repository Blob Store Task and Moving a Blob Store by Promoting it to a Group and Using a Fill Policy?
The Admin - Change Repo Blob Store task extracts a repository's blobs from a blob store where they are intermingled with blobs from other repositories and moves them to another blob store.
In contrast, moving a blob store by promoting it to a group and then removing it from that group (as described in the Moving a Blob Store steps below) pulls all of the blobs out of the blob store so that it can be decommissioned.
The following steps can be used to move a blob store to a new location.
Before moving a blob store to a new location, ensure you have backed up your blob store.
Create a new blob store with the storage path set to the new location.
Promote the original blob store to a group using the Convert to group button; when asked in the form, set the new group's Fill Policy to Write to First.
Add the new blob store that you created in step 1 to the newly promoted blob store group underneath the original blob store.
Schedule and run an Admin - Remove a member from blob store group task via Administration → System → Tasks to remove the original blob store from the group.
The original blob store's contents will be moved over to the new blob store before the original blob store is removed.
Migrating from an On-Premises Blob Store to a Cloud Blob Store Using Vendor-Provided Tools
For the greatest efficiency, we recommend using your cloud vendor's tools to migrate to a cloud blob store. Before proceeding with any migration, you should back up your blob store.
Note
If you need to migrate multiple Terabytes of blob data, you should consult directly with your cloud provider for guidance to avoid prolonged downtime.
Amazon Web Services (AWS)
For migrating to Amazon S3, we suggest using a tool such as AWS DataSync. Amazon provides documentation for using DataSync to move data between other storage systems and AWS storage services.
Microsoft Azure
For migrating to Azure Blob, we suggest using Microsoft's AzCopy. Microsoft provides documentation for using AzCopy to move data to and from Azure Blob. The following is a brief overview:
Create an Azure storage container with the name default.
Generate a SAS token for the container; ensure it has at least Read, Write, List, and Delete permissions.
Install AzCopy.
Construct and execute an AzCopy command like the following, replacing the placeholders with your actual details; ensure that you add the "/*" at the end of your root blob store path as in the example:
azcopy copy "/nexus/sonatype-work/nexus3/blobs/default/*" "https://<your_account_name>.blob.core.windows.net/default?<your_sas_token>" --recursive=true
After the AzCopy operation completes, verify that the copied files now exist in your Azure storage container using a list command.
Google Cloud Platform
Google offers a Storage Transfer Service and provides documentation for moving data between other storage systems and GCP.
Post-Migration Step: Update Database Configuration
After using any of the vendor-specific tools above, you will need to update the blob store configuration in your database to match the new type of blob store and new location.
Create the new blob store in your Sonatype Nexus Repository instance.
Shut down your Nexus Repository instance
Run a SQL query like the following:
-- the name of the blob store which was moved to S3/Azure
delete from blob_store_configuration where name = '<blob store name>';
-- update the reference to the new S3 blob store
update blob_store_configuration set name = '<blob store name>' where type = '<S3 or Azure Cloud Storage or Google Cloud Storage>' and name = '<name of the new blob store>';
Restart your Nexus Repository instance.
Splitting a Blob Store
Move repositories from one blob store to another using the Admin - Change repository blob store task. This task is a Pro feature requiring a PostgreSQL or H2 database for version 3.58 or later.
Determine the repositories to move
Create the new blob store
Use the Admin - Change Repository Blob Store task to move the repositories
Set the repository to move and the blob store to move it to
Run the task manually or schedule it to run at a later time.
The task may take some time to run as the artifacts are copied to the new blob store. The task should only be run once. Artifacts are available as the move is performed in the background.
Adding a Soft Quota
Use soft quotas to monitor a blob store and raise an alert when the storage space exceeds the set limit. Soft quotas do not block read or write operations on the blob store.
Configure a soft quota by following these steps:
Navigate to Blob Stores in the Nexus Repository menu
Select the blob store to configure the soft quota
Check the Enable Soft Quota box
Select the Type of Quota from the following choices:
Space Used - Issues an alert when a blob store exceeds your quota limit for total bytes used.
Space Remaining - Issues an alert when the space available for new blobs drops below your quota limit.
Enter the Quota Limit in MB to receive an alert for the quota type you selected
Select Save
When a soft quota is violated, that information will be available in the following locations:
A WARN level message in the logs.
Under Administration → Support → Status, soft quotas' statuses are aggregated in the Blob Stores status.
A REST API endpoint.
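As a sketch, the quota status can be queried over REST with a command like the following; the host, credentials, and blob store name are placeholders, and you should verify the endpoint path against the REST API documentation for your Nexus Repository version:

```shell
curl -u <user>:<password> \
  "https://nexus.example.com/service/rest/v1/blobstores/<blob-store-name>/quota-status"
```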