Data Migration to PostgreSQL

NEW IN RELEASE 70

If you have previously been using IQ Server with the embedded database and now wish to employ PostgreSQL as an external database, you need to migrate your existing data into the new database.

Review the External Database Configuration main page for requirements and license restrictions.


At a high level, this migration consists of two steps:

  1. Exporting your data from the embedded database into a SQL dump
  2. Importing the SQL dump into the PostgreSQL database

The following instructions describe this process in detail.

Preparing for the Migration

Prepare the PostgreSQL Server

A PostgreSQL server can host multiple databases. The specific database you wish to use for IQ Server needs to be created by your database administrator. This is best done ahead of the actual migration to reduce its overall duration and avoid surprises midway.

For the disk storing the PostgreSQL database, we recommend the same requirements as for the In-Memory based instance (refer to System Requirements); however, please consult your database administrator.

Additionally, a user account needs to be provisioned that IQ Server will use to connect to that database and manage its schemas.

Please refer to your database administrator or the PostgreSQL documentation for details and consider the following example commands merely as a starting point:

CREATE USER nexusiq WITH PASSWORD 'the-secret-password';
CREATE DATABASE nexusiq WITH OWNER nexusiq ENCODING 'UTF8';

These commands create a user named "nexusiq" and a database of the same name owned by that user, granting the user the privileges IQ Server needs to function.

However you create the database, be sure its encoding is set to UTF8 to ensure compatibility with the character set used by IQ Server.
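If you want to verify the result, the encoding can be inspected from within psql. The database name below matches the example above and may differ in your environment:

```sql
-- Confirm the database was created with UTF8 encoding
SELECT datname, pg_encoding_to_char(encoding) AS encoding
FROM pg_database
WHERE datname = 'nexusiq';
```

The query should report UTF8 for the IQ Server database before you proceed with the migration.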

Perform a Test Run of the Migration in a Staging Environment

The migration is an I/O-intensive task. Depending on the size of your current database and the performance of the involved storage systems, exporting and importing the SQL dump can take considerable time, during which IQ Server will be down. To plan for and schedule this downtime of your production server appropriately, we recommend a test run of the migration in a staging environment, using a recent copy or backup of your production IQ Server and its data.

For that test run to be representative, it should be executed on hardware closely comparable to your production system. In particular, the I/O performance of the storage holding the server's embedded database (sonatype-work/clm-server/data) is a key factor in the duration of the data export.

There are many factors contributing to the overall duration of the data migration, but to give you a rough idea, a dataset of ~10 GB takes about 5 - 15 min to export and another 5 - 15 min to import into PostgreSQL.

A test run will also help shake out any issues with the external database provisioned earlier. If the test IQ Server does not start successfully once configured to use the external database, you can troubleshoot the issue before the production migration.

Ensure Sufficient Free Disk Space

During the data export, the directory given by sonatypeWork in your config.yml will store temporary files that are almost as large as your current dataset. Before attempting to migrate, check the size of your database (by default located at sonatype-work/clm-server/data/ods.h2.db) and ensure roughly that much free space is available on the drive storing the embedded database.

A similar concern applies to the drive where you would like to store the SQL dump itself. The SQL dump can use gzip compression, which tends to work well for this kind of file, but to be on the safe side, plan for the SQL dump to be roughly half the size of your current database.
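The free-space check above can be scripted. The path below is the default embedded database location from this page; this is a sketch to adapt for your installation, not an official tool:

```shell
# Sketch of a pre-migration free-space check. Adjust DB_FILE if your
# embedded database is not at the default location.
DB_FILE="sonatype-work/clm-server/data/ods.h2.db"

if [ -f "$DB_FILE" ]; then
  # Database size and free space on its drive, both in KB
  DB_SIZE_KB=$(du -k "$DB_FILE" | cut -f1)
  FREE_KB=$(df -Pk "$(dirname "$DB_FILE")" | awk 'NR==2 {print $4}')
  if [ "$FREE_KB" -gt "$DB_SIZE_KB" ]; then
    echo "OK: ${FREE_KB} KB free for a ${DB_SIZE_KB} KB database"
  else
    echo "WARNING: only ${FREE_KB} KB free for a ${DB_SIZE_KB} KB database"
  fi
else
  echo "Embedded database not found at $DB_FILE"
fi
```

Remember that the SQL dump itself also needs space, so apply the same check to the drive that will hold the dump file.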

Performing the Migration

The commands (java, psql, etc.) shown in the following procedure assume that those executables are included in your shell's search path. If that is not the case in your environment, you will need to replace those commands with their full pathname. The same applies if your shell's search path contains incompatible versions of the commands. For instance, the java executable used must meet the System Requirements for IQ Server.

Likewise, references to other files (config.yml, etc.) simply use the filename for brevity. You might need to replace those filenames with relative or absolute pathnames that match the directory structure of your environment.

  1. Shut down IQ Server.
  2. Generate a SQL dump from the embedded database using the following command:
    java -jar nexus-iq-server-[version].jar export-embedded-db --dump-file iq-db-dump.sql.gz config.yml
    Note that the filename of the dump file ends in ".gz", which enables gzip compression. Otherwise, the dump file will be written as uncompressed plain text and require significantly more disk space.
  3. Set the environment variable PGCLIENTENCODING=UTF8 to ensure the following psql invocation properly reads the SQL dump.
  4. Import the compressed SQL dump into the external PostgreSQL database using the psql client tool:
    gunzip -c iq-db-dump.sql.gz | psql --set ON_ERROR_STOP=1 --host [db-server-host] --port [db-server-port] --username [db-owner-user] --dbname [db-name]
    If your environment lacks the gunzip command or you otherwise prefer to use an uncompressed SQL dump, use the following command instead:
    psql --set ON_ERROR_STOP=1 --host [db-server-host] --port [db-server-port] --username [db-owner-user] --dbname [db-name] < iq-db-dump.sql
    While performing the import, the psql tool emits some harmless notices and log messages to inform about its progress. In case of any error, it aborts the task.
    If, despite an earlier test run, you encounter an error with the import, you can restart IQ Server and resume operation using the embedded database while you troubleshoot, then retry once the issue is resolved.
  5. Update IQ Server config.yml to use the external database as described in External Database Configuration.
  6. Restart IQ Server.
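For orientation on step 5, the database section of config.yml has roughly the following shape. Treat this as a sketch only, with all values as placeholders reusing the example names from above; refer to External Database Configuration for the authoritative keys:

```yaml
database:
  type: postgresql
  hostname: db-server-host
  port: 5432
  name: nexusiq
  username: nexusiq
  password: the-secret-password
```

Once this section is in place, the restarted IQ Server connects to the external PostgreSQL database instead of the embedded one.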

After the migration, be sure to update your backup procedures to ensure the external database is included.