Migrate PostgreSQL to MySQL

When you start the DBConvert or DBSync application in GUI mode, it guides you through several steps to set up the database migration or synchronization:

1. Connect to PostgreSQL source database.

If the source database requires you to log in, you can specify username/password and host/port parameters.

Connect to PostgreSQL source database from DBConvert

2. Connect to MySQL destination database.

Specifying parameters for the destination database works the same way as for the source: define the connection settings and a username/password pair.

Connect to MySQL target database from DBConvert

NOTE #1: Every DBConvert or DBSync tool has two different databases in its name. Either database from that pair can be set up as the source or the destination. The same type of database may also be used as both source and destination.

As an example, here is the list of possible migration directions with on-premises databases:

  • PostgreSQL to MySQL
  • MySQL to PostgreSQL
  • PostgreSQL to PostgreSQL
  • MySQL to MySQL

NOTE #2: Don't be confused by the fact that cloud databases like Amazon RDS, Google Cloud, and Heroku are not explicitly listed in the source or destination configuration of the DBConvert/DBSync interface. To connect to cloud database instances, use the same settings you use for traditional on-premises databases.

NOTE #3: Your connections to the source and target databases stay active until you close the DBConvert/DBSync application or open new connections on the "source" and "destination" steps.

Read more about the specific source/destination configurations for different databases.

3. Configure database migration options.

In the next step, you can specify precisely which tables, fields, indexes, and views you want to transfer to the MySQL destination database. Just check or uncheck the box in front of each database object you want to convert.
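
If you want to preview which objects the tool will offer for selection, you can list them yourself on the PostgreSQL side first. This is only an optional sanity check and assumes your objects live in the default public schema:

SELECT table_name, table_type
FROM information_schema.tables
WHERE table_schema = 'public'
ORDER BY table_name;   -- lists both base tables and views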

Customize general database and table settings, or set up a particular table, field, or index individually when migrating data from PostgreSQL to MySQL.

Check out our article "Configure database migration options" for detailed information.

The screenshot below sums up general features available in DBConvert software solutions.

DBConvert screenshot

4. Detection of potential database migration issues: errors and warnings

A database typically enforces constraints on its data that cannot be violated. At the customization step, a smart error checker verifies possible data integrity and referential integrity issues and highlights them, if any, before the migration is performed.
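
To make the referential integrity part concrete, here is a small, purely illustrative PostgreSQL schema (the table names are made up, not taken from the product documentation): if orders is selected for migration but customers is excluded, the foreign key below could not be satisfied in the target, which is exactly the kind of issue the checker flags before conversion starts.

-- Illustrative source schema only
CREATE TABLE customers (
    id   SERIAL PRIMARY KEY,
    name TEXT NOT NULL
);

CREATE TABLE orders (
    id          SERIAL PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers (id)  -- violated if customers is skipped
);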

By default, DBConvert tries to automatically map the data types of the source PostgreSQL database to the closest equivalents among the target MySQL data types. However, you can change the data types manually, either globally for the entire database using "Global mapping" or individually for each field.
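
As a rough sketch of what such a mapping can look like (the exact defaults depend on the product version and your settings, so treat these equivalents as assumptions rather than the tool's guaranteed output):

-- PostgreSQL source definition
CREATE TABLE products (
    id       SERIAL      PRIMARY KEY,
    name     TEXT        NOT NULL,
    in_stock BOOLEAN     NOT NULL DEFAULT TRUE,
    added_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- One plausible MySQL equivalent after type mapping
CREATE TABLE products (
    id       INT        NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name     TEXT       NOT NULL,
    in_stock TINYINT(1) NOT NULL DEFAULT 1,
    added_at TIMESTAMP  NOT NULL DEFAULT CURRENT_TIMESTAMP
);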

Check out "Smart error checker: Errors and Warnings" for more information.

5. Execution: the final stage of data migration from PostgreSQL to MySQL

Once you have configured the source and destination databases in the previous steps, you can start the actual conversion or synchronization process.

Click the "Commit" button to start conversion. Also, here, you can monitor the migtation/ synchronization process.

Optionally, save the connection settings and configuration parameters into a session file so that sync or migration jobs can be scheduled to run regularly.

Execution step of DBConvert products

Read more about the options available at the execution stage.

Command line mode

Previously saved sessions can be passed as parameters to the command-line DBConvert client. A session keeps the PostgreSQL source and MySQL target database connection settings along with the other specified options.

Example: C:\Program Files\DBConvert\mysql2postgresqlPro\mysql2postgresqlPro_Cons.exe /Session:"Session_Name"

NOTE: First, you have to run the software in GUI mode to create a session file with initial parameters.

Built-in scheduler

Our applications come with a built-in scheduler to run database migration and sync jobs at specified times. Just set the scheduled date and time to execute job sessions automatically.

Migrate a Nautobot database from PostgreSQL to MySQL

The following explains how to migrate the contents of an existing Nautobot PostgreSQL database to a new MySQL database.

Export data from PostgreSQL

In your existing installation of Nautobot with PostgreSQL, run the following command to generate a JSON dump of the database contents. This may take several minutes to complete depending on the size of your database. Run it from the PostgreSQL host (the (nautobot-postgres) $ prompt):

nautobot-server dumpdata \
    --natural-foreign \
    --natural-primary \
    --exclude contenttypes \
    --exclude auth.permission \
    --exclude django_rq \
    --format json \
    --indent 2 \
    --traceback \
    > nautobot_dump.json

This will result in a file named nautobot_dump.json.

Create the MySQL database

Create the MySQL database for Nautobot, making sure it uses the default character set (utf8mb4) and default collation (utf8mb4_0900_ai_ci), which are case-insensitive. Nautobot requires MySQL to be case-insensitive. Because these encodings are the defaults, if your MySQL installation has not been modified there is nothing to change, but you should still verify the settings.

In rare cases, there may be problems when importing data from the case-sensitive PostgreSQL dump; these will need to be handled on a case-by-case basis. Please refer to the installation instructions for CentOS/RHEL or Ubuntu as appropriate if you are unsure how to set up MySQL and create the Nautobot database.
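
If you need to create the database and user by hand, a minimal sketch from a MySQL administrative shell looks like the following. The nautobot database name, user name, password, and the 'localhost' host are placeholders; substitute values appropriate to your environment:

-- Placeholder names and password; 'localhost' assumes Nautobot and MySQL share a host
CREATE DATABASE nautobot CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
CREATE USER 'nautobot'@'localhost' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON nautobot.* TO 'nautobot'@'localhost';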

Confirming database encoding

To confirm that your MySQL database has the correct encoding, start a database shell with nautobot-server dbshell and run the following query at the mysql> prompt:

mysql> SELECT @@character_set_database, @@collation_database;
+--------------------------+----------------------+
| @@character_set_database | @@collation_database |
+--------------------------+----------------------+
| utf8mb4                  | utf8mb4_0900_ai_ci   |
+--------------------------+----------------------+
1 row in set (0.00 sec)

Apply database migrations to the MySQL database

With Nautobot pointing to the MySQL database (we recommend creating a new Nautobot installation for this purpose), run nautobot-server migrate to create all of Nautobot's tables in the MySQL database:

nautobot-server migrate

Remove the auto-populated Status records from the MySQL database

A side effect of the nautobot-server migrate command is that it populates the Status table with a number of predefined records. This is normally useful for getting started quickly with Nautobot, but since we're going to be importing data from our other database, these records would likely conflict with the records to be imported. Therefore we need to remove them, using the nautobot-server nbshell command in our MySQL instance of Nautobot:

nautobot-server nbshell

Example output:

### Nautobot interactive shell (32cec46b2b7e)
### Python 3.9.7 | Django 3.1.13 | Nautobot 1.1.3
### lsmodels() will show available models. Use help(<model>) for more info.
>>> Status.objects.all().delete()
(67, {'extras.Status_content_types': 48, 'extras.Status': 19})
>>>

Press Control-D to exit the nbshell when you are finished.

Import the database dump into MySQL

Use the nautobot-server loaddata command to import the database dump that you previously created. This may take several minutes to complete depending on the size of your database. Run it from the MySQL host:

nautobot-server loaddata --traceback nautobot_dump.json

Assuming that the command ran to completion with no errors, you should now have a fully populated clone of your original database in MySQL.
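
As an optional sanity check beyond the absence of errors, you can compare row counts of a few tables between the old and new databases, for example from nautobot-server dbshell on each side. The table names below are just examples of Nautobot tables to spot-check; whichever tables you pick, the counts should match between PostgreSQL and MySQL:

-- Run the same queries against both databases and compare the results
SELECT COUNT(*) FROM extras_status;
SELECT COUNT(*) FROM dcim_device;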
