
Transferring data from MySQL to PostgreSQL


Do I need to migrate my Qube data?


If both of the following conditions are true, you will need to take the extra steps outlined in this document during your supervisor upgrade process:

  • You wish to bring your Qube 6.x data forward into Qube 7

  • You are upgrading the supervisor from Qube 6.x to Qube 7 (for example from 6.10-0a to 7.0-0)


Minimum Qube version requirements for a successful data migration


Only data from Qube schema version 37 (present from 6.10-0 onwards) can be brought forward into Qube 7. If you're upgrading a Qube supervisor from a version prior to 6.10-0, you can upgrade just the schema to version 37; see the second phase of "Step Two: Export MySQL data to .csv".


Ensure that database_port is commented out (if present) in your supervisor's qb.conf file


We have seen cases with customers upgrading from Qube 6 to Qube 7 where the supervisor attempts to connect to the PostgreSQL server on the default port for MySQL. This happens when database_port is defined in the qb.conf file but still set to the value for a MySQL server.
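For illustration only, the fix is to find any MySQL-era database_port line in qb.conf and comment it out so the supervisor falls back to the PostgreSQL default. The exact syntax and value below are assumptions, not taken from this document:

```
# database_port = 3306    <- a hypothetical leftover MySQL value; comment out or remove
```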


Introduction:

One of the major feature updates in Qube 7.0 is the switch of the supervisor's database server from MySQL to PostgreSQL. Accordingly, when upgrading from a pre-7.0 version (such as 6.10-0a) to 7.0 or above, there are extra manual steps that must be taken outside of the Qube Installer (or the RPM/MSI/PKG installer, if you chose to bypass the QubeInstaller), particularly if you require that the old data from MySQL be transferred to PostgreSQL. This step-by-step document will guide you through the straightforward process.

Upgrading the supervisor and transferring the data is essentially a five-step process, and the order is critical:

  1. Stop all activities on the current farm
  2. Export data from MySQL to csv files on disk
  3. Upgrade the supervisor
  4. Import data into PostgreSQL from the csv files
  5. Enable activities on the new supervisor

IMPORTANT!


The order of these steps is crucial! Note that there are steps to be taken before upgrading the supervisor.


Python on Windows


Python 2.x (2.6 or above) is now a prerequisite for supervisor installation. It comes standard on Linux and Mac, but you will have to install it on Windows in advance. Also make sure that python is added to the System PATH; it's an opt-in feature of the Python.org MSI installer. https://www.python.org/downloads/


 

Step One: Stop All Activities on the Current Farm

You will want to first stop all current farm activities before upgrading the Supervisor. Run the following commands at a command prompt with a Qube administrator account to prevent the supervisor from dispatching jobs and to stop accepting new job submissions from users:

qbadmin supervisor --set stop_activity
qbadmin supervisor --set reject_submit

You will also want to stop workers from picking up new work. If you want to allow workers to finish their current frames, do:

qblock --all

If you'd rather stop running jobs immediately, do: 

 

qblock --all --purge

Then wait for all activities to stop.
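If you prefer to script Step One, the same commands can be driven from Python. This is just a sketch: it assumes qbadmin and qblock are on the PATH and that you are running under a Qube administrator account.

```python
import subprocess

# The Step One commands, in order. Swap in ["qblock", "--all", "--purge"]
# to stop running jobs immediately instead of letting frames finish.
STOP_COMMANDS = [
    ["qbadmin", "supervisor", "--set", "stop_activity"],
    ["qbadmin", "supervisor", "--set", "reject_submit"],
    ["qblock", "--all"],
]

def stop_farm(run=subprocess.run):
    """Run each shutdown command in order, raising if any command fails."""
    for cmd in STOP_COMMANDS:
        run(cmd, check=True)
```

The `run` parameter exists only so the sketch can be exercised without a live farm; calling `stop_farm()` with no arguments uses subprocess.run directly.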

Step Two: Export Data from MySQL to .csv (Comma-separated values) files


Estimated time for data export


The time the export takes depends on many parameters, such as your supervisor system's hardware (especially the speed of the database and export destination disk), the number of existing jobs, and the number of frames (aka agenda items) in each job. For reference, on a Mac system with an SSD, every 256 jobs, each with 100 frames, took about 6 seconds to process. That adds up to about 4 minutes for 10,000 jobs.

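The reference figures above can be turned into a rough planning estimate. A minimal sketch, ballpark only; actual times depend heavily on your hardware:

```python
# Reference figure from above: about 6 seconds per 256 jobs (each with
# ~100 frames) on a Mac system with an SSD.
SECONDS_PER_256_JOBS = 6.0

def estimate_export_seconds(num_jobs):
    """Return a ballpark export time, in seconds, for num_jobs jobs."""
    return num_jobs / 256.0 * SECONDS_PER_256_JOBS

# 10,000 jobs works out to roughly 4 minutes:
print("%.1f minutes" % (estimate_export_seconds(10000) / 60.0))
```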


In this step, you'll be using a script that we provide to dump MySQL data to .csv files in a folder on disk.

  1. Make sure that the MySQL server is running, that you can connect to it using the mysql client, and that your qube table version is 37.

    1. If you haven't changed the database administrator user and password, you should be able to run the following at a command prompt to confirm that the MySQL server is running:

      Linux: /usr/bin/mysql -u root -e 'SELECT * FROM qube.tableversion'
      Mac: /usr/local/mysql/bin/mysql -u root -e 'SELECT * FROM qube.tableversion'
      Windows: "C:\Program Files\pfx\qube\mysql\bin\mysql" -u root -e "SELECT * FROM qube.tableversion"
    2. Make sure that the above command works and returns:

      +---------+
      | version |
      +---------+
      | 37      |
      +---------+
  2. If you get something less than 37 returned by the above command, it means that your current Qube supervisor version is older than 6.10-0, and that you need to update your MySQL database tables to schema version 37 before you can export the data. To do so:

    1. Download the "upgrade_supervisor" program suitable for your supervisor platform from http://repo.pipelinefx.com/downloads/pub/db_migration_tools/

    2. On a command prompt, run the upgrade_supervisor program that you just downloaded.

      1. On Windows, you will need to unzip the upgrade_supervisor_WIN32-6.1-x64.zip file first, and run the upgrade_supervisor.bat file found in the unzipped folder.
    3. Check that there weren't any critical errors reported by upgrade_supervisor
    4. Check that the version is now indeed updated to 37, by running the mysql -u root -e 'SELECT * FROM qube.tableversion' command again
  3. Choose a destination folder on your supervisor for the MySQL csv files. Make sure that your user and the mysql server process both have write permission to this folder and all its parent folders, and that the volume is sufficiently large. Also note that a faster disk, such as an SSD, will help speed up the export/import process.


    On CentOS 7.x and possibly other Linux distros, create a working directory under /opt and do the export while running as the root user



    Do NOT use /tmp, /var/tmp (/usr/tmp), or any subdirectories under them. These OSs give the MySQL service its own private /tmp and /var/tmp folders, which prevents the mysqldump command from running correctly. Creating a subdirectory under /root does not work either, nor will a subdirectory in any user's home directory, since non-root users' home directories are usually mode 700, so the MariaDB server can't access them.

    One approach that does work is creating a directory under /opt and opening up the permissions:

    sudo mkdir -p /opt/mysql_dump
    sudo chmod 755 /opt/mysql_dump

    Then, install the export_data_from_mysql.py script from the next step into this directory as the root user, and run the export script as root.

  4. Download the export_data_from_mysql.py script from http://repo.pipelinefx.com/downloads/pub/db_migration_tools/ and copy it into the destination folder.
  5. On a command prompt, go to the destination folder, and run export_data_from_mysql.py. Running it without any argument will create a subfolder in the current directory named "qube_mysqldump" and dump all files into it.

    python export_data_from_mysql.py
    1. You may override the dump subfolder and the DB username, password, and mysql install location. Run "export_data_from_mysql.py -h" to see the list of options.

  6. Sit back. This process can take a long time to complete, depending on how many jobs you have on the system.

  7. Once the process completes, make sure there were no errors reported in the terminal. Also have a look at the dump directory to confirm that there is a subfolder "qube" and a bunch of subfolders like "<number>qube".

  8. Take note of the dump directory location, and proceed to the next step, "Upgrade the Supervisor".
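The directory check in step 7 can be automated. A sketch, assuming the default "qube_mysqldump" dump location and the folder naming described above (a "qube" subfolder plus "<number>qube" subfolders):

```python
import os
import re

def classify_dump_entries(names):
    """Given the entry names of the dump folder, return (has_qube, job_folders):
    whether a 'qube' entry is present, and any '<number>qube' entries."""
    has_qube = "qube" in names
    job_folders = sorted(n for n in names if re.match(r"^\d+qube$", n))
    return has_qube, job_folders

def check_dump_dir(dump_dir="qube_mysqldump"):
    """Inspect an on-disk dump directory produced by export_data_from_mysql.py."""
    return classify_dump_entries(os.listdir(dump_dir))
```

After a successful export, `check_dump_dir()` should report the "qube" folder plus one or more numbered folders.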

Proceed with the upgrade of the supervisor software. Using the QubeInstaller is recommended, but you can also run the individual installer packages (RPMs, DEBs, MSIs, or PKGs), should you choose.

Step Three: Upgrade the Supervisor

See Upgrading Qube! for details, but come back here after upgrading the supervisor software.

 

Step Four: Import Data into PostgreSQL from the .csv files


Do this before making any changes to your farm, or submitting new jobs.


Estimated time for data import


We have found that the import takes roughly 1/4 of the time of the export. Importing 10,000 jobs with 100 frames on average on a Mac system with an SSD took about 33 seconds.


Importing the previously exported data

Once you upgrade the supervisor, you are ready to import data into the new PostgreSQL server.

  1. Make sure that the PostgreSQL server is running and accepting connections:

    Linux: /usr/local/pfx/pgsql/bin/psql -p 50055 -d pfx -U qube -c "SELECT * FROM qube.tableversion"
    Mac: /Applications/pfx/pgsql/bin/psql -p 50055 -d pfx -U qube -c "SELECT * FROM qube.tableversion"
    Windows: "C:\Program Files\pfx\pgsql\bin\psql" -p 50055 -d pfx -U qube -c "SELECT * FROM qube.tableversion"

    Note that this should return:

    version
    ---------
          51
    (1 row)
  2. On a command prompt, go to the folder where you ran the export script earlier. This should be the parent folder of the "qube_mysqldump" folder, by default.
  3. Run the import_data_into_pgsql.py script to import data from the csv files that were generated earlier.

    Linux: python /usr/local/pfx/qube/utils/pgsql/import_data_into_pgsql.py
    Mac: python /Applications/pfx/qube/utils/pgsql/import_data_into_pgsql.py
    Windows: python "C:\Program Files\pfx\qube\utils\pgsql\import_data_into_pgsql.py"
  4. Sit back. This process will also take some time to complete, although it should be significantly faster than the export.
  5. Make sure there weren't any errors reported on the terminal.
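If you script the verification in step 1, the psql output shown above can be parsed in a few lines of Python. A sketch, assuming the plain-text output format shown (the expected version, 51, comes from the example output in step 1):

```python
EXPECTED_PGSQL_SCHEMA_VERSION = 51  # from the expected output in step 1

def parse_tableversion(psql_output):
    """Extract the integer schema version from psql's output for
    'SELECT * FROM qube.tableversion'; return None if not found."""
    for line in psql_output.splitlines():
        stripped = line.strip()
        if stripped.isdigit():
            return int(stripped)
    return None
```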


Step Five: Enable Activities on the New Supervisor

Run the following commands to enable the new supervisor to accept new jobs and start dispatching jobs to workers:

qbadmin supervisor --unset stop_activity
qbadmin supervisor --unset reject_submit

You'll also need to unlock the workers you want to start using again. If you'd like to unlock all workers, then do:

qbunlock --all

 

Congratulations, you are done. Enjoy the new ride!

 
