I am hosting a Seafile instance for personal purposes, and for a long time I used Postgres as the database backend. It was never particularly well supported; I think the official documentation once mentioned Postgres configuration, but now only traces of it are left.
A big problem with this is keeping the server up to date. Seafile’s update process involves calling scripts that describe, e.g., schema migrations. These migrations are provided for MySQL and SQLite, and it’s not always possible to just feed one of them into Postgres. When they contained auto-incrementing keys or funnily named datatypes, I had to rewrite the migration script to fit Postgres. Or hope that someone else had already done it, e.g., on GitHub or in the Seafile forums. This fiddling always added unnecessary friction, and now I want to migrate back from Postgres to MySQL.
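To illustrate the kind of rewriting involved (a made-up example, not an actual Seafile migration script): MySQL’s `AUTO_INCREMENT` columns and some of its datatypes have to be translated by hand:

```sql
-- MySQL flavor, as a migration might ship it (illustrative):
CREATE TABLE Example (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    ctime DATETIME
);

-- Hand-rewritten Postgres equivalent:
CREATE TABLE Example (
    id BIGSERIAL PRIMARY KEY,
    ctime TIMESTAMP
);
```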
I researched some options linked in the Postgres wiki, but I don’t want to use a GUI program on a headless server, and very commercial-looking product pages with “buy” buttons did not seem like the right path for a private project. pgloader looks like a very cool project; unfortunately, it migrates to Postgres, not from it. There are also two projects called pg2mysql, one written in PHP and one in Go; however, I did not want the trouble of setting up a PHP or Go environment just for this little bit of conversion. Instead, I went another, rather scripty way.
In my scenario, I can exploit the fact that I only want to populate a database with an already known schema. I can tell Seafile to create the schema for me and save myself the trouble of thinking about what `SERIAL INTEGER` should be mapped to. Then I only need to move the data from Postgres to this fresh MySQL database. For this, I opted for simple CSV with one file per table. I first exported Seafile’s three databases into separate folders with this script:
```bash
#!/bin/bash
for db in ccnet_db seafile_db seahub_db; do
    echo "- $db"
    mkdir -p $db
    for table in $(psql $db -t \
        -c "select table_name from information_schema.tables where table_schema = 'public';"); do
        echo "Exporting $table"
        psql $db -c "\copy $table to '$db/$table.csv' CSV HEADER"
    done
done
```
The for-loops help with staying lazy, since otherwise I would have had to alter that command for each of the 122 involved tables. These files can then be imported using mysqlimport; at least in theory. In practice, I had two problems: (1) the new schema for ccnet and seafile has table names with capitalized words like `EmailUser`, which was not the case in Postgres. And mysqlimport also respects this capitalization, so `emailuser.csv` wants to be inserted into the table `emailuser`. And (2), several tables in my Postgres schema did not have an `id` column, but the MySQL schema has such a column.
The first problem was surprising, as it is usually more work to enforce case in an SQL database. But when looking at the files with `wc -l`, I found that many files only contained one line: the CSV header, and no data. After deleting those, I was left with 15 files; I bit the bullet and just renamed them. Interestingly, seahub gave me no such problem; the table names there are in snake_case. The following script imported nearly all tables as I intended:
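The pruning of header-only exports can be scripted, too; a minimal sketch, assuming the per-database folders created by the export script above:

```bash
#!/bin/bash
# Remove CSV exports that contain only the header line and no data rows.
for db in ccnet_db seafile_db seahub_db; do
    for f in "$db"/*.csv; do
        [ -e "$f" ] || continue          # folder may be missing or empty
        if [ "$(wc -l < "$f")" -le 1 ]; then
            echo "removing header-only export: $f"
            rm "$f"
        fi
    done
done
```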
```bash
#!/bin/bash
for db in ccnet_db seafile_db seahub_db; do
    echo $db
    for file in $(ls $db); do
        cols=$(head -n1 $db/$file)
        mysqlimport -pseafile-mysql --ignore-lines=1 \
            --delete $db /tmp/manuel/$db/$file -c $cols \
            --fields-terminated-by=','
    done
done
```
“Nearly,” because some files could not be imported; but they were small enough for manually putting them into Sublime and converting them into a bunch of `VALUES` for an `INSERT` statement.
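This manual step could also be scripted. Here is a minimal sketch of a hypothetical `csv2insert` helper that turns a small, simple CSV (no quoted fields, no commas or quotes inside values, everything inserted as strings) into a single `INSERT` statement:

```bash
#!/bin/bash
# Hypothetical helper: convert a small, simple CSV into one INSERT statement.
# Assumes no quoted fields and no commas or quotes inside values.
csv2insert() {
    local table=$1 file=$2
    local cols
    cols=$(head -n1 "$file")
    printf 'INSERT INTO %s (%s) VALUES\n' "$table" "$cols"
    tail -n +2 "$file" | awk -v q="'" -F, '{
        line = "(";
        for (i = 1; i <= NF; i++)
            line = line (i > 1 ? "," : "") q $i q;
        line = line ")";
        printf "%s%s", (NR > 1 ? ",\n" : ""), line;
    } END { print ";" }'
}
```

For example, `csv2insert EmailUser EmailUser.csv` prints one `INSERT INTO EmailUser (...) VALUES ...;` statement that can be piped into `mysql`.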
In the end, my rebellious Seafile+Postgres setup survived longer than it probably should have, but now it is gone for good as well. After the successful migration, Seafile runs on MySQL, and I can update it in a more convenient way.