
Migrating from MySQL to PostgreSQL on Laravel Forge

In the previous article, I covered why I decided to move from MySQL to PostgreSQL and the code changes needed. This article covers the actual migration — getting production data from one database to the other on Laravel Forge, and every problem I hit along the way.


The migration strategy

The key insight that reduced risk was splitting the work into independent steps:

  1. Make the code database-agnostic — deploy on MySQL, zero risk
  2. Prove CI passes on Postgres — catches compatibility issues before touching production
  3. Provision and migrate — the actual database switch

Steps 1 and 2 were already done. The code was running happily on MySQL with all the raw SQL replaced. CI was passing on Postgres. The only thing left was the data migration itself.

The rollback plan

Before touching anything, I wanted to know: what’s my rollback?

The answer was reassuringly simple. MySQL stays completely untouched throughout. You run Postgres in parallel — both databases exist side by side. To roll back at any point, you change .env back to DB_CONNECTION=mysql and the app is back on MySQL instantly. The code changes from step 1 are database-agnostic, so they work on either database.

You don’t need to delete MySQL until you’re fully confident. That could be days or weeks later.
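Concretely, the rollback is a one-line change. A minimal sketch of the toggle, assuming Laravel's default connection names from config/database.php:

```
# .env on the app server
DB_CONNECTION=pgsql    # the new default
# DB_CONNECTION=mysql  # flip back to this to roll back instantly
```

After changing it, run php artisan config:clear (or re-cache with config:cache) so Laravel picks up the new connection.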

Running migrations on Postgres

The first real test was running Laravel’s migrations against an empty Postgres database. With 107 migrations, this is where you find out if your schema is truly database-agnostic.

It failed immediately.

SQLSTATE[22P02]: Invalid text representation: 7 ERROR:
invalid input syntax for type uuid: "69bd984d8d82d"

The teams table had a uuid column with a default value of '69bd984d8d82d', which is not a valid UUID. MySQL doesn't care: it has no native uuid type, so Laravel's uuid() column is just a char(36) there, and any string fits. Postgres is strict: if you declare a column as uuid, the value must be a valid UUID.

The fix was a new migration to generate proper UUIDs, plus updating the model’s creating event to use str()->uuid() instead of a hardcoded string.

After that fix, all 107 migrations passed cleanly.

Running the test suite on Postgres

Tests: 1 failed, 5 skipped, 203 passed (527 assertions)

203 out of 204 tests passed on the first try. The single failure was another case of Postgres strictness: a factory was explicitly setting id to null on the teams table. MySQL ignores the null and auto-increments. Postgres takes it literally, tries to insert null into a NOT NULL column, and fails.

After fixing the factory, all 204 tests passed.

Migrating production data with pgloader

pgloader is an open-source tool that copies data from MySQL to PostgreSQL. It handles type conversions automatically — datetime to timestamp, tinyint to smallint, json to jsonb, and so on.

I installed it with Homebrew:

brew install pgloader

The config file tells pgloader where to read from and where to write to:

LOAD DATABASE
    FROM mysql://root@127.0.0.1:3306/my_app
    INTO postgresql://root@127.0.0.1:5432/my_app_pg

WITH data only, reset sequences

EXCLUDING TABLE NAMES MATCHING
    'pulse_aggregates', 'pulse_entries', 'pulse_values',
    'quotes', 'migrations'

ALTER SCHEMA 'my_app' RENAME TO 'public'
;

A few things to note about the config:

  - data only skips schema creation. Laravel's migrations have already built the tables on the Postgres side, so pgloader only needs to copy rows.
  - reset sequences bumps each Postgres sequence to match the highest imported id, so new inserts don't collide with migrated rows.
  - The EXCLUDING TABLE NAMES MATCHING clause skips the Laravel Pulse telemetry tables, the quotes table, and the migrations table, which Laravel manages itself.
  - ALTER SCHEMA 'my_app' RENAME TO 'public' maps the MySQL database name onto Postgres's default public schema.

The UUID gotcha (again)

pgloader failed on the teams table:

ERROR Database error 22P02: invalid input syntax for type uuid: "5ddd32d825cc0"

The same UUID issue, but in the production data this time. Every team in the database had an invalid short string in the s3hash column. The fix was to update them in MySQL before running pgloader:

mysql -u root -h 127.0.0.1 my_app \
  -e "UPDATE teams SET s3hash = UUID() WHERE LENGTH(s3hash) < 36;"

This is safe to run multiple times — the WHERE clause only matches invalid values.
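Before running the UPDATE, it helps to see exactly which values are malformed. A minimal sketch of the pattern check, using two hypothetical sample values (in practice you'd feed it the column's contents):

```shell
# Canonical UUID layout: 8-4-4-4-12 hex groups.
uuid_regex='^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$'

# grep -Ev prints only the values that do NOT match, i.e. the bad ones.
printf '%s\n' '5ddd32d825cc0' '3f2504e0-4f89-41d3-9a0c-0305e82c3301' |
  grep -Ev "$uuid_regex"
# prints: 5ddd32d825cc0
```

The same regex works as a post-migration sanity check on the Postgres side, where any survivor would have failed the import anyway.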

The foreign key cascade

When the teams table failed to import, every other table with a team_id foreign key also failed. Once the UUID fix was in place and teams imported successfully, all the foreign key constraints resolved.

The result

Total import time  ✓  6647  9.0 MB  0.878s

6,647 rows across all tables, imported in under a second.

Provisioning a new Forge server

My existing Forge server was provisioned with MySQL. This is an important thing to know about Forge: you cannot add a PostgreSQL database to an existing server running MySQL via the Forge UI. Databases are provisioned alongside the server at creation time. If you want Forge-managed Postgres, you need a new server.

I had three options:

| Option | Pros | Cons |
| --- | --- | --- |
| New server with Postgres | Forge manages everything, clean slate | Need to set up deployments, SSL, cron, queues again |
| Install Postgres alongside MySQL via SSH | Quickest, existing config stays | Forge won't manage Postgres (no UI, no backups panel) |
| Managed Postgres service (RDS, etc.) | Cleanest separation | Extra monthly cost, external dependency |

I went with a new server. This gave me a clean rollback plan — both servers run in parallel, each with their own database. If anything goes wrong on the new Postgres server, the old MySQL server is still there, untouched, ready to serve traffic again with a DNS switch. No downtime risk, and Forge manages Postgres natively — including the database panel.

Getting data to the new server

The new Forge server has Postgres. The old server has MySQL locked to localhost (no remote connections). How do you get the data across?

I already had the production data in my local Postgres from the pgloader dry run. So the simplest path was:

  1. Dump local Postgres: pg_dump my_app_pg > ~/database_backup.sql
  2. Upload to the new server: scp ~/database_backup.sql forge@YOUR_SERVER_IP:~/
  3. Drop the empty schema and restore:
psql -U forge -d my_app_pg \
  -c "DROP SCHEMA public CASCADE; CREATE SCHEMA public;"
psql -U forge -d my_app_pg < ~/database_backup.sql

The restore showed role "root" does not exist warnings — harmless. The dump was created under the root user (Herd's default), but the server uses forge, so the ownership statements simply fail to apply; the data itself is unaffected. Passing --no-owner --no-privileges to pg_dump avoids the warnings entirely.

The first 500 error

The site loaded. Ideas worked. But campaigns returned a 500 error. The Laravel log revealed:

The GetObject operation requires non-empty parameter: Bucket

Not a Postgres issue — just missing environment variables. The new server’s .env didn’t have the AWS, OpenAI, and other API keys configured. Once I copied the keys from the old server, everything worked.

What I’d do differently

Fix UUIDs before starting. The invalid UUID values caused the most friction. If I’d audited the uuid and s3hash columns first, both the migration run and the pgloader import would have worked on the first try.

Dump production data earlier. Having a local copy of production data in Postgres meant I could test everything locally before touching the server. The pgloader dry run caught every issue.

Check environment variables. The 500 error on the new server wasn’t a database issue at all — it was just missing API keys. A quick diff of the old and new .env files would have caught it immediately.
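That diff is one line of shell. A sketch comparing variable names only, since values legitimately differ between servers (the two sample files and their keys are hypothetical):

```shell
# Two sample .env files to illustrate the check.
printf 'APP_KEY=a\nAWS_BUCKET=b\nOPENAI_API_KEY=c\n' > old.env
printf 'APP_KEY=a\nAWS_BUCKET=b\n' > new.env

# Compare the variable NAMES, sorted. Lines starting with '<' are keys
# present on the old server but missing on the new one. (diff exits
# non-zero when the files differ, hence the `|| true`.)
diff <(cut -d= -f1 old.env | sort) <(cut -d= -f1 new.env | sort) | grep '^<' || true
# prints: < OPENAI_API_KEY
```

Run against the real old and new .env files, this would have flagged the missing AWS and OpenAI keys before the first request ever hit the new server.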

The full checklist

For anyone doing this migration, here’s the complete sequence:

  1. Replace any MySQL-specific raw SQL with database-agnostic alternatives
  2. Run php artisan migrate:fresh on a local Postgres database
  3. Run your test suite against Postgres
  4. Fix any UUID, type strictness, or case sensitivity issues
  5. Use pgloader to copy production data to local Postgres
  6. Test the app with real data locally
  7. Provision a new Forge server with PostgreSQL (or install Postgres alongside MySQL)
  8. Deploy your code to the new server
  9. Restore the Postgres dump on the new server
  10. Configure environment variables
  11. Set up scheduler, queue workers, and SSL
  12. Switch DNS when ready
  13. Keep the old server as a fallback until you’re confident

Key takeaway

The code changes were the easy part — three files with MySQL-specific SQL, all fixable in an afternoon. The production data migration was where the real work happened, and the real lesson was: Postgres is stricter than MySQL about data types. Invalid UUIDs, null values in NOT NULL columns, case-sensitive LIKE — MySQL lets these slide, Postgres does not. That strictness is actually a feature. It caught data quality issues that had been silently living in my database for years.

