  • Database migration planned as a weekend maintenance window, hoping it completes before Monday morning -- with no validated rollback path if it does not?

  • Migrating between database engines (Oracle to PostgreSQL, MSSQL to Aurora) and the schema differences are larger than the initial assessment suggested?

Database Migration Services

Databases carry years of operational data. A migration that loses records, corrupts data, or takes your application offline for half a day is not an acceptable risk. Most database migrations fail not because the technical approach is wrong, but because validation is insufficient and the cutover plan is improvised on the day.
We migrate databases to managed cloud services -- PostgreSQL to RDS, MySQL to Aurora, MSSQL to Azure SQL, Oracle to PostgreSQL, and NoSQL migrations -- using replication-based strategies that keep downtime measured in minutes. Every migration includes schema validation, data integrity verification, and a documented cutover runbook tested in staging before the production window.

  • Schema conversion and data validation completed before any production cutover is attempted

  • Replication-based cutover strategy for relational databases -- measured in minutes of downtime, not hours

  • Automated post-migration data validation comparing row counts, checksums, and data samples at the record level

  • Post-migration database tuning so query performance on the managed service matches or exceeds the on-premises instance

RaftLabs migrates databases to managed cloud services -- PostgreSQL to RDS, MySQL to Aurora, MSSQL to Azure SQL, Oracle to PostgreSQL, and NoSQL migrations. Migrations use replication-based cutover strategies to minimise downtime, with automated validation at the record level before any production cutover. Database migration projects typically cost $15,000 to $60,000 depending on database size, engine complexity, and whether schema conversion between engines is required.

Vodafone
Aldi
Nike
Microsoft
Heineken
Cisco
Calorgas
Energia Rewards
GE
Bank of America
T-Mobile
Valero
Techstars
East Ventures

A database migration is not a copy operation. It is the movement of the operational record that the business runs on -- customer data, transaction history, configuration state -- from one platform to another without losing a record and without taking the application that depends on it offline for longer than the business can accept.

The risk in database migration is not the technical approach. The risk is the gap between what was tested in staging and what exists in production: data quality issues that only appear at scale, stored procedures that behave differently under replication load, application queries that assume engine-specific behaviour that the destination does not have. The migration plan that accounts for this starts with a complete inventory and ends with automated validation at the record level.

What we build

PostgreSQL and MySQL cloud migration

Migration of PostgreSQL and MySQL databases from on-premises servers to managed cloud services -- AWS RDS, Aurora, Azure Database for PostgreSQL, and Azure Database for MySQL. Replication-based migration using native PostgreSQL logical replication, MySQL binlog replication, or AWS DMS for near-zero downtime cutover. Automated data validation comparing row counts, checksums, and representative data samples before cutover is executed. Post-migration parameter group tuning for query performance. Read replica configuration for read-heavy workloads. Automated backup and point-in-time recovery configured from day one.
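The record-level validation step can be sketched as follows. This is a minimal illustration of the idea, assuming rows have already been fetched from both databases; the function names and row format are illustrative, not a tool described in the text.

```python
# Sketch of post-migration validation: compare row counts and an
# order-independent checksum for one table on source and destination.
# In a real run the rows would come from cursors on both databases.
import hashlib

def table_checksum(rows):
    """Order-independent checksum over a table's rows.

    Each row is serialised to a canonical string and hashed; the row
    hashes are XORed so the result does not depend on row order.
    """
    acc = 0
    for row in rows:
        canonical = "|".join("" if v is None else str(v) for v in row)
        digest = hashlib.sha256(canonical.encode("utf-8")).digest()
        acc ^= int.from_bytes(digest, "big")
    return acc

def validate_table(source_rows, dest_rows):
    """Return (row_counts_match, checksums_match) for one table."""
    return (
        len(source_rows) == len(dest_rows),
        table_checksum(source_rows) == table_checksum(dest_rows),
    )

# In-memory sample rows standing in for query results.
source = [(1, "alice", None), (2, "bob", "2021-04-01")]
dest_shuffled = [source[1], source[0]]

print(validate_table(source, dest_shuffled))  # (True, True) -- order does not matter
print(validate_table(source, source[:1]))     # (False, False)
```

The order-independent checksum matters because replication does not guarantee the destination returns rows in the same physical order as the source.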

Oracle to PostgreSQL migration

Full Oracle to PostgreSQL migration including schema conversion, stored procedure rewriting, and data migration with validation. AWS Schema Conversion Tool assessment to identify incompatible objects and the manual work required to resolve them. PL/SQL to PL/pgSQL stored procedure conversion and testing. Oracle-specific data type mapping and resolution. Application SQL query review for engine-specific syntax that needs updating. Data migration using replication or a phased cutover approach depending on Oracle edition and available tooling. Functional testing of the converted application against the PostgreSQL database before production cutover is approved. The migration that removes the Oracle licence cost without leaving schema compatibility debt behind.

MSSQL to Azure SQL migration

SQL Server migration to Azure SQL Database or Azure SQL Managed Instance using Azure Database Migration Service. Compatibility assessment to identify SQL Server features that require resolution before migration -- linked servers, SQL Agent jobs, CLR objects, and cross-database queries. SQL Managed Instance for near-complete SQL Server compatibility when Azure SQL Database compatibility constraints are too restrictive. Continuous replication during the migration window for minimal downtime cutover. Post-migration configuration of elastic pools for cost optimisation across multiple databases. The SQL Server migration that preserves your database behaviour in Azure without a lengthy compatibility remediation project.

Schema conversion and data transformation

Schema conversion for cross-engine migrations -- identifying incompatible data types, constraints, indexes, and procedural objects, and resolving each incompatibility before data migration starts. Data transformation logic for records that require format conversion, normalisation, or denormalisation as part of the migration. Transformation scripts built and tested against a full copy of production data in a staging environment before the live migration runs. The schema conversion that arrives at migration day with all incompatibilities resolved, not discovered during the cutover window.
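A transformation script of the kind described above might look like this in miniature. The column names and formats are hypothetical, chosen only to show the shape of a per-record conversion that can be tested against staging data before the live run.

```python
# Illustrative transformation for a cross-engine migration: the source
# stores dates as DD/MM/YYYY strings and booleans as 'Y'/'N' flags; the
# destination schema expects ISO dates and real booleans.
from datetime import datetime

def transform_row(row):
    """Convert one source record into the destination format."""
    return {
        "id": row["id"],
        "signup_date": datetime.strptime(row["signup_date"], "%d/%m/%Y")
                               .date().isoformat(),
        "active": row["active_flag"] == "Y",
    }

source_rows = [
    {"id": 1, "signup_date": "03/11/2019", "active_flag": "Y"},
    {"id": 2, "signup_date": "28/02/2021", "active_flag": "N"},
]

for r in source_rows:
    print(transform_row(r))
# {'id': 1, 'signup_date': '2019-11-03', 'active': True}
# {'id': 2, 'signup_date': '2021-02-28', 'active': False}
```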

Zero-downtime cutover strategy

Cutover planning and execution for database migrations where extended downtime is not acceptable. Replication-based cutover: the source database replicates continuously to the destination until lag is under a second, the application is pointed at the destination, the source is set read-only, and the remaining lag is applied. The cutover window is measured in seconds to low minutes. Documented cutover runbook specifying every step, each team member's role, the validation checks that must pass before the source is decommissioned, and the rollback procedure if any step fails. Cutover rehearsal in a staging environment before the production window. The cutover plan that has been practised before it runs in production.
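The "wait until lag is under a second" gate can be sketched as a small polling loop. The lag source here is a stubbed callable; in practice it would query something like pg_stat_replication or the replication task's metrics (an assumption, not a prescription).

```python
# Hedged sketch of the cutover gate: only proceed to switch traffic once
# replication lag has stayed under a threshold for several consecutive
# checks, so a single lucky reading cannot trigger the cutover.
import time

def wait_for_cutover_window(read_lag, checks=3, threshold=1.0, poll=0.0):
    """Block until `checks` consecutive lag readings are under `threshold`.

    `read_lag` is a callable returning current replication lag in
    seconds. Returns the number of polls it took.
    """
    ok_streak, polls = 0, 0
    while ok_streak < checks:
        lag = read_lag()
        polls += 1
        ok_streak = ok_streak + 1 if lag < threshold else 0
        if poll:
            time.sleep(poll)
    return polls

# Simulated lag readings: catching up, one spike, then steady under 1s.
readings = iter([5.0, 2.1, 0.8, 1.4, 0.6, 0.4, 0.3])
print(wait_for_cutover_window(lambda: next(readings)))  # 7
```

Requiring a streak of healthy readings, rather than one, is a deliberate choice: replication lag is spiky, and cutting over on a transient dip risks applying the remaining lag for longer than planned.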

Post-migration database optimisation

Database performance tuning after migration to ensure the managed cloud service matches or exceeds the on-premises baseline. Query performance analysis using the slow query log and EXPLAIN plans to identify regressions introduced by the move to managed infrastructure. Index review and optimisation for the new storage engine configuration. Parameter group tuning for the specific workload -- transaction-heavy OLTP versus read-heavy reporting workloads have different optimal configurations. Connection pooling setup with PgBouncer or RDS Proxy for applications with high connection counts. Vacuum and autovacuum tuning for PostgreSQL workloads. The database that performs as well after migration as the application team expects.
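As a small concrete example of workload-specific parameter tuning, the common community rules of thumb for PostgreSQL memory settings (roughly 25% of RAM for shared_buffers, 75% for effective_cache_size) can be expressed as a starting-point calculator. These are widely cited guidelines, not the tuned values a real engagement would land on.

```python
# Rule-of-thumb starting parameters for a freshly migrated PostgreSQL
# instance. The workload-specific tuning described above still applies;
# these ratios are only a conventional first guess.

def starting_parameters(ram_gb, workload="oltp"):
    """Suggest initial memory parameters for a migrated instance."""
    params = {
        "shared_buffers": f"{int(ram_gb * 0.25 * 1024)}MB",
        "effective_cache_size": f"{int(ram_gb * 0.75 * 1024)}MB",
    }
    # OLTP runs many small transactions; reporting runs big sorts and
    # joins, so it benefits from a larger per-operation work_mem.
    params["work_mem"] = "16MB" if workload == "oltp" else "256MB"
    return params

print(starting_parameters(16))
# {'shared_buffers': '4096MB', 'effective_cache_size': '12288MB', 'work_mem': '16MB'}
```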

Which database is the highest-risk part of your migration?

Tell us the engine, size, and acceptable downtime window. We will design the migration approach and give you a fixed cost before work starts.

  • Custom Software Development -- application development after the database migration is complete

  • DevOps -- automated database backup and recovery pipelines post-migration

Frequently asked questions

How do you migrate a database without taking it offline for hours?

The approach that avoids extended downtime is replication-based migration rather than dump-and-restore. We set up ongoing replication from the source database to the destination -- using AWS DMS, Azure Database Migration Service, or native database replication depending on the source and target engines. The destination database receives a continuous stream of changes from the source as replication runs. When the replication lag drops to seconds, we execute the cutover: write traffic switches from source to destination, replication stops, and the source becomes read-only for a rollback window. The cutover itself -- the window where writes are unavailable -- is measured in seconds to low minutes, not hours. This approach is only possible when the migration has been running in replication mode for long enough that the destination is fully caught up. Dump-and-restore is appropriate for smaller databases where a maintenance window is acceptable and the restore time is predictable.
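The choice between the two strategies reduces to a simple comparison: does the estimated restore time fit inside the downtime the business will accept? A toy decision rule, with a stand-in throughput figure (real restore time depends on hardware, engine, and database shape):

```python
# Illustrative decision rule from the answer above: dump-and-restore only
# when a maintenance window is acceptable and the restore fits inside it;
# otherwise replication-based migration. The throughput constant is a
# placeholder, not a benchmark.

def choose_strategy(db_size_gb, max_downtime_minutes,
                    restore_gb_per_minute=2.0):
    """Pick a migration strategy from database size and allowed downtime."""
    estimated_restore_minutes = db_size_gb / restore_gb_per_minute
    if estimated_restore_minutes <= max_downtime_minutes:
        return "dump-and-restore"
    return "replication-based"

print(choose_strategy(20, 60))   # small DB, generous window -> dump-and-restore
print(choose_strategy(500, 15))  # large DB, tight window -> replication-based
```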

What is CDC replication and why does it matter for cutover?

Change Data Capture (CDC) replication captures every insert, update, and delete from the source database's transaction log and replays those changes on the destination database in near real time. For a migration, this means the destination is kept continuously in sync with the source as replication runs -- days or weeks before the cutover window. When the cutover window arrives, the replication lag is typically under a second. The application is pointed at the destination, the source is set to read-only, and any remaining replication lag is applied. The downtime window is the time between making the source read-only and the destination being fully caught up -- usually seconds. AWS DMS supports CDC for most common database engines. Native database replication (PostgreSQL logical replication, MySQL binlog replication) is used where DMS introduces limitations. CDC replication is the standard approach for production database migrations where extended downtime is not acceptable.
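Conceptually, the CDC replay loop is simple: each change event from the transaction log is applied to the destination's state in order. A minimal model, with a deliberately simplified event shape:

```python
# Minimal model of CDC replay: each change event (insert/update/delete
# plus the row, keyed by primary key) from the source's transaction log
# is applied to the destination's state in log order.

def apply_cdc_event(table, event):
    """Replay one change event onto `table` (a dict keyed by primary key)."""
    op, pk = event["op"], event["pk"]
    if op in ("insert", "update"):
        table[pk] = event["row"]
    elif op == "delete":
        table.pop(pk, None)
    return table

dest = {}
log = [
    {"op": "insert", "pk": 1, "row": {"name": "alice"}},
    {"op": "insert", "pk": 2, "row": {"name": "bob"}},
    {"op": "update", "pk": 1, "row": {"name": "alice b."}},
    {"op": "delete", "pk": 2},
]
for event in log:
    apply_cdc_event(dest, event)
print(dest)  # {1: {'name': 'alice b.'}}
```

Real CDC tooling adds transaction boundaries, ordering guarantees, and conflict handling on top of this core idea, which is why log position and lag are what the cutover runbook watches.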

What does a cross-engine migration actually involve?

Cross-engine migrations (Oracle to PostgreSQL, MSSQL to Aurora PostgreSQL) involve schema conversion, not just data movement. Oracle and MSSQL have proprietary data types, functions, and procedural language features that do not have direct PostgreSQL equivalents. AWS Schema Conversion Tool (SCT) or similar automated tools identify incompatibilities and produce a conversion report with the items that require manual resolution. Common issues: Oracle NUMBER types that do not map cleanly to PostgreSQL numeric types, Oracle stored procedures using PL/SQL syntax that must be rewritten in PL/pgSQL, Oracle sequences replaced with PostgreSQL SERIAL or generated columns, and Oracle-specific date arithmetic functions replaced with PostgreSQL equivalents. We work through the SCT report manually, rewrite the incompatible objects, and test the converted schema against a data sample from the source database before the migration runs. Application queries that use engine-specific SQL are also reviewed and updated as part of the schema conversion engagement.
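The NUMBER mapping issue mentioned above can be made concrete. The rules below follow common SCT-style guidance in simplified form, and are an illustration rather than the exact mapping any particular tool produces:

```python
# Hedged sketch of one schema-conversion decision: choosing a PostgreSQL
# type for an Oracle NUMBER(precision, scale) column. Simplified rules,
# not the output of any specific conversion tool.

def map_oracle_number(precision=None, scale=0):
    """Suggest a PostgreSQL type for an Oracle NUMBER(precision, scale)."""
    if precision is None:
        return "numeric"  # unconstrained NUMBER: keep arbitrary precision
    if scale and scale > 0:
        return f"numeric({precision},{scale})"  # fractional values
    if precision <= 4:
        return "smallint"
    if precision <= 9:
        return "integer"
    if precision <= 18:
        return "bigint"
    return f"numeric({precision})"  # too wide for any integer type

print(map_oracle_number())       # numeric
print(map_oracle_number(10, 2))  # numeric(10,2)
print(map_oracle_number(8))      # integer
print(map_oracle_number(20))     # numeric(20)
```

Getting this wrong in either direction costs something: mapping everything to numeric wastes storage and index performance, while forcing integer types on columns that actually hold fractional values silently truncates data.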

How much does a database migration cost?

Database migration cost depends on three factors: database size, the number of databases being migrated, and whether schema conversion between engines is required. Like-for-like migrations (PostgreSQL on-prem to PostgreSQL on RDS) for a single database of moderate size typically run $15,000 to $30,000. Multi-database migrations or databases over 1TB typically run $30,000 to $60,000. Cross-engine migrations that require schema conversion (Oracle to PostgreSQL, MSSQL to Aurora) add schema conversion and testing effort and typically run $40,000 to $80,000 depending on stored procedure and schema complexity. We scope every migration based on a discovery engagement -- database inventory, size assessment, schema analysis, and a review of application query patterns that may be affected by the migration.