Google Cloud Database Migration Service Tutorial: Architecture, Pricing, Use Cases, and Hands-On Guide for Databases

Category

Databases

1. Introduction

Google Cloud Database Migration Service is a managed service for migrating relational databases into Google Cloud with minimal downtime. It’s designed for teams that want a repeatable, auditable, and operationally safer alternative to building custom migration scripts and replication pipelines.

In simple terms: Database Migration Service helps you move a database (for example, MySQL or PostgreSQL) from on‑premises or another cloud into Cloud SQL or AlloyDB, and—when supported—keeps the target continuously updated until you’re ready to cut over.

Technically, Database Migration Service orchestrates the migration workflow end-to-end: source and destination connectivity, initial load, and (for “continuous” migrations) change replication so the destination stays in sync. You configure migrations using resources like connection profiles and migration jobs, and you monitor progress using Google Cloud’s operations tooling (logging/monitoring/audit trails).
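These resources are also scriptable. As a quick illustration of the control plane (the gcloud database-migration command group; job name and region are placeholders, so verify flags with --help for your SDK version):

```shell
# List migration jobs and connection profiles in one region.
gcloud database-migration migration-jobs list --region=us-central1
gcloud database-migration connection-profiles list --region=us-central1

# Inspect one job's state while it runs (job name hypothetical).
gcloud database-migration migration-jobs describe my-migration-job \
  --region=us-central1 --format="value(state)"
```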

The problem it solves: database migrations are risky and time-consuming because they combine data movement, networking, security, schema and user preparation, downtime coordination, and verification. Database Migration Service reduces that complexity by providing a structured, supported path for common migration patterns in Google Cloud Databases.

2. What is Database Migration Service?

Official purpose (high level): Database Migration Service (DMS) helps you migrate databases to Google Cloud managed databases—primarily Cloud SQL and, for supported PostgreSQL scenarios, AlloyDB. It provides guided workflows for one-time migrations and (where supported) continuous replication for low-downtime cutovers.
Verify supported sources/targets and versions in the official docs because support evolves over time: https://cloud.google.com/database-migration/docs

Core capabilities

Common capabilities you should expect from Database Migration Service include:

  • Homogeneous migrations (same engine family), such as MySQL→Cloud SQL for MySQL or PostgreSQL→Cloud SQL for PostgreSQL (verify exact engine/version support).
  • One-time migrations for moving a snapshot of data.
  • Continuous migrations (where supported) to replicate changes after an initial load to minimize downtime during cutover.
  • Connectivity options for reaching sources across VPCs and hybrid networks (for example, over VPN/Interconnect) with private networking patterns.
  • Operational visibility into migration status and errors through Google Cloud console and logging/monitoring integrations.

Major components

Database Migration Service is configured and operated with a few key resource types (names as used in Google Cloud):

  • Migration job: the main resource that represents a migration workflow (one-time or continuous), including state and progress.
  • Connection profiles: reusable connection definitions for source and destination endpoints (host, port, engine type, credentials, TLS settings, etc.).
  • Private connectivity resources (when used): a way to enable private network paths between DMS and your source environment (exact resource names and steps depend on the chosen connectivity model—verify in docs).
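As a sketch of how a connection profile is defined in practice (profile name, host, and user are placeholders; exact flags differ per engine, so check gcloud database-migration connection-profiles create mysql --help):

```shell
# Create a reusable MySQL source connection profile.
# Host/credentials are illustrative; prefer prompting over
# passing passwords on the command line.
gcloud database-migration connection-profiles create mysql src-mysql-profile \
  --region=us-central1 \
  --host=10.0.0.5 \
  --port=3306 \
  --username=dms_user \
  --prompt-for-password
```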

Service type and scope

  • Service type: fully managed Google Cloud service (control plane in Google Cloud; you do not manage replication servers in the same way you would with self-managed tools).
  • Scope: configured per Google Cloud project and typically per region for migration resources (jobs/connectivity are region-bound in many Google Cloud services; confirm region behavior for your specific migration type in official docs).
  • Ecosystem fit: Database Migration Service sits in the Databases portfolio as the migration/orchestration layer that helps you adopt Cloud SQL and AlloyDB while integrating with IAM, VPC networking, Cloud Logging, Cloud Monitoring, and Cloud Audit Logs.

Important: Google Cloud Database Migration Service is not the same product as “AWS Database Migration Service” or “Azure Database Migration Service.” They are separate services with different architectures and pricing.

3. Why use Database Migration Service?

Business reasons

  • Lower migration risk: a managed workflow reduces bespoke scripting and “tribal knowledge” migrations.
  • Faster cloud adoption: simplifies moving to managed databases (Cloud SQL/AlloyDB) so teams can focus on application modernization.
  • Predictable cutovers: continuous replication (when supported) enables planned cutovers with shorter downtime windows.

Technical reasons

  • Standardized migration flow: consistent job lifecycle, progress tracking, and error reporting.
  • Supports common migration patterns: especially homogeneous migrations where schema conversion is not the primary hurdle.
  • Better repeatability: migration job definitions and connection profiles make it easier to reproduce dev/test rehearsals.

Operational reasons

  • Centralized visibility: status and events through Google Cloud console plus logging/monitoring.
  • Separation of concerns: the platform team can standardize connectivity and IAM patterns; application teams can execute migrations with guardrails.
  • Auditable operations: integrates with Cloud Audit Logs for administrative actions.

Security/compliance reasons

  • IAM-based access control: restrict who can create connection profiles/jobs and who can view details.
  • Private networking options: align with least exposure principles; avoid public database endpoints when possible.
  • Encryption in transit: use TLS where supported; manage secrets carefully (ideally with Secret Manager).
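One way to keep migration credentials out of shell history and scripts is the Secret Manager pattern mentioned above. A minimal sketch (secret name is illustrative):

```shell
# Store the migration user's password once.
echo -n "CHANGEME_STRONG_PASSWORD" | \
  gcloud secrets create dms-source-password --data-file=-

# Retrieve it only at configuration time, e.g. when creating
# a connection profile, instead of hard-coding it.
DB_PASSWORD="$(gcloud secrets versions access latest --secret=dms-source-password)"
```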

Scalability/performance reasons

  • Designed for production cutovers: continuous replication helps avoid long downtime for large datasets (subject to source engine and workload).
  • Compatible with managed targets: moving to Cloud SQL/AlloyDB can improve operational scalability (HA, backups, patching) compared to self-managed instances.

When teams should choose it

Choose Database Migration Service if:

  • You’re migrating supported relational engines into Cloud SQL or AlloyDB.
  • You want low downtime with continuous replication (if supported for your engine/method).
  • You prefer a managed migration workflow with Google Cloud integrations (IAM, logging, monitoring).

When teams should not choose it

Database Migration Service may not be the right tool if:

  • You need heterogeneous migration with schema conversion as the primary capability (e.g., Oracle→PostgreSQL with complex conversion). Google Cloud may provide other tools or partner solutions for assessment and conversion—verify current recommendations in official docs.
  • Your source environment cannot meet the connectivity requirements (routing, firewall, TLS, user privileges).
  • You need advanced transformations, filtering, or event streaming semantics—consider Datastream (CDC streaming) or data pipelines (Dataflow) depending on the goal.
  • Your migration is extremely small/simple and a native backup/restore is sufficient and safer for your use case.

4. Where is Database Migration Service used?

Industries

  • Financial services (regulated migrations with strict audit controls)
  • Healthcare (PHI-aware migrations with private networking)
  • Retail/e-commerce (downtime-sensitive transactional systems)
  • SaaS and technology companies (frequent environment cloning and upgrades)
  • Manufacturing/logistics (hybrid/on‑prem to cloud transitions)

Team types

  • Platform/Cloud Center of Excellence (standardizing migration patterns)
  • SRE/Operations teams (cutover and reliability ownership)
  • DevOps teams (automation, IaC, CI/CD of infrastructure)
  • Database administrators (source readiness, replication, performance)
  • Security teams (network and access controls)

Workloads

  • OLTP systems (orders, payments, inventory)
  • SaaS tenant databases
  • Back-office systems (ERP/CRM integrations)
  • Reporting read replicas (when migrating to a managed target)
  • Dev/test refresh pipelines (if using one-time migrations to seed environments)

Architectures

  • Hybrid connectivity (on‑prem source over Cloud VPN/Interconnect)
  • Multi-cloud migrations (source in another cloud into Google Cloud)
  • VPC-based private database access patterns
  • Cloud SQL HA deployments with private IP

Production vs dev/test usage

  • Dev/test: run rehearsal migrations repeatedly to validate sizing, networking, privileges, and cutover steps.
  • Production: rely on continuous replication (if supported) plus a carefully planned cutover with validation, rollback plan, and monitoring.

5. Top Use Cases and Scenarios

Below are realistic scenarios where Database Migration Service is commonly used in Google Cloud Databases programs.

1) On‑prem MySQL to Cloud SQL with low downtime

  • Problem: A business-critical MySQL database must move off aging hardware with minimal downtime.
  • Why DMS fits: Provides an initial load plus continuous replication (when supported) to reduce the downtime window.
  • Example: A retail POS backend migrates from an on‑prem VM to Cloud SQL for MySQL, cutting over during a short maintenance window.

2) PostgreSQL from another cloud to AlloyDB for performance modernization

  • Problem: A PostgreSQL workload needs better performance and managed scaling, but downtime must be minimal.
  • Why DMS fits: Supports migrations into AlloyDB for PostgreSQL in supported scenarios (verify engine/version and migration mode).
  • Example: A SaaS vendor migrates from self-managed PostgreSQL to AlloyDB to improve query throughput.

3) SQL Server to Cloud SQL for SQL Server (lift-and-shift)

  • Problem: A Windows-based app depends on SQL Server and needs to move to managed infrastructure.
  • Why DMS fits: Provides a structured migration workflow for supported SQL Server sources/targets (verify support and limitations).
  • Example: A manufacturing MES app moves from on-prem SQL Server to Cloud SQL for SQL Server.

4) Data center exit with staged cutovers per application

  • Problem: Dozens of databases must be migrated with consistent controls and reporting.
  • Why DMS fits: Standardizes migrations via connection profiles/jobs; integrates with IAM and audit logs.
  • Example: A healthcare org migrates departmental databases in waves, each with rehearsals and approval gates.

5) Migration rehearsal automation (pre-production validation)

  • Problem: Teams need repeated practice runs to validate time-to-migrate, errors, and operational steps.
  • Why DMS fits: Repeatable job setup and consistent observability streamline rehearsals.
  • Example: A platform team runs monthly rehearsal migrations to validate ongoing readiness for production.

6) Hybrid connectivity migrations over VPN/Interconnect

  • Problem: The source database cannot be exposed publicly due to policy.
  • Why DMS fits: Supports private networking patterns when properly configured.
  • Example: A bank migrates PostgreSQL over Cloud Interconnect without public IPs.

7) Consolidation of fragmented MySQL instances into managed Cloud SQL

  • Problem: Many small MySQL servers are costly to operate and patch.
  • Why DMS fits: Provides a consistent method to migrate each instance into Cloud SQL (note: consolidation into fewer instances requires planning; DMS migrates a source to a destination—merging may need extra work).
  • Example: A media company migrates 20 departmental MySQL servers into managed Cloud SQL instances per environment.

8) Application modernization prerequisite: move database first

  • Problem: The app will be refactored later, but the database needs managed ops now.
  • Why DMS fits: Enables “database-first” migration patterns for quicker operational wins.
  • Example: An enterprise moves PostgreSQL to Cloud SQL first, then modernizes app services to GKE later.

9) Disaster recovery strategy change (new primary in Google Cloud)

  • Problem: The organization wants to relocate primaries into Google Cloud and keep the old site as fallback during transition.
  • Why DMS fits: Continuous replication (if supported) can help maintain synchronization until cutover.
  • Example: A logistics company migrates primary MySQL to Cloud SQL and uses the old environment temporarily as contingency.

10) Managed database adoption to improve compliance posture

  • Problem: Compliance audits require consistent patching, backups, and access controls.
  • Why DMS fits: Supports moving to Cloud SQL/AlloyDB, where operational controls are standardized.
  • Example: A fintech moves to Cloud SQL with CMEK (where applicable) and centralized IAM policies.

11) Cross-region migration preparation (regional redesign)

  • Problem: A database must be moved to a different region as part of latency/sovereignty requirements.
  • Why DMS fits: Helps run controlled migrations into a new regional Cloud SQL/AlloyDB deployment (ensure connectivity and region support).
  • Example: An EU business migrates workloads into an EU region and updates app routing.

12) Test environment seeding with one-time migration

  • Problem: Developers need a realistic dataset in a new Cloud SQL instance.
  • Why DMS fits: One-time migrations can seed data with less manual dump/restore.
  • Example: A QA team seeds a staging Cloud SQL instance from a sanitized on-prem MySQL source.

6. Core Features

Feature availability varies by database engine, version, and migration mode. Always confirm current support in the official docs: https://cloud.google.com/database-migration/docs

1) Migration jobs (one-time and continuous)

  • What it does: Represents the migration workflow; you choose one-time or continuous (CDC-like) replication when supported.
  • Why it matters: Provides a structured lifecycle: configure → start → monitor → validate → cut over.
  • Practical benefit: Less custom scripting; consistent operational steps.
  • Caveat: Continuous replication requires specific source configuration (e.g., binary logs/WAL, privileges). Verify prerequisites per engine.

2) Connection profiles (source and destination)

  • What it does: Stores connection info and settings for endpoints.
  • Why it matters: Separates connectivity configuration from migration job logic.
  • Practical benefit: Reuse profiles across rehearsals; simplify standardization.
  • Caveat: Treat credentials and TLS materials as sensitive. Prefer least privilege and secret management patterns.

3) Private connectivity options (for non-public sources)

  • What it does: Enables DMS to reach private IP databases over VPC/hybrid networking, reducing exposure.
  • Why it matters: Many enterprises disallow public database endpoints.
  • Practical benefit: Aligns with private-by-default security.
  • Caveat: Requires correct routing, firewall rules, and non-overlapping CIDRs. Misconfigurations are a common cause of connection failures.

4) Guided setup and validation checks

  • What it does: The console workflow typically validates reachability and basic configuration.
  • Why it matters: Catches issues early (auth, network, permissions).
  • Practical benefit: Faster troubleshooting than ad-hoc replication setup.
  • Caveat: Validation is not a substitute for application-level testing and performance testing.

5) Observability: status, logs, metrics (via Google Cloud Ops)

  • What it does: Surfaces job state, errors, and operational signals.
  • Why it matters: Migrations are time-bound and risk-sensitive; you need visibility.
  • Practical benefit: Integrate with alerting on failures or lag (where available).
  • Caveat: Metric/log availability depends on engine/mode. Verify what signals are exposed for your migration type.
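In practice, observability during a migration often combines job state polling with log queries. A sketch (filter strings and job name are illustrative; verify the exact log fields emitted for your migration type):

```shell
# Recent control-plane activity for the DMS service.
gcloud logging read \
  'protoPayload.serviceName="datamigration.googleapis.com"' \
  --limit=10 --format="table(timestamp, protoPayload.methodName)"

# Poll a job's state during a long-running phase (job name hypothetical).
gcloud database-migration migration-jobs describe my-migration-job \
  --region=us-central1 --format="value(state)"
```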

6) IAM and auditability

  • What it does: Uses Google Cloud IAM roles to control who can create/modify jobs and view details; admin actions appear in Cloud Audit Logs.
  • Why it matters: Migration credentials and operations are sensitive.
  • Practical benefit: Enforce least privilege and change control.
  • Caveat: Over-broad roles are a common risk. Use custom roles when needed.

7) Integration with Cloud SQL / AlloyDB target provisioning

  • What it does: Works with managed targets; you typically pre-create the target instance and then migrate into it.
  • Why it matters: Successful migration depends on target sizing, flags, users, extensions, and network design.
  • Practical benefit: Standard target patterns (private IP, HA, backups) can be applied consistently.
  • Caveat: Schema/user objects and permissions may need careful planning. DMS focuses on migration, not ongoing database tuning.

8) Cutover support (promotion/finalization flow)

  • What it does: Provides a defined process to stop writes on source, let replication catch up (continuous), and finalize.
  • Why it matters: Cutover is where most outages occur.
  • Practical benefit: Clear step sequence reduces human error.
  • Caveat: You still need an application cutover plan: connection strings, DNS, secrets, and rollback.
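A cutover for a continuous migration can be sketched as the following sequence (job name is hypothetical; verify promote semantics for your engine before relying on this):

```shell
# 1) Stop application writes to the source.
# 2) Wait for replication to catch up (monitor job status/lag).
# 3) Promote the destination so it becomes a standalone, writable instance.
gcloud database-migration migration-jobs promote my-migration-job \
  --region=us-central1
# 4) Repoint application connection strings/DNS/secrets at the new instance.
```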

7. Architecture and How It Works

High-level architecture

At a high level, Database Migration Service sits between:

  • A source database (on‑prem, VM, or another cloud)
  • A target managed database in Google Cloud (Cloud SQL or AlloyDB)
  • Supporting components: VPC networking, IAM, and Operations tooling

The service coordinates:

  1. Connectivity to source and target
  2. Initial data load
  3. Optional continuous replication to keep the target updated
  4. Final cutover steps

Control flow vs data flow

  • Control plane: You create and manage connection profiles and migration jobs in a Google Cloud project (console/API). IAM controls access.
  • Data plane: The actual data movement occurs over the configured network path from source to Google Cloud target.

Integrations with related services

Common Google Cloud integrations in real deployments:

  • Cloud SQL / AlloyDB: destination platforms
  • VPC: routing, firewall rules, private IP design
  • Cloud VPN / Cloud Interconnect: hybrid connectivity to on‑prem sources
  • Secret Manager (recommended): store and rotate database passwords/keys (DMS may require direct credential entry depending on workflow; if so, enforce internal processes)
  • Cloud Logging / Cloud Monitoring: migration observability
  • Cloud Audit Logs: admin and configuration audit trail

Dependency services (typical)

Depending on your configuration, you may need:

  • Database Migration Service API (Database Migration API)
  • Cloud SQL Admin API
  • Compute Engine API (if using VMs or VPC constructs)
  • Service Networking API (commonly required for private service networking scenarios, especially for Cloud SQL private IP)

Security/authentication model

  • User/admin authentication: Google Cloud IAM identities (users/groups/service accounts) interact with the DMS control plane.
  • Data authentication: Database username/password and potentially TLS client/server certificates depending on your engine and security configuration.
  • Authorization: IAM decides who can operate DMS; database grants decide what the migration user can read/replicate.

Networking model

Common patterns include:

  • Public IP connectivity: simplest but often least preferred. Requires opening firewall rules on the source and/or allowing connections from DMS, plus TLS for protection.
  • Private connectivity: preferred for production. Source is reachable via VPC (and optionally VPN/Interconnect). Requires careful CIDR planning, routing, and firewall rules.

Monitoring/logging/governance considerations

  • Monitoring: define alerts for job failures, replication lag (if exposed), and Cloud SQL instance health.
  • Logging: enable and retain relevant logs; ensure sensitive values are not leaked in logs.
  • Governance: define naming conventions for jobs and connection profiles; use labels/tags where supported for cost allocation and inventory.

Simple architecture diagram (Mermaid)

flowchart LR
  A["Source DB<br/>(MySQL/PostgreSQL/SQL Server)"] -->|Initial load + optional continuous replication| B[Database Migration Service]
  B --> C["Target DB<br/>(Cloud SQL or AlloyDB)"]
  D[IAM] --> B
  E[VPC / VPN / Interconnect] --- A
  E --- B
  E --- C

Production-style architecture diagram (Mermaid)

flowchart TB
  subgraph OnPrem[On-prem / Other cloud]
    SDB[(Source Database)]
    FW[Firewall / Security Groups]
  end

  subgraph Hybrid[Hybrid Connectivity]
    VPN[Cloud VPN or Interconnect]
    RT[Routing]
  end

  subgraph GCP[Google Cloud Project]
    VPC["VPC Network<br/>Subnets + Firewall rules"]
    DMS["Database Migration Service<br/>(Migration job + Connection profiles)"]
    LOG[Cloud Logging]
    MON[Cloud Monitoring]
    AUD[Cloud Audit Logs]
    SM["Secret Manager<br/>(recommended)"]
    TDB[(Cloud SQL / AlloyDB Target)]
  end

  SDB --- FW
  FW --> VPN --> RT --> VPC
  VPC --- DMS
  DMS --> TDB

  DMS --> LOG
  DMS --> MON
  DMS --> AUD
  SM -.store secrets.-> DMS

8. Prerequisites

Google Cloud requirements

  • A Google Cloud project with billing enabled
  • APIs enabled (commonly):
  • Database Migration API
  • Cloud SQL Admin API (and/or AlloyDB Admin API if applicable)
  • Compute Engine API (for VPC, firewall, VMs)
  • Service Networking API (often needed for private IP targets like Cloud SQL)
  • Verify exact API list in official docs for your migration path.

IAM permissions (typical)

You need permissions to:

  • Create/manage DMS resources (migration jobs, connection profiles, connectivity)
  • Create/manage Cloud SQL or AlloyDB target resources
  • Configure VPC firewall rules/routes (if using private connectivity)

Common predefined roles you may see in guides include:

  • roles/datamigration.admin (Database Migration Service Admin)
  • roles/cloudsql.admin (for Cloud SQL targets) and/or AlloyDB admin roles as applicable
  • roles/compute.networkAdmin (VPC/firewall)
  • roles/iam.serviceAccountUser (if operating via service accounts)

Exact minimum roles vary; prefer least privilege and consider custom roles for production.
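Granting one of these roles is a single IAM binding. A sketch (project and member are placeholders; prefer groups, and narrower custom roles in production):

```shell
# Grant a migration operator the DMS admin role on the project.
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member="user:migration.operator@example.com" \
  --role="roles/datamigration.admin"
```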

Tools

  • Google Cloud Console (web UI)
  • gcloud CLI (recommended for repeatability): https://cloud.google.com/sdk/docs/install
  • A SQL client (mysql/psql/sqlcmd) for validation

Region availability

Database Migration Service is region-based. Choose a region that:

  • Supports your target (Cloud SQL/AlloyDB) features
  • Minimizes latency to the source (especially for continuous replication)
  • Meets data residency requirements

Verify current region support in official docs.

Quotas/limits

Quotas can apply to:

  • Number of migration jobs / connection profiles
  • API request quotas
  • Cloud SQL instance quotas
  • Network resources (IP ranges, peering limits)

Check quotas in the Cloud console and in product docs before large-scale migrations.

Prerequisite services and source readiness

Your source must meet engine-specific prerequisites. Examples (verify per engine/version):

  • MySQL: binary logging configuration, appropriate privileges, stable network connectivity
  • PostgreSQL: WAL settings, replication permissions, extensions compatibility (if applicable)
  • SQL Server: required permissions and features

9. Pricing / Cost

Pricing model (what you pay for)

Database Migration Service pricing can change over time. Do not assume it is always free or always billed—confirm on the official pricing page for your date/region.

  • Official pricing page (verify current model): https://cloud.google.com/database-migration/pricing
  • Pricing calculator: https://cloud.google.com/products/calculator

In many real deployments, the largest costs are often not the DMS control plane itself but the target database and networking/data transfer involved.

Common pricing dimensions and cost drivers

Even when the migration service has minimal direct charges, migrations typically incur costs in these areas:

  1. Target database costs
     – Cloud SQL: instance size (vCPU/RAM), storage type/size, HA configuration, backups, read replicas
     – AlloyDB: compute and storage costs
     – These costs run during rehearsals and the full migration window.

  2. Network and data transfer
     – Data egress from the source environment (especially if source is outside Google Cloud)
     – Cross-region network charges if source and target are in different regions
     – VPN/Interconnect costs (tunnel charges, bandwidth/attachments)

  3. Compute used for source hosting or staging
     – If you stand up temporary VMs for source simulation or intermediate steps
     – Bastion hosts / proxies for private connectivity patterns

  4. Storage and backups
     – Cloud SQL automated backups and PITR (if enabled)
     – Logs and monitoring retention (Cloud Logging costs can rise with verbose logs)

Hidden/indirect costs to plan for

  • Long migration windows: continuous replication may run for days while you validate—meaning you pay for target instances longer than expected.
  • Overprovisioned targets: teams often oversize early “just in case” and forget to resize later.
  • Rehearsal environments: multiple non-prod targets can multiply costs.
  • Data transfer surprises: egress from another cloud provider can be significant.

Cost optimization tips

  • Use the smallest target size that still meets migration performance needs for rehearsals, then resize for production.
  • Keep source and target in the same region where feasible.
  • Prefer private connectivity that avoids unnecessary hops and reduces exposure (cost impact depends on your topology).
  • Limit log verbosity and set Cloud Logging retention policies appropriately.
  • Clean up old migration jobs, connection profiles, and temporary infrastructure after cutover.
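Post-cutover cleanup can be scripted so nothing keeps billing. A sketch using the resource names from this article's lab (all names are placeholders; delete the job before its connection profiles):

```shell
gcloud database-migration migration-jobs delete my-migration-job \
  --region=us-central1 --quiet
gcloud database-migration connection-profiles delete src-mysql-profile \
  --region=us-central1 --quiet
# Remove temporary lab infrastructure such as the source VM.
gcloud compute instances delete mysql-source-vm \
  --zone=us-central1-a --quiet
```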

Example low-cost starter estimate (conceptual)

A low-cost lab typically includes:

  • 1 small Compute Engine VM (source)
  • 1 small Cloud SQL instance (target)
  • Minimal storage
  • A short migration window (hours, not days)

Use the pricing calculator to estimate based on:

  • VM type + hours
  • Cloud SQL instance size + storage + hours
  • Network egress (ideally $0 if all in one region inside Google Cloud)

Example production cost considerations (conceptual)

Production migration planning should include:

  • Cloud SQL HA (regional) if required
  • Sufficient IOPS/storage for sustained replication catch-up
  • VPN/Interconnect costs if hybrid
  • Monitoring and on-call time (people cost)
  • A rollback environment (potentially parallel run)

10. Step-by-Step Hands-On Tutorial

This lab walks through a small, realistic migration: MySQL on a Compute Engine VM (source) to Cloud SQL for MySQL (target) using Database Migration Service.

Notes before you begin:

  • Exact screens and required flags can change. Use this lab as a practical workflow and verify each engine prerequisite in the official documentation for your chosen MySQL version.
  • The tutorial uses private IP connectivity inside a VPC to avoid exposing MySQL publicly. If you can’t use private connectivity in your environment, you can adapt to public IP connectivity (but do so only with strong firewall restrictions and TLS).

Objective

Migrate a MySQL database into Cloud SQL for MySQL using Database Migration Service, validate replicated data, and clean up resources safely.

Lab Overview

You will:

  1. Create a VPC firewall rule and a MySQL source VM
  2. Configure MySQL prerequisites for replication/migration
  3. Create a Cloud SQL target instance
  4. Create Database Migration Service resources:
     – Private connectivity (if required by your chosen connectivity model)
     – Source and destination connection profiles
     – A migration job (continuous preferred for low downtime, if available)
  5. Run the migration and validate data replication
  6. Clean up all resources


Step 1: Create a project and set gcloud defaults

1) Set variables (edit to your needs):

export PROJECT_ID="YOUR_PROJECT_ID"
export REGION="us-central1"
export ZONE="us-central1-a"
export NETWORK="default"

2) Set your project and region:

gcloud config set project "$PROJECT_ID"
gcloud config set compute/region "$REGION"
gcloud config set compute/zone "$ZONE"

Expected outcome: gcloud config list shows your project/region/zone.


Step 2: Enable required APIs

Enable common APIs used in this lab:

gcloud services enable \
  datamigration.googleapis.com \
  sqladmin.googleapis.com \
  compute.googleapis.com \
  servicenetworking.googleapis.com

Expected outcome: APIs enable successfully (may take 1–3 minutes).

Verification:

gcloud services list --enabled --filter="name:datamigration OR name:sqladmin"

Step 3: Create the MySQL source VM (Compute Engine)

Create a small Linux VM (choose a machine type appropriate for your quotas). This example uses a Debian image:

gcloud compute instances create mysql-source-vm \
  --zone "$ZONE" \
  --machine-type "e2-medium" \
  --image-family "debian-12" \
  --image-project "debian-cloud" \
  --network "$NETWORK" \
  --tags "mysql-source"

Expected outcome: VM is created.

SSH into it:

gcloud compute ssh mysql-source-vm --zone "$ZONE"

Step 4: Install and configure MySQL on the source VM

On the VM, install a MySQL server. Note that on Debian, the default-mysql-server package installs MariaDB rather than Oracle MySQL; for a lab this is usually fine, but if your migration path requires genuine MySQL (verify DMS source support), install MySQL Community Server from the MySQL APT repository instead.

sudo apt-get update
sudo apt-get install -y default-mysql-server

Check MySQL is running:

sudo systemctl status mysql --no-pager

Expected outcome: MySQL service is active/running.

Configure MySQL for migration/replication prerequisites

Database Migration Service continuous migrations typically require binary logging and specific settings. The exact required settings depend on your MySQL version and DMS method. Verify the current requirements in the official docs for “MySQL source requirements”.

A common baseline includes:

  • server-id
  • log_bin
  • binlog_format=ROW

Edit the MySQL config (location may vary). On Debian, commonly:

sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf

Add/update under [mysqld] (example only—verify required values):

[mysqld]
server-id=1
log_bin=mysql-bin
binlog_format=ROW
binlog_row_image=FULL

Restart MySQL:

sudo systemctl restart mysql

Verify binary log is enabled:

sudo mysql -e "SHOW VARIABLES LIKE 'log_bin';"
sudo mysql -e "SHOW VARIABLES LIKE 'binlog_format';"

Expected outcome: log_bin = ON and binlog_format = ROW.

Create a sample database and table

Create a database and some rows to migrate:

sudo mysql <<'SQL'
CREATE DATABASE dms_lab;
USE dms_lab;

CREATE TABLE customers (
  id INT PRIMARY KEY AUTO_INCREMENT,
  email VARCHAR(255) NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

INSERT INTO customers (email) VALUES
 ('alice@example.com'),
 ('bob@example.com');
SQL

Verify:

sudo mysql -e "SELECT * FROM dms_lab.customers;"

Expected outcome: Two rows appear.

Create a dedicated migration user

Create a user for DMS. Privileges required vary by migration method. Follow the official docs for the minimal grants. For a lab, you can start with broader grants and then tighten later.

Example (adjust host and password):

sudo mysql <<'SQL'
CREATE USER 'dms_user'@'%' IDENTIFIED BY 'CHANGEME_STRONG_PASSWORD';
GRANT SELECT, RELOAD, SHOW VIEW, EVENT, TRIGGER, LOCK TABLES, REPLICATION SLAVE, REPLICATION CLIENT
  ON *.* TO 'dms_user'@'%';
FLUSH PRIVILEGES;
SQL

Expected outcome: user created.
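Before leaving the VM, you can confirm the grants took effect:

```shell
# Show the privileges MySQL recorded for the migration user.
sudo mysql -e "SHOW GRANTS FOR 'dms_user'@'%';"
```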


Step 5: Allow private connectivity from DMS to the source VM (firewall)

If you use a private connectivity model, you typically allocate an IP range for DMS connectivity and then allow that range to access the source database port (3306). The exact IP range depends on how you set up DMS private connectivity.

For now, create a firewall rule that you will later scope to the DMS connectivity range. In a real environment, keep this rule as narrow as possible.

Example firewall rule (you will replace the source range with the actual DMS range you allocate):

# Example only. Replace SOURCE_RANGE with the DMS private connectivity CIDR you allocate (e.g., 10.10.0.0/24).
export SOURCE_RANGE="10.10.0.0/24"

gcloud compute firewall-rules create allow-mysql-from-dms \
  --network "$NETWORK" \
  --allow tcp:3306 \
  --source-ranges "$SOURCE_RANGE" \
  --target-tags "mysql-source"

Expected outcome: Firewall rule created.
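
Before relying on the rule, it helps to inspect exactly what it allows. A sketch (output fields may vary slightly by gcloud version):

```shell
# Show only the fields that matter for this lab:
# the allowed CIDR, the allowed protocol/port, and the target tags.
gcloud compute firewall-rules describe allow-mysql-from-dms \
  --format="yaml(sourceRanges, allowed, targetTags)"
```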

If you are not using DMS private connectivity and instead use public connectivity, you must control exposure tightly (source authorized networks, restricted firewall rules) and enable TLS.


Step 6: Create the Cloud SQL for MySQL target instance

Create a Cloud SQL instance. Choose the smallest suitable configuration for a lab, but ensure it supports your required features.

You can create it via Console (recommended for beginners) or CLI.

Console approach:

  1. Go to Cloud SQL: https://console.cloud.google.com/sql
  2. Create instance → MySQL
  3. Choose region, set the root password, and configure networking (private IP recommended for production)
  4. Create the instance

CLI approach (example; flags vary—verify current gcloud SQL syntax):

export SQL_INSTANCE="mysql-target-cloudsql"
export SQL_ROOT_PASSWORD="CHANGEME_STRONG_PASSWORD"

gcloud sql instances create "$SQL_INSTANCE" \
  --database-version=MYSQL_8_0 \
  --region="$REGION" \
  --root-password="$SQL_ROOT_PASSWORD"

Expected outcome: Cloud SQL instance is created and running.

Verification:

gcloud sql instances describe "$SQL_INSTANCE" --format="value(state)"
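
Instance creation can take several minutes, so a small polling loop is handy in scripts. A sketch, assuming the SQL_INSTANCE variable exported above; the helper function names are illustrative:

```shell
# Pure helper so the wait condition is easy to test in isolation.
is_runnable() { [ "$1" = "RUNNABLE" ]; }

# Poll until the Cloud SQL instance reaches the RUNNABLE state.
wait_for_instance() {
  local state
  until state="$(gcloud sql instances describe "$SQL_INSTANCE" --format='value(state)')" \
        && is_runnable "$state"; do
    echo "Current state: ${state:-unknown}; waiting..."
    sleep 15
  done
  echo "Instance is RUNNABLE."
}

# wait_for_instance   # uncomment to run against your project
```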

Step 7: Create Database Migration Service connection profiles

Database Migration Service setup is often easiest in the Google Cloud Console, because it guides you through connectivity options and validations.

Go to Database Migration Service: https://console.cloud.google.com/database-migration

7A) (If needed) Create private connectivity for DMS

In the DMS console, look for Connectivity / Private connectivity setup (terminology can vary). You typically:

  • Choose a region
  • Select a VPC network
  • Provide a dedicated, non-overlapping IP range (CIDR) for peering

Expected outcome: Private connectivity resource is created and in a ready state.

If you created a firewall rule in Step 5, ensure the allowed source CIDR matches the DMS connectivity CIDR you actually allocated.

7B) Create a source connection profile (MySQL on VM)

You will need:

  • Host: the private IP of mysql-source-vm (find it in Compute Engine → VM details)
  • Port: 3306
  • Username: dms_user
  • Password: the password you set
  • TLS settings: configure if required by your security policy

Expected outcome: DMS validates connectivity and saves the profile.
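
If you prefer scripting this step, the same profile can be created from the CLI. A sketch only: the profile name (src-mysql-vm) and SOURCE_VM_IP are placeholders, and flags evolve, so verify with `gcloud database-migration connection-profiles create mysql --help`:

```shell
# Placeholder: replace with the actual private IP of mysql-source-vm.
export SOURCE_VM_IP="10.0.0.5"

# Create a MySQL source connection profile for DMS.
gcloud database-migration connection-profiles create mysql src-mysql-vm \
  --region="$REGION" \
  --host="$SOURCE_VM_IP" \
  --port=3306 \
  --username=dms_user \
  --password="CHANGEME_STRONG_PASSWORD"
```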

7C) Create a destination connection profile (Cloud SQL)

Select the Cloud SQL instance as the destination endpoint.

Expected outcome: Destination profile created.


Step 8: Create and run a migration job

In Database Migration Service:

  1. Create Migration job
  2. Select:
    – Source connection profile
    – Destination connection profile
  3. Choose migration type:
    – Continuous (preferred for low downtime) if available for your MySQL source/target
    – Otherwise One-time
  4. Configure additional options (dump settings, objects to include/exclude, etc.) as prompted.
  5. Start the job.

Expected outcome: The job enters a running state and begins initial load, then (for continuous) starts replicating ongoing changes.
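
The same job can be scripted. A hedged CLI sketch: the job and profile names (mj-mysql-lab, src-mysql-vm, dst-cloudsql) are placeholders, and flags change over time, so verify with `gcloud database-migration migration-jobs create --help`:

```shell
# Create a continuous migration job from the source profile to the
# Cloud SQL destination profile (names are placeholders).
gcloud database-migration migration-jobs create mj-mysql-lab \
  --region="$REGION" \
  --type=CONTINUOUS \
  --source=src-mysql-vm \
  --destination=dst-cloudsql

# Start the job once it validates cleanly.
gcloud database-migration migration-jobs start mj-mysql-lab --region="$REGION"
```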


Step 9: Validate data in Cloud SQL

Connect to Cloud SQL and check that the dms_lab database and customers table exist and contain rows.

One convenient approach is:

gcloud sql connect "$SQL_INSTANCE" --user=root

Then in the SQL prompt:

SHOW DATABASES;
SELECT * FROM dms_lab.customers;

Expected outcome: You see the two seeded rows.

Test ongoing replication (for continuous migrations)

On the source VM, insert another row:

sudo mysql -e "INSERT INTO dms_lab.customers (email) VALUES ('carol@example.com');"

Then query Cloud SQL again:

SELECT * FROM dms_lab.customers;

Expected outcome: The new row appears on the target after a short delay (replication lag depends on workload and network).
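
Spot checks like this can be scripted for repeated use during the migration. A minimal sketch, assuming the mysql client can reach both endpoints; hosts, users, and the table name are placeholders:

```shell
# count_rows HOST USER TABLE -> prints the row count for TABLE on HOST.
# Prompts for the password; suitable for interactive lab use.
count_rows() {
  local host="$1" user="$2" table="$3"
  mysql -h "$host" -u "$user" -p -N -e "SELECT COUNT(*) FROM ${table};"
}

# compare_counts SRC_COUNT DST_COUNT -> prints MATCH or MISMATCH.
compare_counts() {
  if [ "$1" -eq "$2" ]; then echo "MATCH"; else echo "MISMATCH"; fi
}

# Example (placeholders; uncomment and substitute real IPs to run):
# src=$(count_rows 10.0.0.5 dms_user dms_lab.customers)
# dst=$(count_rows CLOUDSQL_PRIVATE_IP root dms_lab.customers)
# compare_counts "$src" "$dst"
```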


Step 10: Cutover (conceptual)

For a continuous migration, a typical cutover sequence is:

  1. Put the application into maintenance/read-only mode (stop writes to source)
  2. Wait until replication catches up (lag approaches zero)
  3. Finalize/promote the migration (DMS workflow)
  4. Point applications to the Cloud SQL instance (update connection strings, secrets, DNS)
  5. Monitor closely and keep a rollback plan for a limited window

Expected outcome: Application writes to Cloud SQL, and the source is no longer the system of record.

Cutover steps can vary by engine and DMS workflow updates—follow the current official cutover guide for your migration type.
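
Where the DMS workflow exposes promotion via the CLI, step 3 of the sequence above can look like this. A sketch only: mj-mysql-lab is a placeholder job name, and you should verify current behavior with `gcloud database-migration migration-jobs promote --help`:

```shell
# Promote the target: stops replication and makes the Cloud SQL
# instance a standalone, writable primary (irreversible for this job).
gcloud database-migration migration-jobs promote mj-mysql-lab --region="$REGION"
```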


Validation

Use this checklist:

  • DMS migration job status shows running/healthy (or completed for one-time)
  • Cloud SQL contains:
  • dms_lab database
  • customers table
  • expected row counts
  • Insert on source appears on target (continuous migration)
  • Cloud SQL instance CPU/memory are within acceptable bounds during migration
  • Cloud Logging shows no repeated connectivity/authentication errors
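
For the row-count and checksum items, MySQL's CHECKSUM TABLE gives a quick first pass: run the same statements on source and target and compare. Results depend on engine and row format, so treat a mismatch as a prompt for deeper checks (per-column hashing or a dedicated comparison tool), not proof of corruption:

```shell
# Run on both the source VM and the Cloud SQL target, then compare output.
sudo mysql -e "SELECT COUNT(*) FROM dms_lab.customers;"
sudo mysql -e "CHECKSUM TABLE dms_lab.customers;"
```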

Troubleshooting

Common errors and practical fixes:

1) “Cannot connect to source”
  • Check that the source VM firewall rule allows port 3306 from the correct DMS connectivity range.
  • Verify routing (if hybrid): VPC routes, VPN status, non-overlapping CIDRs.
  • Confirm MySQL binds to the right interface (e.g., not only 127.0.0.1).

2) Authentication failures
  • Confirm the username/password in the source connection profile.
  • Verify the MySQL user host pattern ('dms_user'@'%' vs restricted).
  • If TLS is required, ensure certificates are correct and CA trust is configured as required.

3) Replication prerequisites not met
  • Verify MySQL binary logging is enabled and the binlog format matches requirements.
  • Ensure required privileges are granted (replication-related privileges are often required for continuous migration).
  • Check MySQL version compatibility with the target.

4) Migration is slow
  • Check source/target sizing (CPU, disk, IOPS).
  • Verify network throughput/latency.
  • Reduce load on the source during initial load if possible.
  • Consider running the migration during off-peak hours.

5) Schema/object mismatches
  • DMS focuses on migrating data and supported objects for the chosen engine/method. Stored procedures, triggers, definers, or special plugins may require manual steps. Verify in the docs and test in rehearsals.


Cleanup

To avoid ongoing costs, delete lab resources when finished.

1) Delete the DMS migration job and connection profiles (Console is easiest):
  • Database Migration Service → Migration jobs → Delete
  • Connection profiles → Delete
  • Private connectivity resource → Delete (if created)

2) Delete Cloud SQL instance:

gcloud sql instances delete "$SQL_INSTANCE"

3) Delete Compute Engine VM:

gcloud compute instances delete mysql-source-vm --zone "$ZONE"

4) Delete firewall rule:

gcloud compute firewall-rules delete allow-mysql-from-dms

Expected outcome: No billable lab resources remain.

11. Best Practices

Architecture best practices

  • Rehearse migrations in dev/staging with production-like data volumes and network paths.
  • Choose private connectivity for production migrations whenever possible.
  • Keep source and target close (regionally) to reduce latency and replication lag.
  • Plan the cutover as an application change, not just a database event (config, secrets, DNS, pooling).

IAM/security best practices

  • Use least privilege roles for DMS operators.
  • Separate duties:
  • Network admins manage private connectivity and firewall rules
  • DBAs manage source grants
  • App teams validate functionality
  • Restrict who can view connection profile details, because they may contain sensitive info.

Cost best practices

  • Timebox rehearsal environments; delete idle targets.
  • Right-size Cloud SQL/AlloyDB after migration; don’t keep “migration sizing” forever.
  • Monitor Cloud Logging ingestion volume; reduce noisy logs.

Performance best practices

  • Ensure source DB has enough headroom for initial load (CPU, disk, IOPS).
  • Avoid running heavy maintenance jobs during initial load (index rebuilds, batch jobs).
  • Validate target parameter settings (buffer pool, max connections) appropriate to your workload (within Cloud SQL constraints).

Reliability best practices

  • Build a rollback plan:
  • Clear criteria for success/failure
  • Data divergence strategy
  • Duration you can keep the old system online
  • Use Cloud SQL HA where required and test failover behavior separately from migration.

Operations best practices

  • Define an on-call runbook for migration day:
  • Where to look for logs
  • How to pause/stop/retry (if supported)
  • Who to contact for network/DB/app
  • Use labels and naming conventions:
  • env=staging, app=orders, migration=2026q2

Governance/tagging/naming best practices

  • Standardize naming:
  • cp-src-<app>-<env>
  • cp-dst-<app>-<env>
  • mj-<app>-<env>-<date>
  • Use consistent labels for cost allocation and inventory tracking.

12. Security Considerations

Identity and access model

  • Control plane access is governed by IAM.
  • Use separate Google Cloud groups for:
  • Migration operators
  • Network operators
  • Read-only auditors

Encryption

  • In transit: Prefer TLS between DMS and source/target where supported; follow official engine-specific guidance.
  • At rest: Cloud SQL and AlloyDB encrypt data at rest by default; consider CMEK requirements for your org (verify feature availability per product/edition).

Network exposure

  • Prefer private IP for Cloud SQL and private connectivity to sources.
  • If public IP must be used:
  • Restrict source firewall to known ranges
  • Use TLS
  • Avoid broad 0.0.0.0/0 access at all costs

Secrets handling

  • Store DB passwords in Secret Manager and enforce rotation policies.
  • Limit who can retrieve secrets; avoid embedding credentials in scripts and tickets.
  • If connection profiles require direct password entry, use internal secure processes (screen sharing restrictions, redaction, access logs).
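
As a concrete sketch of the first point, the migration user's password can live in Secret Manager rather than in scripts or tickets. The secret name (dms-user-password) is an assumption:

```shell
# Store the password as a secret (reads the value from stdin).
printf 'CHANGEME_STRONG_PASSWORD' | gcloud secrets create dms-user-password \
  --data-file=- \
  --replication-policy=automatic

# Retrieve it only when needed; each access is recorded in audit logs.
gcloud secrets versions access latest --secret=dms-user-password
```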

Audit/logging

  • Use Cloud Audit Logs to track:
  • Who created/modified migration jobs
  • Who changed networking or IAM
  • Ensure log retention matches compliance requirements.

Compliance considerations

  • Data residency: choose regions appropriately.
  • Access controls: enforce MFA and conditional access where applicable.
  • Change management: migrations should be change-ticketed with approvals.

Common security mistakes

  • Overly broad firewall rules
  • Using root/admin DB credentials for migrations
  • Publicly accessible source DB during migration
  • No audit trail for migration operators
  • Not testing TLS/certificates until cutover day

Secure deployment recommendations

  • Use private connectivity and private IP targets.
  • Create a dedicated migration user with minimal required grants.
  • Use dedicated projects or folders for migration operations if your org requires environment separation.

13. Limitations and Gotchas

These are common constraints and pitfalls; confirm engine-specific and version-specific details in official docs.

  • Engine/version compatibility: Not every source version is supported; verify supported versions and editions.
  • Homogeneous focus: DMS is primarily for same-engine migrations. Heterogeneous conversion may require additional tools/workflows.
  • Extensions and custom features: PostgreSQL extensions, MySQL plugins, and SQL Server features may not migrate cleanly.
  • Definers/security context: Views/triggers/procedures with definers can fail or behave differently on the target.
  • Networking is the #1 failure domain: routing, firewall, DNS, TLS, and IP overlap issues are common.
  • Replication lag surprises: high write volume can cause lag; plan for catch-up time.
  • Large objects/large tables: initial load can take longer than expected; test with realistic volumes.
  • Cutover is an application event: forgetting connection pools, caches, or DNS TTLs can extend downtime.

14. Comparison with Alternatives

Database Migration Service is one option in Google Cloud’s Databases and data movement toolbox. Here’s how it compares.

Database Migration Service (Google Cloud)
  • Best for: homogeneous migrations into Cloud SQL/AlloyDB with guided workflows
  • Strengths: managed orchestration; integrates with IAM/logging; supports continuous migration in supported cases
  • Weaknesses: not a full transformation platform; support depends on engine/version; network setup can be non-trivial
  • When to choose: you want a Google-managed migration workflow into Cloud SQL/AlloyDB

Cloud SQL native tools (backup/restore, import/export, replicas)
  • Best for: simple moves, small DBs, or replication patterns within Cloud SQL
  • Strengths: familiar DB-native methods; sometimes the simplest option
  • Weaknesses: more manual steps; less “migration workflow” visibility
  • When to choose: your use case is straightforward and downtime is acceptable

Datastream (Google Cloud)
  • Best for: streaming change data capture (CDC) into analytics/pipelines
  • Strengths: CDC to targets like BigQuery/GCS and downstream processing
  • Weaknesses: not primarily a “move my OLTP DB into Cloud SQL” tool
  • When to choose: you need streaming CDC for analytics or event-driven pipelines

Self-managed tools (mysqldump/pg_dump, logical replication, rsync, custom scripts)
  • Best for: full control or niche requirements
  • Strengths: maximum flexibility
  • Weaknesses: high operational burden; error-prone; harder to audit
  • When to choose: you need custom behavior not supported by managed services

AWS Database Migration Service
  • Best for: migrations into AWS
  • Strengths: mature service for AWS ecosystems
  • Weaknesses: different cloud; not integrated with Google Cloud targets
  • When to choose: you are migrating into AWS, not Google Cloud

Azure Database Migration Service
  • Best for: migrations into Azure
  • Strengths: Azure ecosystem integration
  • Weaknesses: different cloud; not integrated with Google Cloud targets
  • When to choose: you are migrating into Azure

Third-party replication/migration tools (Qlik Replicate, Striim, etc.)
  • Best for: complex migrations, transformations, enterprise replication
  • Strengths: broad source/target matrix; advanced features
  • Weaknesses: licensing cost; operational overhead
  • When to choose: you need cross-engine support, advanced transforms, or multi-target replication

15. Real-World Example

Enterprise example: regulated hybrid migration to Cloud SQL

  • Problem: A financial services firm must migrate on‑prem MySQL databases into Google Cloud while meeting strict security and audit requirements.
  • Proposed architecture:
  • On‑prem MySQL sources connected via Cloud Interconnect
  • DMS configured with private connectivity
  • Targets are Cloud SQL for MySQL with private IP, HA (where required), automated backups, and centralized monitoring
  • IAM groups for migration operators with audit logging enabled
  • Why Database Migration Service was chosen:
  • Standardized workflow across many databases
  • Better auditability than ad-hoc scripts
  • Continuous migration to reduce downtime windows
  • Expected outcomes:
  • Reduced operational burden (patching/backups handled by Cloud SQL)
  • Repeatable migration runbooks
  • Improved security posture through private networking and IAM controls

Startup/small-team example: quick move from VM MySQL to Cloud SQL

  • Problem: A startup runs MySQL on a single VM with manual backups and wants managed reliability without extended downtime.
  • Proposed architecture:
  • Source: MySQL on VM
  • Target: Cloud SQL for MySQL (single-zone or HA based on budget)
  • DMS continuous migration (if supported) to minimize downtime
  • Why Database Migration Service was chosen:
  • Minimal team time to set up compared to building replication scripts
  • Clear status reporting during migration
  • Expected outcomes:
  • Faster recovery posture (managed backups)
  • Less ops toil
  • Easier scaling later with Cloud SQL features

16. FAQ

1) Is Database Migration Service the same as AWS DMS?
No. Google Cloud Database Migration Service is a Google Cloud product for migrating into Google Cloud managed databases. AWS DMS is an AWS product.

2) What databases can I migrate with Database Migration Service?
Supported sources/targets depend on the current product support matrix (engine and version). Common supported engines include MySQL and PostgreSQL into Cloud SQL, and some PostgreSQL paths into AlloyDB. Verify current support: https://cloud.google.com/database-migration/docs

3) Does Database Migration Service do schema conversion (heterogeneous migration)?
Database Migration Service is primarily focused on homogeneous migrations. For cross-engine conversions, you typically need additional tools and planning. Verify current Google Cloud guidance for your source/target pair.

4) Can I do near-zero downtime migrations?
Continuous migration (where supported) can reduce downtime significantly, but “near-zero” depends on application cutover steps, replication lag, and workload characteristics.

5) Do I need public IP access on my source database?
Not necessarily. Many migrations can be done using private connectivity over VPC and hybrid networking. Public IP should be avoided for production unless strongly controlled.

6) How do I secure credentials used in connection profiles?
Use least-privilege database users, protect access to DMS resources with IAM, and use Secret Manager for password governance. Follow your organization’s secret-handling policies.

7) Does DMS migrate users and permissions?
This can vary by engine and method. Even when users are migrated, you should plan to validate grants/roles and application connectivity. Verify behavior for your engine.

8) How do I validate that the target matches the source?
At minimum: row counts, checksums for key tables, application tests, and performance tests. Also validate indexes, constraints, triggers, and routines where applicable.

9) Can I pause and resume a migration job?
Job lifecycle actions depend on migration mode and current product capabilities. Check the DMS documentation for your migration type.

10) What if the migration fails halfway through?
You typically troubleshoot connectivity/auth/config issues, then retry or recreate the migration depending on failure mode. Plan rehearsals to reduce surprises in production.

11) Will migration impact source performance?
Yes, initial load and ongoing replication read the source database and can increase I/O and CPU. Provision headroom and schedule heavy steps off-peak.

12) Can I migrate across regions?
Often yes, but cross-region latency and data transfer costs can be significant. Prefer same-region migrations when possible.

13) How does DMS interact with Cloud SQL private IP?
Cloud SQL private IP requires VPC configuration (often Private Service Access). DMS connectivity must be designed to reach both source and target networks.

14) Is Database Migration Service suitable for very large databases (TB scale)?
It can be, but throughput, migration time, and operational risk increase with size. Test early with representative data and confirm performance constraints and recommended patterns.

15) What’s the best first step before using DMS in production?
Run an end-to-end rehearsal with production-like connectivity, validate prerequisites, measure timing, and write a cutover/rollback runbook.

16) Do I need a DBA to use DMS?
You can start without one for small labs, but production migrations benefit greatly from DBA involvement—especially for source configuration, replication prerequisites, performance tuning, and validation.

17. Top Online Resources to Learn Database Migration Service

  • Official documentation: Database Migration Service docs. Canonical setup guides, concepts, connectivity, troubleshooting: https://cloud.google.com/database-migration/docs
  • Official pricing: Database Migration Service pricing. Current pricing model and billing notes (verify): https://cloud.google.com/database-migration/pricing
  • Pricing tool: Google Cloud Pricing Calculator. Estimate total costs (target DB + network + logging): https://cloud.google.com/products/calculator
  • Official product page: Database Migration Service overview. High-level capabilities and supported paths: https://cloud.google.com/database-migration
  • Cloud SQL docs: Cloud SQL documentation. Target sizing, HA, networking, backups: https://cloud.google.com/sql/docs
  • AlloyDB docs: AlloyDB documentation. If migrating to AlloyDB for PostgreSQL: https://cloud.google.com/alloydb/docs
  • Observability docs: Cloud Logging. Understand log ingestion/retention and costs: https://cloud.google.com/logging/docs
  • Observability docs: Cloud Monitoring. Metrics and alerting for migration and target health: https://cloud.google.com/monitoring/docs
  • Video learning: Google Cloud Tech (YouTube). Product explainers and demos (search for Database Migration Service): https://www.youtube.com/@GoogleCloudTech

18. Training and Certification Providers

Presented neutrally as training resources to explore (verify current offerings on each site).

1) DevOpsSchool.com
Suitable audience: DevOps engineers, SREs, cloud engineers
Likely learning focus: Google Cloud operations, migration workflows, DevOps tooling
Mode: check website
Website: https://www.devopsschool.com/

2) ScmGalaxy.com
Suitable audience: beginners to intermediate engineers
Likely learning focus: software configuration management, DevOps fundamentals, tooling
Mode: check website
Website: https://www.scmgalaxy.com/

3) CloudOpsNow.in
Suitable audience: cloud operations and platform teams
Likely learning focus: cloud ops practices, reliability, automation
Mode: check website
Website: https://www.cloudopsnow.in/

4) SreSchool.com
Suitable audience: SREs, operations engineers, reliability-focused teams
Likely learning focus: SRE principles, monitoring, incident response
Mode: check website
Website: https://www.sreschool.com/

5) AiOpsSchool.com
Suitable audience: operations teams exploring AIOps approaches
Likely learning focus: observability, automation, operational analytics
Mode: check website
Website: https://www.aiopsschool.com/

19. Top Trainers

Listed as trainer platforms/sites to explore (verify current offerings and credentials directly).

1) RajeshKumar.xyz
Likely specialization: DevOps/cloud training content (verify)
Suitable audience: engineers looking for practical guidance
Website: https://rajeshkumar.xyz/

2) devopstrainer.in
Likely specialization: DevOps training programs (verify)
Suitable audience: DevOps engineers and students
Website: https://www.devopstrainer.in/

3) devopsfreelancer.com
Likely specialization: freelance DevOps services/training resources (verify)
Suitable audience: teams seeking hands-on help or mentoring
Website: https://www.devopsfreelancer.com/

4) devopssupport.in
Likely specialization: DevOps support and enablement (verify)
Suitable audience: ops teams needing implementation support
Website: https://www.devopssupport.in/

20. Top Consulting Companies

Presented neutrally as consulting providers to evaluate (verify services, references, and contracts directly).

1) cotocus.com
Likely service area: cloud/DevOps consulting (verify)
Where they may help: migration planning, architecture reviews, implementation assistance
Consulting use case examples: migration readiness assessment; VPC/hybrid connectivity design; operational runbooks
Website: https://cotocus.com/

2) DevOpsSchool.com
Likely service area: DevOps consulting and training (verify)
Where they may help: platform enablement, automation, cloud adoption support
Consulting use case examples: CI/CD enablement; cloud landing zone guidance; migration execution support
Website: https://www.devopsschool.com/

3) DEVOPSCONSULTING.IN
Likely service area: DevOps consulting services (verify)
Where they may help: deployment automation, operations, cloud migrations
Consulting use case examples: migration factory setup; monitoring/alerting integration; security baseline reviews
Website: https://www.devopsconsulting.in/

21. Career and Learning Roadmap

What to learn before this service

To use Database Migration Service effectively, build basics in:

  • Relational database fundamentals: backups, replication concepts, transactions, indexing
  • MySQL/PostgreSQL/SQL Server administration basics: users/privileges, logs, performance signals
  • Google Cloud fundamentals: projects, IAM, VPC networks, firewall rules
  • Cloud SQL basics: instance types, storage, HA, private IP networking
  • Hybrid networking basics: VPN/Interconnect concepts, routing, CIDR planning (for enterprise migrations)

What to learn after this service

  • Advanced Cloud SQL / AlloyDB operations: performance tuning, HA/DR design, backups/PITR, maintenance windows
  • Observability: Cloud Monitoring alerting and SLOs for database services
  • Security hardening: least privilege IAM, secret management, private service networking
  • Data modernization: Datastream, Dataflow, BigQuery analytics patterns (if your goal goes beyond lift-and-shift)

Job roles that use it

  • Cloud Solutions Architect
  • DevOps Engineer / Platform Engineer
  • Site Reliability Engineer (SRE)
  • Database Administrator (DBA)
  • Cloud Security Engineer (for reviews/approvals)
  • Technical Program Manager (migration programs)

Certification path (if available)

Google Cloud certifications don’t always map 1:1 to a single service, but relevant tracks typically include:

  • Associate Cloud Engineer (foundational operations)
  • Professional Cloud Architect (architecture and migration planning)
  • Professional Data Engineer (if migrations feed analytics)

Verify current Google Cloud certification offerings: https://cloud.google.com/learn/certification

Project ideas for practice

  • Migrate a MySQL ecommerce schema from VM → Cloud SQL with continuous replication and cutover runbook
  • Run three rehearsals and measure migration time under different VM/Cloud SQL sizes
  • Build an internal “migration checklist” covering networking, IAM, source prerequisites, and validation queries
  • Add alerting for migration job failures and Cloud SQL health during migration windows

22. Glossary

  • AlloyDB: Google Cloud’s managed PostgreSQL-compatible database service optimized for performance (verify capabilities and migration support for your scenario).
  • Cloud SQL: Google Cloud managed relational database service for engines like MySQL, PostgreSQL, and SQL Server.
  • Connection profile: Database Migration Service resource defining how to connect to a source or destination database.
  • Continuous migration: A migration mode where changes on the source continue replicating to the target after initial load, enabling lower downtime cutover (when supported).
  • Cutover: The moment you switch application traffic/writes from the source database to the target database.
  • CIDR: IP address range notation (e.g., 10.10.0.0/24) used in VPC subnetting and firewall rules.
  • Cloud Audit Logs: Google Cloud logs that record administrative actions and access patterns for supported services.
  • Cloud Monitoring: Metrics, dashboards, and alerting service for Google Cloud.
  • Cloud Logging: Centralized log storage, search, and retention controls for Google Cloud.
  • Firewall rule (VPC): Rule controlling allowed inbound/outbound traffic at the network level in Google Cloud VPC.
  • GTID (MySQL): Global Transaction ID, used to uniquely identify transactions for replication (requirements vary).
  • Migration job: The Database Migration Service resource representing a specific migration execution.
  • Private connectivity: Network configuration that allows DMS to reach a source database without exposing it publicly (exact implementation varies; verify in docs).
  • Private IP (Cloud SQL): Cloud SQL networking mode where the instance is only reachable via internal IP in a VPC.
  • Replication lag: Delay between source writes and when those changes appear on the target during continuous replication.
  • WAL (PostgreSQL): Write-Ahead Log used for durability and replication.

23. Summary

Google Cloud Database Migration Service is a managed service in the Databases category that helps you migrate supported relational databases into Cloud SQL and (in supported cases) AlloyDB. It matters because database migrations combine networking, security, data movement, and cutover coordination—and DMS provides a structured, observable workflow to reduce risk.

From a cost perspective, the biggest drivers are usually the target database runtime, storage, and network egress, not just the migration tool itself—so rehearsals, right-sizing, and cleanup are essential. From a security perspective, prioritize private connectivity, strict firewall rules, least-privilege IAM, and careful secrets handling.

Use Database Migration Service when you want a Google-managed migration workflow for supported sources and targets, especially when you need low downtime via continuous replication (where supported). Next step: read the official DMS documentation for your specific engine/version support matrix and run at least one rehearsal migration end-to-end before planning production cutover.