
Data Platform Engineer: Role Blueprint, Responsibilities, Skills, KPIs, and Career Path

1) Role Summary

The Data Platform Engineer designs, builds, and operates the shared data platform capabilities that enable reliable ingestion, storage, transformation, governance, and access to data across the company. The role focuses on creating scalable, secure, and cost-effective “paved roads” (standard patterns, infrastructure, tooling, and automation) so data producers and consumers can move faster with less risk.

This role exists in a software or IT organization because modern products and business functions depend on high-quality, well-governed data for analytics, experimentation, AI/ML, customer insights, and operational decision-making. Without a dedicated platform engineering function for data, teams typically accumulate fragile pipelines, inconsistent definitions, unmanaged costs, and security gaps.

Business value created includes faster delivery of data products, higher trust in metrics, improved platform reliability, reduced operational toil, and a demonstrably stronger compliance and security posture. This is an established role whose strategic importance grows as companies scale data usage and adopt AI-enabled workflows.

Typical interaction surfaces include Data Engineering, Analytics Engineering, BI/Reporting, ML Engineering, Product Engineering, SRE/Platform Engineering, Security/GRC, Finance (FinOps), and Product Management.

Seniority (conservative inference): this blueprint targets a mid-level individual contributor (often an “Engineer II” equivalent) who is expected to own meaningful platform components end-to-end, contribute to architecture within established direction, and lead small initiatives without formal people management.


2) Role Mission

Core mission:
Deliver a secure, reliable, and self-service data platform that standardizes how the organization ingests, stores, transforms, governs, and serves data—reducing time-to-data while increasing trust, safety, and cost efficiency.

Strategic importance to the company:

  • Enables consistent, trusted analytics and product decision-making (single-source-of-truth patterns).
  • Improves developer productivity by providing reusable frameworks and automation for pipelines and environments.
  • Reduces operational and compliance risk by embedding controls (access, lineage, retention, encryption, auditability) into the platform by default.
  • Makes data a scalable asset that supports product growth, experimentation, and AI/ML adoption.

Primary business outcomes expected:

  • Reduced cycle time from data source onboarding to usable datasets.
  • Improved data reliability (fewer pipeline failures, faster recovery, stronger SLAs/SLOs).
  • Reduced cost per query / cost per pipeline through optimization and governance.
  • Higher stakeholder satisfaction and adoption of standardized platform patterns.
  • Clearer visibility into lineage, access, data quality, and platform health.


3) Core Responsibilities

Strategic responsibilities

  1. Contribute to the data platform roadmap by identifying scalability, reliability, security, and usability gaps; propose prioritized improvements aligned to business outcomes.
  2. Define and implement “paved road” patterns for data ingestion, transformation orchestration, dataset publishing, and access provisioning.
  3. Standardize platform interfaces (templates, SDKs, pipeline frameworks, documentation) that enable consistent delivery across teams.
  4. Support data product strategy by enabling domain teams to publish governed datasets and metrics through repeatable platform capabilities.

Operational responsibilities

  1. Operate and support data platform services (workflow orchestration, compute clusters, warehouses/lakehouses, catalog, secrets, access control) with production-level hygiene.
  2. Participate in on-call/incident response for data platform components, including triage, mitigation, post-incident reviews, and prevention work.
  3. Drive operational excellence through runbooks, alerts, SLOs, capacity planning, and routine maintenance (upgrades, patching, dependency management).
  4. Manage platform cost and performance in partnership with FinOps—monitor usage, identify waste, implement guardrails, and tune workloads.

Technical responsibilities

  1. Build and maintain ingestion frameworks (batch and/or streaming), including connectors, schema management, error handling, and replay/backfill strategies (a minimal sketch follows this list).
  2. Implement infrastructure-as-code (IaC) for reproducible environments across dev/test/prod with secure defaults and consistent configuration.
  3. Develop and maintain CI/CD for data platform code and pipeline deployments, including automated testing, validation, and promotion workflows.
  4. Enable data quality capabilities (validation checks, anomaly detection, completeness/freshness monitoring) integrated into pipeline execution.
  5. Implement secure access patterns (least privilege, role-based access, data masking, tokenization where needed) and automate provisioning.
  6. Support metadata, lineage, and catalog integration so users can discover datasets, understand provenance, and trust definitions.
  7. Optimize platform performance by tuning compute, storage layouts, partitioning, clustering, caching, and query patterns where applicable.
  8. Design reliable change management for schemas, contracts, and platform components to minimize breaking changes and unplanned downtime.
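
To make the ingestion responsibilities above concrete (see item 1), here is a minimal, hedged sketch of the idempotency pattern a paved-road ingestion framework typically standardizes: each run, including a backfill, overwrites exactly one partition, so reruns are safe. `source_fetch` and `target_overwrite` are hypothetical callables, not a specific library's API.

```python
import logging
import time
from datetime import date

log = logging.getLogger("ingest")

def ingest_partition(source_fetch, target_overwrite, partition: date,
                     max_retries: int = 3, backoff_s: float = 30.0) -> int:
    """Idempotently (re)load a single date partition; safe to rerun for backfills."""
    for attempt in range(1, max_retries + 1):
        try:
            rows = source_fetch(partition)        # pull exactly one partition
            if not rows:
                log.warning("empty partition %s", partition)
            target_overwrite(partition, rows)     # overwrite the partition, never append
            return len(rows)
        except Exception:
            log.exception("attempt %d/%d failed for %s", attempt, max_retries, partition)
            if attempt == max_retries:
                raise
            time.sleep(backoff_s * attempt)       # linear backoff before retrying
```

Because the write overwrites a whole partition rather than appending, a failed run can simply be retried or backfilled later without producing duplicates.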

Cross-functional or stakeholder responsibilities

  1. Partner with data producers and consumers to onboard new sources, define data contracts, and ensure platform adoption through enablement.
  2. Work with Security/GRC to implement audit requirements, retention policies, encryption, and controls for regulated data handling.
  3. Align with SRE/Cloud Platform teams on networking, identity, observability, and shared infrastructure patterns.

Governance, compliance, or quality responsibilities

  1. Embed governance in platform defaults: enforce tagging/classification, retention, access approval workflows, audit trails, and separation of duties where required.
  2. Document and socialize standards: naming conventions, dataset lifecycle, environment promotion rules, and incident procedures.

Leadership responsibilities (applicable without formal management)

  1. Lead small initiatives (1–2 engineers or cross-functional squad participation) by clarifying scope, sequencing work, and driving delivery.
  2. Mentor and unblock others by reviewing designs/PRs, sharing platform patterns, and improving documentation and developer experience.

4) Day-to-Day Activities

Daily activities

  • Monitor platform health dashboards and alerts (pipeline failures, queue backlogs, cluster saturation, warehouse credit spikes).
  • Triage ingestion or orchestration issues; coordinate fixes with data engineering or source system owners.
  • Implement small-to-medium enhancements: new connectors, schema evolution handling, improved retries, better logging, optimized configs.
  • Review pull requests for platform repositories; ensure testing, security, and operational readiness are met.
  • Support user requests: access provisioning, dataset publication guidance, troubleshooting query performance.

Weekly activities

  • Participate in sprint planning/refinement; estimate platform work and negotiate priorities with the Data & Analytics backlog owners.
  • Hold office hours or an enablement session for platform users (data engineers, analysts, scientists).
  • Review platform costs and usage trends; identify one or two optimization opportunities.
  • Improve reliability: add/adjust alerts, update runbooks, tune SLOs, and close top recurring incidents.
  • Partner with Security or IT to address any open findings related to access controls, secrets handling, or audit coverage.

Monthly or quarterly activities

  • Perform capacity planning and scaling reviews (storage growth, compute concurrency, streaming throughput, orchestration load).
  • Upgrade critical platform components (runtime versions, connector libraries, orchestration engines) with safe rollout plans.
  • Run disaster recovery (DR) and restore tests for critical metadata and platform state stores (catalog, orchestration DB, secrets vault).
  • Conduct a platform maturity review: adoption metrics, failure patterns, time-to-onboard sources, quality coverage, and tech debt backlog.
  • Evaluate vendor/platform changes (cloud service updates, deprecations, pricing model shifts) and propose adjustments.

Recurring meetings or rituals

  • Daily stand-up (team-level).
  • Weekly reliability review (top incidents, SLO breaches, error budgets).
  • Biweekly sprint rituals (planning, review, retro).
  • Monthly data governance working group (catalog, access, classification, retention).
  • Architecture review board (as needed for major changes).
  • FinOps review (monthly/quarterly depending on spend).

Incident, escalation, or emergency work

  • Production incident response when pipelines fail, SLAs are missed, or a platform component degrades.
  • Coordinated mitigation with SRE/Cloud Platform if the issue is infrastructure-related.
  • Emergency access reviews and revocation in case of suspected credential compromise or policy violation.
  • Rapid rollback of a platform release that introduces widespread pipeline or query failures.
  • Post-incident review (PIR): root cause analysis, corrective actions, prevention items, and updated runbooks.

5) Key Deliverables

Platform architecture and standards

  • Data platform reference architecture (current state + target state).
  • Standardized patterns (“golden paths”) for:
    – Batch ingestion
    – Streaming ingestion (if applicable)
    – Orchestration and scheduling
    – Data quality checks
    – Dataset publishing and versioning
    – Access provisioning and auditing
  • Naming conventions and tagging/classification standards.

Production systems and automation

  • IaC modules and environment blueprints (dev/test/prod).
  • CI/CD pipelines for platform and data pipeline deployments.
  • Ingestion connectors and templates (e.g., database CDC, SaaS API ingestion, object store ingestion).
  • Operational automation:
    – Auto-remediation scripts
    – Backfill/replay tools
    – Cost guardrails (quotas, workload management policies; a sketch follows below)
    – Access provisioning workflows
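
As one illustration of the cost-guardrail deliverable above, here is a minimal sketch, assuming hypothetical `suspend` and `notify` callables wired to a warehouse admin API and chat alerting; real guardrails are usually native workload-management policies plus FinOps tooling.

```python
from dataclasses import dataclass

@dataclass
class WarehouseUsage:
    name: str
    credits_today: float
    daily_budget: float

def enforce_budget(usage: list[WarehouseUsage], suspend, notify,
                   warn_ratio: float = 0.8) -> None:
    """Warn at 80% of the daily credit budget; suspend the warehouse at 100%."""
    for w in usage:
        ratio = w.credits_today / w.daily_budget
        if ratio >= 1.0:
            suspend(w.name)                                     # hard guardrail
            notify(f"{w.name} suspended at {ratio:.0%} of daily budget")
        elif ratio >= warn_ratio:
            notify(f"{w.name} at {ratio:.0%} of daily budget")  # early warning

# usage with stand-in callables
enforce_budget([WarehouseUsage("analytics_wh", 95.0, 100.0)],
               suspend=lambda name: None, notify=print)
```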

Operational readiness artifacts

  • Runbooks for platform components and common failure modes.
  • On-call playbooks and escalation paths.
  • Monitoring and alerting dashboards with defined SLOs.
  • Dependency and upgrade plans (version matrices, patch schedules).

Governance and security

  • Access control model documentation (roles, groups, policies).
  • Audit logging coverage and reporting hooks.
  • Data retention and deletion workflows (context-specific by regulation).
  • Evidence artifacts for internal audits (configuration exports, control mappings).

Enablement and adoption

  • Developer documentation (quickstarts, onboarding guides, troubleshooting).
  • Training sessions or recorded walkthroughs.
  • Platform change announcements and migration guides.
  • A curated backlog of platform improvements informed by user feedback.


6) Goals, Objectives, and Milestones

30-day goals (onboarding and stabilization)

  • Understand the current data platform architecture, ownership boundaries, and critical data flows.
  • Gain access to tooling, repos, environments, and observability dashboards.
  • Learn current SLAs/SLOs and top pain points (incidents, cost spikes, slow onboarding, data quality gaps).
  • Deliver one small production improvement (e.g., better alert, runbook, or connector fix).
  • Establish relationships with key partners: Data Engineering, Analytics Engineering, SRE/Platform, Security.

60-day goals (ownership and delivery)

  • Take operational ownership of at least one platform component (e.g., orchestration service, ingestion framework, warehouse workload management).
  • Implement 1–2 meaningful improvements, such as:
    – CI/CD hardening for pipelines
    – Better schema evolution controls
    – Automated access provisioning enhancement
    – New data quality checks integrated into workflows
  • Reduce one recurring incident class or eliminate a top source of platform toil.

90-day goals (platform leverage and measurable impact)

  • Deliver a medium-sized initiative with measurable outcomes (e.g., reduce failed runs by X%, improve onboarding time by Y days).
  • Publish updated platform documentation and establish a feedback channel/office hours cadence.
  • Introduce or refine at least one SLO with measurement and alerting tied to action.
  • Demonstrate cost optimization impact (e.g., reduced warehouse spend, reduced wasted compute, improved job efficiency).

6-month milestones

  • Standardize an end-to-end “golden path” for onboarding a new data source through to a published, governed dataset.
  • Improve platform reliability:
    – Reduced MTTR through runbooks and automation
    – Reduced incident frequency through preventative engineering
  • Implement stronger governance automation:
    – Dataset classification/tagging enforcement
    – Automated lineage capture (where feasible)
    – Access workflow integration with IAM and ticketing
  • Establish a predictable upgrade cadence and deprecation policy for platform components.

12-month objectives

  • Materially improve platform adoption and developer experience:
    – Majority of new pipelines use platform templates/SDKs
    – Reduced bespoke patterns and “snowflake” pipelines
  • Achieve stable SLO attainment for core platform services (availability, freshness, latency).
  • Demonstrably improved data trust signals: broader data quality coverage, clearer lineage, and higher stakeholder satisfaction scores.
  • Establish cost controls that scale with growth (FinOps guardrails, chargeback/showback, budget alerts).

Long-term impact goals (beyond 12 months)

  • Make data platform capabilities a competitive advantage: faster experimentation, easier AI/ML enablement, and consistent metric governance.
  • Enable decentralization safely (domain-oriented data products) without sacrificing compliance and reliability.
  • Reduce total cost of ownership by continually automating operations and standardizing patterns.

Role success definition

Success is achieved when the data platform is reliably usable (low friction), measurably trustworthy (quality + lineage), secure by default, and cost-controlled, enabling teams to deliver data products quickly with minimal platform support.

What high performance looks like

  • Consistently delivers platform improvements that reduce toil and improve reliability.
  • Anticipates scaling and governance needs rather than reacting to failures.
  • Builds reusable solutions adopted across teams.
  • Communicates clearly with stakeholders; aligns work to measurable outcomes.
  • Maintains production discipline: testing, rollout safety, observability, and documentation.

7) KPIs and Productivity Metrics

The following framework balances output (what was delivered) with outcome (business impact), and includes quality, efficiency, reliability, innovation, and collaboration measures. Targets vary by maturity and scale; benchmarks below are realistic examples for a mid-sized software organization.

| Metric name | What it measures | Why it matters | Example target/benchmark | Frequency |
|---|---|---|---|---|
| Time-to-onboard new data source | Days from request approved to data available in governed zone | Direct indicator of platform usability and standardization | P50 ≤ 10 business days; P90 ≤ 20 | Monthly |
| Pipeline deployment lead time | Time from code merge to production deployment | Reflects CI/CD maturity and release friction | ≤ 1 day for standard pipelines | Monthly |
| Change failure rate (platform) | % of platform releases that cause incidents/rollback | Key DevOps health measure | < 10% | Monthly |
| Platform incident rate | # of P1/P2 incidents attributable to platform | Reliability and operational burden | Trend down QoQ | Monthly |
| Mean time to detect (MTTD) | Time to detect platform issues | Observability effectiveness | P50 < 10 minutes | Monthly |
| Mean time to restore (MTTR) | Time from incident start to service restoration | Business continuity | P50 < 60 minutes (context-specific) | Monthly |
| SLO attainment (core services) | % time SLOs met (or error budget burn) | Reliability standard for critical services | ≥ 99.5% for orchestration availability (example) | Weekly/Monthly |
| Data freshness SLA attainment | % critical datasets meeting freshness expectations | Downstream trust in analytics/ops | ≥ 95% of tier-1 datasets | Daily/Weekly |
| Data quality coverage | % tiered datasets with automated checks | Trust and early detection | Tier-1: ≥ 90%; Tier-2: ≥ 60% | Monthly |
| Data quality incident rate | # of incidents caused by platform gaps in validation | Measures effectiveness of quality controls | Trend down | Monthly |
| Schema change success rate | % schema changes handled without downstream breakage | Platform resilience to evolution | ≥ 95% (with contracts) | Monthly |
| Reprocessing/backfill success rate | % backfills completed within planned window | Reliability and operational predictability | ≥ 90% within SLA | Monthly |
| Cost per TB processed (batch) | Spend normalized by throughput | Cost efficiency at scale | Improve QoQ; set baseline first | Monthly |
| Cost per 1,000 queries (warehouse) | Normalized query cost | Prevents spend runaway as usage grows | Improve QoQ; guardrail thresholds | Monthly |
| Idle/wasted compute percentage | % compute spend with low utilization | Concrete FinOps optimization lever | < 15% | Monthly |
| Top offender workload reduction | Reduction in spend/latency for worst workloads | Focuses optimization on biggest wins | 1–3 workloads improved per quarter | Quarterly |
| Access request cycle time | Time to provision access (approved requests) | Self-service and productivity | P50 < 1 day | Monthly |
| Access policy compliance rate | % datasets correctly classified/tagged with proper ACLs | Audit readiness and risk reduction | ≥ 98% | Monthly |
| Documentation freshness | % platform docs updated within last N days | Reduces support load and increases adoption | ≥ 80% updated within 90 days | Quarterly |
| Platform adoption rate | % new pipelines using golden-path templates | Evidence of standardization success | ≥ 70% for new work | Quarterly |
| Internal NPS / satisfaction | Stakeholder rating for platform usability and support | Captures perceived value | ≥ +30 (or ≥ 4/5) | Quarterly |
| PR review responsiveness | Median time to first review on platform PRs | Team flow efficiency | < 1 business day | Weekly/Monthly |
| Automation-toil reduction | Hours of manual work eliminated by automation | Keeps focus on leverage | ≥ 20 hours/month eliminated (team-level) | Monthly |
| Security findings closure time | Time to remediate platform-related findings | Risk management | P50 < 30 days (severity-based) | Monthly |

Measurement notes:

  • Establish baselines in the first 1–2 quarters if metrics are not currently tracked.
  • Use tiering (Tier-0 platform services, Tier-1 datasets) to avoid over-optimizing non-critical workloads.
  • Prefer trend-based targets initially; refine absolute targets as maturity grows.
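
To illustrate how two of these metrics can be computed from raw records, here is a minimal sketch; the release and incident structures are hypothetical stand-ins for whatever your deployment and incident tooling actually exports.

```python
from datetime import datetime
from statistics import median

releases = [  # hypothetical export: (deployed_at, caused_incident_or_rollback)
    (datetime(2024, 5, 1), False),
    (datetime(2024, 5, 3), True),
    (datetime(2024, 5, 7), False),
]
incidents = [  # hypothetical export: (started_at, restored_at)
    (datetime(2024, 5, 3, 9, 0), datetime(2024, 5, 3, 9, 40)),
    (datetime(2024, 5, 9, 14, 0), datetime(2024, 5, 9, 16, 5)),
]

change_failure_rate = sum(failed for _, failed in releases) / len(releases)
mttr_p50 = median(end - start for start, end in incidents)  # median of timedeltas

print(f"change failure rate: {change_failure_rate:.0%}")  # 33%
print(f"MTTR (P50): {mttr_p50}")                          # 1:22:30
```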


8) Technical Skills Required

Must-have technical skills

  1. Cloud fundamentals (AWS/Azure/GCP) — Critical
    Use: Provision and operate storage, compute, IAM, networking primitives supporting the data platform.
    Evidence: Comfortable with IAM concepts, VPC/VNet, security groups, managed services, cost levers.

  2. Data warehousing/lakehouse concepts — Critical
    Use: Design storage layouts, optimize query performance, manage workload concurrency, support curated layers.
    Evidence: Understand partitioning, clustering/sort keys, file formats (Parquet), table formats (Delta/Iceberg), and query planning basics.

  3. Workflow orchestration (e.g., Airflow/Dagster/Prefect) — Critical
    Use: Build reliable pipelines with retries, dependencies, backfills, and operational visibility.
    Evidence: Can design DAG patterns, handle idempotency, and avoid common failure modes. (A minimal DAG sketch follows this list.)

  4. Infrastructure as Code (IaC) (Terraform/CloudFormation/Bicep) — Critical
    Use: Reproducible platform environments, policy enforcement, scalable provisioning.
    Evidence: Modules, state management, safe rollouts, reviewable change sets.

  5. CI/CD and software engineering practices — Critical
    Use: Automated testing, promotion, release management for data platform code and pipeline assets.
    Evidence: Branching strategies, pipeline stages, artifact/version management, rollback strategies.

  6. Python and/or JVM language proficiency — Important to Critical
    Use: Build platform tooling, connectors, pipeline libraries, automation scripts.
    Evidence: Writes maintainable code with tests and packaging; understands performance and dependency management.

  7. SQL proficiency (advanced) — Critical
    Use: Debug and optimize transformations and query workloads; validate data correctness.
    Evidence: Can analyze query plans, reduce scan costs, design incremental patterns.

  8. Observability fundamentals (logging/metrics/tracing) — Critical
    Use: Monitor platform and pipelines; set SLOs; accelerate troubleshooting.
    Evidence: Builds actionable dashboards and alerts tied to runbooks.

  9. Data security basics — Critical
    Use: IAM policies, secrets management, encryption, least-privilege patterns, audit logging.
    Evidence: Can implement secure defaults and review for risky configurations.
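
As a minimal sketch of the orchestration skill above (item 3), assuming Airflow 2.x (2.4+ for the `schedule` argument): a daily DAG with bounded retries whose task overwrites the partition for the logical date, so reruns and backfills stay idempotent. The DAG and task names are illustrative.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def load_partition(ds: str, **_) -> None:
    """Overwrite the target partition for logical date `ds` (safe to rerun/backfill)."""
    print(f"loading partition {ds}")  # placeholder for the real load

with DAG(
    dag_id="example_daily_ingest",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,                   # enable deliberately when backfilling
    default_args={"retries": 3, "retry_delay": timedelta(minutes=5)},
):
    PythonOperator(task_id="load_partition", python_callable=load_partition)
```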

Good-to-have technical skills

  1. Streaming platforms (Kafka/Kinesis/Pub/Sub) — Important (context-specific)
    Use: Real-time ingestion, event-driven architectures, CDC streaming.
    Evidence: Understands partitions, offsets, exactly-once semantics tradeoffs, schema registry patterns.

  2. Containerization and orchestration (Docker/Kubernetes) — Important (context-specific)
    Use: Run platform services, job execution environments, scalable workers.
    Evidence: Builds images, manages configs/secrets, handles resource requests/limits.

  3. Data transformation frameworks (dbt/Spark) — Important
    Use: Provide standards and integration for transformations; performance tuning.
    Evidence: Understands incremental models, testing, packaging, cluster execution.

  4. Metadata/catalog tooling — Important
    Use: Lineage capture, discovery, stewardship workflows.
    Evidence: Can integrate catalog APIs and enforce tagging conventions.

  5. Access automation — Important
    Use: Automate provisioning (RBAC/ABAC), integrate with ticketing/approvals.
    Evidence: Policy-as-code thinking; understands group/role mapping.

Advanced or expert-level technical skills

  1. Distributed systems troubleshooting — Important for growth
    Use: Debug performance and reliability issues across orchestration, compute, storage, and networking.
    Evidence: Uses logs/metrics systematically; isolates bottlenecks; designs for failure.

  2. Performance engineering and cost optimization — Important
    Use: Optimize compute sizing, concurrency, caching, file compaction, and workload management.
    Evidence: Demonstrates measurable cost savings without degrading SLAs.

  3. Data contracts and schema governance at scale — Important
    Use: Reduce breaking changes, improve interoperability between producers/consumers.
    Evidence: Implements versioning, compatibility checks, and deprecation policies. (A compatibility-check sketch follows this list.)

  4. Platform product thinking (DX/UX for engineers)Important
    Use: Build APIs/templates that are easy to adopt; reduce support demand.
    Evidence: Treats internal platform as a product with users, roadmap, and adoption metrics.
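
To ground the data-contract skill above (item 3), here is a minimal backward-compatibility check sketch; the rules are deliberately simplified and illustrative, not a schema registry's actual semantics.

```python
def is_backward_compatible(old: dict[str, str], new: dict[str, str],
                           required: set[str]) -> list[str]:
    """Return violations if `new` would break consumers of `old`.

    Simplified contract rules: removing a field or changing its type breaks
    consumers; additions are allowed only if the new field is not required.
    """
    violations = []
    for name, dtype in old.items():
        if name not in new:
            violations.append(f"removed field: {name}")
        elif new[name] != dtype:
            violations.append(f"type change: {name} {dtype} -> {new[name]}")
    for name in new.keys() - old.keys():
        if name in required:
            violations.append(f"new required field: {name}")
    return violations

# usage: block the schema change in CI if any violations are reported
print(is_backward_compatible({"id": "int", "email": "string"},
                             {"id": "int", "email": "text", "age": "int"},
                             required={"age"}))
```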

Emerging future skills for this role (next 2–5 years)

  1. Policy-as-code and automated compliance — Important (growing)
    – Use OPA-like patterns, automated evidence collection, continuous control monitoring.

  2. Semantic layer and metrics governance — Important (growing)
    – More organizations centralize metric definitions and expose them via APIs to BI and AI agents.

  3. AI-assisted operations (AIOps) for data platforms — Optional (emerging)
    – Using AI to correlate incidents, suggest remediations, detect anomalies in pipeline behavior.

  4. Data platform enablement for AI/LLM workloads — Important (growing)
    – Managing vector data stores (context-specific), feature stores, training data governance, and lineage for model inputs.


9) Soft Skills and Behavioral Capabilities

  1. Systems thinking
     – Why it matters: Data platform issues are rarely isolated; failures cascade across ingestion, storage, orchestration, and consumption layers.
     – How it shows up: Connects symptoms to upstream/downstream causes; designs preventative controls.
     – Strong performance: Reduces recurring incidents by addressing root causes and systemic gaps, not just symptoms.

  2. Operational ownership and accountability
     – Why it matters: The platform is production-critical; reliability and trust depend on disciplined operations.
     – How it shows up: Proactively monitors, responds, and improves runbooks and alerts; treats incidents as learning opportunities.
     – Strong performance: Lowers MTTR and incident recurrence; improves on-call experience through automation.

  3. Stakeholder empathy (producer/consumer orientation)
     – Why it matters: Platform success depends on adoption; usability failures become shadow IT and fragmented patterns.
     – How it shows up: Runs office hours, gathers feedback, writes clear docs, and designs intuitive templates.
     – Strong performance: Increased adoption of golden paths and reduced “how-to” tickets.

  4. Clear technical communication
     – Why it matters: The role spans multiple teams and disciplines; misalignment leads to rework and risk.
     – How it shows up: Writes concise design docs, explains tradeoffs, communicates incident updates calmly and clearly.
     – Strong performance: Faster approvals, fewer misunderstandings, and smoother cross-team delivery.

  5. Pragmatic prioritization
     – Why it matters: Backlogs are often large (tech debt, reliability, new features); the engineer must focus on leverage.
     – How it shows up: Uses tiering, SLOs, and cost/impact estimates to prioritize.
     – Strong performance: Consistently delivers improvements that materially move reliability/cost/adoption metrics.

  6. Collaboration and influence without authority
     – Why it matters: Platform teams often cannot force adoption; they must persuade and enable.
     – How it shows up: Facilitates standards discussions, aligns incentives, and negotiates migration plans.
     – Strong performance: Teams voluntarily adopt platform patterns and contribute improvements.

  7. Quality mindset
     – Why it matters: Silent data corruption and unreliable pipelines cause business harm that is harder to detect than app failures.
     – How it shows up: Builds tests, validation checks, safe rollout plans, and versioned interfaces.
     – Strong performance: Fewer defects escape to production; faster detection when they do.

  8. Learning agility
     – Why it matters: Data platforms evolve quickly; services, pricing, and best practices change.
     – How it shows up: Evaluates new features, deprecations, and tooling; upgrades thoughtfully.
     – Strong performance: Keeps the platform modern and maintainable without destabilizing operations.


10) Tools, Platforms, and Software

Tooling varies by organization; the table below lists realistic options for a software/IT context. Items are marked Common, Optional, or Context-specific.

| Category | Tool, platform, or software | Primary use | Common / Optional / Context-specific |
|---|---|---|---|
| Cloud platforms | AWS / Azure / GCP | Core infrastructure for storage, compute, IAM, networking | Common |
| Data warehouse/lakehouse | Snowflake | Analytical warehouse, governance features, workload management | Common |
| Data warehouse/lakehouse | BigQuery | Serverless warehouse on GCP | Optional |
| Data warehouse/lakehouse | Redshift | AWS warehouse (provisioned/serverless) | Optional |
| Data lake / object storage | S3 / ADLS / GCS | Raw and curated data storage, staging, archival | Common |
| Table formats | Delta Lake / Apache Iceberg / Hudi | ACID tables, schema evolution, time travel | Context-specific |
| Processing engines | Apache Spark (Databricks/EMR/Synapse) | Scalable ETL/ELT processing | Common |
| Orchestration | Apache Airflow (MWAA/Composer) | Scheduling, dependency management, backfills | Common |
| Orchestration | Dagster / Prefect | Modern orchestration with strong DX | Optional |
| Streaming / messaging | Kafka / MSK / Confluent | Event streaming ingestion, CDC streams | Context-specific |
| Streaming / messaging | Kinesis / Pub/Sub / Event Hubs | Managed streaming services | Context-specific |
| CDC | Debezium | Change data capture from databases | Context-specific |
| Data transformation | dbt | Analytics engineering, SQL transformations, testing | Common |
| Data quality | Great Expectations / Soda | Data validation checks and reporting | Optional |
| Catalog / metadata | DataHub / Amundsen | Dataset discovery, metadata management | Optional |
| Catalog / governance | Collibra / Alation | Enterprise catalog and stewardship workflows | Context-specific |
| Lineage | OpenLineage / Marquez | Standardized lineage capture | Optional |
| Observability | Datadog | Metrics, logs, alerts, dashboards | Common |
| Observability | Prometheus / Grafana | Metrics collection and visualization | Optional |
| Logging | ELK / OpenSearch | Log aggregation and search | Optional |
| Tracing | OpenTelemetry | Distributed tracing instrumentation | Optional |
| Secrets management | HashiCorp Vault / AWS Secrets Manager | Secrets storage and rotation | Common |
| Security / IAM | Okta / Entra ID | Identity provider, SSO, group management | Common |
| Policy-as-code | OPA / Conftest | Enforce configuration policies in CI | Optional |
| IaC | Terraform | Provision cloud resources and platform components | Common |
| CI/CD | GitHub Actions / GitLab CI / Azure DevOps | Build/test/deploy pipelines and IaC | Common |
| Source control | GitHub / GitLab / Bitbucket | Version control, PR reviews, code ownership | Common |
| Containerization | Docker | Build and run consistent execution environments | Common |
| Orchestration (containers) | Kubernetes / EKS / AKS / GKE | Run platform services and job workers | Context-specific |
| Artifact management | Artifactory / GH Packages | Package and artifact hosting | Optional |
| ITSM | ServiceNow / Jira Service Management | Incidents, requests, change management | Context-specific |
| Work management | Jira | Sprint planning, backlog tracking | Common |
| Collaboration | Slack / Microsoft Teams | Real-time communication | Common |
| Documentation | Confluence / Notion | Platform docs, runbooks, ADRs | Common |
| Query/dev tools | VS Code / IntelliJ | Development environment | Common |
| Notebooks | Jupyter | Exploration and debugging (often by consumers) | Optional |
| FinOps | CloudHealth / native cost tools | Spend tracking, budgets, optimization | Optional |
| Testing | pytest / dbt tests | Unit and data tests | Common |

11) Typical Tech Stack / Environment

Infrastructure environment

  • Cloud-first environment using managed services where practical.
  • Separation of environments (dev/test/prod) with controlled promotion, especially for shared data assets.
  • Infrastructure defined via IaC with code review requirements and automated policy checks (a minimal policy-check sketch follows this list).
  • Network segmentation (private subnets, VPC endpoints/private links) for sensitive data access and exfiltration control.
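
A minimal sketch of the kind of automated policy check mentioned above, assuming a simplified, hypothetical resource representation; in practice this is usually expressed as OPA/Conftest policies evaluated against the real Terraform plan JSON.

```python
def check_no_public_buckets(resources: list[dict]) -> list[str]:
    """Fail CI if any object-store bucket in the (simplified) plan is public."""
    return [
        f"public bucket not allowed: {r['name']}"
        for r in resources
        if r.get("type") == "bucket" and r.get("public_access", False)
    ]

plan = [  # hypothetical parsed plan, not Terraform's real JSON schema
    {"type": "bucket", "name": "raw-zone", "public_access": False},
    {"type": "bucket", "name": "scratch", "public_access": True},
]
for failure in check_no_public_buckets(plan):
    print("POLICY VIOLATION:", failure)  # CI would exit non-zero here
```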

Application environment

  • Product applications generate operational data via:
    – Event streams (context-specific)
    – Application databases (PostgreSQL/MySQL/etc.)
    – Logs/telemetry pipelines
  • Data ingestion patterns often include CDC for relational systems and API ingestion for SaaS sources (a minimal CDC-apply sketch follows this list).
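
As a minimal sketch of the CDC pattern just mentioned: applying a stream of change events to a keyed table so the result converges on the source state. The event shape is illustrative, not Debezium's actual envelope.

```python
def apply_cdc_event(table: dict[int, dict], event: dict) -> None:
    """Apply one simplified CDC event (shape is illustrative)."""
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        table[key] = event["row"]          # upsert the latest row image
    elif op == "delete":
        table.pop(key, None)               # tolerate replayed deletes (idempotent)

users: dict[int, dict] = {}
for e in [
    {"op": "insert", "key": 1, "row": {"name": "Ada"}},
    {"op": "update", "key": 1, "row": {"name": "Ada L."}},
    {"op": "delete", "key": 1},
]:
    apply_cdc_event(users, e)
print(users)  # {} — the insert/update/delete sequence nets out
```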

Data environment

  • Common architecture patterns:
    – Lake + warehouse: object storage for raw/bronze and curated/silver zones, with warehouse gold marts on top.
    – Lakehouse: a unified table format with ACID guarantees, a compute engine, and a semantic layer.
  • Standard layers and controls:
    – Landing/raw zones with restricted access
    – Curated zones with validated schemas and quality checks
    – Published datasets with documentation, ownership, and access policies
  • Frequent usage patterns:
    – Batch pipelines scheduled hourly/daily
    – Incremental models in dbt
    – Streaming for near-real-time metrics (where needed)

Security environment

  • Central identity provider with SSO and group-based access management.
  • Secrets managed via vaulting services; no long-lived secrets in code.
  • Encryption in transit and at rest; key management via KMS/HSM (context-specific).
  • Audit logging enabled for platform services and data access; retention policies per regulatory needs.

Delivery model

  • Agile team delivery (Scrum/Kanban hybrid).
  • Platform work blends roadmap features, reliability work, and support/enablement.
  • Production changes follow change management discipline appropriate to company maturity:
    – PR reviews and CI gating
    – Staged rollouts
    – Backward-compatible schema changes where possible

Scale or complexity context

  • Typical mid-sized SaaS data scale (illustrative, varies widely):
    – 10–200 TB in analytical storage
    – 50–500 pipelines
    – 100–2,000 data consumers (analysts, PMs, engineers)
  • Complexity grows with:
    – Multiple domains and teams contributing data
    – Regulatory constraints (PII/PCI/health data)
    – Mixed batch + streaming requirements
    – International data residency needs (context-specific)

Team topology

  • Data Platform Engineering team (ICs + lead/manager) provides shared services.
  • Embedded data engineers or analytics engineers build domain pipelines on the platform.
  • Strong collaboration with Cloud Platform/SRE for shared infrastructure and operational standards.

12) Stakeholders and Collaboration Map

Internal stakeholders

  • Head of Data & Analytics / Director of Data Engineering (executive stakeholder)
    – Aligns platform priorities to business objectives, risk posture, and scaling needs.
  • Data Platform Engineering Manager (likely direct manager)
    – Provides roadmap direction, prioritization, and operational accountability.
  • Data Engineers (domain teams)
    – Primary platform users and contributors; collaborate on onboarding sources and standard patterns.
  • Analytics Engineers / BI Developers
    – Depend on curated datasets, semantic consistency, and reliable transformations.
  • Data Scientists / ML Engineers
    – Need discoverable, high-quality datasets; may require feature pipelines and reproducibility.
  • Product Engineering teams
    – Provide source system context; align on event instrumentation and data contracts.
  • SRE / Cloud Platform Engineering
    – Share responsibility for infrastructure, observability, security baselines, and incident processes.
  • Security / GRC / Privacy
    – Set requirements for access controls, retention, auditability, and regulatory compliance.
  • Finance / FinOps
    – Cost governance, budgets, chargeback/showback, spend anomaly investigations.
  • Product Management / Operations
    – Consumers of analytics; influence the priority of data availability and quality improvements.

External stakeholders (if applicable)

  • Vendors and cloud providers
    – Support cases, roadmap alignment, incident escalations, contract/pricing discussions (often via procurement).
  • Third-party data providers
    – API stability, data quality, delivery SLAs.

Peer roles

  • Platform Engineer (general)
  • Site Reliability Engineer (SRE)
  • Security Engineer (IAM, cloud security)
  • Analytics Engineer
  • ML Platform Engineer (context-specific)

Upstream dependencies

  • Source system availability and schema stability.
  • Identity provider group/role hygiene.
  • Network connectivity and private endpoints.
  • Vendor API reliability and rate limits.

Downstream consumers

  • Executive dashboards and KPI reporting.
  • Product analytics, experimentation platforms.
  • Customer-facing analytics features (context-specific).
  • ML training pipelines and feature stores (context-specific).

Nature of collaboration

  • Consultative + enablement-heavy: Platform engineers provide patterns and guardrails, not bespoke delivery for every use case.
  • Shared operational responsibility: Domain teams own their pipelines; platform team owns platform services, templates, and systemic reliability.

Typical decision-making authority

  • Platform team can decide internal implementation details and standards within agreed architecture guardrails.
  • Cross-team decisions (contracts, ownership, tiering, SLAs) typically require consensus with Data & Analytics leadership and impacted teams.

Escalation points

  • Persistent SLO breaches, major incidents, or repeated policy violations escalate to:
    – Data Platform Engineering Manager
    – Head of Data & Analytics
    – Security leadership (for sensitive data incidents)
    – SRE leadership (for infrastructure-wide issues)

13) Decision Rights and Scope of Authority

Can decide independently (typical)

  • Implementation details of platform components within established architecture.
  • Code-level changes: connector improvements, orchestration DAG patterns, internal libraries.
  • Dashboards/alerts configuration and runbook updates.
  • Minor cost optimizations (e.g., right-sizing, scheduling changes) within agreed guardrails.
  • Documentation structure, developer guides, and enablement materials.

Requires team approval (peer review / architecture review)

  • Introduction of new shared libraries or templates that affect multiple teams.
  • Changes to default pipeline frameworks (e.g., retries, error handling, data quality gates).
  • Significant changes to orchestration patterns or job scheduling strategy.
  • Changes affecting SLO definitions, incident severity definitions, or on-call process changes.

Requires manager/director approval

  • Major platform roadmap reprioritization or multi-quarter initiatives.
  • Vendor evaluations and tool selection proposals.
  • Changes with meaningful cost impact (e.g., new clusters, new service tiers) beyond defined thresholds.
  • Changes that affect organizational policy (e.g., retention defaults, classification requirements).

Requires executive / security / compliance approval

  • Changes impacting regulated data handling (PII/PCI/PHI) policies.
  • Cross-border data residency decisions (if applicable).
  • Material contract commitments, procurement decisions, or platform migrations with broad business impact.
  • Exceptions to security standards (temporary break-glass access policies).

Budget, architecture, vendor, delivery, hiring, compliance authority

  • Budget: Usually influence-only; may propose spend and optimizations. Approval sits with manager/director and finance.
  • Architecture: Contributes and can lead design for platform subsystems; enterprise architecture alignment may be required for large decisions.
  • Vendors: Can evaluate and recommend; procurement approval required.
  • Delivery: Owns delivery of assigned initiatives end-to-end, including release and operational readiness.
  • Hiring: Participates in interviews and calibration; final decisions by manager/director.
  • Compliance: Implements controls; policy definition and acceptance by Security/GRC.

14) Required Experience and Qualifications

Typical years of experience

  • 3–6 years in software engineering, data engineering, platform engineering, or SRE-related roles, with at least 1–3 years working directly with data infrastructure or analytical platforms.

Education expectations

  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or equivalent experience.
  • Strong candidates may come from non-traditional backgrounds with demonstrable platform engineering outcomes.

Certifications (relevant but not mandatory)

Marked as Optional unless required by company policy.

  • Cloud certifications (Optional, common):
    – AWS Certified Solutions Architect / Developer / SysOps
    – Microsoft Azure Data Engineer Associate
    – Google Professional Data Engineer
  • Security certifications (Optional, context-specific):
    – Security+ (baseline)
    – Cloud security specialty certifications
  • Kubernetes certifications (Optional, context-specific):
    – CKA/CKAD for orgs running k8s extensively

Prior role backgrounds commonly seen

  • Data Engineer with strong DevOps/IaC exposure
  • Platform Engineer with data warehouse/lake experience
  • Analytics Engineer who moved toward platform tooling and operations
  • SRE/DevOps Engineer who specialized in data systems

Domain knowledge expectations

  • Generally cross-industry; domain expertise is helpful but not required.
  • Must understand how product and business teams use data (metrics, dashboards, experimentation, ML features).
  • In regulated environments, familiarity with privacy and compliance concepts is important (PII handling, retention, audit trails).

Leadership experience expectations

  • No formal people management required.
  • Expected to demonstrate initiative leadership: leading small cross-functional efforts, mentoring peers, and improving team practices.

15) Career Path and Progression

Common feeder roles into this role

  • Data Engineer (ETL/ELT focused) moving into shared platform enablement
  • DevOps/Platform Engineer moving into data infrastructure
  • SRE with interest in data reliability engineering
  • Analytics Engineer expanding into orchestration, observability, and governance tooling

Next likely roles after this role

  • Senior Data Platform Engineer (broader ownership, more architectural leadership)
  • Staff Data Platform Engineer (cross-domain strategy, platform vision, high-impact technical leadership)
  • Data Engineering Tech Lead (domain + platform interface ownership)
  • Data Reliability Engineer (specialized reliability/SLO and incident reduction focus)
  • ML Platform Engineer (context-specific; data-to-model pipelines, feature platforms)

Adjacent career paths

  • Security Engineering (Data Security / Cloud Security): focus on access, policy-as-code, compliance automation.
  • Solutions Architect (Data): stakeholder-facing design, migration and modernization leadership.
  • Product Management (Data Platform): internal platform as a product, roadmap, adoption, and UX focus.
  • Engineering Management: team leadership, operating model ownership, budgeting, vendor strategy.

Skills needed for promotion (to Senior)

  • Designs platform subsystems with clear tradeoffs and long-term maintainability.
  • Leads multi-sprint initiatives with multiple stakeholders and measurable outcomes.
  • Demonstrates reliability and cost stewardship (owns SLOs and error budget improvements).
  • Creates reusable assets adopted widely (templates, libraries, automation).
  • Raises the standard on documentation, testing, and operational readiness.

How the role evolves over time

  • Early stage: build core platform foundations and standard patterns; reduce fragility.
  • Growth: scale governance, automation, and reliability; enable self-service onboarding.
  • Mature stage: optimize cost/performance at scale; formalize product thinking for platform; enable domain data products and AI workloads with strong controls.

16) Risks, Challenges, and Failure Modes

Common role challenges

  • Ambiguous ownership boundaries: unclear division of responsibility between platform, domain data engineers, and SRE.
  • Competing priorities: roadmap improvements vs urgent incidents vs stakeholder requests.
  • Legacy debt and inconsistent patterns: inherited pipelines and bespoke code paths.
  • Tool sprawl: multiple orchestration tools, warehouses, catalogs, and inconsistent standards.
  • Invisible failures: data correctness issues that don’t trigger obvious operational alarms.
  • Cost volatility: warehouse spend spikes due to new usage patterns or inefficient queries.

Bottlenecks

  • Manual approvals for access or dataset publication.
  • Lack of automated testing/validation leading to slow releases.
  • Insufficient observability across pipelines and platform services.
  • Limited source system support for CDC or stable schemas.
  • Over-centralization: platform team becomes a ticket queue instead of enabling self-service.

Anti-patterns

  • Building bespoke solutions for each team instead of reusable golden paths.
  • Over-engineering governance that blocks delivery (controls without usability).
  • Treating data incidents as “one-off” rather than fixing systemic root causes.
  • Allowing uncontrolled schema changes with no contracts or compatibility checks.
  • Incomplete separation of environments leading to accidental production impact.
  • Cost optimization done without measuring user impact (breaking SLAs or usability).

Common reasons for underperformance

  • Strong coding skills but weak operational ownership (poor incident response, lack of monitoring).
  • Poor stakeholder communication, leading to mistrust and low adoption.
  • Focus on tools rather than outcomes (shipping tech with no measurable reliability/usability gain).
  • Avoiding hard tradeoffs; failing to prioritize high-leverage initiatives.

Business risks if this role is ineffective

  • Persistent pipeline failures causing unreliable dashboards and poor decision-making.
  • Data leaks or unauthorized access due to weak controls.
  • Runaway cloud/data spend due to lack of cost guardrails.
  • Slow onboarding of new sources, delaying product and business initiatives.
  • Fragmented “shadow” data stacks proliferating across teams, increasing risk and cost.

17) Role Variants

This role changes meaningfully by organization size, operating model, and regulatory context.

By company size

  • Startup / early scale
    – Heavier “full-stack” ownership: sometimes ingestion + transformations + warehouse + dashboards.
    – More hands-on, quickly building foundational components.
    – Less formal governance; focus on pragmatic security and reliability basics.
  • Mid-sized software company (the typical target for this blueprint)
    – Dedicated platform responsibilities with defined consumers and SLOs.
    – Strong emphasis on self-service, templates, and reducing toil.
    – Governance exists but must be automated to avoid bottlenecks.
  • Large enterprise
    – More specialized platform components and formal change management.
    – Higher complexity in identity, networking, data residency, and multi-region operations.
    – Tooling may include enterprise catalog/governance suites and strict audit evidence requirements.

By industry

  • Non-regulated SaaS / consumer tech
    – Faster iteration; focus on cost/performance, experimentation, and product analytics.
  • Financial services / payments
    – Stronger controls: encryption, audit trails, separation of duties, retention, extensive access reviews.
    – More formal incident handling and DR requirements.
  • Healthcare / life sciences
    – Privacy and governance are central; de-identification, retention, and access justification may be strict.
  • Public sector
    – Procurement constraints, strict compliance, potentially slower change cadence.

By geography

  • Multi-region operations
    – Data residency, cross-border access restrictions, region-specific encryption and key management.
    – More complex replication and DR patterns.
  • Single-region
    – Simpler operations; fewer compliance-driven architectural constraints.

Product-led vs service-led company

  • Product-led
    – Platform is optimized for product analytics, customer insights, and embedded analytics features.
    – Strong emphasis on experimentation velocity and metric governance.
  • Service-led / IT organization
    – More emphasis on centralized governance, ITSM processes, and SLA reporting.
    – Data platform may serve multiple business units with different priorities.

Startup vs enterprise operating model

  • Startup
    – “Do the thing” orientation: deliver quickly, accept some manual steps initially.
    – Platform engineer may double as data engineer.
  • Enterprise
    – “Design for scale and audit”: formal architecture reviews, standardized controls, evidence collection.
    – Stronger separation of duties and more complex stakeholder landscape.

Regulated vs non-regulated environment

  • Regulated
    – Mandatory: classification, retention, audit logging, access recertification, incident reporting requirements.
    – Platform engineer spends more time on controls automation and documentation.
  • Non-regulated
    – More discretion; still must maintain good security hygiene and cost governance.

18) AI / Automation Impact on the Role

Tasks that can be automated (increasingly)

  • Pipeline scaffolding and template generation: AI-assisted creation of standardized ingestion/transformation pipelines.
  • Automated documentation drafts: generating dataset descriptions, runbook outlines, and change logs from metadata and code.
  • Alert correlation and noise reduction: AIOps tools can group related alerts and suggest likely root causes.
  • Query optimization suggestions: AI can recommend partitioning, clustering, or rewrite patterns based on workload telemetry.
  • Access request triage: automating approvals for low-risk requests based on policy rules, with audit trails (a minimal sketch follows this list).
  • Data quality anomaly detection: automated detection of drift, freshness anomalies, and unusual distributions.
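
A minimal sketch of the access-request triage idea above: low-risk combinations auto-approve against a policy table, and everything else routes to a human. The group names, tiers, and policy itself are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    requester_group: str     # IdP group of the requester
    dataset_tier: str        # "public", "internal", "restricted"
    access_level: str        # "read" or "write"

# hypothetical policy: which (group, tier, level) combinations may auto-approve
AUTO_APPROVE = {
    ("analysts", "internal", "read"),
    ("engineers", "internal", "read"),
}

def triage(req: AccessRequest) -> str:
    """Auto-approve low-risk requests; route everything else to a human."""
    if (req.requester_group, req.dataset_tier, req.access_level) in AUTO_APPROVE:
        return "auto-approved"            # an audit record would be written here
    return "manual-review"

print(triage(AccessRequest("analysts", "internal", "read")))    # auto-approved
print(triage(AccessRequest("analysts", "restricted", "read")))  # manual-review
```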

Tasks that remain human-critical

  • Architecture and tradeoff decisions: reliability vs cost vs usability vs security tradeoffs require context and accountability.
  • Risk management and compliance interpretation: mapping controls to company-specific policies and regulator expectations.
  • Incident leadership: real-time coordination, prioritization, and clear communication during outages.
  • Stakeholder alignment: negotiating standards adoption and migrations across teams.
  • Product thinking for platform UX: understanding developer workflows and designing intuitive paved roads.

How AI changes the role over the next 2–5 years

  • The role shifts further from “manual build and debug” toward platform product management + reliability engineering + policy automation.
  • Engineers will be expected to:
    – Integrate AI-assisted tooling into CI/CD and operations safely (guardrails, verification, auditability).
    – Support new consumption patterns: AI agents querying governed data, automated insight generation, and AI-driven dashboards.
    – Strengthen metadata and semantic consistency so AI systems can interpret datasets correctly (ownership, definitions, lineage, quality signals).

New expectations caused by AI, automation, or platform shifts

  • Higher bar for metadata quality: AI consumers need accurate dataset descriptions, owners, tiers, and definitions.
  • Governed self-service at scale: automation reduces manual tickets; policies must be clear and enforceable.
  • Provenance and reproducibility: stronger lineage and dataset versioning expectations for AI/ML and audit requirements.
  • Security posture for AI access: ensuring AI tools and agents inherit least-privilege access, with strong audit trails.

19) Hiring Evaluation Criteria

What to assess in interviews

Assess candidates across platform engineering fundamentals, data systems knowledge, operational maturity, and collaboration.

  1. Data platform design competence – Can they design ingestion/orchestration/storage patterns with reliability and scale in mind?
  2. Operational excellence – How they monitor, respond to incidents, define SLOs, and reduce toil.
  3. Security and governance – Least privilege, secrets management, auditability, and safe data handling patterns.
  4. Cost/performance optimization – Ability to reason about spend drivers and performance levers (compute, storage, query patterns).
  5. Software engineering quality – Testing, CI/CD, code structure, maintainability, documentation discipline.
  6. Stakeholder collaboration – Ability to influence adoption, communicate tradeoffs, and work across teams.

Practical exercises or case studies (recommended)

  1. Architecture case study (60–90 minutes)
     – Prompt: “Design a data platform onboarding path for a new source system (Postgres + event stream). Include schema evolution, retries, backfill, access control, and monitoring.”
     – Evaluate: tradeoffs, completeness, operational thinking, security defaults, clarity.

  2. Debugging exercise (45–60 minutes)
     – Provide logs/metrics for a failing pipeline and ask the candidate to diagnose the root cause and propose fixes.
     – Evaluate: structured troubleshooting, hypotheses, prioritization, and remediation plan.

  3. IaC/CI review (take-home or live, 60 minutes)
     – Review a Terraform module and CI pipeline; identify risks and suggest improvements.
     – Evaluate: safety, state-management awareness, policy enforcement, secrets handling.

  4. SQL/performance scenario (30–45 minutes)
     – Provide a slow query and table schema; ask how to reduce cost/latency.
     – Evaluate: pragmatic optimization strategies, understanding of partitioning/clustering and query patterns.

Strong candidate signals

  • Has owned a production data platform component end-to-end (or a meaningful subsystem).
  • Can articulate SLOs and show how they improved reliability using metrics.
  • Demonstrates secure-by-default thinking (IAM, secrets, audit logs).
  • Understands schema evolution and data contracts, not just happy-path ingestion.
  • Shows evidence of building reusable frameworks/templates that others adopted.
  • Communicates clearly with structured design docs and incident narratives.

Weak candidate signals

  • Only batch ETL experience with limited production operations or observability.
  • Treats security and governance as an afterthought or “someone else’s job.”
  • Can’t explain how to manage backfills, replays, idempotency, or failure isolation.
  • Over-indexes on a single tool without understanding underlying principles.
  • Limited collaboration examples; prefers building bespoke solutions.

Red flags

  • Proposes storing secrets in code or using broad admin roles routinely.
  • Dismisses incident process rigor (“we just rerun jobs”) without root cause focus.
  • Lacks respect for data correctness risks and audit requirements.
  • Cannot explain tradeoffs or justify design choices with reliability/cost/security reasoning.
  • History of making breaking changes without migrations, versioning, or communication.

Scorecard dimensions (structured)

Use a consistent rubric (1–5) across interviewers.

| Dimension | What “5” looks like | Common evidence |
|---|---|---|
| Data platform architecture | Designs scalable, resilient patterns with clear tradeoffs | Strong case study design |
| Reliability & operations | SLO-driven, reduces toil, strong incident handling | Real examples, metrics |
| Security & governance | Secure defaults, least privilege, auditability | IAM patterns, controls |
| Cost/performance | Identifies spend drivers, optimizes safely | Optimization stories |
| Software engineering | Tests, CI/CD, maintainable code, reviews | PR discussions, sample code |
| Collaboration & communication | Influences adoption, clear docs, stakeholder alignment | Examples, writing clarity |
| Learning agility | Quickly absorbs new systems; stays current | Past transitions/upskilling |
| Execution | Delivers iteratively with measurable outcomes | Project narratives |

20) Final Role Scorecard Summary

| Category | Summary |
|---|---|
| Role title | Data Platform Engineer |
| Role purpose | Build and operate the shared data platform (ingestion, orchestration, storage, governance, access, observability) to deliver reliable, secure, cost-effective, self-service data capabilities for the organization. |
| Top 10 responsibilities | 1) Build paved-road patterns for ingestion and orchestration; 2) Operate core platform services with production discipline; 3) Implement IaC for reproducible environments; 4) Build CI/CD for platform and pipeline deployments; 5) Improve observability (dashboards, alerts, SLOs, runbooks); 6) Enable secure access provisioning and auditing; 7) Implement schema evolution and data contract safeguards; 8) Integrate data quality checks into workflows; 9) Optimize cost and performance with FinOps; 10) Enable stakeholders through docs, office hours, and standards |
| Top 10 technical skills | 1) Cloud (AWS/Azure/GCP); 2) SQL (advanced); 3) Orchestration (Airflow/Dagster/Prefect); 4) IaC (Terraform); 5) CI/CD practices; 6) Python (and/or JVM); 7) Warehousing/lakehouse concepts; 8) Observability fundamentals; 9) Data security (IAM, secrets, encryption); 10) Spark/dbt integration (common in practice) |
| Top 10 soft skills | 1) Systems thinking; 2) Operational ownership; 3) Stakeholder empathy; 4) Clear technical communication; 5) Pragmatic prioritization; 6) Influence without authority; 7) Quality mindset; 8) Learning agility; 9) Incident calmness and coordination; 10) Documentation discipline |
| Top tools or platforms | Cloud platform (AWS/Azure/GCP), Snowflake/BigQuery/Redshift (context), S3/ADLS/GCS, Airflow (or equivalent), Spark/Databricks (context), dbt, Terraform, GitHub/GitLab, Datadog/Grafana, Vault/Secrets Manager, Jira/Confluence |
| Top KPIs | Time-to-onboard new source, SLO attainment, incident rate, MTTR/MTTD, data freshness SLA attainment, data quality coverage, change failure rate, cost per TB processed, access request cycle time, stakeholder satisfaction/adoption rate |
| Main deliverables | IaC modules, CI/CD pipelines, ingestion frameworks/connectors, orchestration templates, runbooks, monitoring dashboards and alerts, access control automation, governance standards and documentation, upgrade/migration plans, platform architecture artifacts |
| Main goals | 30/60/90-day onboarding-to-impact delivery; 6–12 month standardization of golden paths, improved reliability and governance automation, measurable cost optimization, increased adoption and reduced toil |
| Career progression options | Senior Data Platform Engineer → Staff Data Platform Engineer; Data Reliability Engineer; Data Engineering Tech Lead; ML Platform Engineer (context-specific); Platform Engineering/SRE track; Engineering Management or Data Platform Product Management (internal platform) |
