
Dynatrace Administration Professional Certification Master Guide

Last verified: April 24, 2026
Audience: Dynatrace administrators, platform owners, observability platform teams, SRE leads, operations teams, and anyone preparing for Dynatrace Administration Professional Certification.


0. What this guide is

This is a one-stop master guide for Dynatrace Administration Professional Certification preparation.

It focuses on the skills an administrator needs to manage the Dynatrace SaaS platform for an organization: access, governance, settings, monitoring configuration, data ingestion, cost/retention, alerting, automation, privacy, platform operations, and configuration at scale.

This is not an exam dump. It is a structured study and operations guide that prepares you for scenario-based certification questions and real administration work.


1. Current certification snapshot

1.1 What the certification validates

The Dynatrace Administration Professional Certification is aimed at practitioners who manage the Dynatrace SaaS platform for their organization. It validates that the holder can maintain Dynatrace environments so that day-to-day users have correct access and the platform functions reliably.

In practical terms, this means you should be able to:

  • Manage users, groups, permissions, IAM policies, and access scopes.
  • Understand account-level vs environment-level administration.
  • Configure SAML/SCIM identity integration concepts.
  • Manage platform settings and understand settings hierarchy.
  • Use tags, management zones, segments, and security context appropriately.
  • Understand ActiveGate, network zones, OneAgent connectivity, and data routing concepts.
  • Configure alerting profiles, notifications, anomaly detection, and maintenance windows.
  • Understand Grail, buckets, data retention, logs, OpenPipeline, and cost control.
  • Understand DPS/classic licensing concepts and consumption monitoring.
  • Understand API tokens, platform tokens, OAuth clients, and secure automation access.
  • Understand configuration as code using Monaco and related permission requirements.
  • Apply privacy, masking, credential vault, and audit concepts.
  • Support Dynatrace users by keeping the environment organized, secure, cost-aware, and reliable.

1.2 Publicly visible learning path status

The current public Dynatrace University catalog lists:

  • Dynatrace Administration Professional Certification Learning Path
  • 2 courses
  • 3h 15m total duration
  • Free learning path

Details such as exam registration, scheduling, pass mark, retake rules, and exact exam format may require a Dynatrace University login to view.

1.3 What to verify before booking

Before booking the exam, verify these inside Dynatrace University:

  • Current question count
  • Exam duration
  • Whether the exam includes practical tasks
  • Whether practical tasks are open book
  • Passing score
  • Retake waiting period
  • Retake cost or voucher rules
  • Whether ProctorU or another provider is used
  • Whether online proctoring language is English only
  • Whether partner and customer tracks differ
  • Required or recommended prerequisite certifications
  • Whether Associate certification is required or only recommended
  • Whether the exam uses latest Dynatrace, Dynatrace Classic, or both
  • Whether hands-on tasks require screenshots, tenant access, or answers entered directly into an exam form

1.4 Recommended prerequisite knowledge

Before studying Administration Professional, you should already understand Associate-level concepts:

  • OneAgent
  • ActiveGate
  • Dynatrace Platform / Tenant
  • Grail
  • OpenPipeline
  • Smartscape
  • PurePath / distributed traces
  • Davis / Dynatrace Intelligence
  • Infrastructure Observability
  • Application Observability
  • Log Management and Analytics
  • Digital Experience Monitoring
  • Dashboards and notebooks
  • Problems, events, alerts, and workflows

If any of these are weak, review the Associate-level guide first.


2. Administrator mental model

A Dynatrace administrator is not just a user who can click through settings. A good administrator owns the platform's operating model.

2.1 The administrator's responsibility map

Identity and access:
  Users, groups, policies, SAML, SCIM, OAuth, tokens

Environment organization:
  Accounts, environments, management zones, segments, tags, ownership model

Monitoring governance:
  OneAgent modes, ActiveGate, network zones, host groups, updates, cloud/Kubernetes integrations

Data governance:
  Grail buckets, retention, OpenPipeline, logs, masking, ingestion rules, storage access

Alert governance:
  Anomaly detection, metric events, alerting profiles, notifications, maintenance windows

Automation governance:
  Workflows, integrations, Monaco, APIs, platform tokens, OAuth clients

Security and privacy:
  Credential vault, data masking, audit logs, access controls, sensitive data handling

Cost and license control:
  DPS/classic consumption, usage reporting, budgets, cost allocation, high-volume data sources

User enablement:
  Dashboards, notebooks, documentation, training, support, troubleshooting

2.2 The admin operating principle

Every administration decision should answer four questions:

  1. Who should have access?
    Use groups, policies, management zones, segments, security context, bucket permissions, and least privilege.
  2. What data should be collected and stored?
    Use OneAgent settings, log ingest rules, OpenPipeline, buckets, retention, and masking.
  3. Who should be notified and when?
    Use anomaly detection, alerting profiles, problem notifications, workflows, and maintenance windows.
  4. How will this scale safely?
    Use naming standards, tags, automation, Monaco, API governance, and cost controls.

3. Dynatrace account and environment model

3.1 Account

A Dynatrace account is the account-level administrative container for users, groups, environments, policies, licenses/subscriptions, OAuth clients, SAML/SCIM, and account-level access.

An account can contain one or more environments.

3.2 Environment / tenant

An environment or tenant is where users monitor, analyze, configure, and operate Dynatrace for a specific scope.

Examples:

  • Production environment
  • Non-production environment
  • Region-specific environment
  • Business-unit environment
  • Sandbox/training environment

3.3 Account Management

Account Management is where administrators commonly handle:

  • People / users
  • Groups
  • Policies
  • Domain verification
  • SAML configuration
  • SCIM configuration
  • OAuth clients
  • Subscription/license views
  • Cost and usage views
  • Environment access

3.4 Latest Dynatrace vs Dynatrace Classic

Administrators should understand that Dynatrace documentation and UI often distinguish between:

  • Latest Dynatrace: the modern platform experience with Grail, Apps, IAM policies, platform tokens, OpenPipeline, Workflows, and new platform services.
  • Dynatrace Classic: older environment settings, classic permissions, classic dashboards, classic problem notifications, and classic APIs.

Certification questions may use both concepts, especially when asking about migration, legacy vs current access models, or classic features still widely used.

3.5 Admin exam focus

Expect scenario questions such as:

  • A new team needs access only to a specific application. What should you configure?
  • A group needs to manage settings but not users. Which access mechanism is appropriate?
  • You need SSO with corporate identity. What must be verified first?
  • You need automation access for Monaco. What credential type and permissions are needed?
  • Users are querying too much log data. Which cost-control or retention options help?

4. Identity and Access Management overview

IAM is one of the most important Administration Professional topics.

4.1 IAM purpose

Dynatrace IAM controls:

  • Who can sign in
  • Which account they belong to
  • Which groups they are in
  • Which environments they can access
  • Which platform resources they can use
  • Which data they can query
  • Which settings they can view or modify
  • Which APIs or automations can act on their behalf

4.2 IAM components

Key IAM components:

  • Users
  • Groups
  • Policies
  • Policy boundaries
  • Default policies
  • Role-based permissions / classic permissions
  • SAML federation
  • SCIM provisioning
  • Domain verification
  • Platform tokens
  • OAuth clients
  • Access tokens classic
  • Service users
  • Effective policies

4.3 Authentication vs authorization

Authentication

Authentication proves who the user or system is.

Examples:

  • Dynatrace local login
  • SAML SSO through a corporate identity provider
  • OAuth client credentials for automation
  • Platform token used by a script

Authorization

Authorization decides what the authenticated user or system can do.

Examples:

  • View dashboards
  • Manage settings
  • Manage users
  • Query Grail data
  • Manage buckets
  • Create workflows
  • Access only one management zone

4.4 Least privilege principle

Administrators should assign the minimum permissions needed.

Bad example:

  • Everyone gets admin access because it is easier.

Good example:

  • Developers can view their application, logs, traces, dashboards, and problems.
  • SREs can manage alerting and workflows for owned services.
  • Platform admins manage IAM, policies, buckets, OpenPipeline, and global settings.
  • Finance/license admins can view subscription and usage information.

5. Users

5.1 What users are

Users are people who access Dynatrace. They can be managed directly in Dynatrace or federated through an external identity provider.

5.2 User administration tasks

You should know how to:

  • Invite users
  • Assign users to groups
  • Remove users from groups
  • Deactivate or remove users
  • Export user lists
  • Identify user group memberships
  • Manage emergency contacts
  • Understand non-federated vs federated users

5.3 User lifecycle

A recommended lifecycle:

Request access
  ↓
Determine role/team/application/environment
  ↓
Add to correct IdP group or Dynatrace group
  ↓
Validate effective permissions
  ↓
User performs required task
  ↓
Periodic access review
  ↓
Remove access when no longer needed

5.4 Common mistakes

  • Giving direct access without group-based governance.
  • Forgetting to remove users when they leave a team.
  • Mixing local and federated access without clear ownership.
  • Not maintaining emergency/fallback admin users for SAML outage scenarios.
  • Assigning users to too many overlapping groups without reviewing effective permissions.

6. Groups

6.1 What groups are

Groups are collections of users. In Dynatrace, users inherit access permissions through group membership.

6.2 Why groups matter

Groups allow administrators to manage access at scale.

Example group model:

  • dt-admins-global
  • dt-platform-admins
  • dt-sre-prod-viewers
  • dt-sre-prod-operators
  • dt-dev-payments-readonly
  • dt-dev-checkout-operators
  • dt-finops-license-viewers
  • dt-security-appsec-admins
  • dt-automation-service-users

6.3 Group design patterns

Role-based group pattern

Groups are based on job function:

  • Observability admins
  • Developers
  • SREs
  • Executives
  • Security analysts
  • Finance/license viewers

Application-based group pattern

Groups are based on ownership:

  • Payments team
  • Checkout team
  • Search team
  • Mobile team
  • Platform team

Environment-based group pattern

Groups are based on environment:

  • Production viewers
  • Production operators
  • Non-production admins
  • Sandbox users

Combined pattern

Best for large organizations:

Role + Scope + Environment

Examples:
  dt-payments-prod-viewer
  dt-payments-prod-operator
  dt-payments-nonprod-admin
  dt-platform-global-admin
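The combined pattern lends itself to automated checks. The sketch below validates names against a hypothetical dt-<scope>-<env>-<role> convention modeled on the examples above; the allowed environment and role values are assumptions you would adapt to your own model.

```python
import re

# Hypothetical convention from the examples above: dt-<scope>-<env>-<role>,
# e.g. dt-payments-prod-viewer. Adjust the allowed values to your own model.
GROUP_PATTERN = re.compile(
    r"^dt-(?P<scope>[a-z0-9]+)-(?P<env>prod|nonprod|global)-(?P<role>viewer|operator|admin)$"
)

def validate_group_name(name: str) -> bool:
    """Return True if the group name follows the naming convention."""
    return GROUP_PATTERN.match(name) is not None
```

Running such a check against an exported group list quickly surfaces names that drifted from the standard.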

6.4 Exam focus

Know that users inherit access from groups, and groups are bound to permissions or policies.


7. Policies and permissions

7.1 What policies do

Policies define whether actions in Dynatrace are allowed. When policies are bound to user groups, they describe an access pattern that is enforced at runtime.

7.2 Policy-based access

Modern Dynatrace uses fine-grained IAM policies to control platform access.

Policies can define access to:

  • Apps
  • Settings
  • Dashboards
  • Documents
  • Notebooks
  • Workflows
  • Buckets
  • DQL query execution
  • OpenPipeline
  • Segments
  • SLOs
  • Account-level operations
  • Platform services

7.3 Default policies

Dynatrace provides default policies such as standard user and pro user patterns. These are useful starting points, but enterprise administrators often need custom policies.

7.4 Classic role-based permissions

Classic permissions are older permissions used by Dynatrace Classic functionality. Administrators may need to understand both modern policies and classic role-based permissions because many environments still use a mix.

7.5 Policy boundaries

Policy boundaries further restrict what a policy can do. They are useful when users need broad function access but limited scope.

Example:

  • A user may be allowed to view logs only in selected storage buckets.
  • A team may manage settings only for entities with a specific security context.
  • A group may query data only for allowed segments.

7.6 Effective permissions

Effective permissions are the actual permissions a user receives after all group memberships, policies, classic permissions, and boundaries are evaluated.

Always validate effective permissions when troubleshooting access.
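As a conceptual sketch only (not the actual Dynatrace IAM engine), effective permissions can be modeled as the union of each group's policy grants, with each grant optionally narrowed by a boundary:

```python
# Conceptual model: effective permissions = union over all group bindings of
# (granted permissions, intersected with the binding's boundary if one exists).
# The real Dynatrace IAM evaluation is more nuanced; this only illustrates
# why boundaries can leave a user with less access than a policy alone grants.

def effective_permissions(user_groups, policy_bindings):
    """policy_bindings maps group -> list of (granted, boundary) tuples,
    where granted and boundary are sets, and boundary is None if unbounded."""
    effective = set()
    for group in user_groups:
        for granted, boundary in policy_bindings.get(group, []):
            scope = granted if boundary is None else granted & boundary
            effective |= scope
    return effective
```

Walking through two groups, one of which carries a boundary, shows how a user can hold a policy that grants log access in one binding yet receive only a subset of it through another.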

7.7 Policy design best practices

  • Use least privilege.
  • Use groups, not one-off user assignments.
  • Separate read, operate, and admin roles.
  • Separate production and non-production access.
  • Name policies clearly.
  • Document why each policy exists.
  • Review effective permissions after changes.
  • Avoid broad admin rights except for platform administrators.
  • Use service users and scoped tokens for automation.

7.8 Exam focus

Expect questions like:

  • Which mechanism grants permissions to users? Groups and policies.
  • How do you create fine-grained access? IAM policies and boundaries.
  • How do you troubleshoot unexpected access? Check group membership and effective policies.
  • How do you support automation? OAuth clients, platform tokens, service users, and correct scopes/policies.

8. SAML, SCIM, and domain verification

8.1 Why SAML matters

SAML allows Dynatrace SaaS users to authenticate through a corporate identity provider.

Examples of IdPs:

  • Microsoft Entra ID / Azure AD
  • Okta
  • Google Workspace
  • Ping Identity
  • ADFS

8.2 Why SCIM matters

SCIM automates user and group provisioning from an identity provider into Dynatrace.

SCIM helps with:

  • User provisioning
  • User deprovisioning
  • Group synchronization
  • Reduced manual user management
  • Better access governance

8.3 Domain verification

Before configuring SAML or SCIM for an email domain, administrators must prove ownership of the domain, usually by adding a DNS TXT record.

Important concept:

  • Domain verification is required to prove the organization owns the email domain used for federated identity.
  • A domain verified for SAML may also be valid for SCIM.

8.4 SAML configuration flow

Typical flow:

Create fallback admin account
  ↓
Verify domain ownership
  ↓
Create SAML configuration in Account Management
  ↓
Configure IdP with Dynatrace metadata
  ↓
Configure Dynatrace with IdP metadata
  ↓
Map attributes and groups if needed
  ↓
Test with pilot users
  ↓
Roll out to more users
  ↓
Monitor sign-in and access issues

8.5 Fallback admin account

Before enabling SAML, maintain an emergency/fallback admin account outside the federated domain if supported by your governance rules.

Why:

  • If SAML breaks, you need a way to recover access.
  • If the IdP is unavailable, administrators may be locked out.

8.6 SAML authorization patterns

SAML can be used only for authentication, or also for authorization through group mapping.

Common patterns:

  • Authenticate via SAML, manage groups in Dynatrace.
  • Authenticate via SAML, map IdP groups to Dynatrace groups.
  • Use SCIM for group provisioning and keep authorization in IdP.

8.7 SCIM constraints to know

Key SCIM concepts:

  • Users must belong to verified email domains.
  • User identifiers should be persistent.
  • Email changes may not be supported in some workflows.
  • Group provisioning should be planned carefully to avoid accidental permission changes.

8.8 Exam focus

Know:

  • SAML = federated authentication / SSO.
  • SCIM = automated user/group provisioning.
  • Domain verification is required.
  • Keep fallback access.
  • IdP group mapping can simplify authorization.

9. Tokens, OAuth clients, and API access

9.1 Why token governance matters

Administrators often need programmatic access for:

  • Automation
  • Scripts
  • CI/CD
  • Monaco
  • Terraform
  • Data export
  • API integrations
  • Account management
  • Settings deployment
  • Workflow management

Incorrect token management can create security and audit risks.

9.2 Token and credential types

Platform tokens

Platform tokens are long-lived tokens for programmatic access to Dynatrace platform services. They operate within the permissions of the assigned user or service user.

Good for:

  • Scripts
  • Direct API integrations
  • Scheduled Grail queries
  • Dashboard sync scripts
  • Business metric or event ingestion

OAuth clients

OAuth clients use client credentials and are suitable for service-to-service integrations and account-management automation.

Good for:

  • External system integrations
  • Monaco automation
  • Account Management API
  • CI/CD deployments
  • Enterprise automation

Access tokens classic

Classic access tokens are used for older Dynatrace Environment API and Configuration API scenarios. In latest Dynatrace, prefer platform tokens or OAuth clients where supported.

Service users

Service users are non-human identities used by applications, services, or automation.

9.3 Token best practices

  • Use least privilege scopes.
  • Prefer service users for automation.
  • Avoid personal tokens for production automation where possible.
  • Rotate tokens regularly.
  • Set expirations where feasible.
  • Store secrets in a password manager or secure vault.
  • Never put tokens in scripts, source control, wiki pages, or screenshots.
  • Disable tokens that are no longer used.
  • Document token owner, purpose, scopes, and expiry.
  • Use OAuth clients for automation requiring specific scopes and governance.
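A token inventory review can be partly automated. The sketch below flags tokens that are expired or overdue for rotation; the 90-day rotation period and the inventory field names are illustrative assumptions, not a Dynatrace API shape.

```python
from datetime import date, timedelta

# Assumed rotation policy for illustration; pick a period that matches
# your organization's security standards.
ROTATION_PERIOD = timedelta(days=90)

def tokens_needing_action(inventory, today):
    """Return names of tokens that are past expiry or overdue for rotation.
    Each inventory entry is a dict with name, expires (date or None),
    and last_rotated (date) -- invented fields for this sketch."""
    flagged = []
    for token in inventory:
        expired = token.get("expires") is not None and token["expires"] < today
        overdue = today - token["last_rotated"] > ROTATION_PERIOD
        if expired or overdue:
            flagged.append(token["name"])
    return flagged
```

Documenting owner, purpose, scopes, and expiry (as recommended above) is what makes a report like this possible in the first place.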

9.4 Common scopes and permissions to recognize

You do not need to memorize every scope, but you should understand scope categories:

  • Read settings
  • Write settings
  • Read schemas
  • Manage workflows
  • Manage calendars/rules
  • Manage buckets
  • Manage documents
  • Manage OpenPipeline
  • Manage segments
  • Manage SLOs
  • Query Grail data
  • Access account management

9.5 Monaco authentication

Monaco supports platform tokens and OAuth clients. Each configuration type requires the correct scopes and permissions.

For example, Monaco may need permissions to:

  • Read/write Settings 2.0 objects
  • Read schemas
  • Manage workflows
  • Manage buckets
  • Manage OpenPipeline
  • Manage segments
  • Manage SLOs

9.6 Exam focus

Expect scenario questions like:

  • A CI/CD pipeline needs to deploy settings. Which access type is appropriate?
  • A script needs long-lived platform API access. Which token type may fit?
  • A personal access token was committed to Git. What should you do? Revoke/rotate immediately.
  • A Monaco deployment fails with unauthorized. What should you check? OAuth/platform token scopes and user/group policies.

10. Settings app and settings framework

10.1 What the Settings app is

Settings is a preinstalled Dynatrace app that centralizes environment configuration. It controls how data is collected, processed, stored, and analyzed.

10.2 What administrators use Settings for

Common settings areas:

  • Preferences
  • Data privacy
  • OneAgent behavior
  • Log monitoring
  • OpenPipeline
  • Storage management
  • Anomaly detection
  • Alerting
  • Tags
  • Management zones
  • Synthetic settings
  • RUM settings
  • API/token settings
  • Integration settings
  • Extension settings

10.3 Settings access

Access to settings is controlled by IAM policies. Some users may only read settings; others may modify settings.

10.4 Settings scope and hierarchy

Many settings can exist at different scopes:

  • Global/environment level
  • Host group level
  • Host level
  • Process group level
  • Service level
  • Application level
  • Entity-specific scope

Important rule:

The most specific setting usually takes precedence.

Example:

Host-level setting
  overrides
Host-group-level setting
  overrides
Environment-level setting
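The precedence rule above can be expressed as a small resolver. The scope order here is illustrative; actual scope hierarchies vary by settings schema.

```python
# Illustrative scope order, least to most specific. Real Dynatrace settings
# schemas define their own applicable scopes.
SCOPE_PRECEDENCE = ["environment", "host_group", "host"]

def resolve_setting(values_by_scope):
    """values_by_scope maps scope name -> configured value (absent if unset).
    Returns the value from the most specific scope that defines one."""
    for scope in reversed(SCOPE_PRECEDENCE):
        if scope in values_by_scope:
            return values_by_scope[scope]
    return None  # no explicit value: the schema default applies
```

This mirrors the example: a host-level value wins over a host-group value, which wins over the environment-level value.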

10.5 Settings objects and schemas

The Settings framework uses:

  • Settings object: an actual configuration instance.
  • Settings schema: the structure that defines which parameters the object supports.

Schemas are managed by Dynatrace. Objects are controlled by administrators.
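To make the object/schema distinction concrete, here is a minimal sketch in which an invented schema defines allowed keys and types and an object is validated against it. Real Settings 2.0 schemas are JSON documents managed by Dynatrace; this shape is purely illustrative.

```python
# Invented example schema: which parameters an object may carry and their types.
SCHEMA = {"enabled": bool, "sampleRate": int}

def validate_object(obj):
    """Return True if the settings object (a dict) matches the illustrative
    schema: exactly the schema's keys, each with the expected type."""
    return set(obj) == set(SCHEMA) and all(
        isinstance(obj[key], expected) for key, expected in SCHEMA.items()
    )
```

The key point survives the simplification: administrators create and edit objects, while the schema that constrains them is fixed by Dynatrace.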

10.6 Programmatic settings management

Administrators can manage settings through:

  • Settings app
  • Settings API
  • Monaco configuration as code
  • Terraform provider in some use cases

10.7 Exam focus

Know:

  • Settings is the central place for environment configuration.
  • Settings can be scoped.
  • More specific settings override broader settings.
  • Settings access is controlled by IAM policies.
  • Settings can be managed through UI, API, and Monaco.

11. Configuration as Code with Monaco

11.1 What Monaco is

Monaco is Dynatrace's native Configuration as Code CLI. It enables administrators to manage monitoring configuration through files, version control, reviews, and deployment pipelines.

11.2 Why Monaco matters

Monaco helps with:

  • Standardizing configuration across environments
  • Promoting settings from dev to staging to prod
  • Backing up configurations
  • Version-controlling changes
  • Reviewing configuration changes through pull requests
  • Reusing templates
  • Reducing manual UI drift

11.3 Typical Monaco workflow

Export/download existing configuration
  ↓
Store in Git
  ↓
Review and edit YAML/config files
  ↓
Deploy to target environment
  ↓
Validate configuration
  ↓
Promote to additional environments

11.4 What can be managed through Monaco

Depending on current support and permissions, Monaco can manage many settings and platform resources, such as:

  • Settings 2.0 objects
  • Dashboards/documents
  • Workflows
  • Buckets
  • OpenPipeline
  • Segments
  • SLOs
  • Alerting-related settings
  • Some classic configuration objects

11.5 Monaco authentication

Monaco needs platform tokens or OAuth clients with correct scopes and matching user/group permissions.

Common failure causes:

  • Missing OAuth scope
  • User lacks policy permission
  • Service user lacks group membership
  • Wrong environment/account target
  • Missing schema read permission
  • Token expired or disabled

11.6 Monaco best practices

  • Store configuration in Git.
  • Use separate folders for environments.
  • Use variables for environment-specific values.
  • Review changes through pull requests.
  • Use service users rather than personal accounts.
  • Use least privilege OAuth scopes.
  • Test in non-production before production.
  • Keep naming conventions consistent.
  • Document ownership of each configuration package.

11.7 Exam focus

Know that Monaco is for Configuration as Code and helps manage Dynatrace configuration at scale. Understand why OAuth scopes and policies matter.


12. Tags and metadata

12.1 What tags are

Tags are labels applied to monitored entities. They are used to organize, filter, alert, analyze, and scope data.

Examples:

  • environment:production
  • team:payments
  • application:checkout
  • owner:sre
  • criticality:high
  • region:apac
  • cost-center:1234

12.2 Why tags matter

Tags are used for:

  • Filtering dashboards
  • Filtering problems
  • Alerting profiles
  • Maintenance windows
  • Ownership mapping
  • Team-based views
  • Search and navigation
  • Cost attribution patterns
  • Management zone rules

12.3 Manual tags

Manual tags are useful for a small number of static entities.

Use manual tags when:

  • You only need to tag a few entities.
  • The tag does not follow predictable metadata.
  • The entity is temporary or special.

12.4 Automatic tags

Automatic tags are created based on rules.

Use automatic tags when:

  • You need consistency at scale.
  • You can match entity metadata.
  • You want tags derived from cloud tags, Kubernetes labels, host names, process group names, or custom properties.

12.5 Tag governance

Recommended naming model:

key:value

Examples:
  app:checkout
  env:prod
  owner:platform
  service-tier:gold

Avoid inconsistent tags like:

Prod
production
prd
PROD
Environment Production
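Inconsistent labels like these can be normalized into the key:value convention before tagging. The alias table below is an example, not an exhaustive mapping:

```python
# Example alias table mapping free-form environment labels to one canonical
# value. Extend with the variants that actually occur in your environment.
ENV_ALIASES = {
    "prod": "prod",
    "production": "prod",
    "prd": "prod",
    "environment production": "prod",
}

def normalize_env_tag(raw: str) -> str:
    """Map a free-form environment label to a canonical env:<value> tag."""
    value = ENV_ALIASES.get(raw.strip().lower())
    if value is None:
        raise ValueError(f"unknown environment label: {raw!r}")
    return f"env:{value}"
```

Failing loudly on unknown labels is deliberate: silent guesses are how inconsistent tags creep back in.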

12.6 Exam focus

Know:

  • Tags organize and filter entities.
  • Tags can be manual or automatic.
  • Tags are important for alerting, dashboards, maintenance windows, and management zone rules.

13. Management zones, segments, and security context

This is a high-value administration topic.

13.1 Management zones

Management zones organize Dynatrace environments and control user access to specific data.

Management zones are defined by rules that determine which entities and dimensional data are included.

Examples:

  • Production services only
  • Payments application
  • APAC region
  • Kubernetes platform team
  • Database team

13.2 What management zones affect

Management zones can affect:

  • Entity visibility
  • Dashboards
  • Problems
  • Alerting scopes
  • Maintenance windows
  • Classic Dynatrace access control
  • Team-specific views

13.3 Management zone rule examples

Examples:

  • Include services tagged app:checkout.
  • Include hosts with host group prod-linux.
  • Include Kubernetes workloads with label team=payments.
  • Include entities belonging to a process group naming pattern.
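Conceptually, a management zone is a set of include rules evaluated against entity metadata. The sketch below models tag and host-group rules only; real Dynatrace rules are richer (dimensional rules, entity relations, propagation).

```python
# Conceptual sketch: an entity is in the zone if any include rule matches.
# Rule and entity shapes are invented for illustration.

def in_management_zone(entity, rules):
    """entity: dict with 'tags' (set of key:value strings) and 'host_group'.
    rules: list of dicts like {"tag": "app:checkout"} or
    {"host_group": "prod-linux"}."""
    for rule in rules:
        if "tag" in rule and rule["tag"] in entity.get("tags", set()):
            return True
        if "host_group" in rule and rule["host_group"] == entity.get("host_group"):
            return True
    return False
```

This any-rule-matches behavior is why poorly scoped rules quietly pull extra entities into a zone.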

13.4 Management zones and problems

When a problem spans multiple zones, users may see an end-to-end view of the problem, but detailed analysis is limited to entities they are allowed to view.

13.5 Segments

Latest Dynatrace introduces segments as a more modern way to define data visibility and access patterns for cloud-native and AI-native environments.

Administrators should understand that organizations may need to migrate or map use cases from classic management zones to newer access-control models such as segments and security context.

13.6 Security context

Security context helps grant access to entities and related data based on context. In latest Dynatrace, access control for Grail-powered data such as logs, spans, metrics, and events may use storage fields and policy-based access rather than classic management-zone inheritance.

13.7 Management zones vs Grail data access

Important concept:

Classic management zones do not automatically solve all Grail data access cases. Logs, spans, metrics, and events in Grail may require IAM policy controls, storage fields, buckets, segments, or security context.

13.8 Exam focus

Know:

  • Management zones organize entities and control access in classic-style scopes.
  • Segments/security context are important in latest Dynatrace access control.
  • Grail data access may require IAM/policy/storage-based controls.
  • Do not assume management zones automatically protect every log, span, metric, or event.

14. Host groups

14.1 What host groups are

Host groups are used to group monitored hosts logically, often by environment, application, or operational ownership.

Examples:

  • prod-payments
  • nonprod-checkout
  • linux-web-tier
  • k8s-platform

14.2 Why host groups matter

Host groups can influence:

  • Monitoring configuration
  • Settings scope
  • Alerting behavior
  • Naming and organization
  • OneAgent behavior
  • Entity relationships

14.3 Host group best practices

  • Define host groups before broad OneAgent rollout.
  • Use consistent naming.
  • Avoid overly broad host groups.
  • Align host groups with operational ownership.
  • Understand that changing host groups after installation may require process or service restarts to fully apply some changes.

14.4 Exam focus

Host groups are an important scoping mechanism for OneAgent-monitored hosts and settings.


15. ActiveGate and network zones

15.1 ActiveGate refresher

ActiveGate is a secure communication gateway/proxy used between monitored environments and Dynatrace.

Common uses:

  • OneAgent traffic routing
  • Private network communication
  • Cloud integrations
  • Kubernetes support
  • Extensions execution
  • Private synthetic locations
  • Reducing outbound firewall rules
  • Connectivity isolation

15.2 Network zones

Network zones represent network structure in Dynatrace. They help route traffic efficiently and avoid unnecessary cross-data-center or cross-region communication.

15.3 OneAgent connectivity in network zones

Network zones can influence which ActiveGates OneAgents use. A network zone can prioritize ActiveGates in the same zone.

15.4 Why administrators care

Administrators should understand:

  • Which hosts can reach which ActiveGates.
  • Which ActiveGates can reach Dynatrace.
  • How to minimize cross-region traffic.
  • How to design zones for data centers, cloud regions, and network boundaries.
  • How ActiveGate high availability works conceptually.

15.5 Network zone design example

Network zones:
  nz-aws-us-east-1
  nz-aws-eu-west-1
  nz-onprem-dc1
  nz-onprem-dc2

ActiveGates:
  ag-us-east-1-a, ag-us-east-1-b
  ag-eu-west-1-a, ag-eu-west-1-b
  ag-dc1-a, ag-dc1-b

OneAgents:
  Configured to prefer ActiveGates in their local network zone
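The preference logic can be sketched as follows. Names mirror the example above; the selection algorithm is illustrative, not the actual OneAgent implementation.

```python
# Illustrative zone-aware routing: prefer healthy ActiveGates in the agent's
# own network zone, fall back to healthy ActiveGates elsewhere only if needed.

def pick_activegates(agent_zone, activegates):
    """activegates: list of (name, zone, healthy) tuples.
    Returns candidate names ordered local-zone-first, healthy only."""
    healthy = [(name, zone) for name, zone, ok in activegates if ok]
    local = [name for name, zone in healthy if zone == agent_zone]
    fallback = [name for name, zone in healthy if zone != agent_zone]
    return local + fallback
```

The fallback list is what keeps monitoring data flowing when every local ActiveGate is down, at the cost of cross-zone traffic.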

15.6 Exam focus

Know:

  • ActiveGate routes/proxies monitoring traffic and supports integrations.
  • Network zones model network structure and prioritize connectivity.
  • Network zones help reduce unnecessary traffic across data centers or regions.
  • ActiveGate does not replace OneAgent.

16. OneAgent administration

16.1 What administrators need to know

Even though OneAgent installation may be more implementation-focused, administrators need to manage ongoing OneAgent behavior.

16.2 Key admin areas

  • Deployment status
  • Monitoring modes
  • Auto-update configuration
  • Host group assignment
  • Network zone assignment
  • Exclusions
  • Log collection settings
  • Sensitive data masking
  • Remote configuration
  • Health and connectivity
  • Environment-wide settings

16.3 Monitoring modes

Common OneAgent modes:

  • Full-stack monitoring
  • Infrastructure monitoring
  • Discovery mode
  • Application-only monitoring in some deployment patterns

16.4 Auto-update governance

Administrators should decide:

  • Should OneAgents auto-update automatically?
  • Are there maintenance windows for updates?
  • Are some environments updated before others?
  • Are production updates delayed or staged?
  • Who owns update validation?
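One way to answer the staging question is to define explicit update waves. The wave numbers and environment prefixes below are assumptions for illustration, not Dynatrace settings values.

```python
# Assumed rollout policy: sandbox updates first, then non-production,
# then production. Host group names are assumed to start with their
# environment prefix (see the host group examples later in this guide).
UPDATE_WAVES = {"sandbox": 1, "nonprod": 2, "prod": 3}

def update_order(host_groups):
    """Sort host groups so earlier waves update first; unknown prefixes
    are treated as the last (most cautious) wave."""
    def wave(host_group):
        env = host_group.split("-", 1)[0]
        return UPDATE_WAVES.get(env, max(UPDATE_WAVES.values()))
    return sorted(host_groups, key=wave)
```

A staged order like this gives update validation owners time to catch a bad agent version before it reaches production.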

16.5 Deployment health checklist

For each environment:

  • Are all expected hosts monitored?
  • Are OneAgents healthy?
  • Are OneAgents current or within supported version range?
  • Are network zones configured correctly?
  • Are ActiveGates reachable?
  • Are host groups correct?
  • Are unwanted hosts excluded?
  • Are process groups and services discovered as expected?

16.6 Exam focus

Know how OneAgent administration connects to host groups, network zones, ActiveGate, auto-update, monitoring modes, and log ingestion.


17. Log administration

17.1 Why logs are an admin topic

Logs can create high value and high cost. Administrators must manage ingestion, filtering, retention, masking, access, and query behavior.

17.2 Log ingestion sources

Logs can enter Dynatrace through:

  • OneAgent
  • APIs
  • OpenTelemetry
  • Cloud integrations
  • Extensions
  • Log shippers / integrations

17.3 OneAgent log ingestion

OneAgent can automatically discover logs and offers central management options.

Admins may configure:

  • Which logs are collected
  • Which logs are excluded
  • Log ingest rules
  • Sensitive data masking
  • Host/host group/environment-level log settings

17.4 Grail log storage

On the latest Dynatrace platform, logs are stored in Grail. Administrators manage retention and access using bucket and policy concepts.

17.5 Log retention

Retention should match use case:

  • Short retention for debugging
  • Medium retention for operational investigations
  • Long retention for compliance/audit
  • Dedicated buckets for high-value or regulated logs

17.6 Log cost governance

Cost drivers can include:

  • Ingested log volume
  • Retained log volume
  • Query/scanned data volume
  • Frequent dashboard refreshes over log data
  • Broad DQL queries
  • Long retention on high-volume buckets

17.7 Log governance best practices

  • Filter noisy logs before storage.
  • Route logs to appropriate buckets.
  • Use OpenPipeline to drop, enrich, route, or mask data.
  • Create dedicated buckets for high-volume sources.
  • Set retention by business need.
  • Avoid running broad log queries over long time periods.
  • Control access to buckets.
  • Use dashboards carefully when based on log data.
  • Mask sensitive data before it leaves the environment when required.

17.8 Exam focus

Know:

  • Logs are stored in Grail.
  • Retention can be configured through bucket strategy.
  • Logs can be filtered on ingest using OneAgent or OpenPipeline.
  • DQL queries can impact query consumption/cost depending on licensing model.
  • Sensitive data should be masked early where possible.

18. Grail buckets, retention, and storage governance

18.1 What buckets are

Buckets are logical storage containers in Grail. They help administrators control retention, access, and storage organization.

18.2 Bucket strategy

A good bucket strategy considers:

  • Data type
  • Business owner
  • Retention requirement
  • Compliance requirement
  • Access requirement
  • Query cost pattern
  • Environment
  • Criticality

18.3 Example bucket design

logs_prod_critical_90d
logs_prod_debug_14d
logs_nonprod_7d
logs_security_audit_365d
events_prod_90d
traces_prod_30d
business_events_orders_180d
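
The routing intent behind a design like this can be expressed as a small decision function. Bucket names come from the example above; the matching rules are illustrative only, since real routing would be configured in Dynatrace (for example via OpenPipeline), not in application code.

```python
# Illustrative bucket assignment for a log record based on its attributes.

def assign_bucket(record):
    """Pick a bucket name for a log record (illustrative rules)."""
    if record.get("source") == "security-audit":
        return "logs_security_audit_365d"   # long retention for audit data
    if record.get("environment") != "production":
        return "logs_nonprod_7d"            # short retention outside prod
    if record.get("loglevel") == "DEBUG":
        return "logs_prod_debug_14d"        # debug data kept briefly
    return "logs_prod_critical_90d"         # default for production logs

print(assign_bucket({"environment": "production", "loglevel": "ERROR"}))
# logs_prod_critical_90d
```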

18.4 Retention strategy

Ask:

  • How long do teams need this data for troubleshooting?
  • Is there a compliance requirement?
  • How often will users query historical data?
  • Can high-volume low-value data be dropped or shortened?
  • Should data be split into separate buckets?

18.5 Access strategy

Do not allow every team to query every bucket by default.

Use IAM policies and storage permissions to control:

  • Who can query which buckets
  • Who can manage bucket definitions
  • Who can change retention
  • Who can modify OpenPipeline routing to buckets

18.6 Exam focus

Know:

  • Buckets are important for Grail storage and retention.
  • Retention is configured according to business/compliance needs.
  • Bucket access should be governed with IAM policies.
  • Bucket strategy is part of cost control.

19. OpenPipeline administration

19.1 What OpenPipeline does

OpenPipeline processes telemetry data. It can route, filter, transform, enrich, mask, and contextualize data.

19.2 Why administrators use OpenPipeline

Use cases:

  • Drop noisy logs
  • Route logs to custom buckets
  • Mask sensitive data
  • Extract fields
  • Normalize attributes
  • Convert logs to business events
  • Enrich records with team/application metadata
  • Apply different retention strategies through routing

19.3 OpenPipeline concepts

  • Source
  • Pipeline
  • Processor
  • Matcher/filter condition
  • Record transformation
  • Routing
  • Bucket assignment
  • Data enrichment

19.4 Example scenarios

Scenario 1: Drop debug logs from production

If loglevel == DEBUG and environment == production
  then drop record

Scenario 2: Route payment logs to a dedicated bucket

If app == payments
  then route to logs_prod_payments_90d

Scenario 3: Mask credit card-like patterns

If content contains sensitive payment pattern
  then mask matching value

Scenario 4: Extract order ID

If content contains order_id
  then parse and add attribute order.id
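
The four scenarios above can be modeled as a tiny processor chain: each step has a matcher and an action (drop, route, mask, extract). This is a conceptual sketch of OpenPipeline behavior, not its configuration syntax.

```python
import re

def process(record):
    """Apply the four example processors to one log record (illustrative)."""
    # Scenario 1: drop production DEBUG logs.
    if record.get("loglevel") == "DEBUG" and record.get("environment") == "production":
        return None
    # Scenario 2: route payment logs to a dedicated bucket.
    if record.get("app") == "payments":
        record["bucket"] = "logs_prod_payments_90d"
    # Scenario 3: mask credit-card-like digit runs.
    record["content"] = re.sub(r"\b\d{13,16}\b", "****", record.get("content", ""))
    # Scenario 4: extract an order ID into an attribute.
    m = re.search(r"order_id=(\w+)", record["content"])
    if m:
        record["order.id"] = m.group(1)
    return record

out = process({"app": "payments", "environment": "production",
               "loglevel": "INFO", "content": "order_id=A42 card 4111111111111111"})
print(out["bucket"], out["order.id"], "****" in out["content"])
```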

19.5 OpenPipeline best practices

  • Test changes in non-production first.
  • Document why processors exist.
  • Keep processors simple and readable.
  • Use naming conventions.
  • Monitor data volume before and after changes.
  • Avoid dropping data required for compliance.
  • Coordinate with data owners before changing routing.
  • Validate DQL queries after field extraction.

19.6 Exam focus

Know:

  • OpenPipeline controls ingestion and processing.
  • It can filter, mask, enrich, transform, and route data.
  • It is central to cost, privacy, and data-quality governance.

20. Licensing, subscription, cost, and consumption

20.1 Why licensing matters for administrators

Administrators must understand how Dynatrace usage affects cost and how to monitor consumption.

20.2 Dynatrace Platform Subscription

Dynatrace Platform Subscription, or DPS, is the current strategic licensing model for the latest Dynatrace platform. It provides a single commitment model across platform capabilities, with consumption accruing based on capability usage.

20.3 Classic licensing

Classic licensing may include concepts such as:

  • Host units
  • Host unit hours
  • DEM units
  • Davis Data Units
  • Application Security units

Administrators may need to understand classic licensing if their organization still uses it or is migrating.

20.4 Account Management subscription views

License administrators can view:

  • Consumption
  • Forecasts
  • Cost allocation
  • Historical usage
  • Budget summaries
  • Cost and usage breakdowns by environment or capability

20.5 Cost governance areas

High-impact cost areas:

  • Full-stack host monitoring scale
  • Log ingest volume
  • Log retention duration
  • Log query volume
  • Trace retention
  • Business events
  • Synthetic monitor frequency
  • RUM traffic volume
  • Custom metrics
  • Platform extensions
  • Automation/workflows depending on usage model

20.6 Cost control practices

  • Monitor usage regularly.
  • Create budgets or internal cost guardrails.
  • Use cost allocation for teams/products when available.
  • Optimize log ingest and retention.
  • Filter noisy data.
  • Use separate buckets by retention and access need.
  • Educate users on query cost.
  • Avoid uncontrolled dashboard refreshes over huge datasets.
  • Review new integrations before enabling large-scale ingestion.
  • Set ownership for high-volume sources.
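
A back-of-envelope model shows how ingest volume and retention compound into monthly cost. The unit prices below are invented placeholders for illustration only; real DPS rates come from your Dynatrace contract.

```python
# Illustrative log-cost arithmetic with hypothetical unit prices.

daily_ingest_gib = 200
retention_days = 35
price_ingest_per_gib = 0.20         # hypothetical rate
price_retain_per_gib_day = 0.0007   # hypothetical rate

monthly_ingest_cost = daily_ingest_gib * 30 * price_ingest_per_gib
steady_state_retained_gib = daily_ingest_gib * retention_days
monthly_retention_cost = steady_state_retained_gib * price_retain_per_gib_day * 30

print(round(monthly_ingest_cost + monthly_retention_cost, 2))  # 1347.0
```

The point of the exercise: halving retention or filtering noisy sources changes the second term immediately, which is why bucket strategy and ingest filtering are the usual first levers.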

20.7 Exam focus

Know:

  • DPS is the strategic subscription model for the latest Dynatrace platform.
  • Classic licensing may still exist.
  • Administrators use Account Management to view subscription/license usage.
  • Cost governance is tied to ingestion, retention, query behavior, monitoring scale, and feature usage.

21. Alerting and notifications

21.1 Alerting flow

Telemetry collected
  ↓
Dynatrace detects anomaly or threshold violation
  ↓
Event is created
  ↓
Davis correlates related events into a problem
  ↓
Alerting profile decides whether notification should be sent
  ↓
Notification integration or workflow delivers action

21.2 Alerting profiles

Alerting profiles control which problems generate notifications. They can filter by:

  • Severity
  • Duration
  • Custom events
  • Tags
  • Management zones/scopes in some use cases
  • Problem type or event type depending on configuration

21.3 Notification integrations

Notifications can go to:

  • Email
  • Slack
  • Microsoft Teams
  • PagerDuty
  • Opsgenie
  • ServiceNow
  • Jira
  • Webhooks
  • Ansible Tower
  • Custom integrations

21.4 Default alerting profile

Each environment has a default alerting profile. Administrators often create team-specific profiles to reduce noise and route alerts correctly.

21.5 Alerting profile design

Recommended pattern:

Critical production availability
  → page on-call immediately

Performance degradation lasting 15+ minutes
  → notify team channel

Non-production issue
  → create ticket or send lower-priority notification

Maintenance-tagged entities
  → suppress or delay notifications

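
The recommended pattern can be sketched as a single routing decision. Severity values, channel names, and thresholds below are illustrative assumptions; in Dynatrace the same logic would be spread across alerting profiles and notification integrations.

```python
# Illustrative team-based alert routing decision.

def route_problem(severity, environment, duration_min, tags):
    """Map a problem's attributes to a notification path (illustrative)."""
    if "maintenance" in tags:
        return "suppress"        # maintenance-tagged entities
    if environment != "production":
        return "ticket"          # non-production issues
    if severity == "availability":
        return "page-oncall"     # critical production availability
    if severity == "performance" and duration_min >= 15:
        return "team-channel"    # sustained degradation
    return "none"

print(route_problem("availability", "production", 2, set()))  # page-oncall
print(route_problem("performance", "production", 20, set()))  # team-channel
print(route_problem("error", "staging", 60, set()))           # ticket
```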
21.6 Common mistakes

  • Sending all problems to all teams.
  • Not using duration filters.
  • Not filtering by tags or ownership.
  • Creating duplicate notifications through overlapping profiles.
  • Forgetting to test notification integrations.
  • Alerting on symptoms instead of root-cause problems.

21.7 Exam focus

Know:

  • Alerting profiles filter problem notifications.
  • Notifications integrate with third-party tools.
  • Problems still appear in Dynatrace even without external notification integrations.
  • Avoid alert noise through severity, duration, tags, and team-based routing.

22. Anomaly detection and metric events

22.1 Anomaly detection purpose

Dynatrace continuously monitors applications, services, and infrastructure, learns baselines, and detects abnormal behavior.

22.2 Types of anomaly detection

Examples:

  • Service response time degradation
  • Service failure rate increase
  • Traffic drops or spikes
  • Host CPU saturation
  • Memory saturation
  • Disk problems
  • Network problems
  • Missing data alerts
  • Custom metric events
  • DQL-based advanced custom alerts

22.3 Baselines

Dynatrace can use automatic baselining to detect deviations from normal behavior.

22.4 Static thresholds

Static thresholds trigger when a metric crosses a fixed value.

Good for:

  • Hard capacity limits
  • Compliance thresholds
  • Well-known SLO boundaries

22.5 Auto-adaptive thresholds

Auto-adaptive thresholds adjust based on behavior and are useful for metrics with changing patterns.

22.6 Metric events

Metric events allow administrators to create custom events based on metric thresholds.

22.7 DQL-based anomaly detection

Advanced custom alerts can be based on DQL queries. These need careful design because the query may execute regularly and should be efficient.

22.8 Exam focus

Know:

  • Dynatrace uses baselines and anomaly detection.
  • Admins can adjust sensitivity.
  • Custom metric events can be configured.
  • DQL-based alerts require efficient queries.
  • Missing data alerts are useful when the absence of telemetry is itself a problem.

23. Maintenance windows

23.1 What maintenance windows do

Maintenance windows define periods when planned or unplanned maintenance occurs.

They can:

  • Suppress or modify alerting behavior
  • Exclude maintenance periods from baseline calculations
  • Prevent planned changes from polluting anomaly baselines
  • Filter by tags or management zones

23.2 Planned vs unplanned

Planned maintenance

Defined in advance.

Example:

  • Database upgrade Saturday 01:00–03:00.

Unplanned maintenance

Defined retroactively or for an ongoing outage.

Example:

  • Emergency network outage started 20 minutes ago.
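
The time logic of a planned window like the Saturday upgrade above is simple to reason about. Real maintenance windows are configured in Dynatrace with scopes and recurrence; this sketch only models the "is this timestamp inside the window" check.

```python
from datetime import datetime

def in_window(ts, weekday, start_hour, end_hour):
    """True if ts falls on the given weekday (0=Monday) between the hours."""
    return ts.weekday() == weekday and start_hour <= ts.hour < end_hour

upgrade = datetime(2026, 4, 25, 1, 30)  # a Saturday, 01:30
print(in_window(upgrade, weekday=5, start_hour=1, end_hour=3))  # True
```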

23.3 Best practices

  • Use maintenance windows for planned releases and infrastructure work.
  • Scope windows narrowly using tags or zones.
  • Do not use broad all-environment windows unless necessary.
  • Document maintenance ownership.
  • Align with change-management processes.
  • Validate alert behavior before planned production maintenance.

23.4 Exam focus

Know:

  • Maintenance windows can affect alerting and baseline calculation.
  • They should be scoped carefully.
  • They can be planned or unplanned.

24. Workflows and automation

24.1 What workflows do

Workflows automate operational actions in Dynatrace.

They can be triggered by:

  • Problems
  • Events
  • Schedules
  • Manual execution
  • API calls
  • Other platform events depending on configuration

24.2 Workflow use cases

  • Notify team channels
  • Create incidents
  • Enrich problem context
  • Run remediation actions
  • Query data with DQL
  • Send reports
  • Trigger webhooks
  • Perform checks after deployment
  • Coordinate follow-up actions

24.3 Workflow governance

Administrators should control:

  • Who can create workflows
  • Who can run workflows
  • Which external endpoints workflows can call
  • Which credentials workflows can use
  • Which workflows run automatically on production problems
  • How workflow failures are monitored

24.4 Best practices

  • Start with notification/enrichment workflows.
  • Test in non-production.
  • Use credential vault for secrets.
  • Avoid hardcoding secrets.
  • Add ownership and descriptions.
  • Use rate limits and safety conditions.
  • Log or document workflow output.
  • Avoid automation that can make incidents worse without guardrails.
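
The "rate limits and safety conditions" practice can be sketched as a guardrail around a remediation action: refuse to run more than N times in a window so a flapping problem cannot trigger runaway automation. The class and limits are illustrative; a real workflow would enforce this inside its own conditions.

```python
import time

class RateLimitedAction:
    """Guardrail: allow at most max_runs executions per rolling window."""

    def __init__(self, max_runs, window_seconds=3600):
        self.max_runs = max_runs
        self.window = window_seconds
        self.runs = []

    def allow(self, now=None):
        now = now if now is not None else time.time()
        # Keep only executions still inside the rolling window.
        self.runs = [t for t in self.runs if now - t < self.window]
        if len(self.runs) >= self.max_runs:
            return False  # stop runaway automation
        self.runs.append(now)
        return True

restart = RateLimitedAction(max_runs=3)
print([restart.allow(now=i) for i in range(5)])  # [True, True, True, False, False]
```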

24.5 Exam focus

Know that workflows are the automation layer and can help streamline alerting, notifications, and response.


25. Credential vault

25.1 What Credential vault is

Credential vault stores credentials used by Dynatrace features such as Synthetic Monitoring, extensions, and integrations.

25.2 Why administrators care

Credentials must be secured because they may allow access to internal applications, APIs, cloud services, or third-party systems.

25.3 Credential types

Examples:

  • Username/password
  • Token
  • Certificate
  • AWS credential configurations
  • External vault references

25.4 External vault integration

Dynatrace can integrate with external vaults for some credential types, such as:

  • Azure Key Vault
  • HashiCorp Vault
  • CyberArk

25.5 Best practices

  • Use Credential vault instead of hardcoding credentials.
  • Limit owner access where appropriate.
  • Rotate credentials regularly.
  • Use external vault integration when required by policy.
  • Remove unused credentials.
  • Track which monitors or integrations use a credential.
  • Avoid screenshots or documentation that expose secrets.

25.6 Exam focus

Know that Credential vault securely stores secrets for monitors and integrations, and can be integrated with external vault systems.


26. Data privacy, masking, and sensitive data

26.1 Why privacy matters

Dynatrace can collect URLs, request attributes, logs, user/session data, traces, exception messages, and metadata. Some of this may contain personal or sensitive data.

Administrators must configure controls so sensitive data is not exposed unnecessarily.

26.2 Masking approaches

Mask at capture

Sensitive data is masked before it leaves the monitored environment.

Best when:

  • Data must never leave the customer environment.
  • Compliance requires strong privacy controls.
  • Logs or URLs may contain personal data.

Mask at storage or processing

Data is transformed during processing or ingestion.

Best when:

  • Data can be processed but should not be stored in raw form.

Mask at display

Data is stored but hidden from users unless they have permission.

Best when:

  • Some privileged users need access but most users should not see personal data.

26.3 OneAgent-side masking

OneAgent can mask certain sensitive data at first contact. This helps ensure selected sensitive data is not sent to Dynatrace servers.

26.4 Log masking

Log masking can be configured for log data, including at-capture masking through OneAgent and processing/masking through OpenPipeline.
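
The transformation itself is usually pattern-based, as in this sketch: redact card-number-like digit runs and obvious key=value secrets before a log line is stored. The patterns are deliberately simple illustrations; production masking rules need careful review against real data.

```python
import re

# Illustrative masking patterns, not production-grade rules.
CARD = re.compile(r"\b\d{13,16}\b")
SECRET = re.compile(r"(password|token)=\S+", re.IGNORECASE)

def mask(line):
    """Redact card-like numbers and simple key=value secrets."""
    line = CARD.sub("****MASKED****", line)
    return SECRET.sub(lambda m: m.group(1) + "=****", line)

print(mask("login ok password=hunter2 card=4111111111111111"))
# login ok password=**** card=****MASKED****
```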

26.5 Privacy checklist

  • Identify data that may contain personal information.
  • Decide whether to mask at capture, processing, or display.
  • Use least-privilege access to sensitive data.
  • Validate log collection before production rollout.
  • Avoid collecting secrets, tokens, passwords, or full PII.
  • Configure RUM privacy settings for user data.
  • Review request attributes that may capture sensitive values.
  • Audit who can view sensitive data.

26.6 Exam focus

Know:

  • Masking at capture is strongest because data does not leave the environment.
  • OneAgent-side masking can be used for sensitive data.
  • Log masking and privacy settings are key admin controls.
  • Display masking is not the same as capture masking.

27. Audit logs and governance

27.1 Why audit logs matter

Audit logs help administrators track who changed what and when.

They support:

  • Security investigations
  • Compliance
  • Access review
  • Change management
  • Troubleshooting misconfiguration

27.2 What to audit

Examples:

  • User/group changes
  • Policy changes
  • Token creation/deletion
  • SAML/SCIM configuration changes
  • Settings changes
  • Bucket/retention changes
  • OpenPipeline changes
  • Workflow changes
  • Alerting changes
  • Integration changes

27.3 Audit practice

  • Keep admin changes traceable.
  • Use named accounts or service users, not shared accounts.
  • Use change tickets or pull requests for important changes.
  • Use Monaco/Git for configuration where possible.
  • Review high-risk changes regularly.

28. Dashboards, notebooks, documents, and user enablement

28.1 Why dashboards are an admin topic

Administrators often define platform standards for dashboards:

  • Who can create dashboards
  • Who can share dashboards
  • Which dashboards are official
  • Which dashboards use expensive queries
  • Which dashboards are deprecated
  • How naming and ownership work

28.2 Dashboard governance

Recommended dashboard naming:

[Team] [Environment] [Use case]

Examples:
  Payments Prod Service Health
  Platform Kubernetes Overview
  Security Runtime Vulnerabilities
  Executive Business Impact Overview

28.3 Notebooks

Notebooks are used for investigation and analysis. Admins may govern:

  • Who can create and share notebooks
  • Which notebooks are official runbooks
  • How DQL query cost is managed
  • How sensitive data is handled

28.4 Documents

Documents can be used for operational notes, runbooks, or analysis depending on the app capabilities enabled.

28.5 Exam focus

Know:

  • Dashboards are for ongoing visual monitoring.
  • Notebooks are for analysis and investigation.
  • Admins should govern sharing, access, query cost, and ownership.

29. Service-level objectives

29.1 What SLOs are

Service-level objectives define reliability targets.

Example:

  • Checkout API availability should be 99.9% over 30 days.
  • Login latency should meet threshold 95% of the time.

29.2 SLO concepts

  • SLI: Service-level indicator, the measurement.
  • SLO: Service-level objective, the target.
  • Error budget: Allowed unreliability within the target window.
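
A quick worked example ties these together: a 99.9% availability SLO over 30 days leaves an exact, small amount of allowed downtime.

```python
# Error budget for a 99.9% availability SLO over a 30-day window.

slo = 0.999
window_minutes = 30 * 24 * 60            # 43200 minutes in 30 days
error_budget_minutes = window_minutes * (1 - slo)

print(round(error_budget_minutes, 1))    # 43.2 minutes of allowed downtime
```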

29.3 Administrator role

Administrators may control:

  • Who can create SLOs
  • Which teams own SLOs
  • Standard naming
  • Dashboarding
  • Alerting on SLO burn rate or target violation
  • Integration with governance and reporting

29.4 Exam focus

Know the basic purpose of SLOs and the difference between SLI, SLO, and error budget.


30. Extensions and integrations

30.1 Why integrations matter

Dynatrace environments often ingest or interact with systems beyond OneAgent monitoring.

Examples:

  • AWS
  • Azure
  • Google Cloud
  • Kubernetes
  • VMware
  • Databases
  • Network devices
  • Messaging systems
  • CI/CD systems
  • Incident-management tools
  • ChatOps tools

30.2 Extension governance

Administrators should manage:

  • Who can install extensions
  • Which extensions are approved
  • Which credentials are used
  • Which ActiveGate executes the extension
  • Ingest volume generated by the extension
  • Ownership and support model

30.3 Cloud integration governance

For cloud integrations:

  • Use least-privilege cloud permissions.
  • Avoid collecting unnecessary services.
  • Monitor usage/consumption impact.
  • Tag cloud resources consistently.
  • Align cloud tags with Dynatrace tags and cost allocation.

30.4 Exam focus

Know that integrations and extensions can increase observability coverage but require permission, credential, ActiveGate, and cost governance.


31. Platform health and operational support

31.1 What admins should monitor

A Dynatrace administrator should monitor the monitoring platform itself.

Checklist:

  • OneAgent health
  • ActiveGate health
  • Data ingest status
  • Token and OAuth client usage
  • Log ingestion volume
  • OpenPipeline errors
  • Workflow failures
  • Synthetic location status
  • Subscription/license usage
  • User access requests
  • Failed SAML/SCIM provisioning
  • Unused dashboards and configurations
  • Deprecated settings or migration tasks

31.2 Operating cadence

Daily

  • Review critical platform health issues.
  • Check major ingest failures.
  • Review urgent access problems.
  • Review alerting or workflow failures.

Weekly

  • Review usage/cost anomalies.
  • Review OneAgent/ActiveGate update status.
  • Review failed workflows and integrations.
  • Review new high-volume log sources.
  • Review open admin tickets.

Monthly

  • Access review for privileged groups.
  • Token review.
  • Dashboard/report cleanup.
  • Bucket/retention review.
  • OpenPipeline rule review.
  • License/subscription forecast review.
  • SAML/SCIM sync health review.

Quarterly

  • Review admin policies.
  • Review management zones/segments.
  • Review tagging standards.
  • Review cost allocation.
  • Review operating model with teams.
  • Test fallback access and emergency procedures.

32. Common admin scenarios and best answers

Scenario 1: A new application team needs access only to its own services

Best approach:

  • Ensure entities are tagged consistently.
  • Create or reuse a management zone / segment / security context for the application.
  • Create a group for the team.
  • Bind policies/permissions to that group with the correct scope.
  • Validate effective permissions with a test user.

Scenario 2: Users can see dashboards but not logs

Check:

  • User group membership
  • IAM policies for logs and Grail query access
  • Bucket access permissions
  • Management zone/segment/security context behavior
  • Whether the dashboard uses a data source the user cannot query

Scenario 3: Monaco deployment fails

Check:

  • Token/OAuth credentials
  • Required scopes
  • User/service user group permissions
  • Environment/account target
  • Settings schema availability
  • Object scope
  • API errors

Scenario 4: Log costs suddenly increase

Check:

  • New log source or cloud integration
  • OpenPipeline routing changes
  • Log ingest rules
  • Bucket retention changes
  • Dashboard query refreshes
  • DQL queries over broad timeframes
  • New high-volume Kubernetes workloads

Scenario 5: Alerts are too noisy

Check:

  • Alerting profiles
  • Problem severity filters
  • Duration filters
  • Tags/ownership filters
  • Duplicate integrations
  • Anomaly detection sensitivity
  • Missing maintenance windows
  • Custom metric events with too-sensitive thresholds

Scenario 6: Users are locked out after SAML change

Check:

  • Domain verification
  • IdP metadata
  • Dynatrace SAML configuration
  • Attribute mapping
  • Group mapping
  • IdP certificate expiry
  • Fallback admin access

Scenario 7: Sensitive data appears in logs

Actions:

  • Determine source and field.
  • Decide whether masking must happen at capture.
  • Configure OneAgent sensitive data masking if appropriate.
  • Configure OpenPipeline masking/transformation as needed.
  • Restrict bucket access.
  • Review retention and purge requirements with compliance/security.

Scenario 8: A private synthetic monitor cannot reach an internal app

Check:

  • Private location configuration
  • ActiveGate health
  • Network/firewall routing
  • DNS resolution
  • Credential vault entries
  • Authentication configuration
  • Monitor location assignment

33. DQL for administrators

Administration Professional does not require being a DQL expert, but you should be comfortable with basic admin queries.

33.1 Basic log query

fetch logs, from:now()-1h
| filter loglevel == "ERROR"
| fields timestamp, loglevel, content, dt.entity.host, dt.entity.service
| sort timestamp desc
| limit 100

33.2 Count logs by level

fetch logs, from:now()-24h
| summarize count(), by:{loglevel}
| sort `count()` desc

33.3 Search for token-like or sensitive patterns conceptually

fetch logs, from:now()-24h
| search "password"
| fields timestamp, content, dt.entity.host

33.4 Find events

fetch events, from:now()-24h
| fields timestamp, event.kind, event.type, event.name, dt.entity.host
| sort timestamp desc

33.5 Query admin-style data carefully

Guidelines:

  • Start with short time windows.
  • Add filters early.
  • Select only needed fields.
  • Summarize when appropriate.
  • Avoid broad queries over long retention unless necessary.
  • Be aware of query-cost models.

34. Hands-on lab checklist

Use this checklist before taking the exam.

34.1 IAM labs

You should be able to:

  • Find Account Management.
  • Invite a user.
  • Create a group.
  • Add a user to a group.
  • Review group permissions.
  • Understand policy binding.
  • View effective policies conceptually.
  • Explain SAML setup steps.
  • Explain SCIM setup steps.
  • Explain domain verification.
  • Create or explain OAuth client creation.
  • Create or explain platform token creation.

34.2 Settings labs

You should be able to:

  • Open the Settings app.
  • Search for a setting.
  • Explain settings scope.
  • Explain hierarchy and override behavior.
  • Identify read vs write access needs.
  • Explain Settings API and Monaco usage.

34.3 Organization labs

You should be able to:

  • Create or explain automatic tags.
  • Create or explain management zone rules.
  • Explain management zones vs segments/security context.
  • Explain tag naming standards.
  • Use tags in alerting or dashboards.

34.4 Connectivity labs

You should be able to:

  • Explain ActiveGate use cases.
  • Explain network zones.
  • Identify why OneAgents might use local ActiveGates.
  • Explain private synthetic location concept.
  • Explain ActiveGate health troubleshooting.

34.5 Logs and storage labs

You should be able to:

  • Open Logs.
  • Run a DQL query.
  • Filter logs.
  • Explain buckets.
  • Explain retention.
  • Explain OpenPipeline routing.
  • Explain log masking.
  • Explain cost-control strategy.

34.6 Alerting labs

You should be able to:

  • Explain problem lifecycle.
  • Find alerting profiles.
  • Explain alerting profile filters.
  • Explain maintenance windows.
  • Explain metric events.
  • Explain anomaly detection sensitivity.
  • Explain notification integration flow.

34.7 Automation labs

You should be able to:

  • Explain workflows.
  • Explain workflow triggers.
  • Explain secure credential usage.
  • Explain Monaco deployment flow.
  • Explain API/token troubleshooting.

34.8 Licensing labs

You should be able to:

  • Find subscription/license view.
  • Explain DPS vs classic concepts.
  • Identify usage/cost breakdown.
  • Explain budget and cost allocation concepts.
  • Explain how logs and queries may affect cost.

35. Study plan: 21 days

Day 1: Certification orientation

  • Review current Dynatrace University learning path.
  • Verify exam details.
  • Review Associate concepts.
  • Set up tenant or playground access.

Day 2: Account and environment model

  • Study Account Management.
  • Understand accounts, environments, users, groups, policies, subscription.

Day 3: Users and groups

  • Practice user/group concepts.
  • Design sample group model.
  • Understand group inheritance.

Day 4: IAM policies and permissions

  • Study default policies, custom policies, policy boundaries, effective permissions.
  • Practice least-privilege scenarios.

Day 5: SAML and SCIM

  • Study domain verification.
  • Study SAML setup flow.
  • Study SCIM provisioning flow.
  • Review fallback admin concept.

Day 6: Tokens and OAuth

  • Study platform tokens, OAuth clients, access tokens classic, service users.
  • Review token governance and rotation.

Day 7: Settings app and settings hierarchy

  • Study Settings app.
  • Understand scopes, schemas, objects, overrides.
  • Practice finding settings.

Day 8: Monaco and configuration as code

  • Study Monaco purpose.
  • Review deploy/download workflow.
  • Understand OAuth scopes and service users.

Day 9: Tags and metadata

  • Study manual vs automatic tags.
  • Create naming convention examples.
  • Map tags to ownership, alerting, dashboards.

Day 10: Management zones, segments, security context

  • Study access scoping.
  • Understand classic vs latest access patterns.
  • Review Grail data access considerations.

Day 11: ActiveGate and network zones

  • Study ActiveGate use cases.
  • Study network zone design.
  • Practice troubleshooting connectivity scenarios.

Day 12: OneAgent admin operations

  • Study monitoring modes, host groups, updates, health, remote configuration.

Day 13: Logs and Grail buckets

  • Study log ingestion, bucket strategy, retention, access.
  • Practice DQL basics.

Day 14: OpenPipeline

  • Study filtering, routing, enrichment, masking.
  • Write sample scenarios.

Day 15: Licensing and cost

  • Study DPS vs classic concepts.
  • Review usage/cost drivers.
  • Build a cost-control checklist.

Day 16: Alerting profiles and notifications

  • Study problem notifications.
  • Design team-based alert routing.

Day 17: Anomaly detection and maintenance windows

  • Study anomaly detection, metric events, baselines, thresholds, maintenance windows.

Day 18: Workflows and automation

  • Study workflow triggers, credentials, governance, response automation.

Day 19: Privacy, credential vault, audit

  • Study masking, credential vault, external vault, audit logs, security governance.

Day 20: Scenario review

  • Work through all scenarios in this guide.
  • Practice hands-on lab checklist.

Day 21: Final mock review

  • Complete practice questions.
  • Review weak areas.
  • Re-verify exam format in Dynatrace University.

36. Final revision cheat sheets

36.1 Component cheat sheet

| Area | Admin must know |
| --- | --- |
| Account Management | Users, groups, policies, environments, SAML, SCIM, OAuth, license/subscription |
| Users | Invite, assign, remove, federated vs non-federated |
| Groups | Users inherit permissions from group membership |
| Policies | Fine-grained access control enforced at runtime |
| SAML | Federated SSO authentication |
| SCIM | Automated user/group provisioning |
| Domain verification | Required for SAML/SCIM domain ownership proof |
| Platform tokens | Long-lived programmatic access within user/service user permissions |
| OAuth clients | Service-to-service / automation authentication |
| Settings app | Central configuration entry point |
| Settings hierarchy | More specific settings override broader settings |
| Monaco | Configuration as Code CLI |
| Tags | Entity organization and filtering |
| Management zones | Classic entity/data access scoping |
| Segments/security context | Latest platform access scoping concepts |
| ActiveGate | Secure gateway/proxy and integration point |
| Network zones | Connectivity routing model for network structure |
| Buckets | Grail storage/retention/access containers |
| OpenPipeline | Ingest filtering, routing, transformation, enrichment, masking |
| Alerting profiles | Filter which problems create notifications |
| Maintenance windows | Suppress/adjust alerts and protect baselines during maintenance |
| Workflows | Automation and operational response |
| Credential vault | Secure storage for credentials |
| DPS | Current strategic platform subscription model |
| Classic licensing | HU, DEM, DDU, ASU concepts in older licensing |
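The "more specific settings override broader settings" rule in the settings hierarchy can be illustrated with a small resolution sketch. The scope names and ordering below are simplified for study purposes and are not the exact Dynatrace settings schema:

```python
# Illustrative sketch of "most specific scope wins" settings resolution.
# Scope names/ordering are simplified, not the real Dynatrace schema.
PRECEDENCE = ["environment", "host-group", "host"]  # broad -> specific

def effective_setting(settings_by_scope, key):
    """Return the value from the most specific scope that defines the key."""
    value = None
    for scope in PRECEDENCE:  # later (more specific) scopes overwrite earlier ones
        scoped = settings_by_scope.get(scope, {})
        if key in scoped:
            value = scoped[key]
    return value

settings = {
    "environment": {"log-monitoring": "off", "auto-update": "on"},
    "host": {"log-monitoring": "on"},
}
print(effective_setting(settings, "log-monitoring"))  # host level wins: on
print(effective_setting(settings, "auto-update"))     # falls back to environment: on
```

The same mental model applies when you debug "why is this host behaving differently": walk down from environment to host scope and find the last level that sets the value.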

36.2 Access troubleshooting cheat sheet

| Symptom | Check |
| --- | --- |
| User cannot log in | SAML/IdP, domain, user status, fallback path |
| User sees no environment | Account/environment access, group membership |
| User cannot see data | Policies, management zones, segments, bucket access |
| User cannot edit settings | Pro/admin policy, settings-specific policy |
| API call unauthorized | Token scope, OAuth scopes, service user policies |
| Monaco deployment fails | Token/OAuth, scopes, schema access, write permissions |
| Dashboard tile fails | Data source permission, DQL/bucket access, app permission |
| User sees too much | Group memberships, broad policies, missing boundaries |
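For the "API call unauthorized" symptom, a fast first triage is distinguishing HTTP 401 (token missing, malformed, expired, or revoked) from HTTP 403 (token accepted but lacking a scope or policy). A minimal sketch of that mapping, kept API-agnostic so it is not tied to any specific Dynatrace endpoint:

```python
def classify_api_auth_failure(status_code):
    """Map an HTTP status from an API call to a likely access cause."""
    if status_code == 401:
        return "token missing, malformed, expired, or revoked"
    if status_code == 403:
        return "token valid but lacks the required scope or IAM policy"
    if 200 <= status_code < 300:
        return "access OK"
    return "not an access problem (check payload, endpoint, or rate limits)"

print(classify_api_auth_failure(403))  # token valid but lacks the required scope or IAM policy
```

In practice you would make the call with your environment URL and token (for classic environment APIs this is typically an `Authorization: Api-Token <token>` header; verify the auth scheme for the API you are calling) and feed the status code into a check like this before digging into policies.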

36.3 Cost troubleshooting cheat sheet

| Symptom | Check |
| --- | --- |
| Log ingest spike | New source, OneAgent rules, cloud logs, OpenPipeline changes |
| Log query cost spike | Broad DQL queries, dashboards, notebooks, API queries |
| Retention cost spike | Bucket retention changes, new buckets, high-volume data |
| Host monitoring cost spike | New OneAgents, autoscaling, full-stack enabled broadly |
| Synthetic cost spike | Monitor frequency, locations, clickpaths |
| RUM cost spike | Traffic volume, session replay, applications added |
| Custom metric cost spike | New integrations, cardinality, metric dimensions |
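Several of the checks above reduce to one question: which source grew? A minimal sketch (synthetic data, not a real Dynatrace API call) that ranks log sources by ingested bytes so the spike contributor stands out:

```python
from collections import defaultdict

def top_sources_by_bytes(records, n=3):
    """Aggregate ingested bytes per log source and return the top contributors.

    `records` is an iterable of dicts with 'source' and 'bytes' keys, e.g.
    exported from a usage report or the result of a DQL summarize query.
    """
    totals = defaultdict(int)
    for rec in records:
        totals[rec["source"]] += rec["bytes"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

sample = [
    {"source": "k8s/payments", "bytes": 50_000_000},
    {"source": "k8s/payments", "bytes": 70_000_000},
    {"source": "vm/legacy-app", "bytes": 5_000_000},
]
print(top_sources_by_bytes(sample, n=2))
# [('k8s/payments', 120000000), ('vm/legacy-app', 5000000)]
```

Once the dominant source is identified, the remediation levers from this guide apply: OneAgent log ingest rules, OpenPipeline filtering, or bucket retention changes.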

36.4 Alerting troubleshooting cheat sheet

| Symptom | Check |
| --- | --- |
| Too many alerts | Alerting profile filters, duration, severity, tags |
| No notification | Integration, alerting profile, maintenance window, problem severity |
| Duplicate notifications | Overlapping profiles/integrations/workflows |
| Alerts during deployment | Missing maintenance window or tags |
| Alerts not relevant to team | Ownership tags or scopes missing |
| Custom alert noisy | Threshold too sensitive, DQL query too broad |
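The first two symptoms usually come down to whether a problem matches an alerting profile's filters. A simplified model of severity-, tag-, and duration-based matching (illustrative logic for study purposes, not the actual Dynatrace rule engine):

```python
def profile_matches(problem, profile):
    """Return True if a problem should notify, under a simplified profile model."""
    # Severity filter: the profile may restrict alerting to certain severities.
    if profile.get("severities") and problem["severity"] not in profile["severities"]:
        return False
    # Tag filter: every required tag must be present on the problem.
    required = set(profile.get("required_tags", []))
    if not required.issubset(set(problem.get("tags", []))):
        return False
    # Duration filter: the problem must stay open at least this long.
    if problem.get("open_minutes", 0) < profile.get("min_duration_minutes", 0):
        return False
    return True

problem = {"severity": "ERROR", "tags": ["team:payments"], "open_minutes": 15}
profile = {
    "severities": ["ERROR"],
    "required_tags": ["team:payments"],
    "min_duration_minutes": 10,
}
print(profile_matches(problem, profile))  # True
```

Reading the table through this model: "too many alerts" means the filters are too loose (lower severity, short duration, broad tags), while "no notification" means one of the filters, or a maintenance window, silently excluded the problem.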

37. Practice questions

Question 1

What is the primary focus of Dynatrace Administration Professional Certification?

A. Writing application code
B. Managing and maintaining the Dynatrace SaaS platform for users
C. Designing logos
D. Replacing cloud providers

Answer: B. Managing and maintaining the Dynatrace SaaS platform for users

Question 2

Where are users, groups, policies, SAML, SCIM, OAuth clients, and subscription views commonly managed?

A. Account Management
B. PurePath only
C. Smartscape only
D. Browser DevTools

Answer: A. Account Management

Question 3

In Dynatrace, users commonly inherit permissions through what?

A. Group membership
B. Browser cookies only
C. Host CPU limits
D. Synthetic locations

Answer: A. Group membership

Question 4

What do IAM policies define?

A. Whether actions in Dynatrace are allowed
B. Which font dashboards use
C. Which host has the highest CPU
D. Which application is slow

Answer: A. Whether actions in Dynatrace are allowed

Question 5

What is the purpose of SAML in Dynatrace?

A. Federated authentication / SSO
B. Log storage
C. Trace analysis
D. Dashboard coloring

Answer: A. Federated authentication / SSO

Question 6

What is the purpose of SCIM?

A. Automated user and group provisioning
B. Distributed tracing
C. Data masking only
D. Host CPU monitoring

Answer: A. Automated user and group provisioning

Question 7

Before configuring SAML or SCIM for an email domain, what must typically be completed?

A. Domain verification
B. Dashboard creation
C. Synthetic clickpath recording
D. Log query execution

Answer: A. Domain verification

Question 8

What is a platform token used for?

A. Programmatic access to Dynatrace platform services
B. Restarting a host physically
C. Drawing topology manually
D. Creating user sessions

Answer: A. Programmatic access to Dynatrace platform services

Question 9

Which credential type is commonly suitable for service-to-service automation?

A. OAuth client
B. Manual tag
C. User action
D. Process group

Answer: A. OAuth client

Question 10

What is Monaco?

A. Dynatrace Configuration as Code CLI
B. A tracing span
C. A synthetic location
D. A license unit

Answer: A. Dynatrace Configuration as Code CLI

Question 11

What is the Settings app used for?

A. Centralized environment configuration
B. Only viewing user sessions
C. Only viewing dashboards
D. Only editing browser bookmarks

Answer: A. Centralized environment configuration

Question 12

If a host-level setting and environment-level setting conflict, which generally takes precedence?

A. The more specific host-level setting
B. The less specific environment-level setting
C. Neither setting
D. The oldest setting

Answer: A. The more specific host-level setting

Question 13

What are tags used for?

A. Organizing, filtering, alerting, and scoping entities
B. Encrypting all traffic
C. Replacing OneAgent
D. Running OAuth flows

Answer: A. Organizing, filtering, alerting, and scoping entities

Question 14

What are management zones used for?

A. Organizing environments and controlling access to scoped entities/data
B. Installing the browser extension
C. Replacing ActiveGate
D. Creating API tokens only

Answer: A. Organizing environments and controlling access to scoped entities/data

Question 15

Which latest-platform concepts are important when moving beyond classic management zones?

A. Segments and security context
B. Browser bookmarks
C. Local printer groups
D. Social media tags

Answer: A. Segments and security context

Question 16

What is ActiveGate commonly used for?

A. Secure routing/proxying and integrations
B. Replacing OneAgent
C. Editing dashboards only
D. Increasing alert noise

Answer: A. Secure routing/proxying and integrations

Question 17

What are network zones used for?

A. Modeling network structure and optimizing routing to ActiveGates
B. Creating user passwords
C. Querying logs only
D. Changing dashboard fonts

Answer: A. Modeling network structure and optimizing routing to ActiveGates

Question 18

What is a Grail bucket used for?

A. Storage, retention, and access organization for data
B. Drawing topology
C. Restarting ActiveGate
D. Creating users

Answer: A. Storage, retention, and access organization for data

Question 19

What does OpenPipeline help administrators do?

A. Filter, route, transform, enrich, and mask telemetry
B. Replace SAML
C. Create user accounts only
D. Install operating systems

Answer: A. Filter, route, transform, enrich, and mask telemetry

Question 20

What is an alerting profile used for?

A. Controlling which problems trigger notifications
B. Creating users
C. Installing OneAgent
D. Querying all logs

Answer: A. Controlling which problems trigger notifications

Question 21

What do maintenance windows help with?

A. Suppressing/adjusting alerting and protecting baselines during maintenance
B. Creating OAuth clients
C. Storing credentials
D. Replacing dashboards

Answer: A. Suppressing/adjusting alerting and protecting baselines during maintenance

Question 22

What is the Credential vault used for?

A. Secure storage of credentials for monitors and integrations
B. Storing all logs
C. Replacing IAM policies
D. Creating process groups

Answer: A. Secure storage of credentials for monitors and integrations

Question 23

Which masking approach is strongest when data must never leave the monitored environment?

A. Mask at capture
B. Mask only at display
C. Do not mask
D. Mask in a dashboard title

Answer: A. Mask at capture

Question 24

Which licensing model is the current strategic model for latest Dynatrace platform consumption?

A. Dynatrace Platform Subscription
B. Printer page count
C. Manual invoices only
D. Hostnames only

Answer: A. Dynatrace Platform Subscription

Question 25

A Monaco deployment fails because it cannot write settings. What should you check first?

A. Token/OAuth scopes and user/service-user policies
B. Dashboard color
C. User session duration
D. Browser clickpath screenshots

Answer: A. Token/OAuth scopes and user/service-user policies

Question 26

A team can see services but cannot query logs. What should you check?

A. Grail/bucket/log query permissions and IAM policies
B. Only CPU usage
C. Only host restart time
D. Only browser version

Answer: A. Grail/bucket/log query permissions and IAM policies

Question 27

What should you use to reduce noisy production logs before storage?

A. OneAgent log ingest rules or OpenPipeline
B. User profile settings
C. Browser font settings
D. Manual screenshots

Answer: A. OneAgent log ingest rules or OpenPipeline

Question 28

Which feature can automate responses to problems or schedules?

A. Workflows
B. Tags only
C. Hostnames only
D. User avatars

Answer: A. Workflows

Question 29

What is the best practice for production automation credentials?

A. Use service users / OAuth or scoped platform tokens with least privilege
B. Use a shared admin password in a script
C. Put tokens in Git
D. Use personal tokens without expiration for everything

Answer: A. Use service users / OAuth or scoped platform tokens with least privilege

Question 30

A sudden log cost increase is reported. Which is a likely admin investigation path?

A. Check new sources, ingest volume, retention, OpenPipeline, and query activity
B. Only check dashboard titles
C. Only check user profile photos
D. Only restart a browser

Answer: A. Check new sources, ingest volume, retention, OpenPipeline, and query activity


38. Final readiness checklist

You are ready for Administration Professional when you can explain and apply all of these without notes:

  • Account vs environment
  • Account Management purpose
  • Users and groups
  • IAM policies
  • Default vs custom policies
  • Policy boundaries
  • Effective permissions
  • SAML setup flow
  • SCIM setup flow
  • Domain verification
  • Fallback admin strategy
  • Platform tokens
  • OAuth clients
  • Access tokens classic
  • Service users
  • Settings app
  • Settings objects and schemas
  • Settings scope and hierarchy
  • Settings API
  • Monaco
  • Tags
  • Management zones
  • Segments
  • Security context
  • Host groups
  • ActiveGate
  • Network zones
  • OneAgent monitoring modes
  • OneAgent auto-update governance
  • Log ingestion
  • Grail buckets
  • Retention
  • OpenPipeline
  • Sensitive data masking
  • Credential vault
  • DPS licensing
  • Classic licensing concepts
  • Subscription/license usage views
  • Cost allocation
  • Alerting profiles
  • Problem notifications
  • Maintenance windows
  • Anomaly detection
  • Metric events
  • DQL-based custom alerts
  • Workflows
  • Audit logs
  • Dashboard and notebook governance
  • SLO basics
  • Extension and integration governance
  • Platform health operations

39. Last-minute memory map

Admin foundation:
  Account → Environment → Users → Groups → Policies → Effective access

Identity:
  SAML = SSO
  SCIM = provisioning
  Domain verification = prove ownership
  Fallback admin = recovery path

Automation access:
  Platform token = programmatic platform access
  OAuth client = service-to-service automation
  Classic token = older API access

Configuration:
  Settings app = central config
  Scope hierarchy = most specific wins
  Monaco = config as code

Organization:
  Tags = labels
  Management zones = scoped classic entity access
  Segments/security context = latest access scoping

Connectivity:
  ActiveGate = secure gateway/proxy
  Network zones = localize routing

Data governance:
  Grail = storage/query
  Buckets = retention/access
  OpenPipeline = process/filter/route/mask/enrich

Alerting:
  Anomaly → event → problem → alerting profile → notification/workflow

Privacy:
  Mask at capture when data must never leave environment
  Credential vault for secrets

Cost:
  Monitor ingest, retain, query, host scale, synthetic/RUM, custom metrics, extensions

40. Final advice

For Administration Professional, do not study only individual features. Study how the platform is governed.

A strong admin thinks like this:

  • Who owns this data?
  • Who should access it?
  • Which group and policy should grant access?
  • Which tags, segments, or zones define scope?
  • Which settings apply, and at what level?
  • How is data collected, processed, stored, retained, and masked?
  • How much will this cost?
  • Who gets notified when something breaks?
  • How can this be automated safely?
  • How can we audit and recover from mistakes?

That operating mindset is the core of Dynatrace Administration Professional readiness.

