
dotnet-counters: Demo & Lab


1. What this screen is showing

How to install?

$ dotnet tool install --global dotnet-counters

Your command:

$ dotnet-counters ps

$ dotnet-counters monitor -p 30420 System.Runtime Microsoft.AspNetCore.Hosting Microsoft-AspNetCore-Server-Kestrel Microsoft.AspNetCore.Http.Connections System.Net.Http Microsoft.EntityFrameworkCore
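Depending on your dotnet-counters version, you can also narrow the view to specific counters with the `provider[counter,...]` filter syntax instead of streaming every counter a provider exposes — a sketch (counter names are from the System.Runtime provider; check `dotnet-counters list` on your version):

```shell
# Show only the "vitals": CPU, allocation rate, heap size, GC time, queue length
dotnet-counters monitor -p 30420 --counters "System.Runtime[cpu-usage,alloc-rate,gc-heap-size,time-in-gc,threadpool-queue-length]"
```

This keeps the live view readable when you are demoing a single metric, such as allocation rate, on a projector.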

The section shown below is only the [System.Runtime] provider – i.e. core .NET runtime health.

This is the "vitals panel" of the CLR:

  • CPU
  • GC
  • JIT
  • ThreadPool
  • Memory usage

And right now the numbers show a mostly idle app (no meaningful load).


2. Key metrics in your output (with meaning)

I'll go through the most important ones and explain how to present each.

🔹 CPU Usage (%)

CPU Usage (%)  0.198
  • How to say it: "Overall process CPU usage as seen by the .NET runtime."
  • Here it's ~0.2%, so the app is basically idle.

🔹 Allocation Rate (B / 1 sec)

Allocation Rate (B / 1 sec)  16,400
  • New managed memory allocated per second on the managed heap.
  • Under real load, this can be MB/sec or GB/sec.
  • For your naive vs optimized demo:
    • Naive code → higher allocation rate (lots of short-lived objects)
    • Optimized → lower allocation rate → less GC pressure.
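To put the idle figure in perspective, a quick back-of-the-envelope calculation (decimal units, 1 MB = 1,000,000 bytes) shows why 16,400 B/sec is negligible — and also that even an "idle" app allocates continuously, with the GC quietly recycling it:

```shell
# Idle allocation rate from the snapshot above, converted to friendlier units
awk 'BEGIN { printf "idle:  %.3f MB/sec\n", 16400 / 1e6 }'
awk 'BEGIN { printf "daily: %.1f GB/day at that rate\n", 16400 * 86400 / 1e9 }'
```

Under real load the same counter can read tens of MB per second, which is when GC behavior starts to matter.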

🔹 GC Heap Size (MB) & GC Committed Bytes (MB)

GC Heap Size (MB)         3.192
GC Committed Bytes (MB)  10.277
  • GC Heap Size: How much managed heap is currently in use.
  • GC Committed Bytes: Total memory the GC has reserved/committed from the OS.
  • Story for slides:
    • "If Heap Size keeps growing and never comes down, we may have a memory leak or a workload that allocates a lot and doesn't release."

🔹 % Time in GC & Time paused by GC

% Time in GC since last GC (%)   0
Time paused by GC (ms / 1 sec)   0
  • % Time in GC: Fraction of time the runtime is doing GC work.
  • Time paused by GC: Time the app is stopped for GC per second.
  • Under heavy allocation:
    • High % Time in GC and non-zero Time paused by GC → GC is impacting throughput / latency.
    • In your demo you could show:
      "Naive bulk insert → higher % time in GC; optimized version → less."

🔹 Gen 0 / Gen 1 / Gen 2 counts & sizes

Gen 0 GC Count (Count / 1 sec)  0
Gen 0 Size (B)                  646,848

Gen 1 GC Count (Count / 1 sec)  0
Gen 1 Size (B)                  832

Gen 2 GC Count (Count / 1 sec)  0
Gen 2 Size (B)                  1,989,064
  • Gen 0/1/2 Sizes: How much of each generation is currently occupied.
  • Gen 0/1/2 GC Count (/sec): How often GC runs for each generation.
  • How to narrate:
    • "Gen 0 is for short-lived objects; Gen 2 is long-lived (e.g., caches, static data)."
    • "Too many Gen 2 collections → expensive, can cause noticeable pauses."

For an idle app, counts/sec = 0 is normal.


🔹 LOH Size & POH Size

LOH Size (B)        98,384
POH Size (B)       130,680
  • LOH (Large Object Heap): Objects ≥ 85,000 bytes (large arrays, big strings, etc.).
  • POH (Pinned Object Heap): Objects that can't be moved (e.g. for interop).
  • Slide point:
    • "Large or pinned heaps that keep growing can cause memory fragmentation and larger GCs."

🔹 Monitor Lock Contention Count

Monitor Lock Contention Count (Count / 1 sec)  0
  • Number of times threads block waiting for a lock/Monitor.Enter.
  • Under contention, this will be non-zero.
  • Good line for your talk: "If this counter spikes during load, we're likely hitting lock contention – too many threads fighting for the same lock."

🔹 ThreadPool metrics

ThreadPool Completed Work Item Count (Count / 1 sec)  0
ThreadPool Queue Length                              0
ThreadPool Thread Count                              0
  • Completed Work Item Count: How many ThreadPool tasks are completing per second.
  • Queue Length: Pending work items waiting for threads.
  • Thread Count: Number of ThreadPool threads (this being 0 is likely a momentary state; usually you'll see > 1 when load comes in).

What to say:

  • "If Queue Length grows but Thread Count doesn't grow fast enough, we have ThreadPool starvation."
  • "Under load, we expect some reasonable thread count and completed work items/sec."

🔹 JIT metrics

IL Bytes Jitted (B)             528,863
Number of Methods Jitted          6,639
Time spent in JIT (ms / 1 sec)        0
  • IL Bytes Jitted / Methods Jitted: Total IL compiled to machine code.
  • Time spent in JIT: JIT cost per second.
  • Early after startup:
    • These numbers grow quickly.
    • Later they flatten once code is "warmed up."
  • Slide story: "JIT happens mostly at startup and first-time execution; after that the cost should be negligible."

🔹 Working Set (MB)

Working Set (MB)  106.906
  • Total physical memory in use by the process (managed + native).
  • Use it to show overall footprint.
  • In your story:
    • "Naive version might cause working set to grow more due to excessive allocations or caching."
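A small calculation from the idle numbers above makes the "managed + native" point concrete: the GC has committed only ~10 MB, so the bulk of the ~107 MB working set is native memory (runtime, JIT-compiled code, loaded assemblies, stacks):

```shell
# Rough split of the idle working set using the two counters shown earlier:
# Working Set = 106.906 MB, GC Committed Bytes = 10.277 MB
awk 'BEGIN { printf "%.0f%% of the working set is outside the GC\n",
             100 * (106.906 - 10.277) / 106.906 }'
```

This is a rough decomposition, not an exact accounting, but it is a good talking point for why working set and GC heap size tell different stories.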

🔹 Assemblies Loaded & Active Timers

Number of Assemblies Loaded  137
Number of Active Timers      2
  • Assemblies loaded → footprint of the app's codebase.
  • Active Timers → number of timers scheduled (e.g., background jobs, periodic tasks).

Not usually your first troubleshooting metric, but nice context.


3. How to use this in training/demo

For your perf engineering session, you can say:

"This is our baseline view with System.Runtime:

  • CPU is ~0% → our app is idle.
  • Allocation rate is tiny, GC time is 0 → no GC pressure.
  • Heap is ~3 MB, working set ~107 MB → small footprint.

Now let's send load and watch:

  • CPU Usage
  • Allocation Rate
  • % Time in GC / Time paused by GC
  • ThreadPool Queue Length
  • Working Set (MB)

and compare naive vs optimized implementation."

If you'd like, next I can:

  • Take these exact counters and write 2–3 slide pages:
    • "Top 8 System.Runtime counters every .NET engineer should know"
    • Each with a 1-line explanation + "How to interpret when it's high".

CREATE LOAD

$ curl -k -X POST "https://localhost:5001/api/orders/bulk-optimized?count=100000"
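A single request may finish before the counters refresh, so for the demo it helps to sustain the load for a while. A minimal sketch, assuming the demo API above is listening on https://localhost:5001 and dotnet-counters monitor is running in another terminal:

```shell
# Fire the bulk endpoint repeatedly to keep the app busy while counters update
for i in $(seq 1 20); do
  curl -k -s -X POST "https://localhost:5001/api/orders/bulk-optimized?count=100000" > /dev/null
done
```

Swap in the naive endpoint for the comparison run and watch Allocation Rate and % Time in GC diverge between the two.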


Nice, now we're seeing real activity – this is exactly the kind of snapshot you want to show in your perf session as "app under load".

Let's decode it in a focused way and then I'll give you slide-ready talking points.


1๏ธโƒฃ What changed vs earlier (idle) snapshot?

Earlier:

  • Allocation Rate: ~16 KB/sec
  • GC Heap Size: ~3 MB
  • Working Set: ~107 MB
  • CPU: ~0.2%

Now:

Allocation Rate (B / 1 sec)   50,067,464   ≈ 50 MB/sec
CPU Usage (%)                  5.159
GC Heap Size (MB)            239.699
GC Committed Bytes (MB)      361.464
Working Set (MB)             491.52
Gen 2 Size (B)               1.1923e+08 ≈ 113.7 MB
LOH Size (B)                 27,713,016 ≈ 26.4 MB
ThreadPool Thread Count      6
ThreadPool Completed Work
    Item Count (/sec)        609
Monitor Lock Contention
    Count (/sec)             2
Time spent in JIT (ms/sec)   12.602

Story in one line:
👉 "Under load, the app is allocating ~50 MB/sec, heap has grown to ~240 MB, working set to ~490 MB, some JIT work is still happening, and the ThreadPool is actively processing ~600 work items/sec with a little lock contention."
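The approximate conversions in the snapshot can be sanity-checked with quick arithmetic (the MB values here line up with 1 MB = 1,048,576 bytes):

```shell
# Convert the two byte-valued counters from the snapshot above to MB
awk 'BEGIN { printf "Gen 2: %.1f MB\n", 1.1923e8 / 1048576 }'
awk 'BEGIN { printf "LOH:   %.1f MB\n", 27713016 / 1048576 }'
```

Doing this live in the session reinforces which counters report bytes and which report MB.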


2๏ธโƒฃ Key counters & how to interpret them

๐Ÿš€ Allocation Rate โ€“ 50 MB/sec

  • This is very high compared to your idle state.
  • Great demo point:

"Our workload allocates ~50 MB of managed objects every second. If this stays high, GC will eventually need to work harder, potentially increasing GC pauses."

For the naive implementation you'd expect:

  • Higher allocation rate
  • More frequent GCs later
  • Possible % Time in GC and Time paused by GC increasing when pressure rises

🧠 Heap & Memory Footprint

GC Heap Size (MB)       ≈ 240 MB
GC Committed Bytes (MB) ≈ 361 MB
Working Set (MB)        ≈ 492 MB
Gen 2 Size              ≈ 114 MB
LOH Size                ≈ 26 MB

Points:

  • "The managed heap alone is about 240 MB now."
  • "The GC has reserved ~360 MB from the OS to manage this heap."
  • "Total working set (managed + native) is about 490 MB."
  • "A big part of memory is in Gen 2 (long-lived objects, ~114 MB) and some in LOH (~26 MB, large arrays/buffers)."

"If GC Heap Size and Gen 2 Size keep growing and rarely shrink, we may be trending toward a memory leak or a very heavy long-lived cache."
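The same rough decomposition from the idle snapshot works here: whatever the GC has not committed is native memory (code, stacks, unmanaged buffers). Using the two counters above:

```shell
# Rough native/managed split of the under-load working set:
# Working Set = 491.52 MB, GC Committed Bytes = 361.464 MB
awk 'BEGIN { printf "non-GC (native) share: %.1f MB of %.1f MB\n",
             491.52 - 361.464, 491.52 }'
```

Comparing this ~130 MB against the ~97 MB at idle shows that most of the growth under load is in the managed heap, not in native memory.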


โฑ GC & Pauses

% Time in GC (%)         0
Time paused by GC (ms/s) 0
Gen 0 / Gen 1 / Gen 2
    GC Count (/sec)      0
  • Despite high allocations, at the moment of this sample:
    • No collections in that 1-second window.
    • No GC pause time in that exact second.

Important nuance:

"This is a 1-second snapshot. We're seeing high allocations, but GC didn't happen in this particular second. Over time, we'd expect this to eventually trigger GCs; when that happens, % Time in GC and Time paused by GC will start to show non-zero values."
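Because of this sampling effect, for the demo it is worth recording counters over the whole load run rather than eyeballing single refreshes — a sketch using the collect subcommand (output file name is arbitrary; option support may vary by dotnet-counters version):

```shell
# Record System.Runtime counters once per second to a CSV for offline analysis
dotnet-counters collect -p 30420 --counters System.Runtime --refresh-interval 1 --format csv -o under-load
```

The resulting CSV makes it easy to chart GC counts and pause times across the entire run and catch the collections that a live 1-second view can miss.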


🧵 ThreadPool & Concurrency

ThreadPool Thread Count                 6
ThreadPool Completed Work Item (/sec)  609
ThreadPool Queue Length                 0
Monitor Lock Contention Count (/sec)    2

How to narrate:

  • "We have 6 ThreadPool threads currently handling ~600 work items per second."
  • "Queue length is 0 → threads are keeping up with the load."
  • "Lock contention count is 2/sec → a small but non-zero sign that some threads occasionally wait on locks."

In a problematic scenario you'd see:

  • High Queue Length + low Thread Count → ThreadPool starvation.
  • High Lock Contention Count/sec → contention on locks/critical sections.

🧩 JIT Activity

IL Bytes Jitted (B)      ≈ 1.2 MB
Number of Methods Jitted 15,635
Time spent in JIT (ms/s) 12.602
  • Still some JIT happening (12.6 ms/sec).
  • You can say:

"As the workload exercises more code paths, we see the JIT still compiling methods. Once warm, Time spent in JIT should drop close to 0."

Useful to highlight "warm-up" behavior vs "steady state".


3๏ธโƒฃ summary

Title: Example – System.Runtime under Load

  • CPU Usage: ~5% – app is doing real work but not CPU-bound yet.
  • Allocation Rate: ~50 MB/sec – high allocation pressure; GC will need to work harder as load continues.
  • GC Heap Size: ~240 MB; GC Committed: ~360 MB – significant managed memory footprint.
  • Gen 2 & LOH: ~114 MB (Gen 2), ~26 MB (LOH) – many long-lived / large objects.
  • GC Time / Pauses: 0% and 0 ms in this snapshot – no GC happening in this particular second.
  • ThreadPool: 6 threads, ~600 work items/sec, queue length 0 – threads are keeping up with request load.
  • Lock Contention: 2/sec – minor contention, not yet alarming.
  • Working Set: ~490 MB – overall process memory usage.

"This snapshot shows our app under load: high allocation rate, large heap and working set, active ThreadPool, and some lock contention. Right now GC isn't pausing us, but if we keep allocating at ~50 MB/sec, we will eventually see more GC activity and potential pauses."

