We can do this with one self-contained minimal API that has:
- A “bad” endpoint showing threading anti-patterns (blocking, .Result, Thread.Sleep)
- A “good” endpoint using proper async/await
- A /stats endpoint to see concurrency behavior
- Then we’ll use dotnet-counters + simple load to see the impact.
1️⃣ Create the project
dotnet new web -n ThreadingDemo
cd ThreadingDemo
Replace all contents of Program.cs with this:
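A minimal sketch along these lines does the job. The counter names (maxBad, maxGood), the 100 ms and 200 ms delays, and the SimulateIoAsync helper are illustrative choices, not the only correct ones:

```csharp
// Program.cs: self-contained minimal API demonstrating bad vs good threading.
// Counter names, delays, and the SimulateIoAsync helper are illustrative.
using System.Diagnostics;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// In-flight request counters so /stats can show concurrency.
int currentBad = 0, maxBad = 0, currentGood = 0, maxGood = 0;
var gate = new object();

// Hypothetical async I/O stand-in (think database or HTTP call).
static async Task<int> SimulateIoAsync()
{
    await Task.Delay(100);
    return 42;
}

app.MapGet("/bad", () =>
{
    var sw = Stopwatch.StartNew();
    lock (gate) { currentBad++; maxBad = Math.Max(maxBad, currentBad); }

    Thread.Sleep(100);              // anti-pattern: blocks the worker thread outright
    _ = SimulateIoAsync().Result;   // anti-pattern: sync-over-async, blocks it again

    lock (gate) { currentBad--; }
    return $"BAD handled in {sw.ElapsedMilliseconds} ms";
});

app.MapGet("/good", async () =>
{
    var sw = Stopwatch.StartNew();
    lock (gate) { currentGood++; maxGood = Math.Max(maxGood, currentGood); }

    await Task.Delay(200);          // same ~200 ms of "work", but the thread is freed while waiting

    lock (gate) { currentGood--; }
    return $"GOOD handled in {sw.ElapsedMilliseconds} ms";
});

// Snapshot of concurrency and ThreadPool state.
app.MapGet("/stats", () => new
{
    maxBad,
    maxGood,
    threadPoolThreads = ThreadPool.ThreadCount,
    pendingWorkItems = ThreadPool.PendingWorkItemCount
});

app.Run();
```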
This single file gives you:
- /bad → blocked threads / ThreadPool pressure
- /good → healthy async behavior
- /stats → concurrency snapshot
2️⃣ Run the app
From the ThreadingDemo folder:
dotnet run
It will listen on:
http://localhost:5000 (default)
Test quickly:
curl http://localhost:5000/bad
curl http://localhost:5000/good
curl http://localhost:5000/stats
3️⃣ Experience the problem: BAD threading under load
The /bad endpoint:
- Uses Thread.Sleep() → blocks the thread
- Uses Task.Delay(100).Result → sync-over-async, blocks the thread
- Under concurrent load, ThreadPool threads get stuck and new requests wait ⇒ latency goes up, throughput drops.
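If you want to see the effect without HTTP in the mix, this small console sketch (separate from the demo app; all names and numbers are illustrative) queues 100 fake "requests" each way and times the batch:

```csharp
// Console sketch: 100 work items that block a ThreadPool worker for 100 ms
// vs 100 that await the same delay. Results vary by machine and core count.
using System.Diagnostics;

const int requests = 100;

// Blocking version: every item parks a worker thread, so the pool has to
// inject extra threads slowly before the queue drains.
var sw = Stopwatch.StartNew();
var blocking = new Task[requests];
for (int i = 0; i < requests; i++)
    blocking[i] = Task.Run(() => Thread.Sleep(100));
Task.WaitAll(blocking);
Console.WriteLine($"Blocking batch: {sw.ElapsedMilliseconds} ms"); // typically several seconds

// Async version: no thread is held during the delay, so all 100 waits can be
// in flight at once on the same small set of threads.
sw.Restart();
var asynchronous = new Task[requests];
for (int i = 0; i < requests; i++)
    asynchronous[i] = Task.Delay(100);
Task.WaitAll(asynchronous);
Console.WriteLine($"Async batch: {sw.ElapsedMilliseconds} ms");    // roughly 100-200 ms
```

The absolute numbers don't matter; the gap between the two batches is the same starvation you're about to reproduce on /bad vs /good.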
🧪 Simulate load on /bad
Option A – PowerShell (parallel-ish)
# Fire 50 requests, roughly in parallel
1..50 | ForEach-Object {
Start-Job { curl "http://localhost:5000/bad" }
}
Or more aggressively:
1..200 | ForEach-Object {
Start-Job { curl "http://localhost:5000/bad" }
}
Then check:
curl "http://localhost:5000/stats"
You’ll see:
- maxBad grows
- Responses from /bad will be slow (hundreds or thousands of ms)
4️⃣ Compare with GOOD endpoint under same load
Do the same with /good:
1..200 | ForEach-Object {
Start-Job { curl "http://localhost:5000/good" }
}
curl "http://localhost:5000/stats"
You should observe:
- The GOOD handled in ... ms times are more stable
- maxGood may be higher (more concurrency successfully handled)
- Latency is smoother because threads are not blocked; they're freed while awaiting.
5️⃣ Debug / Observe Threading Issues with Tools
Now let’s add tools on top, so you can show this in training.
🔹 Step 1: Find the process ID
In a new terminal:
dotnet-counters ps
Look for ThreadingDemo / dotnet with the project path.
Note the PID (e.g., 12345).
🔹 Step 2: Monitor runtime with dotnet-counters
Run:
dotnet-counters monitor --process-id <PID> System.Runtime Microsoft.AspNetCore.Hosting
Watch these metrics while hitting /bad vs /good:
Key ones:
- ThreadPool Thread Count
- ThreadPool Queue Length
- CPU Usage
- Requests / sec (from hosting)
- gc-heap-size
What you should see
When hammering /bad:
- ThreadPool Thread Count goes up
- ThreadPool Queue Length might stay elevated
- CPU can be high due to lots of blocking and context switching
- Requests/sec typically lower than expected
When hammering /good:
- ThreadPool threads are reused efficiently
- Queue Length often stays low
- CPU usage is better for same number of requests
- Requests/sec improves, latency is lower
🔹 Step 3: Visual Studio Diagnostic Tools (optional)
If you run from Visual Studio:
- Start the app with Debug → Start Debugging.
- Open Debug → Windows → Parallel Stacks / Parallel Tasks.
- Watch the number of running threads and tasks as you hammer /bad and /good.
You’ll see:
- For /bad: more stuck threads, longer lifetimes
- For /good: tasks start/complete quickly, threads not held hostage
6️⃣ How to explain this behavior conceptually
In /bad:
- Thread.Sleep → the worker thread is doing nothing but cannot process other requests.
- Task.Delay(...).Result → the operation is asynchronous internally, but you force it to be synchronous, so the thread blocks until the delay finishes.
- Under load: too many blocked threads → ThreadPool grows → context switching overhead → queue length and latency grow.
In /good:
- await Task.Delay(...) → the thread returns to the pool while waiting
- When the delay completes, the continuation resumes (on a ThreadPool worker)
- The same set of threads can handle many more in-flight requests.
So it’s mostly a programming practice issue, not a “.NET design flaw”:
- ❌ Bad code: blocking, sync-over-async, Thread.Sleep on the server
- ✅ Good code: async all the way, no .Result, no .Wait()
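To make the refactor concrete, here is a minimal sketch of one handler written both ways (the /report-bad and /report-good routes and the LoadReportAsync helper are hypothetical, not part of the demo app above):

```csharp
// Sketch: the same endpoint written the "bad" and the "good" way.
var app = WebApplication.CreateBuilder(args).Build();

// Stand-in for real async I/O (an EF Core query, an HttpClient call, etc.).
static async Task<string> LoadReportAsync()
{
    await Task.Delay(100);
    return "report";
}

// ❌ Sync-over-async: .Result parks the ThreadPool worker for the whole call.
app.MapGet("/report-bad", () => LoadReportAsync().Result);

// ✅ Async all the way: the worker is released while the task is in flight.
app.MapGet("/report-good", async () => await LoadReportAsync());

app.Run();
```

The fix is rarely more complicated than this: make the handler async, await the call, and let the change ripple up the call chain instead of capping it with .Result or .Wait().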
7️⃣ Quick summary for your training slide
You can summarize this demo as:
- /bad: Blocking calls (Thread.Sleep, .Result) → ThreadPool starvation → high latency, low throughput
- /good: Proper async/await → threads freed → better scalability and responsiveness
- Tools: dotnet-counters plus the /stats endpoint give direct visibility into concurrency and runtime behavior.