We can demonstrate this with one self-contained minimal API that has:
- A “bad” endpoint showing threading anti-patterns (blocking, .Result, Thread.Sleep)
- A “good” endpoint using proper async/await
- A /stats endpoint to observe concurrency behavior

Then we’ll use dotnet-counters plus some simple load to see the impact.
1️⃣ Create the project
dotnet new web -n ThreadingDemo
cd ThreadingDemo
Replace all contents of Program.cs with this:
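The original Program.cs listing is not reproduced here, so below is a minimal sketch consistent with the endpoints described in this article. The counter names (maxBad, maxGood), the exact delay durations, and the InterlockedMax helper are assumptions, not the article’s original code; note that Task.Delay returns a plain Task (no .Result property), so Wait() is used as the compilable sync-over-async equivalent:

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// In-flight and peak-concurrency counters (names assumed from the /stats output described below)
int currentBad = 0, maxBad = 0;
int currentGood = 0, maxGood = 0;

// BAD: blocking patterns — Thread.Sleep and sync-over-async
app.MapGet("/bad", () =>
{
    InterlockedMax(ref maxBad, Interlocked.Increment(ref currentBad));
    var sw = System.Diagnostics.Stopwatch.StartNew();

    Thread.Sleep(200);        // blocks a ThreadPool thread outright
    Task.Delay(100).Wait();   // sync-over-async (the ".Result" pattern): also blocks

    Interlocked.Decrement(ref currentBad);
    return $"BAD handled in {sw.ElapsedMilliseconds} ms";
});

// GOOD: async all the way — the thread returns to the pool while awaiting
app.MapGet("/good", async () =>
{
    InterlockedMax(ref maxGood, Interlocked.Increment(ref currentGood));
    var sw = System.Diagnostics.Stopwatch.StartNew();

    await Task.Delay(300);    // no thread is held during the delay

    Interlocked.Decrement(ref currentGood);
    return $"GOOD handled in {sw.ElapsedMilliseconds} ms";
});

// Concurrency snapshot
app.MapGet("/stats", () => new
{
    currentBad, maxBad, currentGood, maxGood,
    threadPoolThreads = ThreadPool.ThreadCount,
    pendingWorkItems = ThreadPool.PendingWorkItemCount
});

app.Run();

// Helper: lock-free "record the maximum observed value"
static void InterlockedMax(ref int target, int value)
{
    int current = Volatile.Read(ref target);
    while (value > current)
    {
        int previous = Interlocked.CompareExchange(ref target, value, current);
        if (previous == current) break;
        current = previous;
    }
}
```

This is a sketch of the idea, not the article’s exact file; any minimal API project created with `dotnet new web` should run it as-is.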
This single file gives you:
- /bad → blocked threads / ThreadPool pressure
- /good → healthy async behavior
- /stats → concurrency snapshot
2️⃣ Run the app
From the ThreadingDemo folder:
dotnet run
It will listen on:
http://localhost:5000 (default)
Test quickly:
curl http://localhost:5000/bad
curl http://localhost:5000/good
curl http://localhost:5000/stats
3️⃣ Experience the problem: BAD threading under load
The /bad endpoint:
- Uses Thread.Sleep() → blocks the thread
- Uses Task.Delay(100).Result → sync-over-async, blocks the thread
- Under concurrent load, the ThreadPool threads get stuck and new requests wait ⇒ latency goes up, throughput drops.
🧪 Simulate load on /bad
Option A – PowerShell (parallel-ish)
# Fire 50 requests, roughly in parallel
1..50 | ForEach-Object {
Start-Job { curl "http://localhost:5000/bad" }
}
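If you are on macOS, Linux, or Git Bash rather than PowerShell, an equivalent load loop (assuming the app is listening on port 5000 as above) might look like:

```shell
# Fire 50 requests at /bad roughly in parallel, then wait for all of them
for i in $(seq 1 50); do
  curl -s -o /dev/null "http://localhost:5000/bad" &
done
wait
```

Swap the URL to /good (and raise the count) to run the comparison in step 4 the same way.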
Or more aggressively:
1..200 | ForEach-Object {
Start-Job { curl "http://localhost:5000/bad" }
}
Then check:
curl "http://localhost:5000/stats"
You’ll see:
- maxBad grow
- Responses from /bad will be slow (hundreds or thousands of ms)
4️⃣ Compare with GOOD endpoint under same load
Do the same with /good:
1..200 | ForEach-Object {
Start-Job { curl "http://localhost:5000/good" }
}
curl "http://localhost:5000/stats"
You should observe:
- “GOOD handled in ... ms” is more stable
- maxGood may be higher (more concurrency successfully handled)
- Latency is smoother because threads are not blocked — they’re freed while awaiting.
5️⃣ Debug / Observe Threading Issues with Tools
Now let’s add tools on top, so you can show this in training.
🔹 Step 1: Find the process ID
In a new terminal:
dotnet-counters ps
Look for ThreadingDemo / dotnet with the project path.
Note the PID (e.g., 12345).
🔹 Step 2: Monitor runtime with dotnet-counters
Run:
dotnet-counters monitor --process-id <PID> System.Runtime Microsoft.AspNetCore.Hosting
Watch these metrics while hitting /bad vs /good:
Key ones:
- ThreadPool Thread Count
- ThreadPool Queue Length
- CPU Usage
- Requests / sec (from hosting)
- GC Heap Size (gc-heap-size)
What you should see
When hammering /bad:
- ThreadPool Thread Count goes up
- ThreadPool Queue Length might stay elevated
- CPU can be high due to lots of blocking and context switching
- Requests/sec typically lower than expected
When hammering /good:
- ThreadPool threads are reused efficiently
- Queue Length often stays low
- CPU usage is better for same number of requests
- Requests/sec improves, latency is lower
🔹 Step 3: Visual Studio Diagnostic Tools (optional)
If you run from Visual Studio:
- Start the app with Debug → Start Debugging.
- Open Debug → Windows → Parallel Stacks / Parallel Tasks.
- Watch the number of running threads and tasks as you hammer /bad and /good.
You’ll see:
- For /bad: more stuck threads, longer lifetimes
- For /good: tasks start and complete quickly, threads not held hostage
6️⃣ How to explain this behavior conceptually
In /bad:
- Thread.Sleep → the worker thread is doing nothing, yet it cannot process other requests.
- Task.Delay(...).Result → the operation is asynchronous internally, but you force it to be synchronous, so the thread blocks until the delay finishes.
- Under load: too many blocked threads → ThreadPool grows → context-switching overhead → queue length and latency grow.
In /good:
- await Task.Delay(...) → the thread returns to the pool while waiting.
- When the delay completes, the continuation resumes (on a thread pool worker).
- The same set of threads can handle many more in-flight requests.
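The contrast can also be reproduced outside ASP.NET with a small console sketch. The task count (32) and the delay (200 ms) are illustrative choices, and the exact timings will vary by machine, since the blocking version only slows down once the tasks outnumber the available pool threads:

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// BAD: 32 blocking waits — each one holds a ThreadPool thread for the full delay
var sw = Stopwatch.StartNew();
await Task.WhenAll(Enumerable.Range(0, 32)
    .Select(_ => Task.Run(() => Thread.Sleep(200))));
var blockingMs = sw.ElapsedMilliseconds;

// GOOD: 32 async waits — no thread is held while the delay runs
sw.Restart();
await Task.WhenAll(Enumerable.Range(0, 32)
    .Select(_ => Task.Delay(200)));
var asyncMs = sw.ElapsedMilliseconds;

// The async batch completes in roughly one delay period; the blocking batch
// takes longer whenever 32 exceeds the pool's available threads.
Console.WriteLine($"blocking: {blockingMs} ms, async: {asyncMs} ms");
```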
So it’s mostly a programming practice issue, not a “.NET design flaw”:
- ❌ Bad code: blocking, sync-over-async, Thread.Sleep on server
- ✅ Good code: async all the way, no .Result, no .Wait()
7️⃣ Quick summary for your training slide
You can summarize this demo as:
- /bad: blocking calls (Thread.Sleep, .Result) → ThreadPool starvation → high latency, low throughput
- /good: proper async/await → threads freed → better scalability and responsiveness
- Tools: dotnet-counters + the /stats endpoint give direct visibility into concurrency and runtime behavior.