We can do this with one self-contained minimal API that has:
- A "bad" endpoint showing threading anti-patterns (blocking, `.Result`, `Thread.Sleep`)
- A "good" endpoint using proper async/await
- A /stats endpoint to see concurrency behavior
- Then we'll use dotnet-counters + simple load to see the impact.
1️⃣ Create the project
dotnet new web -n ThreadingDemo
cd ThreadingDemo
Replace all contents of Program.cs with this:
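The original listing is not reproduced here, so the following is a minimal sketch consistent with the endpoints the article describes. The route names (`/bad`, `/good`, `/stats`) come from the article; the counter fields (`currentBad`, `maxBad`, `currentGood`, `maxGood`) are inferred from the `/stats` output mentioned later and may differ from the original. Note that non-generic `Task` has no `.Result` property, so the sync-over-async call is written with `.Wait()`:

```csharp
// Program.cs - minimal sketch, assuming .NET 6+ minimal APIs.
using System.Diagnostics;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Simple in-process concurrency counters (names are assumptions,
// inferred from the maxBad/maxGood values the article's /stats shows).
int currentBad = 0, maxBad = 0;
int currentGood = 0, maxGood = 0;
object gate = new();

app.MapGet("/bad", () =>
{
    lock (gate) { currentBad++; maxBad = Math.Max(maxBad, currentBad); }
    var sw = Stopwatch.StartNew();

    Thread.Sleep(100);      // anti-pattern: blocks a ThreadPool thread
    Task.Delay(100).Wait(); // anti-pattern: sync-over-async, blocks again

    lock (gate) { currentBad--; }
    return $"BAD handled in {sw.ElapsedMilliseconds} ms";
});

app.MapGet("/good", async () =>
{
    lock (gate) { currentGood++; maxGood = Math.Max(maxGood, currentGood); }
    var sw = Stopwatch.StartNew();

    await Task.Delay(200);  // thread returns to the pool while waiting

    lock (gate) { currentGood--; }
    return $"GOOD handled in {sw.ElapsedMilliseconds} ms";
});

app.MapGet("/stats", () =>
{
    ThreadPool.GetAvailableThreads(out var worker, out _);
    return new
    {
        currentBad, maxBad, currentGood, maxGood,
        threadPoolThreads = ThreadPool.ThreadCount,
        availableWorkerThreads = worker
    };
});

app.Run();
```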
This single file gives you:
- `/bad` → blocked threads / ThreadPool pressure
- `/good` → healthy async behavior
- `/stats` → concurrency snapshot
2️⃣ Run the app
From the ThreadingDemo folder:
dotnet run
It will listen on:
http://localhost:5000 (default)
Test quickly:
curl http://localhost:5000/bad
curl http://localhost:5000/good
curl http://localhost:5000/stats
3️⃣ Experience the problem: BAD threading under load
The /bad endpoint:
- Uses `Thread.Sleep()` → blocks the thread
- Uses `Task.Delay(100).Result` → sync-over-async, blocks the thread
- Under concurrent load, the ThreadPool threads get stuck and new requests wait → latency goes up, throughput drops.
🧪 Simulate load on /bad
Option A: PowerShell (parallel-ish)
# Fire 50 requests, roughly in parallel
1..50 | ForEach-Object {
Start-Job { curl "http://localhost:5000/bad" }
}
Or more aggressively:
1..200 | ForEach-Object {
Start-Job { curl "http://localhost:5000/bad" }
}
Then check:
curl "http://localhost:5000/stats"
You'll see:
- `maxBad` grow
- Responses from `/bad` will be slow (hundreds or thousands of ms)
4️⃣ Compare with GOOD endpoint under the same load
Do the same with /good:
1..200 | ForEach-Object {
Start-Job { curl "http://localhost:5000/good" }
}
curl "http://localhost:5000/stats"
You should observe:
- `GOOD handled in ... ms` responses are more stable
- `maxGood` may be higher (more concurrency successfully handled)
- Latency is smoother because threads are not blocked; they're freed while awaiting.
5️⃣ Debug / Observe Threading Issues with Tools
Now let's add tools on top, so you can show this in training.
🔹 Step 1: Find the process ID
In a new terminal:
dotnet-counters ps
Look for ThreadingDemo / dotnet with the project path.
Note the PID (e.g., 12345).
🔹 Step 2: Monitor runtime with dotnet-counters
Run:
dotnet-counters monitor --process-id <PID> System.Runtime Microsoft.AspNetCore.Hosting
Watch these metrics while hitting /bad vs /good:
Key ones:
- ThreadPool Thread Count
- ThreadPool Queue Length
- CPU Usage
- Requests / sec (from hosting)
- gc-heap-size
What you should see
When hammering /bad:
- ThreadPool Thread Count goes up
- ThreadPool Queue Length might stay elevated
- CPU can be high due to lots of blocking and context switching
- Requests/sec typically lower than expected
When hammering /good:
- ThreadPool threads are reused efficiently
- Queue Length often stays low
- CPU usage is better for same number of requests
- Requests/sec improves, latency is lower
🔹 Step 3: Visual Studio Diagnostic Tools (optional)
If you run from Visual Studio:
- Start the app with Debug → Start Debugging.
- Open Debug → Windows → Parallel Stacks / Parallel Tasks.
- Watch the number of running threads and tasks as you hammer `/bad` and `/good`.
You'll see:
- For `/bad`: more stuck threads, longer lifetimes
- For `/good`: tasks start/complete quickly, threads not held hostage
6️⃣ How to explain this behavior conceptually
In /bad:
- `Thread.Sleep` → the worker thread is doing nothing but cannot process other requests.
- `Task.Delay(...).Result` → the operation is asynchronous internally, but you force it to be synchronous, so the thread blocks until the delay finishes.
- Under load: too many blocked threads → ThreadPool grows → context-switching overhead → queue length and latency grow.
In /good:
- `await Task.Delay(...)` → the thread returns to the pool while waiting
- When the delay completes, the continuation resumes (on a thread-pool worker)
- The same set of threads can handle many more in-flight requests.
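The difference is visible even outside ASP.NET. This console sketch (not from the article; timings are illustrative) runs ten 100 ms delays first by blocking on each one, then by awaiting them concurrently — the blocked version takes roughly ten times as long:

```csharp
// Sync-over-async vs. await: ten 100 ms delays each way.
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

var sw = Stopwatch.StartNew();
for (int i = 0; i < 10; i++)
    Task.Delay(100).Wait();  // blocks the thread ~100 ms per iteration
var blockedMs = sw.ElapsedMilliseconds;

sw.Restart();
// await frees the thread; all ten delays are in flight at once
await Task.WhenAll(Enumerable.Range(0, 10).Select(_ => Task.Delay(100)));
var awaitedMs = sw.ElapsedMilliseconds;

Console.WriteLine($"Sequential blocking: {blockedMs} ms, concurrent await: {awaitedMs} ms");
```

The same effect scales up in a server: each blocked delay holds a thread hostage, while each awaited delay costs no thread at all.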
So it's mostly a programming-practice issue, not a ".NET design flaw":
- ❌ Bad code: blocking, sync-over-async, `Thread.Sleep` on the server
- ✅ Good code: async all the way, no `.Result`, no `.Wait()`
7️⃣ Quick summary for your training slide
You can summarize this demo as:
- `/bad`: Blocking calls (`Thread.Sleep`, `.Result`) → ThreadPool starvation → high latency, low throughput
- `/good`: Proper async/await → threads freed → better scalability and responsiveness
- Tools: `dotnet-counters` + the `/stats` endpoint give direct visibility into concurrency & runtime behavior.