Here’s a single, self-contained console app you can run to see and feel the difference between:
- 🚫 Without object pooling (allocate new arrays every time)
- ✅ With object pooling (reuse arrays via `ArrayPool<byte>`)

It includes:
- Full `Program.cs` code
- Optional `.csproj` (if you want a drop-in project)
- Step-by-step instructions on how to run it
- How to interpret the results

Works on any modern .NET (7/8/9/10+).
1️⃣ Full Code – Program.cs
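(The original listing did not survive in this copy of the post, so below is a minimal sketch reconstructed from the sample output in section 3️⃣. The constants, label strings, and the checksum trick are inferred, not the author's exact code.)

```csharp
using System;
using System.Buffers;
using System.Diagnostics;

// Minimal sketch of the demo. Constants and label strings are inferred
// from the sample output shown later in this post.
const int Iterations = 1_000_000;
const int BufferSize = 1024;
const int WarmupIterations = 100_000;

Console.WriteLine("=========================================");
Console.WriteLine(" Object Pooling Demo (ArrayPool<byte>)");
Console.WriteLine("=========================================");
Console.WriteLine($"Iterations : {Iterations:N0}");
Console.WriteLine($"BufferSize : {BufferSize} bytes");
Console.WriteLine("Warming up JIT (small runs)...");

Measure("Warmup - Without Pooling", WarmupIterations, WithoutPooling);
Measure("Warmup - With Pooling", WarmupIterations, WithPooling);

Console.WriteLine("=========== REAL TESTS (Release) ==========");
Measure("WITHOUT pooling (new byte[] each time)", Iterations, WithoutPooling);
Measure("WITH pooling (ArrayPool<byte>.Shared)", Iterations, WithPooling);

// Allocates a fresh array on every iteration -> constant GC pressure.
static long WithoutPooling(int iterations)
{
    long checksum = 0;
    for (int i = 0; i < iterations; i++)
    {
        var buffer = new byte[BufferSize];
        buffer[0] = (byte)i;
        checksum += buffer[0];
    }
    return checksum;
}

// Rents and returns buffers from the shared pool -> few real allocations.
static long WithPooling(int iterations)
{
    ArrayPool<byte> pool = ArrayPool<byte>.Shared;
    long checksum = 0;
    for (int i = 0; i < iterations; i++)
    {
        byte[] buffer = pool.Rent(BufferSize); // may hand back a larger array
        buffer[0] = (byte)i;
        checksum += buffer[0];
        pool.Return(buffer);
    }
    return checksum;
}

// Runs a scenario and reports elapsed time, GC counts, and memory delta.
static void Measure(string label, int iterations, Func<int, long> body)
{
    // Start from a clean slate so GC counters reflect only this run.
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();

    int gen0 = GC.CollectionCount(0);
    int gen1 = GC.CollectionCount(1);
    int gen2 = GC.CollectionCount(2);
    long memoryBefore = GC.GetTotalMemory(forceFullCollection: false);

    var sw = Stopwatch.StartNew();
    long checksum = body(iterations);
    sw.Stop();

    long memoryAfter = GC.GetTotalMemory(forceFullCollection: false);

    Console.WriteLine($"--- {label} ---");
    Console.WriteLine($"Iterations: {iterations:N0}, BufferSize: {BufferSize} bytes");
    Console.WriteLine($"Time Elapsed : {sw.ElapsedMilliseconds} ms");
    Console.WriteLine($"GC Gen0 : {GC.CollectionCount(0) - gen0}");
    Console.WriteLine($"GC Gen1 : {GC.CollectionCount(1) - gen1}");
    Console.WriteLine($"GC Gen2 : {GC.CollectionCount(2) - gen2}");
    Console.WriteLine($"Managed Memory Delta (approx): {(memoryAfter - memoryBefore) / (1024.0 * 1024.0):F2} MB");
    Console.WriteLine($"Checksum (ignore, just to keep JIT honest): {checksum}");
}
```

The checksum exists only so the JIT cannot eliminate the loop bodies as dead code; both scenarios compute the same value, which is a quick sanity check that they do equivalent work.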
2️⃣ Optional – Minimal .csproj (if you want a full project file)
Create a file named ObjectPoolingDemo.csproj:
```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>

</Project>
```
You can also change `net8.0` to `net9.0` or later (e.g., `net10.0`), depending on your installed SDK.
3️⃣ Step-by-Step: How to Run This Demo
Step 1 – Make a new console project
```shell
dotnet new console -n ObjectPoolingDemo
cd ObjectPoolingDemo
```
Step 2 – Replace Program.cs
Open Program.cs in your editor and replace everything with the code from section 1️⃣.
(Optionally, replace the autogenerated .csproj with the one from section 2️⃣; this is not required.)
Step 3 – Build and run in Release
```shell
dotnet run -c Release
```
You’ll see output like:
```text
=========================================
 Object Pooling Demo (ArrayPool<byte>)
=========================================
Iterations : 1,000,000
BufferSize : 1024 bytes

Warming up JIT (small runs)...

--- Warmup - Without Pooling ---
Iterations: 100,000, BufferSize: 1024 bytes
Time Elapsed : 120 ms
GC Gen0 : 12
GC Gen1 : 1
GC Gen2 : 0
Managed Memory Delta (approx): 3.40 MB
Checksum (ignore, just to keep JIT honest): 123456789

--- Warmup - With Pooling ---
Iterations: 100,000, BufferSize: 1024 bytes
Time Elapsed : 60 ms
GC Gen0 : 2
GC Gen1 : 0
GC Gen2 : 0
Managed Memory Delta (approx): 0.20 MB
Checksum (ignore, just to keep JIT honest): 123456789

=========== REAL TESTS (Release) ==========

--- WITHOUT pooling (new byte[] each time) ---
Iterations: 1,000,000, BufferSize: 1024 bytes
Time Elapsed : 900 ms
GC Gen0 : 100
GC Gen1 : 5
GC Gen2 : 1
Managed Memory Delta (approx): 40.00 MB
Checksum (ignore, just to keep JIT honest): 123456789

--- WITH pooling (ArrayPool<byte>.Shared) ---
Iterations: 1,000,000, BufferSize: 1024 bytes
Time Elapsed : 400 ms
GC Gen0 : 5
GC Gen1 : 0
GC Gen2 : 0
Managed Memory Delta (approx): 2.00 MB
Checksum (ignore, just to keep JIT honest): 123456789
```
(Exact numbers will differ per machine, but the pattern will be similar.)
4️⃣ How to “Experience” and Interpret the Results
Focus on these lines for each scenario:
- Time Elapsed
- GC Gen0 / Gen1 / Gen2
- Managed Memory Delta (MB)
🔴 Scenario: WITHOUT pooling
- `new byte[bufferSize]` is executed 1,000,000 times.
- That means ~1,024 × 1,000,000 ≈ 1 GB worth of arrays allocated over time.
- You should see:
- Higher elapsed time
- Many more Gen0 collections
- Possibly some Gen1/Gen2 collections
- A larger memory delta
This simulates a real-world high-allocation hot path (e.g., per-request/per-message allocations).
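In application code, that hot path often looks like the sketch below: a hypothetical per-message handler (the handler name and 1 KB scratch size are illustrative, not from the demo) that allocates a fresh buffer on every call.

```csharp
using System;

// Hypothetical per-message handler: the allocation pattern the demo simulates.
static int HandleMessage(ReadOnlySpan<byte> payload)
{
    var scratch = new byte[1024];                    // fresh 1 KB allocation per message
    int n = Math.Min(payload.Length, scratch.Length);
    payload.Slice(0, n).CopyTo(scratch);
    // ... process scratch here ...
    return n;                                        // scratch becomes garbage immediately
}

Console.WriteLine(HandleMessage(new byte[] { 1, 2, 3 }));
```

At one message per iteration, every call produces one short-lived array for the GC to sweep up, which is exactly what the Gen0 counter captures.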
🟢 Scenario: WITH pooling (ArrayPool.Shared)
- Arrays are rented and returned from a shared pool.
- Only a small number of underlying arrays are actually allocated.
- Subsequent rents reuse these buffers.
Expected result:
- Lower elapsed time (less GC interference + fewer allocations)
- Far fewer Gen0 collections, often 5–10x fewer
- Gen1/Gen2 collections may drop to 0 or near 0
- Managed Memory Delta much smaller
You are literally seeing GC pressure decreasing because:
- Without pooling → allocate, discard, GC must clean up
- With pooling → allocate a few times, then reuse, GC is mostly idle
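The rent/return idiom behind the pooled path can be sketched as follows (a minimal example; the `BufferSize` constant and the fill work are illustrative). The `try/finally` ensures the buffer goes back to the pool even if the work in between throws.

```csharp
using System;
using System.Buffers;

const int BufferSize = 1024;

byte[] buffer = ArrayPool<byte>.Shared.Rent(BufferSize); // may return a larger array
long sum;
try
{
    // Only the first BufferSize bytes are "yours": a rented array can be
    // longer than requested and may contain stale data from a previous renter.
    Span<byte> slice = buffer.AsSpan(0, BufferSize);
    slice.Fill(1);                       // stand-in for real work (socket read, etc.)
    sum = 0;
    foreach (byte b in slice) sum += b;
}
finally
{
    // Returning the array is what enables reuse; forgetting to Return just
    // lets it be collected, silently losing the pooling benefit.
    ArrayPool<byte>.Shared.Return(buffer);
}
Console.WriteLine($"sum = {sum}, rented length = {buffer.Length}");
```

Two details worth calling out in training: the pool may hand back a larger array than requested, and rented buffers are not zeroed, so code must only trust the slice it asked for.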
5️⃣ Tweaking the Demo to Feel the Effect More
If you want even more dramatic differences:
- Increase `iterations` (e.g., `5_000_000`)
- Or increase `bufferSize` (e.g., `4096` or `16_384`)
```csharp
const int iterations = 5_000_000;
const int bufferSize = 4096;
```
⚠️ Be careful: this can make the non-pooled version quite heavy on CPU and memory. Good for a demo on a strong machine, but maybe too much on low-end hardware.
6️⃣ How to Understand in Training
You can summarize for your audience:
“In the no pooling scenario, we allocate a fresh buffer for each iteration, forcing the GC to keep cleaning up.
In the pooling scenario, we reuse buffers from `ArrayPool<byte>.Shared`, dramatically reducing allocations and GC work.
The difference in GC counts and elapsed time is the direct impact of object pooling.”
I’m a DevOps/SRE/DevSecOps/Cloud Expert passionate about sharing knowledge and experiences. I have worked at Cotocus. I share tech blogs at DevOps School, travel stories at Holiday Landmark, stock market tips at Stocks Mantra, health and fitness guidance at My Medic Plus, product reviews at TrueReviewNow, and SEO strategies at Wizbrand.
Please find my social handles as below;
Rajesh Kumar Personal Website
Rajesh Kumar at YOUTUBE
Rajesh Kumar at INSTAGRAM
Rajesh Kumar at X
Rajesh Kumar at FACEBOOK
Rajesh Kumar at LINKEDIN
Rajesh Kumar at WIZBRAND