
DOTNET: Understanding Garbage Collection with Gen0, Gen1, Gen2

You can experience all of this in one self-contained program 👍
You'll be able to:

  • See GC before/after each scenario
  • Trigger Gen0 / Gen1 / Gen2 collections
  • Observe heap size and collection counts
  • Use this with dotnet-counters / dotMemory / VS profiler if you want

Below is a single Program.cs you can drop into a new console app.


1๏ธโƒฃ Create the project

dotnet new console -n GcGenerationsDemo
cd GcGenerationsDemo

Replace all contents of Program.cs with this:
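Below is a minimal sketch of such a Program.cs, reconstructed from the behavior described in the rest of this article: the scenario names and stat labels match the output shown later, and the 16 KB medium-lived buffers, the 50,000 × 8 KB long-lived buffers, and the static LongLivedHolder.Buffers list come from the text; loop counts and other details are assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Anything stored in this static list stays rooted for the whole
// process lifetime, so it eventually lives in Gen2.
static class LongLivedHolder
{
    public static readonly List<byte[]> Buffers = new();
}

class Program
{
    static void Main()
    {
        PrintGcStats("Startup");
        Gen0Scenario();
        Gen1Scenario();
        Gen2Scenario();
    }

    static void PrintGcStats(string label)
    {
        Console.WriteLine($"--- GC Stats: {label} ---");
        Console.WriteLine($"  Gen0 collections: {GC.CollectionCount(0)}");
        Console.WriteLine($"  Gen1 collections: {GC.CollectionCount(1)}");
        Console.WriteLine($"  Gen2 collections: {GC.CollectionCount(2)}");
        Console.WriteLine($"  Total managed heap: {GC.GetTotalMemory(false) / (1024 * 1024)} MB");
        Console.WriteLine();
    }

    static void Gen0Scenario()
    {
        Console.WriteLine("=== Gen0 Scenario: Short-lived objects ===");
        PrintGcStats("Before Gen0 work");

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 10_000_000; i++)
        {
            _ = new object(); // tiny and unreferenced: dies in Gen0
        }
        sw.Stop();

        GC.Collect();
        Console.WriteLine($"Gen0 scenario completed in {sw.ElapsedMilliseconds} ms");
        PrintGcStats("After Gen0 forced collection");
    }

    static void Gen1Scenario()
    {
        Console.WriteLine("=== Gen1 Scenario: Medium-lived objects ===");
        PrintGcStats("Before Gen1 work");

        var sw = Stopwatch.StartNew();
        var survivors = new List<byte[]>();
        for (int i = 0; i < 200_000; i++)
        {
            survivors.Add(new byte[16 * 1024]); // kept alive -> survives Gen0
            if (i % 10_000 == 0)
                GC.Collect(1); // force collections so promotions become visible
        }
        sw.Stop();

        Console.WriteLine($"Gen1 scenario completed in {sw.ElapsedMilliseconds} ms");
        PrintGcStats("After Gen1 forced collections"); // survivors still alive: heap is large here

        survivors.Clear(); // now drop the references ...
        GC.Collect();      // ... so the next full GC can reclaim them
    }

    static void Gen2Scenario()
    {
        Console.WriteLine("=== Gen2 Scenario: Long-lived objects ===");
        PrintGcStats("Before Gen2 work");

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 50_000; i++)
        {
            // 50,000 x 8 KB, roughly 400 MB, rooted by the static list.
            LongLivedHolder.Buffers.Add(new byte[8 * 1024]);
        }
        sw.Stop();

        GC.Collect(); // full collection, but the buffers stay reachable
        Console.WriteLine($"Gen2 scenario completed in {sw.ElapsedMilliseconds} ms");
        PrintGcStats("After Gen2 forced collection");

        Console.WriteLine($"Long-lived objects stored: {LongLivedHolder.Buffers.Count}");
        Console.WriteLine("Note: Because we still keep references, these objects won't be freed.");
    }
}
```

Fair warning: the Gen1 scenario deliberately holds roughly 3 GB of buffers at once; scale the loop count down on memory-constrained machines.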



2๏ธโƒฃ How to run and โ€œexperienceโ€ GC

From inside the project folder:

dotnet run

The program will walk you through:

  1. Startup stats
  2. Gen0 scenario – short-lived allocations
  3. Gen1 scenario – medium-lived allocations
  4. Gen2 scenario – long-lived allocations

Each step prints:

  • Gen0/Gen1/Gen2 collection counts
  • Total managed heap (MB)

You'll see:

  • Gen0 counts jump a lot in the first scenario.
  • Gen1 counts increase in the second scenario.
  • Gen2 counts and heap size behavior change in the third scenario.

3๏ธโƒฃ Optional: Watch it live with dotnet-counters

In another terminal, while the app is running:

  1. Get process list: dotnet-counters ps
  2. Find your GcGenerationsDemo PID, then: dotnet-counters monitor --process-id <PID> System.Runtime

Watch:

  • gc-heap-size
  • gen-0-gc-count
  • gen-1-gc-count
  • gen-2-gc-count

Run each scenario and youโ€™ll see the counters move in sync with console output.


4๏ธโƒฃ How this maps to Gen0 / Gen1 / Gen2 concepts

  • Gen0 scenario
    • Many tiny, short-lived objects
    • Mostly collected in Gen0
    • You'll see Gen0 collections spike
  • Gen1 scenario
    • Objects kept alive briefly in a List<byte[]>
    • They survive at least one collection → promoted to Gen1
    • We force Gen1 collections and then free references
    • You see Gen1 counts increase, heap shrink
  • Gen2 scenario
    • Objects stored in a static list (LongLivedHolder.Buffers)
    • They are long-lived; promoted to Gen2
    • Even after Gen2 collection, many remain because references are still held
    • This is how leaks and long-lived caches behave



1๏ธโƒฃ How to read the four numbers

Each PrintGcStats gives you:

  • Gen0 collections – how many times .NET cleaned short-lived objects
  • Gen1 collections – how many times it cleaned objects that survived Gen0
  • Gen2 collections – how many times it cleaned long-lived objects
  • Total managed heap – how much managed memory is still in use after GC (roughly)

These counters are cumulative since process start.
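Because they are cumulative, the way to attribute collections to one piece of work is to subtract a before-snapshot from an after-snapshot. A minimal sketch (RunScenario is a stand-in for whatever workload you want to measure):

```csharp
using System;

class DeltaDemo
{
    // Stand-in workload: allocate a pile of short-lived arrays.
    static void RunScenario()
    {
        for (int i = 0; i < 100_000; i++)
            _ = new byte[1024];
    }

    static void Main()
    {
        // Snapshot the cumulative counters before the work ...
        int g0 = GC.CollectionCount(0);
        int g1 = GC.CollectionCount(1);
        int g2 = GC.CollectionCount(2);

        RunScenario();

        // ... and subtract afterwards to isolate this scenario's cost.
        Console.WriteLine($"Gen0 during scenario: {GC.CollectionCount(0) - g0}");
        Console.WriteLine($"Gen1 during scenario: {GC.CollectionCount(1) - g1}");
        Console.WriteLine($"Gen2 during scenario: {GC.CollectionCount(2) - g2}");
    }
}
```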


2๏ธโƒฃ Gen0 scenario โ€“ short-lived objects

You saw:

=== Gen0 Scenario: Short-lived objects ===
--- GC Stats: Before Gen0 work ---
  Gen0 collections: 0
  Gen1 collections: 0
  Gen2 collections: 0
  Total managed heap: 0 MB

Gen0 scenario completed in 35 ms
--- GC Stats: After Gen0 forced collection ---
  Gen0 collections: 71
  Gen1 collections: 1
  Gen2 collections: 0
  Total managed heap: 0 MB

What happened?

  • We allocated millions of tiny objects in a loop.
  • They were not stored anywhere, so they died quickly.
  • GC cleaned them mostly in Gen0:
    • Gen0 collections: 0 → 71 ✅
  • A few objects survived briefly → promoted to Gen1 once:
    • Gen1 collections: 0 → 1
  • After the forced collection:
    • Total managed heap: 0 MB (rounded) → means almost nothing left.

👉 Interpretation:

"Lots of short-lived garbage → GC handled it cheaply in Gen0.
We generated a ton of allocations, but the memory was fully reclaimed. No leak, GC working as designed."

This is typical of request-scoped allocations in APIs when they're well-behaved.


3๏ธโƒฃ Gen1 scenario โ€“ medium-lived objects

You saw:

=== Gen1 Scenario: Medium-lived objects ===
--- GC Stats: Before Gen1 work ---
  Gen0 collections: 71
  Gen1 collections: 1
  Gen2 collections: 0
  Total managed heap: 0 MB

Gen1 scenario completed in 2613 ms
--- GC Stats: After Gen1 forced collections ---
  Gen0 collections: 627
  Gen1 collections: 296
  Gen2 collections: 33
  Total managed heap: 3131 MB

What did the code do here?

  • It allocated a lot of 16 KB buffers and stored them in a List<byte[]> survivors.
  • That local list stayed alive for a while → the buffers survived multiple Gen0 collections.
  • That caused:
    • Gen0 collections: 71 → 627 (lots of allocations)
    • Gen1 collections: 1 → 296 (many promotions & cleanups)
    • Gen2 collections: 0 → 33 (some long-lived promotions too)

Why is heap so big here (3131 MB)?

  • Right after the scenario, before the runtime has fully compacted and reused memory, GetTotalMemory sees ~3 GB still reserved/used.
  • These objects were just cleared at the end of the scenario (we call survivors.Clear() and GC.Collect), but this snapshot is still showing that a lot of memory was in play.

Then before the next scenario you saw:

--- GC Stats: Before Gen2 work ---
  ...
  Total managed heap: 2 MB

So eventually the runtime fully reclaimed it, and the heap dropped back down.
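Relatedly, GC.GetTotalMemory takes a forceFullCollection flag: passing true lets the method wait for a collection before reporting, which gives a more settled number than a transient snapshot like the ~3 GB above. A small sketch (buffer count and size are illustrative):

```csharp
using System;
using System.Collections.Generic;

class HeapSnapshotDemo
{
    static void Main()
    {
        // Build up ~160 MB of reachable buffers, then drop the references.
        var survivors = new List<byte[]>();
        for (int i = 0; i < 10_000; i++)
            survivors.Add(new byte[16 * 1024]);
        survivors.Clear();

        // false: a quick estimate that may still include just-released memory.
        Console.WriteLine($"Quick:   {GC.GetTotalMemory(false) / (1024 * 1024)} MB");

        // true: the method may wait for a collection before returning,
        // so the number reflects what is actually still reachable.
        Console.WriteLine($"Settled: {GC.GetTotalMemory(true) / (1024 * 1024)} MB");
    }
}
```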

👉 Interpretation:

"Here we created objects that lived longer than Gen0 (in a list).
We see big jumps in Gen1 and Gen2 collections and temporary heap growth (~3 GB).
After clearing references and more GC, memory drops back to a few MB → no leak, just heavy temporary pressure."

This demonstrates:

  • Promotion from Gen0 → Gen1 → Gen2
  • Longer-lived objects = more expensive GC
  • Why you don't want to keep large collections alive longer than necessary.

4๏ธโƒฃ Gen2 scenario โ€“ long-lived / leaked objects

You saw:

=== Gen2 Scenario: Long-lived objects ===
--- GC Stats: Before Gen2 work ---
  Gen0 collections: 627
  Gen1 collections: 296
  Gen2 collections: 33
  Total managed heap: 2 MB

Gen2 scenario completed in 184 ms
--- GC Stats: After Gen2 forced collection ---
  Gen0 collections: 693
  Gen1 collections: 329
  Gen2 collections: 34
  Total managed heap: 392 MB

Long-lived objects stored: 50000
Note: Because we still keep references, these objects won't be freed.

What did the code do here?

  • Allocated 50,000 × 8 KB buffers ≈ 400 MB.
  • Stored them in LongLivedHolder.Buffers, which is static.
  • We never clear that list → those objects are effectively long-lived.

Even after a full Gen2 collection:

  • Gen2 collections: 33 → 34 (we forced a full GC)
  • But heap only drops to:
    • Total managed heap: 392 MB
  • And we still have:
    • Long-lived objects stored: 50000

👉 Interpretation:

"These are long-lived objects (or a leak).
Even after a full Gen2 GC, almost 400 MB remains because we're still holding references in a static list.
This is what a memory leak / long-lived cache looks like in production:
Gen2 collections happen, but memory never really goes down."

This is the pattern you'd see in:

  • Static caches that don't evict
  • Static lists/dicts that only grow
  • Singletons holding on to big data
  • Event handler leaks, etc.
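A minimal sketch of the first pattern, a static cache with no eviction (the type and names are illustrative):

```csharp
using System;
using System.Collections.Generic;

static class ReportCache
{
    // Static dictionary with no eviction policy: every entry is rooted
    // for the lifetime of the process and ends up in Gen2.
    private static readonly Dictionary<string, byte[]> Cache = new();

    public static byte[] Get(string key)
    {
        if (!Cache.TryGetValue(key, out var data))
        {
            data = new byte[64 * 1024]; // pretend this is an expensive result
            Cache[key] = data;          // added, never removed -> grows forever
        }
        return data;
    }
}

class Program
{
    static void Main()
    {
        // Unique keys mean the cache only grows; Gen2 collections run,
        // but this memory stays reachable and is never reclaimed.
        for (int i = 0; i < 1_000; i++)
            ReportCache.Get($"report-{i}");

        GC.Collect();
        Console.WriteLine($"Heap after full GC: {GC.GetTotalMemory(true) / (1024 * 1024)} MB");
    }
}
```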

5๏ธโƒฃ How to explain (short version)

"In the Gen0 scenario, we created millions of tiny, short-lived objects.
GC handled them mostly in Gen0 (71 collections), and after GC, memory is basically 0 MB.
This is healthy, short-lived garbage.

In the Gen1 scenario, we kept objects alive for a while in a list.
They survived Gen0, got promoted to Gen1 and some to Gen2.
We see big Gen1/Gen2 collection counts and temporary heap growth to ~3 GB, but after clearing references, the heap returns to a small size.
This shows the cost of medium-lived objects.

In the Gen2 scenario, we stored 50k buffers in a static list.
Even after a full Gen2 collection, we still use ~392 MB.
Because the app still references these objects, the GC cannot free them.
This is exactly how long-lived objects and memory leaks behave in real .NET apps."

