
The Configuration Chaos

By K, a code-dependent life form.

Meet Jordan. Six months ago, Jordan joined a promising startup called "FastAPI Co." as a DevOps engineer. The company's main product is a high-traffic API service that handles configuration management for thousands of micro-services. At peak times, they process 5,000+ requests per minute.

Everything seemed fine during the first few months. The monitoring metrics looked acceptable, users were happy, and the product was growing. But then came December 20th, 2025.

The Incident

It's 3:17 AM on a cold Friday morning. Jordan's phone vibrates violently on the nightstand, then starts blaring alerts:

🚨 CRITICAL: HIGH MEMORY USAGE: 95% (Threshold: 80%)
🚨 CRITICAL: DATABASE: Too many connections (ERROR 1040)
🚨 WARNING: RESPONSE TIME: 5247ms (Normal: 45-60ms)
🚨 CRITICAL: SERVER CRASH IMMINENT - OOM Killer Active
🚨 ALERT: Error rate: 45% (Normal: <0.1%)

Jordan jolts awake, grabs the laptop with shaking hands, and logs into the monitoring dashboard. The graphs look like a horror movie:

Monitoring Dashboard Snapshot:

  • Memory usage: Climbing steadily from 2GB to 7.8GB in 90 minutes

  • Database connection pools: 523 active pools!?

  • Config file reads: 14,247 in the last hour

  • Active database connections: 8,941 (PostgreSQL limit: 100)

  • Cache misses: 99.7%

Jordan's internal monologue:

"What the hell is happening? Why are there 523 database connection pools? We should only need ONE! And why is the config file being read 14 THOUSAND times? We have caching, don't we? Why are different parts of the app seeing DIFFERENT configuration values? This doesn't make ANY sense! We're about to hit OOM and the database is rejecting connections... I need to figure this out NOW."

The Current Code

Jordan opens the codebase and starts tracing execution paths. What they find is terrifying:

// ❌ THE PROBLEM CODE - What Jordan inherited.
// This code is scattered across 30+ files!

// In src/database.rs
pub fn connect_to_database() -> DatabaseConnection {
    // PROBLEM 1: Reading config file from disk EVERY TIME!
    let config = read_config_from_file("config.json");

    // PROBLEM 2: Creating a NEW connection pool EVERY TIME!
    let pool = ConnectionPool::new(
        &config.database_url,
        config.max_connections  // Each pool holds 10-50 connections!
    );

    DatabaseConnection { pool }
}

// In src/handlers.rs
async fn handle_api_request(req: Request) -> Response {
    // ANOTHER config read from disk!
    let config = read_config_from_file("config.json");

    // Each request reads config for feature flags.
    if config.enable_feature_x {
        // Process with feature X.
    }

    if config.enable_caching {
        // Check cache... but wait, cache is also initialized per request!
        let cache = Cache::new(config.cache_size);  // NEW CACHE!
    }

    // ... more logic.
}

// In src/logger.rs
pub fn log(level: &str, message: &str) {
    // YET ANOTHER config read!
    let config = read_config_from_file("config.json");

    // Check log level threshold.
    if should_log(level, &config.log_level) {
        // Initialize NEW logger instance!
        let logger = Logger::new(&config.log_file_path);  // NEW LOGGER!
        logger.write(level, message);
    }
}

// In src/middleware/auth.rs
async fn auth_middleware(req: Request, next: Next) -> Response {
    // ANOTHER ONE!
    let config = read_config_from_file("config.json");

    let start = Instant::now();

    let response = next.run(req).await;

    // Check timeout.
    if start.elapsed().as_secs() > config.api_timeout_seconds {
        return timeout_response();
    }

    response
}

// This pattern repeats in 30+ files across the codebase!
// EVERY function that needs configuration creates NEW instances of EVERYTHING!

The Symptoms

Jordan writes up a comprehensive breakdown of what's actually happening:

The Pain Points:

  1. Performance Killer - Repeated Disk I/O

    • What: Config file read from disk on EVERY request

    • Why it's bad: Disk I/O is 100,000x slower than RAM access

    • Impact: Each request wasted 2-6ms just reading config

    • Math: 5,000 requests/min × 4ms = 20,000ms = 20 seconds wasted per minute!

  2. Memory Leak - Resource Multiplication

    • What: New database connection pool created for each request

    • Why it's bad: Each pool holds 10-50 database connections

    • Impact: 500 requests = 500 pools = 5,000-25,000 DB connections

    • Math: PostgreSQL default max_connections = 100. We hit that in seconds!

  3. Resource Exhaustion - Database Rejection

    • What: Database refuses new connections

    • Why it's bad: Database has a hard limit on connections

    • Impact: Requests start failing with "Too many connections" error

    • Math: After 10-20 requests, database connection limit reached

  4. Race Conditions - Inconsistent State

    • What: Config file updated during request processing

    • Why it's bad: Different threads see different configuration

    • Impact: Inconsistent behaviour, hard-to-debug issues

    • Example: Thread A sees max_connections=10, Thread B sees 50

  5. Initialisation Storm - Repeated Setup

    • What: Logger, cache, metrics collectors all initialised repeatedly

    • Why it's bad: Initialisation is expensive (allocations, setup)

    • Impact: CPU cycles wasted, memory fragmented

    • Math: 1ms per initialisation × 500 requests = 500ms wasted

The Disaster

Let's break down EXACTLY what's happening at the system level. Understanding the problem deeply is crucial for appreciating why the Singleton pattern is the right solution.

Problem 1: Repeated File I/O (The I/O Bottleneck)

Imagine if every time you wanted to check the time, you had to walk to the library, find a clock on the wall, read it, and walk back. That's essentially what's happening here. The application reads the configuration file from disk for every single operation that needs config data, instead of keeping it in memory. Disk operations are incredibly slow compared to memory access: we're talking about a difference of up to 100,000x in speed. This seemingly innocent function call creates a massive performance bottleneck that compounds with every request.

// This innocent-looking function is called THOUSANDS of times!
use std::fs::File;
use std::io::BufReader;

fn read_config_from_file(path: &str) -> Config {
    // Step 1: Open the file. (a syscall into the operating system)
    let file = File::open(path).unwrap();  // ~1-2ms

    // Step 2: Wrap it in a buffered reader.
    let reader = BufReader::new(file);

    // Step 3: Parse the JSON. (CPU-intensive)
    serde_json::from_reader(reader).unwrap()  // ~0.5-1ms
}

What's actually happening under the hood:

Performance Reality Check:

Operation            Time (Best Case)    Time (Worst Case)      Notes
System call (open)   1-2 μs              50-100 μs              Context switch to kernel
Disk seek            0 μs (cache hit)    5-10 ms (cache miss)   SSD vs HDD matters
Read file (1KB)      10-50 μs            500-1,000 μs           Depends on disk cache
JSON parsing         100-200 μs          1-2 ms                 CPU-bound operation
TOTAL                ~200 μs             ~5-10 ms               Depends on caching

Multiplication Effect:

  • 1 request: ~2ms for config (acceptable)

  • 100 requests: ~200ms wasted (noticeable)

  • 1,000 requests: ~2 seconds wasted (bad)

  • 5,000 requests/min: ~10 seconds wasted per minute (catastrophic!)

Memory Impact:

// Each config read allocates memory:
struct Config {
    server_host: String,        // 24 bytes + heap allocation.
    server_port: u16,           // 2 bytes.
    database_url: String,       // 24 bytes + heap allocation.
    max_connections: u32,       // 4 bytes.
    log_level: String,          // 24 bytes + heap allocation.
    // ... more fields.
}

// Total: ~150-200 bytes per Config instance.
// 5,000 requests × 200 bytes = 1MB just for config copies!
// But wait... each config is cloned in multiple places!
// Real impact: 5-10 MB wasted memory per minute.

Problem 2: Resource Multiplication

This is where things get truly catastrophic. Every time a request needs database access, it creates a brand new connection pool. Think of it like building an entire new highway system every time a single car wants to travel somewhere. Each connection pool maintains multiple active TCP connections to the database, consuming memory and holding valuable database connection slots. With hundreds of concurrent requests, we're creating hundreds of pools, thousands of connections, and exhausting both our server's memory and the database's connection limit. The database eventually says "no more!" and starts rejecting connections, causing a cascade of failures.

// Every request creates a NEW connection pool!
pub struct DatabaseConnection {
    pool: ConnectionPool,  // This is the killer.
}

pub struct ConnectionPool {
    connections: Vec<Connection>,  // Holds 10-50 actual TCP connections!
    url: String,
    max_size: u32,
}

The multiplication is easy to trace through the connection states: every request that touches the database spins up its own pool, and each pool opens 10-50 live TCP connections of its own.
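The arithmetic can be simulated in a few lines. This is a sketch, not the real code: FakePool is a hypothetical stand-in for the actual ConnectionPool, with plain integers in place of TCP connections.

```rust
// FakePool is a hypothetical stand-in for the real ConnectionPool.
struct FakePool {
    connections: Vec<u32>, // stand-ins for live TCP connections
}

impl FakePool {
    fn new(size: u32) -> Self {
        FakePool { connections: (0..size).collect() }
    }
}

fn main() {
    const POOL_SIZE: u32 = 10;
    // 500 requests, each creating its OWN pool (the anti-pattern):
    let pools: Vec<FakePool> = (0..500).map(|_| FakePool::new(POOL_SIZE)).collect();
    let total: usize = pools.iter().map(|p| p.connections.len()).sum();
    assert_eq!(total, 5_000); // far beyond PostgreSQL's default limit of 100
    println!("{} pools holding {} connections", pools.len(), total);
}
```

Five hundred requests, five thousand connections: the database never stood a chance.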

Problem 3: Inconsistent State (The Race Condition)

When multiple threads are reading the configuration file independently, they can end up with different versions of the config at the same time. This is like having a company where the sales team is looking at last month's price list while the support team is looking at this month's updated prices! If an administrator updates the config file while the application is running, some threads will see the old values while others see the new ones. This creates unpredictable behavior that's extremely difficult to debug because it only happens under specific timing conditions.

// Scenario: Sys admin updates config while app is running.

// Timeline:
// T=0: config.json contains: { "max_connections": 10 }

// T=1: Thread 1 starts processing request
let config = read_config_from_file("config.json");
// Thread 1 sees: max_connections = 10

// T=2: Sys admin updates config.json
// New content: { "max_connections": 50 }

// T=3: Thread 2 starts processing request
let config = read_config_from_file("config.json");
// Thread 2 sees: max_connections = 50

// T=4: Thread 1 creates database connection with max=10
let db1 = DatabaseConnection::new(&config.database_url, 10);

// T=5: Thread 2 creates database connection with max=50
let db2 = DatabaseConnection::new(&config.database_url, 50);

// PROBLEM: Two different configurations in the same running application!
// This leads to:
// - Unpredictable behavior
// - Hard-to-reproduce bugs
// - Inconsistent resource limits
// - Data race conditions


Problem 4: The Memory Usage Graph

Numbers and code are important, but sometimes a graph tells the story best. When Jordan pulled up the monitoring dashboard, the memory usage graph painted a terrifying picture: a steady, relentless climb from normal operating levels to near-crash conditions in just under two hours. This isn't a memory leak in the traditional sense: the memory is technically in use, but it's being wasted on thousands of duplicate instances of objects that should exist only once. The graph shows the inevitable trajectory toward system failure, with memory consumption accelerating as traffic increases.

Jordan exports the numbers from the monitoring dashboard and studies the breakdown.

Jordan's realisation:

"Oh my god. We're not reusing ANYTHING! Every single request creates new instances of everything. We have 523 connection pools when we should have ONE. We have 892 logger instances when we should have ONE. We have 456 cache instances when we should have ONE!

The actual application data - the stuff users care about - is only 10% of our memory usage. The other 90% is completely wasted duplication.

We need ONE config, ONE logger, ONE connection pool, ONE cache manager. Everything should be shared safely across all requests. But how do we do that in Rust?"

The Breaking Point

Monday morning, 9:00 AM. The post-incident review meeting. The atmosphere is tense.

Attendees:

  • Maria (CTO)

  • Alex (Senior Backend Engineer)

  • Jordan (DevOps Engineer - looking exhausted)

  • Sarah (Product Manager)

  • Mike (Database Administrator)

Maria (pulling up crash reports): "Alright team. Let's break down what happened Friday night. Jordan, can you walk us through the incident?"

Jordan (opening laptop, screen showing monitoring graphs): "At 3:17 AM, we had a catastrophic failure. Memory usage went from normal 2GB to 7.8GB in 90 minutes. The database started rejecting connections. Response times went from 50ms to over 5 seconds. The system crashed at 3:42 AM."

Mike (database admin, frustrated): "I was getting alerts about too many connections. We have a limit of 100 concurrent connections. But I was seeing connection attempts in the thousands! How is that even possible?"

Jordan (pulling up code): "I traced through the codebase. Every single request creates a new database connection pool. Each pool tries to establish 10 connections. So after just 10-20 requests, we hit the database limit."

Alex (leaning forward): "Wait, what? Show me the code."

Jordan (sharing screen):

// This is in our request handler.
async fn handle_request(req: Request) -> Response {
    let config = read_config_from_file("config.json");
    let db = DatabaseConnection::new(&config);  // Creates NEW pool!
    // ...
}

Alex (eyes widening): "Oh no. OH NO. Are we reading the config file on every request?"

Jordan (nodding grimly): "Not just every request. Every FUNCTION that needs config. I counted 30+ places in the codebase. We're reading that file thousands of times per minute."

Sarah (confused): "But... why doesn't it just use the same config? Can't we just... keep it in memory?"

Alex: "We can and we should! What we're seeing here is the classic anti-pattern: no singleton. We're creating new instances of everything that should be shared."

Jordan: "Singleton? I've heard the term but never really understood it. What is it?"

Alex (standing up and walking to the whiteboard): "Alright, let me explain. A singleton is a design pattern that ensures you have exactly ONE instance of something in your entire application. Not two, not a thousand - ONE."

Alex draws on the whiteboard:

Current State (THE PROBLEM):
============================
Request 1  →  Config #1, DB Pool #1, Logger #1, Cache #1
Request 2  →  Config #2, DB Pool #2, Logger #2, Cache #2
Request 3  →  Config #3, DB Pool #3, Logger #3, Cache #3
...
Request N  →  Config #N, DB Pool #N, Logger #N, Cache #N

Result: N requests = N instances of EVERYTHING!
Memory: 500 requests × 15MB = 7.5GB
DB connections: 500 pools × 10 = 5,000 connections
Config file reads: 500 requests × 6 functions = 3,000 I/O operations!

Desired State (WITH SINGLETON):
================================
Request 1  ─┐
Request 2  ─┼─→  ONE Config, ONE DB Pool, ONE Logger, ONE Cache
Request 3  ─┤
Request 4  ─┤
...         │
Request N  ─┘

Result: N requests = 1 instance of shared resources!
Memory: 15MB (constant, regardless of request count!)
DB connections: 1 pool × 10 = 10 connections
Config file reads: 1 (total, for entire application lifetime!)

Maria: "So we need to refactor to use singletons. How long will this take?"

Alex: "In most languages, creating thread-safe singletons is tricky and error-prone. But in Rust, the compiler helps us. We can use Arc, RwLock, and once_cell to create singletons that are thread-safe by default."

Jordan (determined): "I can do this. Give me until Wednesday, and I'll have a solution."

Alex: "I'll pair with you. This is important to get right."

Maria: "Alright. Jordan and Alex, this is your top priority. Sarah, let's communicate with customers about Friday's downtime. Mike, increase the database connection limit temporarily as a band-aid, but we know the real fix needs to happen in the code."

Mike: "Already done. I bumped it to 500, but that's just delaying the inevitable if we don't fix the root cause."

Jordan: "We'll fix the root cause. No more band-aids."

The Discovery

Alex: "Alright Jordan, let's build your understanding from the ground up. First, what do you think a singleton is?"

Jordan: "From what you said yesterday... it's a way to ensure only one instance of something exists?"

Alex: "Exactly! Let me give you some real-world analogies to make this concrete."

Real-World Singleton Examples

Alex's whiteboard:

Real-World Singletons:
======================

1. The Sun in our solar system
   - We have exactly ONE sun
   - Everyone on Earth sees the SAME sun
   - You can't create a second sun
   - It exists for the lifetime of the solar system

2. The President of a country
   - A country has ONE president at any given time
   - All citizens interact with the SAME president
   - You can't have multiple presidents simultaneously
   - When the term ends, a new president is elected (but still ONE)

3. A Company's Configuration System
   - A company has ONE HR policy manual
   - All employees follow the SAME policies
   - If each department had different policies = chaos!
   - Updates to the manual apply to everyone immediately

In Software:
============

1. Application Configuration
   - ONE config for the entire app
   - All modules read the SAME values
   - Ensures consistency

2. Logger
   - ONE central logging system
   - All components write to the SAME log
   - Makes debugging easier

3. Database Connection Pool
   - ONE pool manager for the database
   - All requests share connections from the SAME pool
   - Prevents connection exhaustion

4. Cache Manager
   - ONE cache for the application
   - All components share the SAME cached data
   - Maximizes cache hit rate

Jordan: "Okay, that makes sense. But how do we enforce 'only one instance' in code? What stops someone from creating a second instance?"

Alex: "Great question! Let's look at how other languages do it first, then we'll see why Rust is better."

The Traditional Singleton

Alex writes Java code on the whiteboard:

// Traditional Singleton in Java.
public class ConfigManager {
    // Private static instance.
    private static ConfigManager instance;

    // Private constructor - can't create instance from outside!
    private ConfigManager() {
        // Load configuration.
    }

    // Public static method to get the instance.
    public static ConfigManager getInstance() {
        if (instance == null) {
            instance = new ConfigManager();  // Create on first access.
        }
        return instance;
    }
}

// Usage:
ConfigManager config = ConfigManager.getInstance();  // Gets the ONLY instance.

Alex: "See the pattern? The constructor is private, so nobody can call new ConfigManager(). The only way to get an instance is through getInstance(), which creates it once and reuses it."

Jordan: "That's clever! But wait... what if two threads call getInstance() at the same time?"

Alex (grinning): "EXACTLY! You've found the problem! If two threads enter getInstance() simultaneously, before the instance is created, you can end up with TWO instances!"

Jordan: "So it's not actually a singleton anymore?"

Alex: "Right. In Java, you need to add synchronisation:"

// Thread-safe singleton in Java. (more complex)
public class ConfigManager {
    private static volatile ConfigManager instance;

    private ConfigManager() {}

    public static ConfigManager getInstance() {
        if (instance == null) {  // First check (no locking)
            synchronized (ConfigManager.class) {  // Lock for thread-safety.
                if (instance == null) {  // Second check. (inside lock)
                    instance = new ConfigManager();
                }
            }
        }
        return instance;
    }
}

Alex: "This is called 'double-checked locking'. It's tricky to get right. People mess it up all the time. And it has performance overhead because of the synchronisation."

Jordan: "Yikes. That looks complicated."

Alex: "Now let me show you the Rust way."

The Rust Advantage

Alex writes Rust code:

// Singleton in Rust - Simple and Thread-Safe!
use once_cell::sync::Lazy;

static CONFIG: Lazy<ConfigManager> = Lazy::new(|| {
    ConfigManager::new()
});

// Usage:
let config = &*CONFIG;  // Gets the ONLY instance

Jordan: "That's... it? Just three lines?"

Alex: "Yep! And it's automatically thread-safe. The Lazy type from once_cell guarantees that the initialisation happens exactly once, even if a thousand threads try to access it simultaneously."

Jordan: "How does it work?"

Alex: "Under the hood, Lazy uses atomic operations and locks efficiently. But the key is: we don't have to think about it! The library handles all the thread-safety for us, and the Rust compiler ensures we use it correctly."
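Alex's "exactly once" claim is easy to verify with a quick experiment. A sketch using the standard library's OnceLock (which gives the same once-only guarantee, so the snippet runs without external crates); INIT_COUNT is an illustrative counter added just to observe the initialisation:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::OnceLock;
use std::thread;

// Counts how many times the initialiser actually runs.
static INIT_COUNT: AtomicUsize = AtomicUsize::new(0);
static VALUE: OnceLock<u64> = OnceLock::new();

fn get_value() -> u64 {
    *VALUE.get_or_init(|| {
        INIT_COUNT.fetch_add(1, Ordering::SeqCst);
        42
    })
}

fn main() {
    // 100 threads race to initialise the singleton.
    let handles: Vec<_> = (0..100).map(|_| thread::spawn(get_value)).collect();
    for handle in handles {
        assert_eq!(handle.join().unwrap(), 42);
    }
    // The initialiser still ran exactly once, despite the race.
    assert_eq!(INIT_COUNT.load(Ordering::SeqCst), 1);
}
```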

Jordan: "Okay, but what if we need to modify the config? Like when we hot-reload it?"

Alex: "Ah, now we need interior mutability. That's where Arc and RwLock come in."

Alex draws a more complete example:

use once_cell::sync::Lazy;
use parking_lot::RwLock;  // Better performance than std::sync::RwLock.
use std::sync::Arc;

// The singleton instance.
static CONFIG: Lazy<Arc<RwLock<ConfigManager>>> = Lazy::new(|| {
    println!("Initializing config - this prints EXACTLY ONCE");
    Arc::new(RwLock::new(ConfigManager::default()))
});

// Safe concurrent reads.
pub fn get_config() -> ConfigManager {
    CONFIG.read().clone()  // Multiple threads can read simultaneously.
}

// Safe exclusive writes.
pub fn update_config(new_config: ConfigManager) {
    let mut config = CONFIG.write();  // Only one thread can write.
    *config = new_config;
}

Jordan: "Okay, now I have more questions. What's Arc? What's RwLock? And why do we need both?"

Alex: "Perfect! Let's break down each component. Understanding these is crucial for building safe concurrent systems in Rust."

The Singleton Pattern

Let's dive deep into the Singleton pattern - what it is, why it exists, and how it works.

Breaking it down:

  1. "Ensure a class has only one instance"

    • Not zero instances (that would be useless)

    • Not two or more instances (that defeats the purpose)

    • Exactly ONE instance for the entire application lifetime

  2. "Provide a global point of access to it"

    • Anyone who needs the instance can get it

    • No need to pass it around as parameters

    • Consistent access mechanism

The Structure
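In Rust terms, the structure boils down to two parts: a constructor that outside code cannot call, and a global accessor that creates the instance once and hands out references afterwards. A minimal sketch using the standard library's OnceLock (the type and field names here are illustrative, not from the article's codebase):

```rust
use std::sync::OnceLock;

pub struct ConfigManager {
    pub app_name: String,
}

impl ConfigManager {
    // Private constructor: only `instance()` can call it.
    fn new() -> Self {
        ConfigManager { app_name: "FastAPI Co.".to_string() }
    }

    // Global point of access: initialises on first call, reuses after.
    pub fn instance() -> &'static ConfigManager {
        static INSTANCE: OnceLock<ConfigManager> = OnceLock::new();
        INSTANCE.get_or_init(ConfigManager::new)
    }
}

fn main() {
    let a = ConfigManager::instance();
    let b = ConfigManager::instance();
    // Both references point at the same static instance.
    assert!(std::ptr::eq(a, b));
    println!("{}", a.app_name);
}
```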

When to Use Singletons

Use Singletons When:

  1. Exactly one instance must exist

     // Application configuration - one source of truth.
     static CONFIG: Lazy<Arc<RwLock<Config>>> = ...;
    
     // Logger - centralized logging.
     static LOGGER: Lazy<Logger> = ...;
    
     // Database connection pool - manage limited resources.
     static DB_POOL: Lazy<Arc<ConnectionPool>> = ...;
    
  2. Global access is genuinely needed

     // Accessed from many different modules/layers.
     mod handlers {
         use crate::CONFIG;
         fn handle() { let config = CONFIG.read(); }
     }
    
     mod database {
         use crate::CONFIG;
         fn connect() { let config = CONFIG.read(); }
     }
    
     mod middleware {
         use crate::CONFIG;
         fn auth() { let config = CONFIG.read(); }
     }
    
  3. Initialisation is expensive

     // Loading large reference data
     static COUNTRY_DATA: Lazy<HashMap<String, Country>> = Lazy::new(|| {
         // Load 200MB of country/city data once
         load_countries_from_disk()  // Expensive!
     });
    
     // Singleton ensures this expensive operation happens only once.
    

Don't Use Singletons When:

  1. You need multiple instances

     // ❌ DON'T: User sessions should NOT be singletons!
     // Each user needs their own session.
     struct UserSession { /* ... */ }
    
     // ✅ DO: Create instances per user.
     let session1 = UserSession::new(user1);
     let session2 = UserSession::new(user2);
    
  2. Testing is difficult

     // ❌ Singletons make unit testing harder.
     #[test]
     fn test_something() {
         // Hard to mock or reset singleton state between tests.
         let result = some_function_using_singleton();
     }
    
     // ✅ Better: Dependency injection.
     fn some_function(config: &Config) -> Result { /* ... */ }
    
     #[test]
     fn test_something() {
         let mock_config = Config::test_default();
         let result = some_function(&mock_config);  // Easy to test!
     }
    

The Lifecycle
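A singleton's lifecycle has four phases: dormant until first access, initialised exactly once, reused on every later access, and alive until the process exits. A small sketch using std's OnceLock makes the phases observable (GREETING is an illustrative example, not from the article's codebase):

```rust
use std::sync::OnceLock;

static GREETING: OnceLock<String> = OnceLock::new();

fn greeting() -> &'static str {
    GREETING.get_or_init(|| {
        println!("initialising (this prints exactly once)");
        String::from("hello")
    })
}

fn main() {
    // 1. Dormant: the static exists, but holds nothing yet.
    assert!(GREETING.get().is_none());

    // 2. Born: the first access runs the initialiser.
    let first = greeting();

    // 3. Reused: later accesses return the same instance, no re-init.
    let second = greeting();
    assert!(std::ptr::eq(first, second));

    // 4. The value lives until the process exits.
}
```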

Rust Concepts for Thread-Safe Singletons

Before we build the solution, we need to understand the Rust primitives that make thread-safe singletons possible. This is where Rust truly shines compared to other languages.

1. Static Variables in Rust

Static variables live for the entire program duration and have a fixed memory address.

// Static variable - lives forever.
static COUNTER: i32 = 0;

fn main() {
    println!("{}", COUNTER);  // ✅ Can read.
    // COUNTER += 1;          // ❌ Can't mutate!
}

The problem:

// You might think: just make it mutable!
static mut COUNTER: i32 = 0;  // ⚠️ This requires unsafe!

fn main() {
    unsafe {
        COUNTER += 1;  // ⚠️ Unsafe! No compiler protection!
    }
}

Why is static mut unsafe?

// Data race! Multiple threads modifying COUNTER simultaneously.
use std::thread;

static mut COUNTER: i32 = 0;

fn main() {
    let handles: Vec<_> = (0..10)
        .map(|_| {
            thread::spawn(|| {
                unsafe {
                    // All threads increment simultaneously
                    // No synchronization!
                    // Final value is unpredictable!
                    COUNTER += 1;  // ⚠️ DATA RACE!
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    unsafe {
        // Expected: 10
        // Actual: ??? (could be anything from 1 to 10!)
        println!("Counter: {}", COUNTER);
    }
}

The solution: Interior mutability with thread-safe synchronisation

use std::sync::Mutex;

// Safe! Mutex provides interior mutability + thread safety.
static COUNTER: Mutex<i32> = Mutex::new(0);

fn main() {
    let mut counter = COUNTER.lock().unwrap();
    *counter += 1;  // Safe! Compiler-enforced locking.
}
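Re-running the earlier ten-thread experiment with the Mutex version shows the difference: the result is now deterministic. (A sketch; add_from_threads is an illustrative helper, not part of the article's codebase.)

```rust
use std::sync::Mutex;
use std::thread;

static COUNTER: Mutex<i32> = Mutex::new(0);

// Spawn `n` threads that each increment the shared counter once.
fn add_from_threads(n: i32) {
    let handles: Vec<_> = (0..n)
        .map(|_| {
            thread::spawn(|| {
                // The lock serialises the increments: no data race possible.
                let mut counter = COUNTER.lock().unwrap();
                *counter += 1;
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
}

fn main() {
    add_from_threads(10);
    // Deterministic, unlike the `static mut` version above.
    assert_eq!(*COUNTER.lock().unwrap(), 10);
    println!("Counter: {}", COUNTER.lock().unwrap());
}
```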

2. once_cell::Lazy - Lazy Initialisation

The problem: How do we initialise complex types at runtime?

// ❌ This doesn't work!
static CONFIG: AppConfig = AppConfig::new();
// Error: cannot call non-const fn in static.

// Why? Static variables must be initialized at compile-time,
// but AppConfig::new() runs at runtime!

The solution: once_cell::Lazy

use once_cell::sync::Lazy;

static CONFIG: Lazy<AppConfig> = Lazy::new(|| {
    println!("Initializing config...");
    AppConfig::load_from_env()  // Runs at first access!
});

fn main() {
    // First access - triggers initialization.
    let config1 = &*CONFIG;
    // Prints: "Initializing config..."

    // Second access - reuses existing instance.
    let config2 = &*CONFIG;
    // No print! Uses existing instance.

    // config1 and config2 point to THE SAME data!
}

How Lazy works internally: it keeps an initialiser closure and a once-guarded slot. The first access runs the closure and stores the result; every later access returns the stored value without running the closure again.
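A simplified model helps build intuition. The real Lazy uses a lock-free state machine; this sketch (MyLazy is an illustrative name, not the real implementation) uses a Mutex around an Option for clarity:

```rust
use std::sync::Mutex;

// A simplified, Mutex-based model of what `Lazy` does internally.
struct MyLazy<T> {
    slot: Mutex<Option<T>>, // starts empty, filled on first access
    init: fn() -> T,        // the initialiser to run once
}

impl<T> MyLazy<T> {
    const fn new(init: fn() -> T) -> Self {
        MyLazy { slot: Mutex::new(None), init }
    }
}

impl<T: Clone> MyLazy<T> {
    fn get(&self) -> T {
        let mut slot = self.slot.lock().unwrap();
        if slot.is_none() {
            // First access only: run the initialiser and cache the result.
            *slot = Some((self.init)());
        }
        slot.as_ref().unwrap().clone()
    }
}

static NUMBER: MyLazy<i32> = MyLazy::new(|| {
    println!("initialising (prints once)");
    42
});

fn main() {
    assert_eq!(NUMBER.get(), 42); // triggers initialisation
    assert_eq!(NUMBER.get(), 42); // served from the cached slot
}
```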

3. Arc<T> - Atomic Reference Counting

The problem: How do we share ownership across multiple threads?

// Regular references don't work across threads:
let data = vec![1, 2, 3];
let data_ref = &data;

thread::spawn(move || {
    println!("{:?}", data_ref);  // Error! Can't send reference across threads.
});

The solution: Arc (Atomic Reference Counted)

use std::sync::Arc;
use std::thread;

let data = Arc::new(vec![1, 2, 3]);

// Clone the Arc (cheap! just increments reference count)
let data_clone1 = Arc::clone(&data);
let data_clone2 = Arc::clone(&data);

// Now we can send to threads!
let handle1 = thread::spawn(move || {
    println!("Thread 1: {:?}", data_clone1);  // Works!
});

let handle2 = thread::spawn(move || {
    println!("Thread 2: {:?}", data_clone2);  // Works!
});

handle1.join().unwrap();
handle2.join().unwrap();

// Original Arc still valid.
println!("Main: {:?}", data);  // Works!

How Arc works: the value lives on the heap next to two atomic counters (strong and weak). Arc::clone increments the strong count, dropping an Arc decrements it, and the allocation is freed when the strong count reaches zero.
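The reference-count mechanics can be observed directly with Arc::strong_count and Arc::ptr_eq:

```rust
use std::sync::Arc;

fn main() {
    let data = Arc::new(vec![1, 2, 3]);
    assert_eq!(Arc::strong_count(&data), 1);

    let clone_a = Arc::clone(&data); // bumps the strong count, no deep copy
    let clone_b = Arc::clone(&data);
    assert_eq!(Arc::strong_count(&data), 3);

    // All three handles point at the SAME heap allocation.
    assert!(Arc::ptr_eq(&data, &clone_a));
    assert!(Arc::ptr_eq(&data, &clone_b));

    drop(clone_a); // strong count: 3 -> 2
    drop(clone_b); // strong count: 2 -> 1
    assert_eq!(Arc::strong_count(&data), 1);
    // When `data` is dropped too, the count hits zero and the Vec is freed.
}
```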

4. RwLock<T> - Reader-Writer Lock

The problem: How do we allow multiple readers OR one writer?

// With Mutex, only ONE thread can access at a time:
use std::sync::{Arc, Mutex};

let data = Arc::new(Mutex::new(0));

// Reader 1 locks.
let guard1 = data.lock().unwrap();
println!("Reader 1: {}", *guard1);

// Reader 2 wants to read... but must wait!
// Even though reading doesn't conflict with other reads!

The solution: RwLock (Reader-Writer Lock)

use std::sync::{Arc, RwLock};

let data = Arc::new(RwLock::new(0));

// Multiple readers can hold read locks simultaneously!
let reader1 = data.read().unwrap();
let reader2 = data.read().unwrap();  // OK! Both reading.
let reader3 = data.read().unwrap();  // OK! All reading.
println!("{} {} {}", *reader1, *reader2, *reader3);

// A writer must wait until every reader is gone, so release them first.
// (Calling write() while this thread still holds a read guard would deadlock!)
drop(reader1);
drop(reader2);
drop(reader3);

let mut writer = data.write().unwrap();
*writer = 42;  // Now the writer has exclusive access.

RwLock is a small state machine: unlocked, read-locked (any number of readers), or write-locked (exactly one writer, no readers).

When to use RwLock vs Mutex:

Scenario                 Use This   Why
Reads >> Writes          RwLock     Multiple readers don't block each other
Reads ≈ Writes           Mutex      Simpler, less overhead
Only writes              Mutex      RwLock adds unnecessary complexity
Short critical sections  Mutex      Less overhead
Long read operations     RwLock     Maximize concurrency

5. Putting It All Together

Now we combine all these primitives to create the perfect singleton:

use once_cell::sync::Lazy;
use parking_lot::RwLock;
use std::sync::Arc;

/// The Complete Singleton Pattern in Rust.
static CONFIG: Lazy<Arc<RwLock<AppConfig>>> = Lazy::new(|| {
    println!("Initializing config singleton!");
    Arc::new(RwLock::new(AppConfig::default()))
});

// Let's break down each layer:

// Layer 1: Lazy<...>
// - Delays initialization until first access
// - Guarantees initialization happens exactly once
// - Thread-safe initialization

// Layer 2: Arc<...>
// - Allows shared ownership across threads
// - Reference counted (atomically)
// - Data is freed when last Arc is dropped

// Layer 3: RwLock<...>
// - Allows multiple readers OR one writer
// - Interior mutability (modify through shared reference)
// - Thread-safe access to the data

// Layer 4: AppConfig
// - The actual data we want to store

How they work together: Lazy guarantees one-time initialisation, Arc lets every thread share ownership of the same allocation, and RwLock arbitrates concurrent reads and exclusive writes to the data inside.
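Putting the three layers to work, here is a sketch using only standard-library types: OnceLock stands in for once_cell's Lazy, std's RwLock for parking_lot's, and AppConfig is an assumed minimal config type.

```rust
use std::sync::{Arc, OnceLock, RwLock};
use std::thread;

// Assumed minimal config type for the sketch.
#[derive(Debug, Clone)]
struct AppConfig {
    max_connections: u32,
}

// Std-only equivalent of `Lazy<Arc<RwLock<AppConfig>>>`.
fn config() -> Arc<RwLock<AppConfig>> {
    static CONFIG: OnceLock<Arc<RwLock<AppConfig>>> = OnceLock::new();
    CONFIG
        .get_or_init(|| Arc::new(RwLock::new(AppConfig { max_connections: 10 })))
        .clone() // cheap: just bumps the Arc's reference count
}

fn main() {
    // Many threads read the SAME config concurrently.
    let readers: Vec<_> = (0..4)
        .map(|_| thread::spawn(|| config().read().unwrap().max_connections))
        .collect();
    for handle in readers {
        assert_eq!(handle.join().unwrap(), 10);
    }

    // One writer updates it; every later reader sees the new value.
    config().write().unwrap().max_connections = 50;
    assert_eq!(config().read().unwrap().max_connections, 50);
}
```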

Getting the Complete Code

The complete working implementation of the singleton pattern we've discussed is available in the GitHub repository.

Clone it, run it, and experiment with it to solidify your understanding!

git clone https://github.com/kartikmehta8/config-manager-api.git
cd config-manager-api
cargo run

The Learning Session

Jordan (leaning back in the chair, mind buzzing with new knowledge): "Wow. Okay. So we've covered:

  • Static variables and why they're immutable by default

  • Lazy for deferred initialisation

  • Arc for shared ownership across threads

  • RwLock for concurrent reads with exclusive writes

  • And how they all work together to create a perfect singleton"

Alex (smiling): "Exactly! And the beautiful thing about Rust is that the compiler enforces the safety guarantees. You literally cannot create a data race without using unsafe. Deadlocks are still possible, of course, but the type system guides you toward correct concurrent code."

Jordan: "I have to admit, when I first saw Lazy<Arc<RwLock<T>>>, it looked intimidating. But now I understand why each layer is necessary and what it does."

Alex: "That's the Rust learning curve. It looks complex at first, but once you understand the primitives, you realise they're all simple concepts that compose elegantly. Each type has one job and does it well."

Jordan (opening laptop): "Alright, I'm ready to start implementing. Let me create the config singleton first, then the logger, then refactor the database connections."

Alex (standing up): "Perfect! That's the spirit! I'll be at my desk if you need help. You've got this, Jordan."

The Conclusion

The singleton pattern might seem simple on the surface ("just one instance, right?"), but as we've seen, implementing it correctly in a concurrent environment requires careful consideration of thread safety, lazy vs eager initialisation, read vs write patterns, and memory management.

Rust gives us the tools to handle all of these concerns with compile-time guarantees. No other mainstream language offers this level of safety without garbage collection or runtime overhead.

As Jordan discovered, the learning curve is worth it. The initial complexity of Lazy<Arc<RwLock<T>>> pays dividends in correctness, performance, and peace of mind.

Thanks for reading! If you found this helpful, star the repo and share it with fellow Rustaceans!

Stay caffeinated, stay rusty! 🦀