Design Patterns for Humans: Making Complex Software Simple with TypeScript Snippets (Part 2)


Producer–Consumer

"One side makes work, the other side processes it."

The Producer-Consumer pattern is a classic concurrency pattern that decouples the production of data from its consumption by placing a shared buffer (queue) between producers and consumers. Example: a video player in which the decoder produces frames into a buffer and the renderer consumes them.


Key Characteristics

  1. Decoupling: Producers and consumers don't communicate directly
  2. Concurrency: Producers and consumers can run at different speeds
  3. Synchronization: The shared buffer handles thread-safe operations
  4. Load Balancing: The buffer acts as a shock absorber between bursts of production and steady consumption


Potential Pitfalls

  1. Deadlocks: Improper synchronization can lead to deadlocks
  2. Buffer Overflow: Unbounded buffers can consume all memory
  3. Starvation: Consumers can sit idle if producers are too slow, and producers stall if consumers can't keep up
  4. Latency: A buffer that is too small forces producers and consumers to wait on each other unnecessarily
  5. Complexity: Debugging async producer-consumer systems can be challenging
  6. Resource Leaks: Unhandled errors can leave resources locked


When to Use

  1. When producers and consumers work at different rates
  2. When you need to decouple data production from consumption
  3. When you need to handle bursts of work efficiently
  4. When you want to parallelize work across multiple threads/processes
  5. When you need to implement a work queue or task processing system


When to Avoid

  1. When the overhead of synchronization outweighs benefits
  2. When processing is inherently synchronous and simple
  3. When the order of processing doesn't matter (consider Observer pattern)
  4. When you need immediate processing without buffering
  5. In extremely low-latency systems where queue overhead is unacceptable

class BoundedBuffer<T> {
    private buffer: T[];
    private capacity: number;
    private count: number;
    private putIndex: number;
    private takeIndex: number;
    private notEmpty: Promise<void>;
    private notFull: Promise<void>;
    private resolveNotEmpty!: () => void; // assigned inside the Promise executor below
    private resolveNotFull!: () => void;

    constructor(capacity: number) {
        this.capacity = capacity;
        this.buffer = new Array(capacity);
        this.count = 0;
        this.putIndex = 0;
        this.takeIndex = 0;
        
        // Create resolvable promises for flow control
        this.notEmpty = new Promise(resolve => this.resolveNotEmpty = resolve);
        this.notFull = new Promise(resolve => this.resolveNotFull = resolve);
    }

    async put(item: T): Promise<void> {
        while (this.count === this.capacity) {
            await this.notFull;
        }

        this.buffer[this.putIndex] = item;
        this.putIndex = (this.putIndex + 1) % this.capacity;
        this.count++;

        // Notify waiting consumers
        if (this.count === 1) {
            this.resolveNotEmpty();
            this.notEmpty = new Promise(resolve => this.resolveNotEmpty = resolve);
        }
    }

    async take(): Promise<T> {
        while (this.count === 0) {
            await this.notEmpty;
        }

        const item = this.buffer[this.takeIndex];
        this.takeIndex = (this.takeIndex + 1) % this.capacity;
        this.count--;

        // Notify waiting producers
        if (this.count === this.capacity - 1) {
            this.resolveNotFull();
            this.notFull = new Promise(resolve => this.resolveNotFull = resolve);
        }

        return item;
    }
}

// Example usage
async function runExample() {
    const buffer = new BoundedBuffer<number>(5);

    // Producer
    const producer = async () => {
        for (let i = 0; i < 10; i++) {
            await new Promise(resolve => setTimeout(resolve, Math.random() * 500));
            await buffer.put(i);
            console.log(`Produced: ${i}`);
        }
    };

    // Consumer
    const consumer = async () => {
        for (let i = 0; i < 10; i++) {
            await new Promise(resolve => setTimeout(resolve, Math.random() * 1000));
            const item = await buffer.take();
            console.log(`Consumed: ${item}`);
        }
    };

    await Promise.all([producer(), consumer()]);
}

runExample().catch(console.error);        
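
Because the buffer is the only point of contact, scaling out usually means nothing more than starting additional consumer loops. Below is a rough sketch that reuses the BoundedBuffer above with one producer and two consumers; the function name and the fixed per-consumer counts are illustrative only.

async function runMultiConsumerExample() {
    const buffer = new BoundedBuffer<number>(5);
    const TOTAL = 10;

    const producer = async () => {
        for (let i = 0; i < TOTAL; i++) {
            await buffer.put(i);
            console.log(`Produced: ${i}`);
        }
    };

    // Each consumer drains a fixed share of the work so the example terminates cleanly
    const consumer = async (id: number, count: number) => {
        for (let i = 0; i < count; i++) {
            const item = await buffer.take();
            console.log(`Consumer ${id} got: ${item}`);
        }
    };

    await Promise.all([producer(), consumer(1, TOTAL / 2), consumer(2, TOTAL / 2)]);
}

runMultiConsumerExample().catch(console.error);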

Actor Model

“Each actor is like a tiny service that handles one job at a time.”

The Actor Model is a conceptual model for concurrent computation that treats "actors" as the universal primitives of computation. In response to a message it receives, an actor can make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Example: Used in frameworks like Akka or languages like Erlang.


Key Characteristics

  1. Encapsulation: Each actor encapsulates its own state and behavior.
  2. Message Passing: Actors communicate exclusively through asynchronous messages.
  3. No Shared State: Actors don't share memory; all communication happens via messages.
  4. Concurrency: Actors process messages one at a time, enabling safe concurrency.
  5. Location Transparency: Actors can be local or remote, with the same interface.
  6. Fault Isolation: Failure in one actor doesn't directly affect others.


Potential Pitfalls

  1. Message Overhead: Excessive message passing can lead to performance issues.
  2. Debugging Complexity: Asynchronous message flows can be difficult to trace and debug.
  3. Memory Consumption: Each actor maintains its own state and mailbox, which can lead to high memory usage.
  4. Message Ordering: In distributed systems, message ordering guarantees can be complex to implement.
  5. Deadlocks: Actors waiting for each other can still deadlock, despite the model's design.


When to Use

  1. When you need to manage shared state in a concurrent environment.
  2. When building distributed systems that span multiple processes or machines.
  3. When fault isolation is a critical requirement.
  4. When dealing with systems that naturally map to message-passing paradigms.
  5. When you need to scale horizontally across multiple cores or machines.


When to Avoid

  1. For simple, single-threaded applications where concurrency isn't needed.
  2. When performance is critical and message-passing overhead would be prohibitive.
  3. When the problem domain doesn't naturally decompose into isolated units of state and behavior.
  4. When you need strong transactional consistency across multiple stateful components.
  5. When working in environments without good actor model library support.

interface Message {
    type: string;
    payload?: any;
    sender?: ActorRef;
}

type ActorRef = {
    send: (message: Message) => void;
};

type Behavior = (message: Message, context: ActorContext) => Behavior;

interface ActorContext {
    self: ActorRef;
    spawn: (behavior: Behavior) => ActorRef;
}

class ActorSystem {
    private actors: Map<ActorRef, Behavior> = new Map();

    spawn(initialBehavior: Behavior): ActorRef {
        const self: ActorRef = {
            send: (message: Message) => this.handleMessage(self, message),
        };

        this.actors.set(self, initialBehavior);
        return self;
    }

    private handleMessage(actor: ActorRef, message: Message): void {
        const behavior = this.actors.get(actor);
        if (!behavior) return;

        const context: ActorContext = {
            self: actor,
            spawn: this.spawn.bind(this),
        };

        // Deliver the message as-is; the sender field, if present, was set by the caller
        const nextBehavior = behavior(message, context);
        this.actors.set(actor, nextBehavior);
    }
}        
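
The ActorSystem above has no usage example, so here is a minimal sketch: a counter actor whose behavior returns the next behavior after every message, which is how state changes without any shared memory. The message types ('increment', 'get', 'count') are made up for illustration.

const system = new ActorSystem();

// A behavior parameterized by the current count; handling a message
// returns the behavior that should handle the *next* message.
const counter = (count: number): Behavior => (message) => {
    switch (message.type) {
        case 'increment':
            console.log(`Count is now ${count + 1}`);
            return counter(count + 1);
        case 'get':
            // Reply to the sender if one was attached to the message
            message.sender?.send({ type: 'count', payload: count });
            return counter(count);
        default:
            return counter(count);
    }
};

const counterRef = system.spawn(counter(0));
counterRef.send({ type: 'increment' }); // Count is now 1
counterRef.send({ type: 'increment' }); // Count is now 2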

Reactor

“React to events as they arrive. Handle I/O events asynchronously.”

The Reactor pattern is an event-handling architecture that efficiently manages service requests delivered concurrently to an application by one or more clients. It demultiplexes incoming events and dispatches them synchronously to the associated handlers.


Key Characteristics

  1. Event Demultiplexing: The reactor core uses an event loop to listen for events and dispatch them to appropriate handlers.
  2. Non-blocking I/O: Handlers process requests without blocking the entire application.
  3. Single-threaded by default: Typically runs in a single thread, though variants exist for multi-threaded scenarios.
  4. Synchronous dispatching: While I/O is non-blocking, handler execution is synchronous.
  5. Scalability: Efficiently handles many concurrent connections with minimal threads.


Potential Pitfalls

  1. Callback hell: Can lead to deeply nested, hard-to-maintain code (mitigated by promises/async-await).
  2. Single point of failure: A bug in one handler can block the entire event loop.
  3. Debugging complexity: Asynchronous flow can be harder to trace and debug.
  4. Starvation: Long-running handlers can delay processing of other events.
  5. Thread safety concerns: Adding multi-threading requires careful synchronization.


Real-World Use Cases

  1. Node.js: The core event loop implements a variant of the Reactor pattern (see the sketch after this list).
  2. Nginx: Uses an event-driven architecture similar to the Reactor pattern.
  3. Redis: Single-threaded event loop handles all client connections.
  4. GUI frameworks: Many use event loops to handle user input.
  5. Network servers: HTTP, FTP, and other protocol servers often use this pattern.
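
To ground the Node.js entry, here is a rough sketch of a TCP echo server using Node's built-in net module: the event loop demultiplexes socket events and dispatches them to the registered callbacks, which is the reactor structure in miniature. The port number is arbitrary.

import { createServer } from 'net';

// Each callback is an event handler; Node's event loop plays the reactor,
// demultiplexing socket readiness events and dispatching them one at a time.
const server = createServer((socket) => {
    socket.on('data', (chunk) => {
        socket.write(chunk); // Echo back without blocking the loop
    });
    socket.on('error', (err) => {
        console.error('Socket error:', err.message);
    });
});

server.listen(8080, () => {
    console.log('Echo server listening on port 8080');
});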


When to Use

  1. High-concurrency servers: When you need to handle many simultaneous connections (e.g., web servers, chat servers).
  2. Resource-constrained environments: When thread creation is expensive (memory, context-switching overhead).
  3. Predictable latency requirements: When you need consistent response times.
  4. I/O-bound applications: Where the application spends most time waiting for I/O operations.


When to Avoid

  1. CPU-intensive workloads: The pattern can lead to poor performance if handlers perform heavy computations.
  2. Blocking operations: If handlers must perform blocking operations, it defeats the purpose.
  3. Simple applications: When the overhead isn't justified by the requirements.
  4. Windows platforms: Traditional reactors don't work well with Windows' I/O completion ports.

interface EventHandler {
    handleEvent(event: string, data: any): void;
}

class Reactor {
    private handlers: Map<string, EventHandler[]> = new Map();

    registerHandler(eventType: string, handler: EventHandler): void {
        if (!this.handlers.has(eventType)) {
            this.handlers.set(eventType, []);
        }
        this.handlers.get(eventType)?.push(handler);
    }

    removeHandler(eventType: string, handler: EventHandler): void {
        const handlers = this.handlers.get(eventType);
        if (handlers) {
            const index = handlers.indexOf(handler);
            if (index > -1) {
                handlers.splice(index, 1);
            }
        }
    }

    dispatchEvent(eventType: string, data: any): void {
        const handlers = this.handlers.get(eventType);
        if (handlers) {
            for (const handler of handlers) {
                handler.handleEvent(eventType, data);
            }
        }
    }

    run(): void {
        // Simulated event loop
        setInterval(() => {
            // In a real implementation, this would check for I/O events
            const now = new Date().toISOString();
            this.dispatchEvent('timer', { time: now });
        }, 1000);
    }
}

class LoggerHandler implements EventHandler {
    handleEvent(event: string, data: any): void {
        console.log(`[${event}] ${JSON.stringify(data)}`);
    }
}

// Usage
const reactor = new Reactor();
const logger = new LoggerHandler();

reactor.registerHandler('timer', logger);
reactor.run();        

Double-Checked Locking

"A pattern to reduce overhead of acquiring a lock multiple times."

Double-Checked Locking is a software design pattern that reduces the overhead of acquiring a lock by first testing the locking criterion without actually acquiring the lock. Only if the check indicates that locking is required does the actual locking logic proceed.


Key Characteristics

  1. Lazy Initialization: The pattern is primarily used to implement lazy initialization of expensive objects.
  2. Performance Optimization: Avoids the overhead of synchronization after the object is initialized.
  3. Thread Safety: Provides thread-safe initialization while minimizing synchronization costs.
  4. Two-Phase Check: First checks without synchronization, then verifies again under synchronization.


Potential Pitfalls

  1. Memory Visibility Issues: Without proper volatile semantics (in Java) or memory barriers, changes might not be visible across threads.
  2. Complex Implementation: Easy to implement incorrectly, leading to subtle bugs.
  3. Language-Specific Considerations: Behavior varies across programming languages due to different memory models.
  4. Obsolete in Some Contexts: Modern languages often provide better alternatives (e.g., Lazy<T> in .NET, module-level initialization in Python).


Real-World Use Cases

  1. Singleton Pattern: When implementing thread-safe singletons with lazy initialization.
  2. Resource-Intensive Objects: Initializing objects that are expensive to create (database connections, file systems).
  3. Caching Systems: When building thread-safe caching mechanisms where the cache might not be needed immediately.
  4. Logger Initialization: Deferring logger setup until it's actually needed.


When to Use

  1. When object initialization is expensive and shouldn't be done until absolutely necessary.
  2. When the object is needed by multiple threads but might not always be used.
  3. When you need to reduce synchronization overhead after the object is initialized.
  4. In performance-critical sections where you want to avoid unnecessary locking.


When to Avoid

  1. When the programming language provides better built-in alternatives (e.g., std::call_once in C++, Lazy<T> in .NET).
  2. For simple scenarios where eager initialization is acceptable.
  3. When working with languages that don't have well-defined memory models for this pattern.
  4. When the initialization logic isn't thread-safe or has side effects.

// Conceptual sketch: TypeScript/JavaScript has no synchronized blocks like Java,
// so the locking step below is shown as pseudocode comments. A workable
// TypeScript adaptation follows in the next snippet.
class Singleton {
    private static instance: Singleton;
    private static initialized: boolean = false;

    private constructor() {
        // Private constructor to prevent direct instantiation
    }

    public static getInstance(): Singleton {
        if (!Singleton.initialized) {          // First check: no lock taken
            // synchronized(Singleton) {       // Acquire lock (pseudocode)
            if (!Singleton.initialized) {      // Second check: under the lock
                Singleton.instance = new Singleton();
                Singleton.initialized = true;
            }
            // }                               // Release lock (pseudocode)
        }
        return Singleton.instance;
    }
}
class ThreadSafeSingleton {
    private static instance: ThreadSafeSingleton;
    private static lock: boolean = false;

    private constructor() {
        // Initialization code
    }

    public static getInstance(): Promise<ThreadSafeSingleton> {
        if (!ThreadSafeSingleton.instance) {
            return new Promise((resolve) => {
                if (!ThreadSafeSingleton.lock) {
                    ThreadSafeSingleton.lock = true;
                    // Simulate async initialization
                    setTimeout(() => {
                        ThreadSafeSingleton.instance = new ThreadSafeSingleton();
                        resolve(ThreadSafeSingleton.instance);
                    }, 0);
                } else {
                    // Wait for initialization to complete
                    const check = setInterval(() => {
                        if (ThreadSafeSingleton.instance) {
                            clearInterval(check);
                            resolve(ThreadSafeSingleton.instance);
                        }
                    }, 10);
                }
            });
        }
        return Promise.resolve(ThreadSafeSingleton.instance);
    }
}

// Usage
ThreadSafeSingleton.getInstance().then(instance => {
    console.log('Singleton instance created');
});        
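
Because JavaScript is single-threaded, a common alternative to the polling loop above is to cache the initialization promise itself: the first caller creates it, later callers reuse it, and no second check or interval is needed. This is only a sketch, and PromiseCachedSingleton is a hypothetical name.

class PromiseCachedSingleton {
    private static instancePromise: Promise<PromiseCachedSingleton> | null = null;

    private constructor() {
        // Initialization code
    }

    public static getInstance(): Promise<PromiseCachedSingleton> {
        if (!PromiseCachedSingleton.instancePromise) {
            // The promise is created exactly once; concurrent callers all
            // await the same in-flight initialization.
            PromiseCachedSingleton.instancePromise = (async () => {
                await new Promise(resolve => setTimeout(resolve, 0)); // Simulate async setup
                return new PromiseCachedSingleton();
            })();
        }
        return PromiseCachedSingleton.instancePromise;
    }
}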

Read–Write Lock

“Multiple readers, one writer.” (Not native to JS; simulated with logic or libraries.)

The Read-Write Lock (also known as Shared-Exclusive Lock) is a synchronization primitive that allows concurrent access for read-only operations while maintaining exclusive access for write operations. This pattern is particularly useful in scenarios where data is read more frequently than it is modified. Example: A shared data store that’s read often but written rarely.


Key Characteristics

  1. Multiple Readers: Multiple threads can hold the read lock simultaneously as long as no thread holds the write lock.
  2. Single Writer: Only one thread can hold the write lock, and only when no threads hold read locks.
  3. Write Priority: Most implementations give priority to write operations to prevent writer starvation.
  4. Thread Safety: Ensures thread-safe access to shared resources while optimizing for read-heavy workloads.


Potential Pitfalls

  1. Starvation: Poor implementations may lead to writer or reader starvation.
  2. Deadlocks: Incorrect usage can lead to deadlocks, especially when upgrading locks.
  3. Performance overhead: The lock management itself adds overhead that may not be justified for all use cases.
  4. Priority inversion: High-priority threads may be blocked by low-priority threads holding locks.
  5. Recursive acquisition: Some implementations don't support recursive lock acquisition.


Real-World Use Cases

  1. Database systems: Managing concurrent access to database records.
  2. Filesystems: Allowing multiple processes to read files while ensuring exclusive access for writes.
  3. Caching mechanisms: Such as in-memory caches where reads dominate.
  4. Configuration management: Where configuration is read often but updated rarely.
  5. Financial systems: Maintaining consistency of financial data while allowing concurrent reads.


When to Use

  1. Read-heavy workloads: When your application performs significantly more read operations than write operations.
  2. Shared resource access: When multiple threads need to read shared data but writes are less frequent.
  3. Data consistency requirements: When you need to ensure readers see a consistent state while allowing concurrent reads.
  4. Caching systems: Where cached data is read frequently but updated occasionally.


When to Avoid

  1. Write-heavy workloads: If writes are as frequent as reads, the overhead of the lock may outweigh benefits.
  2. Simple synchronization needs: When a simple mutex would suffice for your use case.
  3. Real-time systems: Where predictable timing is more important than throughput.
  4. Non-shared resources: When data is not accessed by multiple threads.

interface ReadWriteLock {
    readLock(): Promise<() => void>;
    writeLock(): Promise<() => void>;
}

class SimpleReadWriteLock implements ReadWriteLock {
    private readers = 0;
    private writer = false;
    private queue: (() => void)[] = [];

    async readLock(): Promise<() => void> {
        return new Promise((resolve) => {
            const tryAcquire = () => {
                if (!this.writer) {
                    this.readers++;
                    resolve(() => {
                        this.readers--;
                        this.processQueue();
                    });
                } else {
                    this.queue.push(tryAcquire);
                }
            };
            tryAcquire();
        });
    }

    async writeLock(): Promise<() => void> {
        return new Promise((resolve) => {
            const tryAcquire = () => {
                if (this.readers === 0 && !this.writer) {
                    this.writer = true;
                    resolve(() => {
                        this.writer = false;
                        this.processQueue();
                    });
                } else {
                    this.queue.push(tryAcquire);
                }
            };
            tryAcquire();
        });
    }

    private processQueue() {
        // Re-run every queued attempt once; attempts that still cannot acquire
        // the lock simply re-queue themselves inside tryAcquire.
        const pending = this.queue;
        this.queue = [];
        for (const tryAcquire of pending) {
            tryAcquire();
        }
    }
}        
class UpgradeableReadWriteLock implements ReadWriteLock {
    private state: { readers: number; writer: boolean; writeRequests: number } = {
        readers: 0,
        writer: false,
        writeRequests: 0
    };
    private queue: (() => void)[] = [];

    async readLock(): Promise<() => void> {
        return new Promise((resolve) => {
            const tryAcquire = () => {
                if (!this.state.writer && this.state.writeRequests === 0) {
                    this.state.readers++;
                    resolve(() => {
                        this.state.readers--;
                        this.processQueue();
                    });
                } else {
                    this.queue.push(tryAcquire);
                }
            };
            tryAcquire();
        });
    }

    async writeLock(): Promise<() => void> {
        return new Promise((resolve) => {
            this.state.writeRequests++;
            const tryAcquire = () => {
                if (this.state.readers === 0 && !this.state.writer) {
                    this.state.writer = true;
                    this.state.writeRequests--;
                    resolve(() => {
                        this.state.writer = false;
                        this.processQueue();
                    });
                } else {
                    this.queue.push(tryAcquire);
                }
            };
            tryAcquire();
        });
    }

    async upgradeToWriteLock(currentReadUnlock: () => void): Promise<() => void> {
        // Note: this is not an atomic upgrade; another writer may slip in
        // between releasing the read lock and acquiring the write lock.
        currentReadUnlock(); // Release the read lock first
        return this.writeLock(); // Acquire write lock
    }

    private processQueue() {
        // Re-run every queued attempt once; attempts that still cannot acquire
        // the lock simply re-queue themselves inside tryAcquire.
        const pending = this.queue;
        this.queue = [];
        for (const tryAcquire of pending) {
            tryAcquire();
        }
    }
}        
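
A brief usage sketch, assuming the SimpleReadWriteLock above: each lock call resolves to a release function, which is invoked in a finally block so the lock is freed even if the critical section throws.

async function readWriteDemo() {
    const lock = new SimpleReadWriteLock();
    const data = { value: 0 };

    const reader = async (id: number) => {
        const release = await lock.readLock();
        try {
            console.log(`Reader ${id} sees value ${data.value}`);
        } finally {
            release();
        }
    };

    const writer = async () => {
        const release = await lock.writeLock();
        try {
            data.value++;
            console.log(`Writer set value to ${data.value}`);
        } finally {
            release();
        }
    };

    // Readers run concurrently; the writer waits until no readers remain
    await Promise.all([reader(1), reader(2), writer(), reader(3)]);
}

readWriteDemo().catch(console.error);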

Applying Patterns in Real Life

Design patterns aren't just academic tools or interview buzzwords—they appear frequently in day-to-day development. Whether they are building front-end applications, backend services, or full-stack projects, developers often use patterns without realizing it.


How to Recognize Pattern-Shaped Problems

Consider patterns as familiar tools in a toolbox. When facing coding challenges, ask these diagnostic questions:

  • Am I repeating this logic in multiple places? → Consider Template Method or Strategy patterns
  • Is object creation becoming complex? → Factory or Builder patterns may help
  • Are modules overly interdependent? → Facade, Adapter, or Mediator patterns could decouple them
  • Do components need change notifications? → The Observer pattern provides a solution (sketched below)

As pattern recognition improves, identification within your own codebase becomes more intuitive.
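
For the change-notification case, here is a minimal Observer sketch (the Subject, Listener, and cartTotal names are purely illustrative): subscribers register a callback and are notified whenever the subject publishes a new value.

type Listener<T> = (value: T) => void;

class Subject<T> {
    private listeners: Listener<T>[] = [];

    subscribe(listener: Listener<T>): () => void {
        this.listeners.push(listener);
        // Return an unsubscribe function so observers can detach themselves
        return () => {
            this.listeners = this.listeners.filter(l => l !== listener);
        };
    }

    notify(value: T): void {
        for (const listener of this.listeners) {
            listener(value);
        }
    }
}

// Usage
const cartTotal = new Subject<number>();
const unsubscribe = cartTotal.subscribe(total => console.log(`Cart total: $${total}`));
cartTotal.notify(42); // Cart total: $42
unsubscribe();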


Framework Implementations of Patterns

Modern JavaScript frameworks incorporate patterns extensively:

React

  • Observer pattern: Components observe state changes
  • Template Method: Hooks often follow this structure

Redux

  • Command pattern: Actions encapsulate state changes
  • Mediator pattern: The store coordinates state management

Express.js

  • Chain of Responsibility: Middleware processing (sketched below)
  • Facade pattern: Simplified HTTP module interface

Understanding these implementations aids in framework mastery.
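
To make the Express entry concrete, here is a rough middleware sketch (assuming the express package is installed and esModuleInterop is enabled): each middleware either handles the request or passes it along via next(), which is the Chain of Responsibility structure. The routes and header check are illustrative.

import express from 'express';

const app = express();

// Each middleware is a link in the chain; next() hands the request to the next link
app.use((req, res, next) => {
    console.log(`${req.method} ${req.url}`);
    next();
});

app.use((req, res, next) => {
    // A link can short-circuit the chain instead of calling next()
    if (!req.headers['authorization']) {
        res.status(401).send('Unauthorized');
        return;
    }
    next();
});

app.get('/', (req, res) => {
    res.send('Hello from the end of the chain');
});

app.listen(3000);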


Refactoring Example: Notification System

Initial Implementation

function notifyUser(user: string, type: string) {
  if (type === 'email') {
    // Email implementation
  } else if (type === 'sms') {
    // SMS implementation
  } else if (type === 'push') {
    // Push notification
  }
}        

Strategy Pattern Refactor

interface NotificationStrategy {
  send(user: string): void;
}

class EmailStrategy implements NotificationStrategy {
  send(user: string) {
    console.log(`Email sent to ${user}`);
  }
}

class Notifier {
  constructor(private strategy: NotificationStrategy) {}
  
  notify(user: string) {
    this.strategy.send(user);
  }
}

// Implementation
const notifier = new Notifier(new EmailStrategy());
notifier.notify("Abdulmoiz");        


This approach improves extensibility and testability while reducing conditional complexity.
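
For instance, supporting a new channel now means adding a class rather than editing a conditional. A short sketch (SmsStrategy is a hypothetical addition):

class SmsStrategy implements NotificationStrategy {
  send(user: string) {
    console.log(`SMS sent to ${user}`);
  }
}

// Notifier itself is untouched; only the injected strategy changes
const smsNotifier = new Notifier(new SmsStrategy());
smsNotifier.notify("Abdulmoiz");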

Pattern Application Guidelines

  1. Solve actual problems: Avoid pattern application without clear need
  2. Favor simplicity: Let patterns emerge through natural refactoring
  3. Understand first: Comprehend the problem before selecting solutions
  4. Iterate: Begin with straightforward implementations


Anti-Patterns of Pattern Usage

Overengineering Risks

Applying complex patterns to simple problems creates unnecessary abstraction. For example, using Abstract Factory for minimal component variation adds complexity without benefit.


Misapplication Consequences

Forcing patterns where they don't fit produces confusing code. Singleton misuse can create unnecessary global state, while excessive Decorator use may obfuscate program flow.


Core Objectives

Prioritize these qualities over pattern adherence:

  • Readability
  • Testability
  • Maintainability
  • Team comprehension


Learning Resources

Recommended Platforms

  • Refactoring Guru: Visual pattern explanations with multi-language examples
  • GitHub Repositories: Practical implementations in real codebases


Essential Literature

  • Head First Design Patterns: Accessible pattern introduction
  • Design Patterns in TypeScript: Language-specific guidance


Practical Exercises

  1. Refactor existing projects with appropriate patterns
  2. Reimplement simple applications using different patterns
  3. Conduct code reviews focusing on pattern opportunities
  4. Collaborate on pattern identification exercises


Conclusion: Human-Centric Pattern Use

Design patterns serve as problem-solving tools rather than rigid requirements. Their value emerges when they:

  • Reduce complexity
  • Improve communication
  • Enhance maintainability
  • Solve identifiable problems

Developers should cultivate:

  • Core pattern understanding
  • Recognition skills
  • Contextual application judgment

The ultimate goal remains creating software that balances technical excellence with human comprehension—patterns serve this purpose when applied judiciously.








