Lock

A Simple Analogy to Understand Locks

Think of a lock like a bathroom door lock in a busy office. When someone is inside (using a shared resource), they lock the door so others must wait. This prevents awkward situations (race conditions) and ensures only one person uses the bathroom at a time.

In programming, a lock is a synchronization mechanism that controls access to shared resources. When multiple threads need to access the same data simultaneously, locks ensure only one thread can access it at a time, preventing data corruption and maintaining consistency.

Real-World Lock Scenarios

Locks are essential in these everyday programming situations:

  • Bank Account Updates: Preventing two ATM transactions from modifying the same balance simultaneously
  • File Writing: Ensuring only one process writes to a log file at a time
  • Shopping Cart: Preventing inventory conflicts when multiple users buy the last item
  • Configuration Updates: Making sure system settings aren’t partially overwritten

Basic Concepts

Let’s start with the simplest lock - a Mutex (Mutual Exclusion):

var counter int
var mutex sync.Mutex

func increment() {
    mutex.Lock()           // "Lock the bathroom door"
    counter++              // "Use the bathroom safely"
    mutex.Unlock()         // "Unlock so others can use it"
}

This is like having a single key for the bathroom - only one person can hold it at a time.

Common Problems Without Locks

Race Conditions: The Problem

Imagine two people trying to update the same bank balance simultaneously:

  sequenceDiagram
    participant T1 as Thread 1
    participant B as Balance ($100)
    participant T2 as Thread 2

    T1->>B: Read balance: $100
    T2->>B: Read balance: $100
    T1->>B: Add $50 β†’ Write $150
    T2->>B: Add $30 β†’ Write $130

    Note over B: Expected: $180, Actual: $130!

Without locks, both threads read the same initial value and overwrite each other’s changes.
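
You can reproduce this in a few lines of Go. A minimal sketch (run it with go run -race and the race detector flags the conflicting accesses; the final balance varies between runs):

package main

import (
    "fmt"
    "sync"
)

func main() {
    balance := 100
    var wg sync.WaitGroup

    deposit := func(amount int) {
        defer wg.Done()
        balance += amount // unsynchronized read-modify-write: a data race
    }

    wg.Add(2)
    go deposit(50)
    go deposit(30)
    wg.Wait()

    fmt.Println("balance:", balance) // often 180, but 150 or 130 are possible
}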

The Solution with Locks

  sequenceDiagram
    participant T1 as Thread 1
    participant L as Lock
    participant B as Balance ($100)
    participant T2 as Thread 2

    T1->>L: Acquire Lock
    L->>T1: Lock Granted
    T2->>L: Request Lock
    L->>T2: Blocked (waiting)
    T1->>B: Read $100, Add $50, Write $150
    T1->>L: Release Lock
    L->>T2: Lock Granted
    T2->>B: Read $150, Add $30, Write $180

    Note over B: Correct result: $180

Deadlocks

A deadlock occurs when threads get stuck waiting for each other forever - like two people who each hold one of the two keys that both need, each waiting for the other to hand theirs over first.

Classic Example:

  graph LR
    A[Thread A] -->|holds| L1[Lock 1]
    A -->|wants| L2[Lock 2]
    B[Thread B] -->|holds| L2
    B -->|wants| L1

    style A fill:#c62929
    style B fill:#c62929
    style L1 fill:#147c76
    style L2 fill:#147c76

Thread A holds Lock 1 and waits for Lock 2, while Thread B holds Lock 2 and waits for Lock 1. Both wait forever!
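
A minimal sketch that reproduces the diagram (call threadA and threadB from two goroutines and the program hangs; once nothing else can make progress, the Go runtime aborts with "all goroutines are asleep - deadlock!"):

var lock1, lock2 sync.Mutex

func threadA() {
    lock1.Lock()
    defer lock1.Unlock()
    time.Sleep(10 * time.Millisecond) // give threadB time to grab lock2
    lock2.Lock()                      // blocks forever
    defer lock2.Unlock()
}

func threadB() {
    lock2.Lock()
    defer lock2.Unlock()
    time.Sleep(10 * time.Millisecond) // give threadA time to grab lock1
    lock1.Lock()                      // blocks forever
    defer lock1.Unlock()
}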

How to Prevent Deadlocks

1. Lock Ordering - Always acquire locks in the same order:

  graph TD
    subgraph good ["βœ… Good: Consistent Lock Ordering"]
        A1[Thread A] --> O1[Lock 1 first]
        A1 --> O2[Lock 2 second]
        B1[Thread B] --> O3[Lock 1 first]
        B1 --> O4[Lock 2 second]
    end

    subgraph bad ["❌ Bad: Inconsistent Lock Ordering"]
        A2[Thread A] --> X1[Lock 1 first]
        A2 --> X2[Lock 2 second]
        B2[Thread B] --> X3[Lock 2 first]
        B2 --> X4[Lock 1 second]
    end

    style good fill:#147c76
    style bad fill:#c62929

// Good: Always lock accounts in ID order
func transfer(from, to *Account) {
    if from.ID < to.ID {
        from.Lock()
        defer from.Unlock()
        to.Lock()
        defer to.Unlock()
    } else {
        to.Lock()
        defer to.Unlock()
        from.Lock()
        defer from.Unlock()
    }
    // Safe to transfer money
}

2. Use Timeouts - Don’t wait forever:

// sync.Mutex has no timed acquire (its TryLock takes no timeout), so a
// buffered channel of capacity 1 is a common stand-in for a timed lock:
lockCh := make(chan struct{}, 1)

select {
case lockCh <- struct{}{}: // lock acquired
    defer func() { <-lockCh }() // release when done
case <-time.After(5 * time.Second):
    return errors.New("couldn't get lock, probably deadlocked")
}

3. Keep It Simple - Avoid holding multiple locks when possible.

Solutions: Different Types of Locks

Now that we understand the problems, let’s explore different solutions:

  mindmap
  root((Lock Solutions))
    Simple
      Mutex
      RWMutex
    Smart
      Optimistic
      Pessimistic
    Advanced
      Distributed
      Lock-Free

1. Read-Write Locks: Multiple Readers, One Writer

Think of a library: many people can read books simultaneously, but only one person can reorganize the shelves.

  graph TD
    subgraph "Read-Write Lock Behavior"
        RWL[RW Lock]

        subgraph "Multiple Readers Allowed"
            R1[Reader 1] --> RWL
            R2[Reader 2] --> RWL
            R3[Reader 3] --> RWL
        end

        subgraph "Single Writer Only"
            W1[Writer] -.-> RWL
            W2[Writer] -.->|Blocked| RWL
        end

        RWL --> RES[(Shared Resource)]
    end

    style R1 fill:#147c76
    style R2 fill:#147c76
    style R3 fill:#147c76
    style W1 fill:#c62929
    style W2 fill:#ffa726

Perfect for read-heavy workloads:

var data = make(map[string]string) // must be initialized before writes
var rwLock sync.RWMutex

// Many goroutines can read simultaneously
func getValue(key string) string {
    rwLock.RLock()
    defer rwLock.RUnlock()
    return data[key]
}

// Only one can write at a time
func setValue(key, value string) {
    rwLock.Lock()
    defer rwLock.Unlock()
    data[key] = value
}

2. Pessimistic vs Optimistic Locking

Pessimistic Locking - “Lock first, ask questions later”

  sequenceDiagram
    participant T1 as Thread 1
    participant L as Lock
    participant R as Resource
    participant T2 as Thread 2

    T1->>L: Acquire Lock
    L->>T1: Lock Granted
    T2->>L: Request Lock
    L->>T2: Blocked (waiting)
    T1->>R: Read/Write Resource
    R->>T1: Operation Complete
    T1->>L: Release Lock
    L->>T2: Lock Granted
    T2->>R: Read/Write Resource

Use when: Conflicts are common, consistency is critical

-- Database example: FOR UPDATE row locks only live inside a transaction
BEGIN;
SELECT * FROM accounts WHERE id = 123 FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE id = 123;
COMMIT;
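
The same flow from Go, as a hedged sketch using database/sql (the withdraw name, the accounts schema, and the Postgres-style $1 placeholders are assumptions carried over from the SQL above):

func withdraw(ctx context.Context, db *sql.DB, id, amount int) error {
    tx, err := db.BeginTx(ctx, nil)
    if err != nil {
        return err
    }
    defer tx.Rollback() // no-op once Commit succeeds

    var balance int
    // FOR UPDATE keeps other transactions away from this row until we commit
    err = tx.QueryRowContext(ctx,
        "SELECT balance FROM accounts WHERE id = $1 FOR UPDATE", id).Scan(&balance)
    if err != nil {
        return err
    }

    _, err = tx.ExecContext(ctx,
        "UPDATE accounts SET balance = $1 WHERE id = $2", balance-amount, id)
    if err != nil {
        return err
    }
    return tx.Commit()
}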

Optimistic Locking - “Work freely, check for conflicts later”

  sequenceDiagram
    participant T1 as Thread 1
    participant R as Resource
    participant T2 as Thread 2

    T1->>R: Read (version=1)
    T2->>R: Read (version=1)
    Note over T1, T2: Both work concurrently
    T1->>R: Write (check version=1)
    R->>T1: Success (version=2)
    T2->>R: Write (check version=1)
    R->>T2: Conflict! Version is now 2
    T2->>R: Retry - Read (version=2)
    T2->>R: Write (check version=2)
    R->>T2: Success (version=3)

Use when: Conflicts are rare, performance is important

type Account struct {
    ID      int
    Balance int
    Version int  // Version field for conflict detection
}

// UpdateBalance sketches optimistic concurrency control. Note that in a
// real system the final check-and-write must itself be atomic (e.g. a
// database compare-and-set on the version column); it is shown
// sequentially here for clarity.
func (a *Account) UpdateBalance(newBalance int) error {
    originalVersion := a.Version

    // Do expensive work without holding any lock
    time.Sleep(100 * time.Millisecond)

    // Did someone else modify the record in the meantime?
    if a.Version != originalVersion {
        return errors.New("conflict detected, please retry")
    }

    a.Balance = newBalance
    a.Version++
    return nil
}
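
Callers typically wrap this in a retry loop. A brief usage sketch (updateWithRetry and the backoff numbers are illustrative, not from any library):

func updateWithRetry(a *Account, newBalance int) error {
    const maxRetries = 3
    for i := 0; i < maxRetries; i++ {
        if err := a.UpdateBalance(newBalance); err == nil {
            return nil
        }
        time.Sleep(10 * time.Millisecond) // back off, re-read, try again
    }
    return errors.New("gave up after repeated conflicts")
}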

Advanced Topics

Distributed Locks: Coordination Across Services

When your application runs on multiple servers, you need distributed locks to coordinate between them:

  graph TB
    subgraph "Distributed System"
        S1[Service A]
        S2[Service B]
        S3[Service C]
    end

    subgraph "Lock Coordinator"
        R[(Redis/etcd)]
    end

    subgraph "Shared Resource"
        DB[(Database)]
        FS[File System]
    end

    S1 <-->|1. Request Lock| R
    S2 <-->|2. Wait| R
    S3 <-->|3. Wait| R

    R -.->|Lock Granted| S1
    S1 --> DB
    S1 --> FS

    style S1 fill:#147c76
    style S2 fill:#ffa726
    style S3 fill:#ffa726
    style R fill:#c62929

Redis-based Example:

func acquireDistributedLock(ctx context.Context, client *redis.Client, key, token string, ttl time.Duration) bool {
    // SET key token NX PX ttl: succeeds only if the key doesn't already
    // exist (go-redis v8+ API; the token identifies the lock holder)
    ok, err := client.SetNX(ctx, key, token, ttl).Result()
    return err == nil && ok
}

Key Challenges:

  • Network failures can leave locks stuck
  • Clock drift between servers
  • Lock expiration vs operation time (a safe-release sketch follows this list)
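
A common mitigation for the last two problems is to store a unique token as the lock value and release only if the token still matches, so a slow holder can't delete a lock that has already expired and been re-acquired by someone else. The check-and-delete must be atomic, hence the Lua script (a sketch assuming the go-redis v8+ client):

// releaseScript deletes the key only if it still holds our token
var releaseScript = redis.NewScript(`
if redis.call("GET", KEYS[1]) == ARGV[1] then
    return redis.call("DEL", KEYS[1])
end
return 0
`)

func releaseDistributedLock(ctx context.Context, client *redis.Client, key, token string) bool {
    n, err := releaseScript.Run(ctx, client, []string{key}, token).Int()
    return err == nil && n == 1
}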

Performance Considerations

Different locks have different performance characteristics:

| Lock Type   | Throughput   | Latency  | Best For                    |
|-------------|--------------|----------|-----------------------------|
| Mutex       | Medium       | Low      | General purpose             |
| RWMutex     | High (reads) | Low      | Read-heavy (80%+ reads)     |
| Optimistic  | Very High    | Variable | Low conflict rate           |
| Distributed | Low          | High     | Cross-service coordination  |

Specialized Lock Types

Spin Locks - Keep checking instead of sleeping (a minimal sketch follows this list):

  • Great for very short critical sections
  • Waste CPU cycles while waiting
  • Used in kernel/system programming
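
A minimal spin lock built on sync/atomic (SpinLock is an illustrative type, not part of the standard library):

type SpinLock struct {
    state int32 // 0 = unlocked, 1 = locked
}

func (s *SpinLock) Lock() {
    for !atomic.CompareAndSwapInt32(&s.state, 0, 1) {
        runtime.Gosched() // yield so waiting doesn't starve other goroutines
    }
}

func (s *SpinLock) Unlock() {
    atomic.StoreInt32(&s.state, 0)
}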

Adaptive Locks - Smart locks that learn:

  • Start by spinning, then switch to blocking
  • Adapt based on historical contention patterns

Best Practices and Testing

1. Keep Critical Sections Small

Minimize the time locks are held:

// Bad: Long critical section
mutex.Lock()
data := processLargeDataset() // Long operation
updateSharedState(data)
mutex.Unlock()

// Good: Short critical section
data := processLargeDataset() // Outside critical section
mutex.Lock()
updateSharedState(data) // Only shared state update locked
mutex.Unlock()

2. Always Use defer for Unlocking

// Bad: What if processData() panics?
mutex.Lock()
processData()
mutex.Unlock()

// Good: Always use defer
mutex.Lock()
defer mutex.Unlock()
processData()

3. Choose the Right Lock Granularity

  • Fine-grained: More concurrency, more complex (see the sharded-map sketch after this list)
  • Coarse-grained: Simpler, potentially less concurrent
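
A sketch of the trade-off using a sharded map: each shard has its own mutex, so writes to different shards never contend (shardCount and the fnv hash are illustrative choices):

const shardCount = 16

type shard struct {
    mu   sync.Mutex
    data map[string]int
}

// ShardedMap is fine-grained: one lock per shard instead of one global lock.
type ShardedMap struct {
    shards [shardCount]shard
}

func (m *ShardedMap) Set(key string, value int) {
    h := fnv.New32a()
    h.Write([]byte(key))
    s := &m.shards[h.Sum32()%shardCount]
    s.mu.Lock()
    defer s.mu.Unlock()
    if s.data == nil {
        s.data = make(map[string]int)
    }
    s.data[key] = value
}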

4. Testing Concurrent Code

// Counter is the lock-protected type under test
type Counter struct {
    mu sync.Mutex
    n  int
}

func (c *Counter) Increment() {
    c.mu.Lock()
    c.n++
    c.mu.Unlock()
}

func (c *Counter) Value() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.n
}

func TestConcurrentCounter(t *testing.T) {
    var counter Counter
    var wg sync.WaitGroup

    // Start 100 goroutines, each incrementing 1000 times
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 1000; j++ {
                counter.Increment()
            }
        }()
    }

    wg.Wait()

    if counter.Value() != 100000 {
        t.Errorf("Expected 100000, got %d", counter.Value())
    }
}

5. Debugging Lock Issues

  • Use Go’s race detector: go test -race
  • Profile lock contention: go tool pprof (see the sketch after this list)
  • Add timeouts to prevent infinite waits
  • Log lock acquisition/release for debugging
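
A minimal contention-profiling sketch using only the standard library (the port is an arbitrary choice):

import (
    "net/http"
    _ "net/http/pprof" // registers the /debug/pprof handlers
    "runtime"
)

func init() {
    runtime.SetMutexProfileFraction(1) // record every mutex contention event
    go http.ListenAndServe("localhost:6060", nil)
}

With that running, go tool pprof http://localhost:6060/debug/pprof/mutex shows where goroutines spend time waiting on locks.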

When NOT to Use Locks

Sometimes locks aren’t the answer:

1. Use Atomic Operations Instead

// Instead of this:
var counter int64
var mutex sync.Mutex

func increment() {
    mutex.Lock()
    counter++
    mutex.Unlock()
}

// Use this:
var counter int64

func increment() {
    atomic.AddInt64(&counter, 1)
}

2. Use Channels for Communication

// Instead of shared memory with locks:
type SafeMap struct {
    mu sync.RWMutex
    data map[string]int
}

// Use channels:
type MapService struct {
    requests chan MapRequest
}

type MapRequest struct {
    key string
    response chan int
}
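
A single goroutine then owns the map and serves requests, so no lock is needed at all; the Run and Get methods below are illustrative names to complete the sketch:

// Run confines the map to one goroutine; requests arrive over the channel
func (s *MapService) Run() {
    data := make(map[string]int)
    for req := range s.requests {
        req.response <- data[req.key]
    }
}

// Get asks the owning goroutine for a value and waits for the reply
func (s *MapService) Get(key string) int {
    resp := make(chan int)
    s.requests <- MapRequest{key: key, response: resp}
    return <-resp
}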

3. Consider Lock-Free Data Structures

  • Atomic pointers
  • Compare-and-swap operations (see the retry loop after this list)
  • Lock-free queues and stacks
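
The building block behind most of these is a compare-and-swap retry loop, shown here with sync/atomic (a minimal sketch):

// Atomically double n without a lock: if another goroutine changed the
// value between our load and our swap, re-read and try again.
func double(n *int64) {
    for {
        old := atomic.LoadInt64(n)
        if atomic.CompareAndSwapInt64(n, old, old*2) {
            return
        }
    }
}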

4. When Performance Isn’t Critical

Sometimes simple solutions are better than complex lock-free algorithms.

Quick Reference: Choosing the Right Lock

  flowchart TD
    Start([Need Synchronization?]) --> SingleProcess{Single Process/Service?}

    SingleProcess -->|Yes| ReadHeavy{Mostly Reading Data?}
    SingleProcess -->|No| DistributedLock[Use Distributed Lock<br/>Redis, etcd, Database]

    ReadHeavy -->|Yes| UseRWLock[Use Read-Write Lock<br/>sync.RWMutex]
    ReadHeavy -->|No| ConflictRate{Expect Many Conflicts?}

    ConflictRate -->|Yes| UseMutex[Use Mutex<br/>sync.Mutex]
    ConflictRate -->|No| UseOptimistic[Use Optimistic Lock<br/>Version-based]

    style Start fill:#147c76
    style DistributedLock fill:#ffa726
    style UseRWLock fill:#147c76
    style UseMutex fill:#c62929
    style UseOptimistic fill:#ffa726

Summary Table

| Scenario                               | Recommended Approach | Why                                    |
|----------------------------------------|----------------------|----------------------------------------|
| High contention, critical consistency  | Pessimistic locking  | Prevents conflicts entirely            |
| Low contention, performance important  | Optimistic locking   | Better throughput                      |
| Read-heavy workloads (80%+ reads)      | Read-Write locks     | Multiple readers allowed               |
| Distributed systems                    | Distributed locks    | Cross-service coordination             |
| Simple counters                        | Atomic operations    | No lock overhead                       |
| Communication between goroutines       | Channels             | “Don’t communicate by sharing memory”  |

Remember: The best approach often combines multiple techniques, tailored to your specific requirements. Start simple with basic mutexes, then optimize based on actual performance measurements and bottlenecks.