Swift Concurrency is great, but... - Part 1

Swift Concurrency offers powerful tools for writing asynchronous code that's more readable and maintainable than traditional callback-based approaches. However, beneath its elegant syntax lies a complex system that can create serious issues when misused. The cooperative thread pool, actor isolation, and bridging mechanisms introduce subtle pitfalls that can lead to deadlocks, memory leaks, and unexpected runtime behavior.

This article explores the challenges developers face with Swift Concurrency and provides guidance for avoiding common pitfalls that can compromise application stability and performance.

What we'll cover

  1. Task Continuation Leaks: Understanding both leaked and double-resumed continuations
  2. Deadlocks with Legacy Concurrency: Deep dive into forward progress violations
  3. Actor Reentrancy Problems: When suspension points break your assumptions
  4. MainActor Execution Costs: Analyzing unnecessary actor hops and performance implications
  5. Common Implementation Mistakes: Patterns that lead to suboptimal async/await usage
  6. Preparing for Swift 6: Migration strategies for strict concurrency checking

Due to the depth of this topic, the material is structured as a three-part series:

  • Part 1: Common Pitfalls and Misuse Patterns (this article)
  • Part 2: Decision Making and Implementation Strategies
  • Part 3: Legacy Code Integration and Best Practices

Let's dive into Part 1:


Task Continuation Leaks: The Double-Edged Sword

When bridging callback-based code with Swift Concurrency using continuations, two dangerous scenarios can occur, and both surface at runtime as SWIFT TASK CONTINUATION MISUSE diagnostics: a continuation that is never resumed logs a warning, while one that is resumed more than once crashes.

Leaked Continuations

The most common issue occurs when continuation.resume() is never called, leaving tasks suspended indefinitely:

func fetchLegacyData() async throws -> Data {
    try await withCheckedThrowingContinuation { continuation in
        legacyNetworkAPI.fetch { result in
            // DANGER: What if result is nil? No resume() call!
            guard let data = result.data else { return }
            
            if result.isSuccess {
                continuation.resume(returning: data)
            } else if let error = result.error {
                continuation.resume(throwing: error)
            }
            // Missing else branch for other failure cases
        }
    }
}

In this case, the task never completes, potentially causing memory leaks and frozen UI states. The same issue arises when calling a completion-based method whose completion handler is not guaranteed to be invoked on every path, whether by design or by developer error (especially in third-party frameworks).
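
A corrected version guarantees exactly one resume on every path; the result shape below simply mirrors the hypothetical legacyNetworkAPI used above:

func fetchLegacyDataSafely() async throws -> Data {
    try await withCheckedThrowingContinuation { continuation in
        legacyNetworkAPI.fetch { result in
            if result.isSuccess, let data = result.data {
                continuation.resume(returning: data)
            } else {
                // Fall back to a generic error so no code path can leave
                // the continuation suspended
                continuation.resume(throwing: result.error ?? URLError(.unknown))
            }
        }
    }
}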

Multiple Resume Calls

Equally dangerous is resuming the same continuation multiple times, which commonly happens with unreliable callback APIs:

class NetworkManager {
    private var storedContinuation: CheckedContinuation<Data, Error>?
    
    func fetchWithRetries() async throws -> Data {
        try await withCheckedThrowingContinuation { continuation in
            storedContinuation = continuation
            
            // DANGER: Retry mechanism might call completion multiple times
            legacyAPI.fetchWithRetries { result in
                storedContinuation?.resume(with: result)
                // Forgot to nil out the continuation!
            }
        }
    }
}

As explained in the Swift Forums discussion on preventing continuation misuse, the solution involves atomic state management and careful lifecycle handling:

actor ContinuationGuard<Success, Failure: Error> {
    private var continuation: CheckedContinuation<Success, Failure>?
    private var hasResumed = false
    
    func store(_ cont: CheckedContinuation<Success, Failure>) {
        continuation = cont
    }
    
    func resumeOnce(with result: Result<Success, Failure>) {
        // Resume at most once, then drop the continuation
        guard !hasResumed, let cont = continuation else { return }
        hasResumed = true
        continuation = nil
        cont.resume(with: result)
    }
}
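
Wired into the earlier NetworkManager example, the guard might be used like this; a minimal sketch in which legacyAPI and its Result-based callback are the same assumptions as above:

func fetchWithRetriesSafely() async throws -> Data {
    let guardActor = ContinuationGuard<Data, Error>()
    return try await withCheckedThrowingContinuation { continuation in
        Task {
            // Store first, then start the legacy call, so the callback
            // can never fire before the continuation is registered
            await guardActor.store(continuation)
            legacyAPI.fetchWithRetries { result in
                // Only the first result resumes; later callbacks are ignored
                Task { await guardActor.resumeOnce(with: result) }
            }
        }
    }
}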

Deadlocks: Breaking the Forward Progress Contract

Swift Concurrency's cooperative thread pool operates under a fundamental contract: all tasks must make forward progress. Violating this contract, even inadvertently, can create deadlocks that are difficult to diagnose and reproduce.

The Forward Progress Violation

According to Saagar Jha's comprehensive analysis, the core issue stems from blocking operations that prevent threads from making progress:

func processImage() async throws -> ProcessedImage {
    // DANGER: This blocks the cooperative thread pool
    return try await withCheckedThrowingContinuation { continuation in
        DispatchQueue.global().sync {
            // Long-running image processing
            let result = heavyImageProcessing()
            continuation.resume(returning: result)
        }
    }
}

The problem becomes critical when the thread pool size is limited. If all threads enter blocking sync calls, the runtime cannot schedule new work, creating a system-wide deadlock.

Real-World Example: Framework Integration

Consider this scenario with Apple's Vision framework:

func detectFaces(in imageURL: URL) async throws -> Int {
    try await withThrowingTaskGroup(of: Int.self) { group in
        for _ in 1...10 {
            group.addTask {
                // DANGER: VNRequestHandler.perform() internally uses DispatchGroup.wait()
                let request = VNDetectFaceRectanglesRequest()
                let handler = VNImageRequestHandler(url: imageURL)
                try handler.perform([request]) // Blocks cooperative thread
                return request.results?.count ?? 0
            }
        }
        
        return try await group.reduce(0, +)
    }
}

As discussed in the Swift Forums cooperative pool deadlock thread, VNRequestHandler.perform() appears synchronous but internally performs async work using GCD, then blocks with DispatchGroup.wait(). This violates the forward progress guarantee and can deadlock the entire application.

I also wrote a previous post covering this issue in detail; feel free to take a look.

Safe Bridging Pattern

The solution involves ensuring blocking operations happen outside the cooperative pool:

func safeImageProcessing() async throws -> ProcessedImage {
    try await withCheckedThrowingContinuation { continuation in
        // Move blocking work to a separate GCD queue
        DispatchQueue.global().async {
            do {
                // Assuming heavyImageProcessing() can throw
                let result = try heavyImageProcessing()
                continuation.resume(returning: result)
            } catch {
                continuation.resume(throwing: error)
            }
        }
    }
}
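
Applied to the earlier Vision example, the same bridging keeps the blocking perform() call off the cooperative pool; a sketch with simplified error handling:

func detectFacesSafely(in imageURL: URL) async throws -> Int {
    try await withCheckedThrowingContinuation { continuation in
        DispatchQueue.global().async {
            do {
                let request = VNDetectFaceRectanglesRequest()
                let handler = VNImageRequestHandler(url: imageURL)
                // perform() may still block internally, but now it blocks a
                // GCD thread instead of a cooperative pool thread
                try handler.perform([request])
                continuation.resume(returning: request.results?.count ?? 0)
            } catch {
                continuation.resume(throwing: error)
            }
        }
    }
}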

Actor Reentrancy: When Suspension Points Surprise You

Actors protect against data races but introduce a different complexity: reentrancy during suspension points. This can lead to state inconsistencies that are challenging to debug.

actor CounterService {
    private var count = 0
    private var inProgress = false
    
    func incrementWithValidation() async throws -> Int {
        guard !inProgress else { throw ServiceError.busy }
        inProgress = true
        count += 1
        
        // DANGER: During this await, another task can enter this actor
        // and modify both count and inProgress
        try await validateOperation()
        
        inProgress = false
        return count // This might not be the value we just incremented to!
    }
    
    private func validateOperation() async throws {
        // Simulate async validation
        try await Task.sleep(nanoseconds: 100_000_000)
    }
}

During the await validateOperation() suspension point, another task can enter the actor and call incrementWithValidation(), potentially creating inconsistent state. The solution involves capturing state before suspension points:

actor ImprovedCounterService {
    private var count = 0
    
    func incrementWithValidation() async throws -> Int {
        count += 1
        let currentCount = count // Capture state before suspension
        
        try await validateOperation()
        
        return currentCount // Return the captured value
    }
}

The thing to note here is that we must be careful whenever we introduce a suspension point inside an actor. Actors help protect critical state, but only if we account for reentrancy at every await.
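
Another common way to tame reentrancy is to cache the in-flight work as a Task, so that reentrant callers await the same operation instead of interleaving with it. A minimal sketch (the naming here is mine, not from any referenced article):

actor DeduplicatingLoader {
    private var inFlight: Task<Data, Error>?
    
    func load() async throws -> Data {
        // Reentrant callers reuse the task created by the first caller
        if let task = inFlight {
            return try await task.value
        }
        let task = Task { try await performLoad() }
        inFlight = task
        defer { inFlight = nil }
        return try await task.value
    }
    
    private func performLoad() async throws -> Data {
        // Placeholder for the real asynchronous work
        try await Task.sleep(nanoseconds: 100_000_000)
        return Data()
    }
}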


MainActor Execution Costs: Understanding Actor Hops

While @MainActor simplifies UI updates, misuse creates unnecessary performance overhead through actor hops and thread switching.

Unnecessary Actor Hops

As Antoine van der Lee explains in his MainActor usage guide, one common mistake is performing main actor hops when already isolated to the main actor:

@MainActor
class ProfileViewModel: ObservableObject {
    @Published var profile: UserProfile?
    
    func refresh() async {
        // ❌ Unnecessary: We're already on MainActor
        await MainActor.run {
            self.profile = nil // Setting loading state
        }
        
        let newProfile = await fetchUserProfile()
        
        // ❌ Another unnecessary hop
        await MainActor.run {
            self.profile = newProfile
        }
    }
}

The corrected version eliminates redundant actor hops:

@MainActor
class OptimizedProfileViewModel: ObservableObject {
    @Published var profile: UserProfile?
    
    func refresh() async {
        // ✅ Direct assignment - already on MainActor
        self.profile = nil
        
        // Network call happens off main actor
        let newProfile = await fetchUserProfile()
        
        // ✅ Direct assignment upon return to MainActor
        self.profile = newProfile
    }
}

Task Isolation Pitfalls

Vincent Pradeilles highlights in his async/await mistakes article how Task creation can lead to unexpected isolation behavior:

@MainActor
class ViewModel {
    func handleNotification(_ notification: Notification) {
        // Process notification
    }
    
    func listenToNotifications() {
        // ❌ Task inherits MainActor isolation, and the guard below
        // re-establishes a strong reference to self
        Task { [weak self] in
            guard let self else { return }
            // From this point on, self is held strongly for as long as the
            // for-await loop below keeps running (potentially forever)
            let notifications = NotificationCenter.default.notifications(
                named: UIDevice.orientationDidChangeNotification
            )
            
            for await notification in notifications {
                self.handleNotification(notification)
            }
        }
    }
}

The issue is that guard let self turns the weak capture back into a strong reference, and the for await loop over an endless notification stream means that reference is never released. The task therefore keeps the view model alive indefinitely, and because it inherits @MainActor isolation, that long-lived work stays bound to the main actor.
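
One way to bound the task's lifetime is to keep a handle to it and cancel it when the owner no longer needs it. A sketch of that approach (the stored property and the explicit stopListening() call are my additions, not from the cited article):

@MainActor
class ObservingViewModel {
    private var notificationTask: Task<Void, Never>?
    
    func listenToNotifications() {
        notificationTask = Task { [weak self] in
            let notifications = NotificationCenter.default.notifications(
                named: UIDevice.orientationDidChangeNotification
            )
            for await notification in notifications {
                // self is held strongly only while handling one notification
                guard let self else { return }
                self.handleNotification(notification)
            }
        }
    }
    
    func stopListening() {
        // Cancelling ends the for-await loop and releases the captured state
        notificationTask?.cancel()
        notificationTask = nil
    }
    
    func handleNotification(_ notification: Notification) {
        // Process notification
    }
}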


Common Implementation Anti-Patterns

Sequential Execution of Independent Operations

A common inefficiency Vincent Pradeilles demonstrates is running independent operations sequentially when they could run concurrently:

// ❌ Inefficient: Operations run one after another
func fetchUserData() async throws -> UserProfile {
    let userData = try await fetchBasicUserInfo()
    let preferences = try await fetchUserPreferences() 
    let history = try await fetchUserHistory()
    
    return UserProfile(data: userData, preferences: preferences, history: history)
}

// ✅ Efficient: Operations run concurrently
func fetchUserDataConcurrently() async throws -> UserProfile {
    async let userData = fetchBasicUserInfo()
    async let preferences = fetchUserPreferences()
    async let history = fetchUserHistory()
    
    return try await UserProfile(
        data: userData, 
        preferences: preferences, 
        history: history
    )
}
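
async let works well when the number of child tasks is fixed at compile time; when it is only known at runtime, a task group provides the same concurrency. A minimal sketch, assuming a hypothetical Item type and fetchItem(id:) helper:

func fetchItems(ids: [String]) async throws -> [Item] {
    try await withThrowingTaskGroup(of: Item.self) { group in
        for id in ids {
            group.addTask { try await fetchItem(id: id) }
        }
        
        // Collect results as child tasks finish
        var items: [Item] = []
        items.reserveCapacity(ids.count)
        for try await item in group {
            items.append(item)
        }
        return items
    }
}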

Misusing Task.detached

Another pattern highlighted by Pradeilles involves unnecessary use of Task.detached:

@MainActor
class ViewModel {
    func processData() async {
        // ❌ Unnecessary detachment
        Task.detached { [weak self] in
            let data = await fetchData()
            await self?.updateUI(with: data)
        }
    }
    
    func optimizedProcessData() async {
        // ✅ Stays on MainActor context
        let data = await fetchData()
        updateUI(with: data)
    }
}
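
Task.detached still has legitimate uses: when work deliberately should not inherit the current actor, priority, or task-local values, for example CPU-heavy encoding kicked off from the main actor. A hedged sketch, with Payload and encodeLargePayload() standing in for whatever Sendable data and work apply:

@MainActor
class ExportViewModel {
    func export(_ payload: Payload) {
        // Detaching is deliberate here: the encoding should not run with
        // MainActor isolation or inherit the main actor's priority
        Task.detached(priority: .utility) {
            let encoded = encodeLargePayload(payload)
            await self.finishExport(with: encoded)
        }
    }
    
    func finishExport(with data: Data) {
        // Update UI state back on the MainActor
    }
}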

Preparing for Swift 6: Migration Strategy

Swift 6 introduces strict concurrency checking that will catch many of these issues at compile time. The migration process requires systematic planning and incremental adoption to manage the complexity effectively.

Enabling Strict Concurrency Incrementally

As outlined in SwiftLee's migration guide, start with targeted checking to identify problem areas without immediately breaking your build:

// In Package.swift
.target(
    name: "YourTarget",
    swiftSettings: [
        .enableExperimentalFeature("StrictConcurrency=targeted")
    ]
)

// In Xcode Build Settings
SWIFT_STRICT_CONCURRENCY = targeted

Progress through the three levels systematically:

  1. Minimal: Basic Sendable enforcement for public APIs
  2. Targeted: Actor isolation checking for adopted concurrency features
  3. Complete: Full project-wide enforcement of all concurrency rules

As Donny Wals emphasizes in his migration planning guide, enabling strict concurrency checking module-by-module allows you to work on isolated packages without forcing your entire app to adopt all sendability and isolation checks simultaneously.

Key Areas to Address

1. Sendable Conformance: Understanding Value vs Reference Types

The comprehensive Sendable explanation clarifies that Swift's value types (structs, enums, tuples) get their thread safety from copy semantics, but that guarantee evaporates as soon as a value type stores a non-Sendable reference type:

// ❌ This struct is NOT Sendable
struct Movie {
    let formatterCache = FormatterCache() // Reference type (class)
    let releaseDate = Date()
    
    var formattedReleaseDate: String {
        let formatter = formatterCache.formatter(for: "YYYY")
        return formatter.string(from: releaseDate)
    }
}

// ✅ This struct IS Sendable  
struct Movie {
    let title: String              // String is Sendable
    let releaseYear: Int          // Int is Sendable
    let duration: TimeInterval    // Double is Sendable
}
// Swift automatically infers Sendable conformance
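
Reference types can also earn checked Sendable conformance, provided the class is final and only stores immutable Sendable state; a small sketch:

// ✅ A final class with only immutable Sendable stored properties
// can declare Sendable conformance directly
final class MovieMetadata: Sendable {
    let title: String
    let releaseYear: Int
    
    init(title: String, releaseYear: Int) {
        self.title = title
        self.releaseYear = releaseYear
    }
}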

For reference types that cannot be easily made Sendable, use @unchecked Sendable with caution:

// Use only when you can guarantee thread safety yourself
final class ThreadSafeCache: @unchecked Sendable {
    private let lock = NSLock()
    private var storage: [String: Any] = [:]
    
    func getValue(for key: String) -> Any? {
        lock.lock()
        defer { lock.unlock() }
        return storage[key]
    }
    
    func setValue(_ value: Any, for key: String) {
        // The lock serializes mutation, which is the promise @unchecked makes
        lock.lock()
        defer { lock.unlock() }
        storage[key] = value
    }
}
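
Because the class now promises Sendable, it can cross task boundaries without diagnostics; a usage sketch in which expensiveValue(for:) is a hypothetical helper:

func warmCache(_ cache: ThreadSafeCache, keys: [String]) async {
    await withTaskGroup(of: Void.self) { group in
        for key in keys {
            group.addTask {
                // The cache is captured by concurrent child tasks;
                // its internal lock keeps the accesses safe
                let value = expensiveValue(for: key)
                cache.setValue(value, for: key)
            }
        }
    }
}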

2. @Sendable Closures: Ensuring Safe Concurrent Execution

As detailed in the Sendable closures explanation, @Sendable closures guarantee that all captured variables are thread-safe:

// ❌ Problematic: Captures non-Sendable reference type
class ViewModel {
    var isLoading = false
    
    func fetchData() {
        Task {
            // Compiler error: 'self' is not Sendable
            self.isLoading = true
            let data = await networkCall()
            self.isLoading = false
        }
    }
}

// ✅ Fixed: Use @MainActor for UI-related classes
@MainActor
class ViewModel: ObservableObject {
    @Published var isLoading = false
    
    func fetchData() async {
        isLoading = true
        let data = await networkCall()
        isLoading = false
    }
}

3. Actor Isolation: Explicit Context Management

Swift 6 requires explicit isolation context awareness. The WWDC migration video demonstrates how the compiler helps identify isolation boundary violations:

// Swift 6 requires explicit isolation
@MainActor
func updateUI(with data: UserData) {
    // UI updates must be on MainActor
}

// Clear isolation context in async functions
func processUserData() async {
    let userData = await fetchUserData() // Background context
    
    // Explicit hop to MainActor for UI updates
    await updateUI(with: userData)
}

// ❌ Problematic: Mixed isolation contexts
class DataProcessor {
    @MainActor var displayText: String = ""
    
    func process() async {
        // Error: accessing MainActor property from non-isolated context
        displayText = "Processing..."
    }
}

// ✅ Fixed: Consistent isolation
@MainActor
class DataProcessor {
    var displayText: String = ""
    
    func process() async {
        displayText = "Processing..." // Safe: same isolation context
        let result = await heavyComputation() // Automatically hops off MainActor
        displayText = "Completed: \(result)" // Automatically returns to MainActor
    }
}

4. Legacy API Integration with @preconcurrency

For third-party libraries that haven't adopted Swift Concurrency yet, use @preconcurrency for gradual migration:

// Suppress warnings from legacy modules temporarily
@preconcurrency import LegacyNetworkSDK
@preconcurrency import ThirdPartyAnalytics

// This allows gradual adoption while maintaining type safety
class NetworkManager {
    func fetchData() async throws -> Data {
        try await withCheckedThrowingContinuation { continuation in
            // Legacy callback-based API
            LegacyNetworkSDK.fetch { result in
                continuation.resume(with: result)
            }
        }
    }
}

Important: Plan regular revisits to remove these attributes once libraries add proper concurrency support, as noted in the @preconcurrency article.

5. @Retroactive Sendable for External Types

When you need to extend Sendable conformance to types you don't own:

// For external value types that are actually thread-safe
extension SomeThirdPartyStruct: @retroactive Sendable {}

// For reference types, @unchecked is also required because the compiler
// cannot verify a class's internal synchronization from outside its module
extension SomeThirdPartyClass: @retroactive @unchecked Sendable {}

// Check the SDK first: types like URL (and CLLocation on recent SDKs)
// already conform to Sendable, so no retroactive conformance is needed

6. Global State and Singleton Patterns

Swift 6's strict checking reveals issues with global mutable state. Brandon Weng's migration experience highlights moving away from singletons:

// ❌ Problematic global state
class AppSettings {
    static let shared = AppSettings()
    var theme: Theme = .light // Mutable global state
}

// ✅ Better: Actor-based singleton for thread safety
actor AppSettings {
    static let shared = AppSettings()
    private var theme: Theme = .light
    
    func setTheme(_ newTheme: Theme) {
        theme = newTheme
    }
    
    func currentTheme() -> Theme {
        theme
    }
}

// ✅ Even better: Dependency injection
@MainActor
class ThemeManager: ObservableObject {
    @Published var theme: Theme = .light
}
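
Note that every access to the actor-based singleton becomes asynchronous at the call site; a short sketch (assuming Theme also has a .dark case):

func applyDarkMode() async {
    // Actor-isolated state is only reachable through await
    await AppSettings.shared.setTheme(.dark)
    let current = await AppSettings.shared.currentTheme()
    print("Active theme: \(current)")
}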

Migration Strategy Summary

Following Donny Wals' systematic approach:

  1. Take inventory: Assess current concurrency usage and team readiness
  2. Start small: Choose isolated modules with minimal dependencies
  3. Enable warnings first: Use strict concurrency checking before Swift 6 mode
  4. Address fundamentals: Fix Sendable conformance and actor isolation
  5. Gradual expansion: Move module-by-module through your codebase
  6. Plan revisits: Schedule regular reviews of @preconcurrency usage

Finally, the key to a successful Swift 6 migration is taking your time, not panicking at the volume of warnings, and accepting that a temporary warning state is part of the process.

Conclusion

Swift Concurrency provides powerful abstractions for asynchronous programming, but these abstractions come with subtle complexities that can create serious runtime issues. Understanding the forward progress contract, proper continuation lifecycle management, and actor isolation semantics is essential for building robust applications.

The key takeaways from Part 1:

  • Always ensure continuations are resumed exactly once through proper state management
  • Respect the forward progress contract by avoiding blocking operations in async contexts
  • Be aware of actor reentrancy during suspension points
  • Minimize unnecessary actor hops for better performance
  • Prepare for Swift 6 by incrementally adopting strict concurrency checking

In Part 2, we'll explore decision-making patterns for determining when to adopt Swift Concurrency, implementation strategies for complex scenarios, and performance considerations for production applications.


References