Certain and Uncertain Futures
Tasks are versatile and adapt to many use cases. The most intriguing one to me is using them as futures: starting a Task with asynchronous work and using its handle anywhere the result is needed. Because tasks are inherently thread-safe, they don’t require additional protection or synchronization. This enables elegantly designed APIs.
Let’s start with an example. Suppose we have a Renderer that takes markup code and renders it as an image. The API looks like this:
class Renderer {
    func render(code: String) async throws -> Image
}
To give you an idea: this could be a typesetter using KaTeX to render math formulas, or Mermaid to render graphs. The specific use case doesn’t matter. A naive implementation of render(code:) would simply call through to the underlying library every time.
But imagine this is an expensive operation taking several hundred milliseconds. You’d want to ensure the work isn’t repeated unnecessarily—for instance, when the same image is needed in multiple parts of the app. Using tasks, we can wrap the renderer to add de-duplication of concurrent requests and caching:
@MainActor class RendererCache {
    let renderer = // ...
    private var allTasks = [String: Task<Image, any Error>]() // 1

    func image(for code: String) async throws -> Image { // 2
        let task: Task<Image, any Error> // 3
        if let existingTask = allTasks[code] { // 4
            task = existingTask
        } else { // 5
            task = Task { try await renderer.render(code: code) }
            allTasks[code] = task
        }
        return try await task.value // 6
    }
}
The wrapper maintains a dictionary of all currently registered tasks: allTasks (1). Like the renderer itself, this wrapper exposes a single public function: image(for:) (2). When an image is requested, we determine which task to use (3). If an existing task is found, we use it (4). Otherwise, we create a new task for the actual computation and store it in the dictionary (5). With the task at hand, we then await and return the task’s value on behalf of the caller (6).
Task does all the heavy lifting: it dispatches the actual rendering and provides the result to all callers once available. The way this is set up, tasks are never removed from the dictionary. This means subsequent requests for the same image won’t trigger new rendering—they’ll instantly receive the cached result.
There are many nuances to discuss about this sample. The state needs concurrency protection, which I addressed by making the cache @MainActor. Alternatives include using a Mutex or turning the cache into an actor. I’m not sure how problematic keeping these task handles around indefinitely might be. We could store finished images separately and discard completed tasks, which would also enable synchronous (non-async) access to cached values.
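To make that last idea concrete, here is a sketch of the variant with a separate image store. The type and member names (EvictingRendererCache, finishedImages, cachedImage(for:)) are my own, and the stub Renderer exists only so the snippet compiles:

```swift
struct Image {}

final class Renderer {
    func render(code: String) async throws -> Image { Image() } // stub
}

@MainActor final class EvictingRendererCache {
    private let renderer = Renderer()
    private var inFlight = [String: Task<Image, any Error>]()
    private var finishedImages = [String: Image]()

    // Completed tasks are discarded, so finished values allow
    // synchronous (non-async) access.
    func cachedImage(for code: String) -> Image? {
        finishedImages[code]
    }

    func image(for code: String) async throws -> Image {
        if let image = finishedImages[code] { return image }
        let task: Task<Image, any Error>
        if let existing = inFlight[code] {
            task = existing
        } else {
            task = Task { try await renderer.render(code: code) }
            inFlight[code] = task
        }
        let image = try await task.value
        finishedImages[code] = image
        inFlight[code] = nil // drop the task handle once the result is stored
        return image
    }
}
```

Note that this sketch keeps a failed task in inFlight forever; a production version would have to decide whether to cache errors or allow retries.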
The key point I want to focus on is how Task is being used here: as a future for a value that will become available within a reasonable timeframe. None of these rendering jobs should take forever—they’ll likely complete in less than a few hundred milliseconds. In other words, we know, once started, the task will finish soon. The outcome of these futures is certain.
An Uncertain Async Function
This realization got me thinking: are there tasks that might never conclude? What would such a task even look like? It would need to call an async function with a realistic chance of never returning. Do such functions exist? They do!
Consider the delegate/observer pattern: one object registers with another to receive notifications when something changes. Here’s a simplified example using a protocol:
class Thing {
    var value: Int = 0
    weak var observer: (any Observer)?

    func update() {
        value += 1
        observer?.thingValueChanged(value)
    }
}

protocol Observer: AnyObject {
    func thingValueChanged(_ newValue: Int)
}

class Client: Observer {
    let thing: Thing

    init(thing: Thing) {
        self.thing = thing
    }

    func beginObservation() {
        thing.observer = self
    }

    func thingValueChanged(_ newValue: Int) {
        // ...
    }
}
The client registers itself as the observer, and when the thing changes, it notifies its observer. This is an established and well-understood pattern that exists in a thousand variations. However, it’s not particularly functional, value-typed, or async-friendly. The client must be an object so it can be weakly referenced to avoid retain cycles. The callback happens in a separate function, potentially outside the context that initiated the observation. While these issues could be addressed, what if we could solve this in an async/await fashion? Imagine being able to simply write:
let newValue = try await thing.nextValue()
With this API, async clients could await changes directly. Observation wouldn’t be permanent—it would follow the one-shot Observable pattern: receive one notification, then re-register if you want more. This turns observation into the same pattern as awaiting a result, like in the renderer above.
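To illustrate the one-shot pattern in use, here is a hypothetical consumption loop. The ValueObservable protocol is mine, standing in for any type that offers such a nextValue() API:

```swift
protocol ValueObservable {
    func nextValue() async throws -> Int
}

// Each await is a single observation; the next loop iteration
// re-registers for the following change.
func collectChanges(of thing: some ValueObservable, count: Int) async throws -> [Int] {
    var values: [Int] = []
    for _ in 0..<count {
        values.append(try await thing.nextValue())
    }
    return values
}
```

If the surrounding task is cancelled mid-await, nextValue() can throw, and the loop ends cleanly.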
But there’s a fundamental challenge: like any observer that may or may not ever be called, nextValue() might never return. Some observations are extremely rare—consider events like “user over quota”, “activation expired”, or “time zone changed”. While they can happen, they’re so infrequent that they effectively almost never do. In other words, the outcome of such a function will be uncertain.
An Uncertain Future
Now, how would we even build such a function? And how would we need it to behave? Let’s start with the usage. Depending on the case, observers might have short lifespans, and there might be many of them. Consider the “activation expired” example: each button in the interface might need to observe this event to disable itself when appropriate. (SwiftUI offers better solutions for this, but let’s set those aside for now.)
Given this usage, uncertain functions need two key traits. First, they must support cancellation—the calling context will likely be deallocated long before the event occurs, so we need to free up resources. Second, cancelled invocations must not leak resources. Each call and cancellation shouldn’t leave something behind. Beyond a fixed overhead, it shouldn’t matter how many times the function has been called and cancelled.
Can we implement this using a Task future? The answer might surprise you: no—at least not if you want both traits. To understand why, let’s examine a hypothetical task-based implementation:
func nextValue() async throws -> Int {
let task = // ... get a task *somehow*
return try await task.value
}
Like in the renderer example, we somehow obtain a task that will eventually—or possibly never—provide a value. Then we suspend the caller, waiting for the event. Sounds reasonable, right? The problem is that this setup does not support cancellation properly. As far as I know (please correct me if I’m wrong), there’s no way to wait for a Task’s result without blocking. While the caller might cancel its own task and stop caring about the result, we’d still be suspended waiting for a value that might never arrive—leaking the suspended context indefinitely.
There’s another issue: we also can’t simply cancel the underlying task when one observer loses interest. Other observers might still be waiting for the same value. Cancellation must be handled individually for each caller.
AsyncStream
Fortunately, there’s another built-in type that solves this: AsyncStream. I briefly mentioned it in my first post on cancellation, noting it as one of the three built-in types with automatic cancellation support. Let’s see how it works. Here’s the Thing type reimplemented with async-await based observation:
class Thing {
    var value: Int = 0
    var observer: AsyncStream<Int>.Continuation? // 1

    func nextValue() async throws -> Int {
        let stream = AsyncStream<Int> { continuation in // 2
            observer = continuation
        }
        for await value in stream { // 3
            return value // 4
        }
        throw CancellationError() // 5
    }

    func update() {
        value += 1
        observer?.yield(value) // 6
    }
}
Like the traditional implementation, we still need to hold some state for the observer. But instead of storing the observer object itself, we only store a continuation (1). When nextValue() is called, we create a new AsyncStream and store its continuation for later use (2). Here’s where it gets interesting: by iterating the stream (3), we get cancellation support for free. The iteration immediately suspends, waiting for the next value. One of two things will happen: either the stream receives a value, which we immediately return, fulfilling the observation (4)—or the calling task is cancelled, which terminates the loop and lets us throw a CancellationError (5). Notifying observers remains straightforward: just call yield on the continuation (6).
Again, there are many nuances to discuss here. The sample currently doesn’t clean up the continuation after nextValue completes. It also supports only one observer at a time, leaking continuations when trying to register more. If you’re curious, I’ve posted an extended version on GitHub that addresses these issues and is ready to run.
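To see the cancellation trait in action, here is a hypothetical standalone snippet (repeating the Thing type, trimmed to what the demo needs): we start an observation, cancel it before any value arrives, and the suspended for-await loop ends with a CancellationError instead of leaking a suspended context.

```swift
final class Thing {
    var observer: AsyncStream<Int>.Continuation?

    func nextValue() async throws -> Int {
        let stream = AsyncStream<Int> { continuation in
            observer = continuation
        }
        for await value in stream {
            return value
        }
        throw CancellationError()
    }
}

let thing = Thing()

// Start observing; nothing will ever be yielded in this example.
let observation = Task { try await thing.nextValue() }

// Cancelling the task terminates the stream iteration inside nextValue(),
// so the call throws instead of suspending forever.
observation.cancel()

do {
    _ = try await observation.value
} catch is CancellationError {
    print("observation cancelled, nothing leaked")
}
```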
This pattern might look familiar—it’s strikingly similar to NotificationCenter’s async API:
let notifications = NotificationCenter.default.notifications(named: name)
for await next in notifications {
    print(next)
}
What’s not immediately obvious: notifications is an AsyncSequence—the protocol that AsyncStream conforms to. The for-loop creates an iterator and calls a function named next() under the hood to await each element. That next() function is async and throwing. It will not return until either an event happens or the observation is cancelled. Exactly the kind of uncertain function we’ve been looking for. In essence, AsyncSequence is Swift Concurrency’s built-in uncertain future.
UncertainFuture
To bring this series of posts on tasks and cancellations to a decent ending, let’s combine what makes for a cultivated cancellation, implementing cancellation handlers, and our understanding of futures—and build a reusable helper for async observations. Creatively, and in the absence of better ideas, I’ve named it UncertainFuture. It encapsulates a potential future value with observers and supports individual cancellation. Using it, Thing becomes simple:
class Thing {
    var value: Int = 0
    var future = UncertainFuture<Int>() // 1

    func nextValue() async throws -> Int {
        return try await future.value // 2
    }

    func update() {
        value += 1
        future.complete(value) // 3
        future = UncertainFuture() // 4
    }
}
Instead of managing continuations directly, we keep a single future for each potential change event (1). Observing the next value is as simple as forwarding to the future, which records the caller, sets up cancellation handling, and suspends (2). When a change occurs, we complete the future with the new value (3) and create a fresh future (4) for subsequent changes. This last step is necessary because UncertainFuture can only be fulfilled once. For the use cases in our code, this one-shot design works perfectly. I also find it a clearer mental model.
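To make the idea concrete, here is a condensed sketch of how such a type can be built from the AsyncStream technique above. It is simplified (a lock-protected dictionary of continuations, with names of my choosing) and differs from the full version:

```swift
import Foundation

final class UncertainFuture<Value: Sendable>: @unchecked Sendable {
    private let lock = NSLock()
    private var observers = [UUID: AsyncStream<Value>.Continuation]()
    private var result: Value?

    var value: Value {
        get async throws {
            let stream = AsyncStream<Value> { continuation in
                lock.lock()
                if let result {
                    // Already completed: deliver immediately.
                    continuation.yield(result)
                    continuation.finish()
                } else {
                    let id = UUID()
                    observers[id] = continuation
                    // When the awaiting task is cancelled, drop the
                    // observer so nothing leaks.
                    continuation.onTermination = { [weak self] _ in
                        self?.removeObserver(id)
                    }
                }
                lock.unlock()
            }
            for await value in stream { return value }
            throw CancellationError()
        }
    }

    func complete(_ value: Value) {
        lock.lock()
        result = value
        let current = observers
        observers.removeAll()
        lock.unlock()
        for continuation in current.values {
            continuation.yield(value)
            continuation.finish()
        }
    }

    private func removeObserver(_ id: UUID) {
        lock.lock()
        observers[id] = nil
        lock.unlock()
    }
}
```

Each caller gets its own stream, so cancellation stays individual: one observer giving up never affects the others.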
The implementation of UncertainFuture is available on GitHub. Explaining it in depth goes beyond the scope of this post, but I believe all the critical concepts have been covered in previous posts. If you have questions, please let me know. As a bonus, the code includes a complete suite of tests. You are welcome to use it, modify it, or improve upon it. As always, I’d love to hear your feedback.