# Durability and Consistency
Understand Ursula's durability guarantees, linearizable writes, and hot-cold storage model.
## When is a write durable?
An append is acknowledged only after a majority of voting replicas have replicated it. A single node failure cannot lose an acknowledged write, and no write is ever acknowledged on the strength of a single copy.
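As a rough illustration of the quorum rule (not Ursula's actual code), the sketch below shows the majority check an append must pass before acknowledgment; the function name and replica counts are hypothetical.

```python
# Minimal sketch of the majority-acknowledgment rule; names are illustrative.
def has_write_quorum(acks: int, voting_replicas: int) -> bool:
    """An append is acknowledged only once a strict majority of voting
    replicas have replicated it; a single copy is never enough."""
    return acks >= voting_replicas // 2 + 1

# With 3 voting replicas, 2 acks form a quorum; 1 ack never does.
assert has_write_quorum(2, 3)
assert not has_write_quorum(1, 3)
```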
Acknowledged data is held in replicated hot state across the cluster. A background process then flushes data to S3 on a configurable interval, after which the data inherits S3-grade durability. In a cross-region deployment with a 5-second flush interval, Ursula provides approximately 9-10 nines of per-message durability. The window where acknowledged-but-unflushed data is at risk from a simultaneous multi-region failure is measured in seconds.
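The flush cycle can be pictured with the sketch below. It is a simplified stand-in rather than the real implementation: `flush_once`, the in-memory list, and the `s3_upload` callback are hypothetical, and the 5-second interval is the example figure from above.

```python
FLUSH_INTERVAL_SECONDS = 5  # configurable; bounds how long acknowledged data stays hot-only

def flush_once(hot_state: list[bytes], s3_upload) -> None:
    """Move everything currently in hot state to S3-backed storage."""
    n = len(hot_state)
    if n:
        s3_upload(hot_state[:n])  # after this call, the batch has S3-grade durability
        del hot_state[:n]         # records appended mid-upload stay hot for the next cycle

# A background process runs this roughly every FLUSH_INTERVAL_SECONDS, so the
# window of acknowledged-but-unflushed data is at most a few seconds wide.
hot = [b"acked-but-unflushed"]
flush_once(hot, s3_upload=lambda batch: None)  # lambda stands in for the real S3 write
assert hot == []
```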
## Consistency model
Ursula provides linearizable writes. All appends are serialized through a single leader node, which assigns a total order. Reads on the leader are linearizable: if an append has been acknowledged, any subsequent read on the leader will reflect that write.
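A toy model of that ordering guarantee, using hypothetical `Leader`, `append`, and `read` names rather than Ursula's API:

```python
# Toy model of leader-side serialization; all names are illustrative.
class Leader:
    def __init__(self) -> None:
        self.log: list[bytes] = []  # the total order assigned by the single leader

    def append(self, record: bytes) -> int:
        """Serialize the append and return its position in the total order."""
        self.log.append(record)
        return len(self.log) - 1

    def read(self, offset: int) -> bytes:
        """Linearizable read: an acknowledged append is always visible here."""
        return self.log[offset]

leader = Leader()
offset = leader.append(b"event-1")
assert leader.read(offset) == b"event-1"  # any subsequent read on the leader sees it
```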
Follower reads are eventually consistent and may lag slightly behind the leader (typically sub-millisecond). They will never see data out of order or observe a write that was later rolled back, but a follower may not yet have applied the most recent committed writes. The `Stream-Up-To-Date` response header indicates whether the serving node has fully caught up.
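For callers that need to detect lag, a small helper like the one below can inspect that header. The dict-based shape and the "true"/"false" values are assumptions for illustration, not a documented wire format.

```python
# Hypothetical helper: the exact header values are an assumption for illustration.
def follower_is_caught_up(response_headers: dict[str, str]) -> bool:
    """Check the Stream-Up-To-Date header on a follower read response."""
    return response_headers.get("Stream-Up-To-Date", "").lower() == "true"

headers = {"Stream-Up-To-Date": "false"}  # example follower response while lagging
if not follower_is_caught_up(headers):
    # The follower has not yet applied the most recent committed writes;
    # re-issue the read against the leader if linearizable results are required.
    ...
```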
## Data lifecycle
Writes first enter replicated hot state across cluster nodes. This is where acknowledgment happens and where recent reads are served from. A background process flushes older data to S3 on a configurable interval. Once flushed, the data gains S3-grade durability and nodes can recover it without replaying the full history. Reads fall through transparently from hot to cold storage.
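The read fallthrough can be sketched as below, with plain dicts standing in for the replicated hot state and the S3-backed tier; the offsets and names are purely illustrative.

```python
# Transparent hot-to-cold fallthrough; dicts stand in for the two storage tiers.
hot_state: dict[int, bytes] = {101: b"recent record"}  # acknowledged, not yet flushed
cold_store: dict[int, bytes] = {1: b"older record"}    # already flushed to S3

def read(offset: int) -> bytes | None:
    # Serve from hot state when possible; otherwise fall through to cold storage.
    return hot_state.get(offset) or cold_store.get(offset)

assert read(101) == b"recent record"  # served from hot state
assert read(1) == b"older record"     # fell through to cold storage
assert read(999) is None              # present in neither tier
```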
## Availability and durability
The estimates below assume a standard Ursula deployment: three voting replicas across availability zones, with two non-voting replicas in a second region.
| Property | What it means |
|---|---|
| Write availability | Writes continue as long as a majority of voting replicas remain healthy. In this layout, that means the system maintains write quorum through any single voting-AZ failure. Non-voting replicas do not participate in write quorum, but they hold additional copies of acknowledged data for durability and read serving. |
| Read availability | Any healthy replica can serve reads, including non-voting replicas. Read availability can outlast write quorum in some failure scenarios. |
| Per-message durability | Approximately 9-10 nines with a 5-second flush interval. Writes are acknowledged after quorum replication in hot state. Full durability follows after background flush to S3. |
| Annual zero-loss probability | Approximately 3-4 nines. The probability of experiencing no data-loss events over a full year. |
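To make the two nines figures easier to compare, the snippet below just applies the standard definition (k nines means a failure probability of at most 10^-k); it adds no guarantees beyond the table.

```python
import math

# "k nines" corresponds to a failure probability of at most 10**-k.
def nines_to_failure_probability(nines: int) -> float:
    return 10.0 ** -nines

# 9 nines of per-message durability: roughly a one-in-a-billion chance of
# losing any given acknowledged message.
assert math.isclose(nines_to_failure_probability(9), 1e-9)
# 3 nines of annual zero-loss probability: roughly a one-in-a-thousand chance
# of any data-loss event occurring in a given year.
assert math.isclose(nines_to_failure_probability(3), 1e-3)
```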