Control CLI
Use ursulactl to inspect, bootstrap, and operate Ursula clusters.
ursulactl is Ursula's operational CLI for cluster administration. It wraps the lower-level HTTP admin endpoints, remote node execution, service control, artifact installation, and orchestrator operations that multi-node deployments rely on.
For local single-node development, curl and the public HTTP API are usually enough. Use ursulactl when you are running a real cluster and need repeatable operational actions.
Build
cargo build --release -p ursulactl
Or run it from source:
cargo run --release -p ursulactl -- status
Cluster inspection
ursulactl status
ursulactl cluster status
ursulactl service status --node-id 1 --node-id 2 --node-id 3
ursulactl service tail --service ursula-ds --node-id 1 --lines 120
status returns a cluster-level JSON report, including Ursula node status, service state, control-plane state, and load-generator state when present.
cluster status focuses on Ursula membership and readiness across nodes.
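The exact report schema is version-dependent, so treat the shape below as an assumption. As a minimal sketch, supposing the payload is compact JSON with per-node `id` and `state` fields (hypothetical names), a quick shell check for non-running nodes could look like:

```shell
# Hypothetical report shape -- the real field names may differ.
# In practice: report=$(ursulactl status)
report='{"nodes":[{"id":1,"state":"running"},{"id":2,"state":"stopped"},{"id":3,"state":"running"}]}'

# Pull out ids of nodes whose state is not "running".
# jq is the better tool if available; this stays POSIX-only.
bad=$(printf '%s' "$report" \
  | grep -o '{"id":[0-9]\{1,\},"state":"[a-z]*"}' \
  | grep -v '"running"' \
  | grep -o '[0-9]\{1,\}')
echo "non-running nodes: $bad"
```

The same filter works unchanged on live output once the real field names are substituted.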
Service operations
ursulactl service restart --service ursula-ds --node-id 2
ursulactl service stop --service ursula-ds --node-id 3
ursulactl service start --service ursula-ds --node-id 3
For routine cluster maintenance, prefer orchestrator operations over ad hoc service restarts. They encode ordering, readiness checks, leader stability windows, and operation status tracking.
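If you do need an ad hoc sequence, for example during break-glass repair, building the plan before executing anything keeps the steps reviewable. A minimal sketch that only prints the commands (the node list and service name are illustrative):

```shell
# Build a rolling-restart plan for review; nothing is executed here.
SERVICE=ursula-ds
NODES="1 2 3"
plan=$(for node in $NODES; do
  echo "ursulactl service restart --service $SERVICE --node-id $node"
done)
echo "$plan"
# Once reviewed, run the commands one at a time, checking
# service status between steps before moving to the next node.
```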
Orchestrator service
Deploy the control-plane service onto a control node:
ursulactl deploy orchestrator --control-node-id 4
The orchestrator exposes an HTTP control service. ursulactl operations ... talks to it through the ORCHESTRATOR_SERVICE_URL environment variable, which defaults to http://127.0.0.1:7010.
export ORCHESTRATOR_SERVICE_URL=http://127.0.0.1:7010
ursulactl operations list
ursulactl operations latest --limit 5
Cluster lifecycle operations
Bootstrap a three-voter cluster:
ursulactl operations bootstrap \
--bootstrap-node-id 1 \
--voter-id 1 \
--voter-id 2 \
--voter-id 3 \
--wait
Run a graceful rolling upgrade:
ursulactl operations graceful-upgrade \
--target-version 20260514-abcdef0 \
--node-id 1 \
--node-id 2 \
--node-id 3 \
--wait
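When the node list comes from inventory rather than being typed by hand, the repeated --node-id flags can be generated. A small sketch (the node list is illustrative):

```shell
# Generate per-node flags for a rolling upgrade from an inventory list.
NODES="1 2 3"
flags=""
for n in $NODES; do
  flags="$flags --node-id $n"
done
cmd="ursulactl operations graceful-upgrade --target-version 20260514-abcdef0$flags --wait"
echo "$cmd"
```

Echoing the assembled command before running it doubles as a cheap review step.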
Replace a node:
ursulactl operations node-migration \
--source-node-id 2 \
--target-node-id 4 \
--wait
Scale membership:
ursulactl operations scale-up --target-node-id 4 --wait
ursulactl operations scale-down --target-node-id 4 --wait
Restart and recover one node at a time:
ursulactl operations restart-recovery --node-id 2 --wait
Operation control
ursulactl operations get --id <operation-id>
ursulactl operations wait --id <operation-id>
ursulactl operations pause --id <operation-id>
ursulactl operations resume --id <operation-id>
ursulactl operations cancel --id <operation-id>
Operations are persisted by the orchestrator service, so a client can submit an operation, disconnect, and later inspect or wait on the same operation ID.
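This enables a submit-now, wait-later workflow. The sketch below assumes the submit command prints JSON containing an "id" field; that output shape is an assumption, so the response is simulated with a canned value to make the extraction step visible:

```shell
# Assumed (hypothetical) output shape: JSON with "id":"<operation-id>".
# In practice: response=$(ursulactl operations graceful-upgrade ... )
response='{"id":"op-1234","kind":"graceful-upgrade","state":"running"}'

# Extract the operation id without jq.
op_id=$(printf '%s' "$response" | grep -o '"id":"[^"]*"' | cut -d'"' -f4)
echo "submitted operation: $op_id"

# Later, from the same or a different client:
#   ursulactl operations wait --id "$op_id"
```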
Direct node access
ursulactl node ... is useful for diagnostics:
ursulactl node exec --node-id 1 --script 'hostname && systemctl status ursula-ds --no-pager'
ursulactl node cat --node-id 1 --path /etc/ursula/ursula.toml
ursulactl node tail --node-id 1 --path /var/log/ursula/ursula.log --lines 100
These commands are powerful and bypass normal orchestrator sequencing. Use them for inspection and break-glass repair, not as the default way to perform cluster changes.