Introduction

bee-tui is a terminal cockpit for Ethereum Swarm Bee node operators. It surfaces the state Bee's API hides — bucket collisions, redistribution skip reasons, bin starvation, NAT reality — in fourteen live screens, with an always-on HTTP request tail so operators trust what they see.

bee-tui cold-start tour

This handbook is the per-screen reference. It explains what each screen shows, why it matters, and how to use the keymap. The high-level project plan lives in docs/PLAN.md; the README is the install + quickstart entry point.

Who this is for

You run a Bee node (mainnet or testnet) and want to know what's wrong without reading 50 endpoints worth of JSON. The pain points this tool exists to address:

  • "Why is my node unhealthy?" — answered by S1 with WHY tooltips encoding tribal knowledge from the bee-go source.
  • "Which batch is about to fail uploads?" — S2's worst-bucket fill bar + Enter-to-drill bucket histogram.
  • "Why am I unreachable?" — S7 distinguishes public-vs-private underlay and tracks AutoNAT reachability stability over a window.
  • "Why am I not earning rewards?" — S4's redistribution skip reasons reconstruct the truth LastPlayedRound doesn't tell you.
  • "Where is my upload stuck?" — S9's TagStatus ladder lights up the exact phase a stuck upload is in.
  • "Which of my nodes am I driving?"Ctrl+N opens the v1.10 node picker over a list of every [[nodes]] entry; the top-bar metadata line always names the active profile + endpoint.
  • "Is anything running in the background?" — top-bar awareness chips (subs N, watch N, alerts ●, v1.10+) appear whenever a pubsub subscription, a :watch-ref daemon, or webhook alerting is active, and disappear when nothing is.

What this handbook is not

It's not a Bee operations manual. The deep model of how Swarm works — postage, neighborhoods, kademlia, redistribution — is best absorbed from the Bee book and the bee-go source. This handbook assumes you know that domain and just need to know how the cockpit surfaces it.

Versioning

bee-tui follows Semantic Versioning. The handbook on this site reflects whatever version is on main; the README's Status table tells you what's shipped and what's coming.

Install

bee-tui ships as a single static binary — no Rust toolchain required, no Python runtime, no Docker. Pick the option that matches your platform.

Every GitHub release publishes prebuilt binaries for five targets via cargo-dist. The matching one-line installer detects your CPU + OS and downloads the right tarball.

Linux / macOS

curl --proto '=https' --tlsv1.2 -LsSf \
  https://github.com/ethswarm-tools/bee-tui/releases/latest/download/bee-tui-installer.sh \
  | sh

The installer writes the binary into $XDG_BIN_HOME (~/.local/bin if unset). Make sure that directory is on your $PATH — the script reminds you if it isn't.

Windows

powershell -c "irm https://github.com/ethswarm-tools/bee-tui/releases/latest/download/bee-tui-installer.ps1 | iex"

What's in the installer

  • Linux x86_64 / arm64 → .tar.xz with a stripped ELF binary
  • macOS x86_64 / arm64 → .tar.xz with a stripped Mach-O binary
  • Windows x86_64 → .zip with bee-tui.exe
  • README, CHANGELOG, LICENSE-MIT, LICENSE-APACHE bundled alongside the binary
  • Per-tarball .sha256 checksum + a top-level sha256.sum manifest covering every artifact

If you'd rather verify by hand, fetch the artifact directly from the releases page and check it against sha256.sum.

From source (cargo)

If you have a Rust toolchain (≥ 1.85, the project's MSRV):

cargo install bee-tui

The binary lands in ~/.cargo/bin/bee-tui. This route compiles from source against the latest crates.io release; expect a 30-60 second compile on a modern laptop, longer on a Raspberry Pi.

From source (git)

If you want to track main or hack on the cockpit:

git clone https://github.com/ethswarm-tools/bee-tui
cd bee-tui
cargo build --release
./target/release/bee-tui --version

The dist profile in Cargo.toml matches what cargo-dist uses for releases (LTO thin, optimised); --release uses the standard release profile, so the resulting binary runs a hair slower.
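
If you want a binary built the way releases are built, and the dist profile really is declared in Cargo.toml, cargo can target it directly. A sketch, not something the project requires:

# Build with the dist profile (thin LTO); output lands under target/dist/
cargo build --profile dist
./target/dist/bee-tui --version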

Verifying the install

bee-tui --version
# bee-tui 1.0.0

If the version prints and Bee is running on localhost:1633, the cockpit will launch with no further configuration:

bee-tui

Platform-specific notes

macOS Gatekeeper

The released binary is unsigned (no Apple Developer signature). The first time you run it, macOS may show a "developer cannot be verified" dialog. To approve it:

xattr -d com.apple.quarantine "$(which bee-tui)"

Or right-click → Open in Finder once. Apple notarisation is on the v1.x roadmap.

Windows Defender SmartScreen

Same story — the binary is unsigned, so SmartScreen may flag it on first launch. Click "More info" → "Run anyway", or run from a PowerShell prompt where the irm | iex install step already implicitly accepted execution.

Corporate / restricted environments

If curl / iwr to github.com is blocked, download the tarball from another machine, transfer it manually, and verify the sha256 by hand:

sha256sum bee-tui-x86_64-unknown-linux-gnu.tar.xz
# compare against the value in sha256.sum from the release page
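
If you'd rather let sha256sum do the comparison itself, one way (assuming sha256.sum uses the conventional "<hash>  <filename>" layout) is:

# Pick this artifact's line out of the manifest and verify it (exit 0 = match)
grep 'bee-tui-x86_64-unknown-linux-gnu.tar.xz' sha256.sum | sha256sum -c -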

Uninstall

The shell installer drops a marker file at $XDG_DATA_HOME/bee-tui/installed-files.txt (or ~/.local/share/bee-tui/installed-files.txt) listing every file it placed. Removing those files cleanly uninstalls. Or simply delete the binary:

rm "$(which bee-tui)"

Configuration at ~/.config/bee-tui/config.toml is not touched by the installer and stays put across upgrades.

First run

This page walks through what an operator sees the first time they launch bee-tui against a Bee node — the loading shapes, the warmup behaviour, and the few key bindings worth internalising before reading the per-screen pages.

Launch

With no config file, bee-tui talks to http://localhost:1633 out of the box:

bee-tui

The cockpit takes over the terminal in alt-screen mode — your shell prompt is preserved underneath and restored on quit. If alt-screen doesn't work (e.g., piped output, no TTY), the binary errors out cleanly with a one-line message rather than scribbling escape sequences into your scrollback.

What you see in the first second

 bee-tui   local @ http://localhost:1633   ping —   UTC HH:MM:SS
 [Health]  Stamps  Swap  Lottery  Peers  Network  Warmup  API  Tags  Pins  Manifest  Watchlist  FeedTimeline  Pubsub    :cmd · Tab · ? help
─────────────────────────────────────────────────────────────────────────────────────
HEALTH   local · http://localhost:1633     ping: —ms
 ⠋ loading…

 ·  API reachable                loading…
 ·  Chain RPC                    loading…
 ·  Wallet funded                loading…
 …
─────────────────────────────────────────────────────────────────────────────────────

┌ bee::http ──────────────────────────────────────────────────────────────────────────┐
│                                                                                     │
└─────────────────────────────────────────────────────────────────────────────────────┘

A clean first launch shows only the four "always-on" header fields. As soon as something is running in the background, v1.10+ appends awareness chips after the ping block:

 bee-tui   local @ http://localhost:1633   ping 4ms   UTC HH:MM:SS   subs 2   watch 1   alerts ●
  • subs N — active PSS / GSOC subscriptions (see S15 and the :pubsub-pss / :pubsub-gsoc verbs).
  • watch N — active :watch-ref daemons (see S13).
  • alerts ● — present whenever [alerts].webhook_url is set in config.toml; the green dot confirms outbound pinging is configured even when no alerts are firing.

Each chip is hidden when its count is zero (or alerts isn't configured), so the header stays calm on a fresh session and visibly busy when daemons are running.

Three things are happening in parallel:

  1. The watch hub is firing first requests. Each screen has one or more endpoint pollers; the cadence is per-resource (2 s for health, 5 s for topology, 30 s for swap, etc.).
  2. The spinner glyph in the header is rotating (⠋ ⠙ ⠹ ⠸ ⠼ ⠴ ⠦ ⠧ ⠇ ⠏) once per tick. If you see the spinner moving, the redraw loop is alive even if Bee hasn't responded yet.
  3. The bottom bee::http strip is empty until the first request lands. The first tick that sees an HTTP response appends a line.

What "503 syncing" looks like

If you launch bee-tui against a Bee node that's still in its own warmup, most endpoints return:

HTTP 503: Node is syncing. This endpoint is unavailable. Try again later.

bee-tui detects this case specifically and renders it on the header line in yellow (warn) rather than red:

syncing — Bee is still bootstrapping; this view will populate once it catches up

That tooltip is in theme::classify_header_error. You don't need to do anything; the screen will populate as soon as Bee finishes its bootstrap.
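
To confirm the syncing state outside the cockpit, a direct probe works. A sketch assuming the default localhost port, with /stamps standing in for any of the gated endpoints:

# /health usually answers 200 while data endpoints return 503 during sync
curl -s -o /dev/null -w 'health: %{http_code}\n' http://localhost:1633/health
curl -s -o /dev/null -w 'stamps: %{http_code}\n' http://localhost:1633/stamps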

What "Bee is unreachable" looks like

If Bee isn't running, or localhost:1633 is wrong, you'll see:

error: TCP connect failed: Connection refused

This is rendered red — it's a real problem, not transient. Common fixes:

  • Bee isn't running: start it (./bee start --config <path>)
  • Bee is on a different host: edit ~/.config/bee-tui/config.toml to point at the right URL
  • Auth token wrong / missing: see Configuration for the @env:VAR token form
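
A quick way to separate "Bee is down" from "wrong host / port" before touching the config (a sketch, assuming a Linux host with ss available):

# Does anything answer on the API port at all?
curl -sS --max-time 2 http://localhost:1633/health || echo "no listener on :1633"
# Is any process listening on 1633 locally?
ss -ltn 'sport = :1633'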

What you should do during warmup (S5)

If you launched bee-tui while your Bee node was still in its 25–60 minute cold start, the most useful screen is S5 Warmup. Tab to it (or type :warmup). It's a five-step checklist:

  1. Postage snapshot loaded
  2. Peer bootstrap (against ~50 peers)
  3. Kademlia depth stable (5-tick window)
  4. Reserve fill (reserve_size_within_radius / 65,536)
  5. Stabilization (terminal step keyed on is_warming_up=false)

The screen freezes the elapsed counter the moment Bee flips is_warming_up to false, so you can come back later and see how long the warmup actually took.

Keyboard basics

Internalise these five keys:

  Key          Effect
  Tab          Cycle to the next screen
  :            Open the command bar
  ?            Toggle the per-screen help overlay
  Enter        Drill (S2 batches, S6 peers — when a row is selected)
  q / Ctrl+C   Quit

Everything else is per-screen and lives in the ? overlay.

What's typical, what's not

After ~30 seconds against a healthy mainnet node:

  • S1 Health: eleven gates, mostly green ✓ checkmarks. One or two warns (⚠) is normal — bin saturation flickers, chain RPC may show Δ +1 block lag.
  • S6 Peers: 80–150 connected peers across a dozen-ish bins. The first two or three bins should be Healthy. Far bins (depth+5 onward) being Empty is expected.
  • S2 Stamps: usually 1–3 batches. Worst-bucket fill percentage is the headline number to watch — anything above 80 % is Skewed (yellow), above 95 % is Critical (red).
  • Bottom log pane (always visible): a constant stream of GET /status, GET /chainstate, and friends every 1–2 seconds. If this strip goes silent for >5 seconds, something is wrong with the cockpit (or your network), not with Bee.

If S1's "Bin saturation" gate is STARVING, that's the most common operator pain point — see the S6 Peers page for what to do about it.

Quitting

q quits cleanly. So does Ctrl+C. Both:

  • Cancel every in-flight HTTP request (the hierarchical CancellationToken propagates from the root)
  • Restore your terminal from alt-screen
  • Persist nothing implicitly — if you ran :context to switch profiles, that switch isn't sticky across launches; the default = true profile in config.toml always wins on next launch

Configuration

bee-tui's configuration is a single TOML file. With no config at all, the cockpit talks to http://localhost:1633 against a node with no auth token — the most common dev setup. As soon as you have a real Bee node with a Bearer token, or multiple nodes you want to switch between, you'll want a config.

Where the config lives

bee-tui looks for config.toml in this order, taking the first hit:

  1. The path in the BEE_TUI_CONFIG environment variable, if set
  2. $XDG_CONFIG_HOME/bee-tui/config.toml
  3. ~/.config/bee-tui/config.toml
  4. (built-in default — single local node, no token)

The directory does not need to exist before launch; bee-tui only reads, never writes. Create it yourself the first time:

mkdir -p ~/.config/bee-tui
$EDITOR ~/.config/bee-tui/config.toml

Minimal example

[[nodes]]
name    = "prod-1"
url     = "http://10.0.1.5:1633"
token   = "@env:BEE_TOKEN_PROD1"
default = true

That's the whole file: one node, named prod-1, with its auth token resolved from $BEE_TOKEN_PROD1 at startup.

Schema reference

[[nodes]] — the node array

You can declare any number of [[nodes]] entries. Exactly one should have default = true; that's the profile bee-tui loads on launch. The others are reachable via :context <name>.

  Field     Type     Required   Description
  name      string   yes        Identifier shown in the top bar and used by :context <name>. Keep short — prod-1, lab, staging.
  url       string   yes        Base URL of the Bee node, e.g. http://localhost:1633 or https://bee.example.com:1633. Trailing slash optional.
  token     string   no         Bearer token. May be the literal token string, or @env:VAR_NAME to resolve from an environment variable at startup. Empty / missing = no auth header sent.
  default   bool     no         If true, this profile is loaded on launch. Exactly one entry should have it.

[ui] — UI preferences

[ui]
theme           = "default"
ascii_fallback  = false

  Field            Type                          Default     Description
  theme            "default" | "mono"            "default"   Slot-based palette. default is vibrant green/yellow/red. mono is greyscale only — useful on terminals where colour is muted or distracting, or when piping to a recording tool that doesn't preserve colour.
  ascii_fallback   bool                          false       If true, every component renders ASCII glyphs (OK / X / ! / > / # / .) instead of Unicode (✓ ⚠ ✗ ▶ ▇ ░). Equivalent to passing --ascii on the command line. Use on Windows Terminal pre-Win11, screen readers, or SSH chains that mangle Unicode.
  refresh          "live" | "default" | "slow"   "default"   Polling cadence preset. live matches the original 2 s health / 5 s topology+tags rates (chatty; use when actively diagnosing). default doubles the fast-tier intervals (4 s / 10 s) — about half the request volume, no perceptible loss for monitoring. slow is minimal (8 s / 20 s / 60 s / 120 s) for leave-it-open-all-day operators.

Unknown values for theme fall back to "default" with a single tracing warning so a typo doesn't break startup.

[bee] — spawn Bee from bee-tui (optional)

When set, bee-tui launches Bee itself before opening the cockpit, captures its stdout + stderr to $TMPDIR/bee-tui-spawned-<ts>.log, waits for /health to respond, then enters the TUI. Quit sends SIGTERM to Bee's process group; a 5-second grace window is followed by SIGKILL if needed.

[bee]
bin    = "/home/operator/bee/dist/bee"
config = "/home/operator/bee/testnet.yaml"

  Field    Type   Required   Description
  bin      path   yes        Path to a bee binary. Bee is invoked as <bin> start --config <config>. Relative paths resolve against the working directory.
  config   path   yes        Path to the Bee YAML config the binary should be started with.

If [bee] is omitted, bee-tui falls back to its legacy mode: connect to whatever's already running on the URL of the default [[nodes]] entry. Use this when Bee runs under systemd / docker / k8s — bee-tui shouldn't spawn it then.

If Bee crashes mid-session, a red bee exited (code N) chip appears in the top bar. There is no auto-restart — the operator decides whether to investigate (the captured log is the place to start) or quit and relaunch.

CLI flags --bee-bin and --bee-config override the [bee] block. Both must be set together; setting only one errors at startup.
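
For a one-off run without touching config.toml, the two flags travel together on the command line (paths here are the same illustrative ones as the [bee] example above):

bee-tui \
  --bee-bin    /home/operator/bee/dist/bee \
  --bee-config /home/operator/bee/testnet.yaml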

[metrics] — Prometheus scrape endpoint (optional)

[metrics]
enabled = true
addr    = "127.0.0.1:9101"   # default; only opt into 0.0.0.0 if you mean it

Off by default. When enabled, bee-tui serves Prometheus exposition-format gauges on the configured address — the unique synthesised metrics (worst-bucket per batch, depth-vs-radius gap, predicted TTL, pending-tx age, bee-tui's own request percentiles) that Bee's own /metrics doesn't expose. See the Prometheus metrics reference for the full list.
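
Once enabled, the exporter is easy to sanity-check by hand. A sketch that assumes the gauges sit on the conventional /metrics path and carry a bee_tui prefix; check the metrics reference for the real names:

# Scrape the exporter directly and peek at the synthesised gauges
curl -s http://127.0.0.1:9101/metrics | grep -i 'bee_tui' | head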

[economics] — cost-context oracles (optional)

[economics]
gnosis_rpc_url      = "https://rpc.gnosischain.com"   # required by :basefee + Market tile gas line
enable_market_tile  = true                            # default false; turns on the S3 SWAP Market tile

Two facets:

  • Verbs (:price, :basefee) work without the section being present — :price always hits the public Swarm token service; :basefee errors with a clear "configure [economics].gnosis_rpc_url" hint when unset.
  • Market tile on S3 SWAP is opt-in via enable_market_tile = true. When on, bee-tui polls tokenservice.ethswarm.org (and, if gnosis_rpc_url is set, the Gnosis RPC) every 60 s and renders a one-line tile showing BZZ ≈ $X.XXXX and gas: B base + T tip = N gwei. Off by default — fresh installs make no outbound traffic.
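
If :basefee complains, it's worth confirming the configured Gnosis RPC answers at all. A plain JSON-RPC probe, independent of bee-tui:

curl -s -X POST https://rpc.gnosischain.com \
  -H 'content-type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_gasPrice","params":[]}'
# a {"result":"0x..."} reply means the RPC is fine; anything else is the thing to fix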

[durability] — chunk-graph walker tuning (optional)

[durability]
swarmscan_check  = true                                          # default false
swarmscan_url    = "https://api.swarmscan.io/v1/chunks/{ref}"   # default

Off by default — fresh installs make no outbound traffic to a third-party indexer. When swarmscan_check = true, every completed :durability-check (single-shot or via the :watch-ref daemon) probes swarmscan_url for an independent "does the network see this ref?" answer. The literal {ref} substring in the URL template is replaced with the hex-encoded reference at request time; the probe times out after 5 s.
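
The probe is easy to reproduce by hand when you want to double-check a verdict: same template, same substitution (the reference below is a placeholder):

REF=<64-hex-swarm-reference>   # placeholder, the ref you're checking
curl -s -m 5 "https://api.swarmscan.io/v1/chunks/${REF}"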

The result lands in DurabilityResult.swarmscan_seen and shows up in:

  • The verb's summary line: swarmscan: seen / swarmscan: NOT seen (or omitted when the probe was skipped or errored).
  • The S12 Watchlist row detail: · scan: seen / · scan: NOT seen.
  • --once durability-check's JSON: swarmscan_seen field (true / false / null).

A NOT seen answer doesn't flip the is_healthy() flag — it's an independent signal, useful for catching cases where the local node returns a chunk from cache that no peer in the network actually still has. Pair with [alerts].webhook_url to ping on gate transitions and use swarmscan_seen as a manual sanity check.

[pubsub] — pubsub history file + rotation (optional)

[pubsub]
history_file   = "/var/lib/bee-tui/pubsub.jsonl"  # off by default
rotate_size_mb = 64    # active file rolls over at this size; 0 disables (default 64)
keep_files     = 5     # retain .1 .. .5; older rotations unlinked (default 5)

Off by default — fresh installs don't write any pubsub messages to disk. When history_file is set, every PSS / GSOC frame delivered to S15 is also appended to the JSONL file (one JSON-encoded message per line) so overnight subscriptions can be analysed offline. The file is created with mode 0600 (owner-only) since payloads can be sensitive on multi-user hosts.

Rotation keeps disk usage bounded. When the active file crosses rotate_size_mb MiB, bee-tui renames it to <path>.1 (older rotations shift to .2, .3, …, .keep_files; oldest beyond keep_files is unlinked) and re-opens <path> empty. Concurrent watchers serialise through the same mutex that orders appends, so no rename races. Set rotate_size_mb = 0 to disable rotation (file grows unbounded).

Pair with :pubsub-replay <path> to load a prior session's JSONL back into S15 for visual analysis without restarting any subscription.
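
Because the history is plain JSONL, ordinary shell tooling covers most offline analysis. A sketch; the fields inside each frame depend on the message, so inspect one before scripting against it:

# How many frames landed overnight, and what does one look like?
wc -l /var/lib/bee-tui/pubsub.jsonl
head -n 1 /var/lib/bee-tui/pubsub.jsonl | jq .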

[alerts] — webhook ping when a health gate flips (optional)

[alerts]
webhook_url    = "https://hooks.slack.com/services/T000/B000/XXX"
debounce_secs  = 300   # default; per-gate cool-down so a flapping gate doesn't pin Slack

Off by default — without webhook_url, no outbound traffic. When set, every health-gate transition (e.g. Reachability: Pass → Fail, StorageRadius: Warn → Pass, Stamp TTL: Pass → Warn when a batch crosses the 7-day topup-planning threshold) becomes one POST with a Slack/Discord-compatible {"text": "..."} body. Transitions to or from Unknown (data-not-loaded-yet) are suppressed so cockpit startup never spams the channel. After firing for gate X, no further alert for X until debounce_secs elapses, regardless of how many times that gate flapped in between.
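
Before relying on it, you can fire a test message at the configured URL with the same {"text": ...} shape bee-tui sends, which confirms the hook and the channel are the ones you meant:

# WEBHOOK_URL = the same value as [alerts].webhook_url
curl -s -X POST "$WEBHOOK_URL" \
  -H 'content-type: application/json' \
  -d '{"text":"bee-tui alert test (ignore)"}'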

CLI overrides

Three command-line flags override the config file:

bee-tui --ascii        # forces ascii_fallback = true
bee-tui --no-color     # forces theme = "mono"
NO_COLOR=1 bee-tui     # same as --no-color, per <https://no-color.org>

Resolution order (highest priority first):

  1. --ascii flag → ascii glyphs (regardless of config)
  2. --no-color flag OR NO_COLOR env (any non-empty value) → mono palette
  3. [ui].ascii_fallback from config → ascii glyphs
  4. [ui].theme from config → palette

The @env:VAR token form

Every Bee API endpoint that's not explicitly public requires a Bearer token. Hard-coding the token in config.toml is fine for a lab node, but for production it's the wrong shape — the file lands in dotfiles backups, screenshots, support threads, etc. The @env:VAR form keeps the token out of the file:

token = "@env:BEE_TOKEN_PROD1"

bee-tui reads $BEE_TOKEN_PROD1 once at startup and uses the resolved value for every request. The literal string @env:BEE_TOKEN_PROD1 is never logged, never captured in :diagnose bundles, never sent to Bee. If the variable is unset, bee-tui logs a tracing warning and proceeds without an auth header (the request will then 401).

You can mix forms across nodes — one @env: and one literal in the same config is fine.
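
In practice that means exporting the variable before launch, from a shell profile, a systemd Environment= line, or a secrets-manager wrapper. A minimal sketch (pass is just one example of a secret store):

export BEE_TOKEN_PROD1="$(pass show swarm/prod-1-api-token)"
bee-tui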

Multi-node setups

[[nodes]]
name    = "prod-1"
url     = "http://10.0.1.5:1633"
token   = "@env:BEE_TOKEN_PROD1"
default = true

[[nodes]]
name  = "prod-2"
url   = "http://10.0.1.6:1633"
token = "@env:BEE_TOKEN_PROD2"

[[nodes]]
name = "lab"
url  = "http://localhost:1633"

[ui]
theme = "default"

Launch picks prod-1 (the default). At runtime, switch with:

  • :context prod-2 — swap to the second prod node
  • :context lab — swap to the local lab node
  • :context — list every configured profile name

The switch is fast (no restart) but not stateful across launches — every run starts on the default = true profile.

See :context for the deep dive on what's preserved vs reset on switch.

Validating your config

If bee-tui fails to start with a config error, the message is the first thing on stderr — common ones:

  Error                                               Fix
  no Bee node configured (config.nodes is empty)      Add at least one [[nodes]] entry.
  no default node selected                            Mark exactly one [[nodes]] with default = true.
  invalid url: …                                      Quote URLs that contain ports: url = "http://10.0.1.5:1633" (an unquoted URL isn't valid TOML, and the resulting parse error is confusing).
  unknown theme name "X" — falling back to default    Just a warning; not fatal. Set theme to "default" or "mono".

To dump the resolved config for debugging:

:diagnose

The bundle in $TMPDIR/bee-tui-diagnostic-<ts>.txt includes the active profile name and endpoint URL. Tokens are never captured — they live in HTTP headers, not URLs.

S1 — Health gates

The first screen, default view on launch. Eleven gates with a four-state status ladder (Pass / Warn / Fail / Unknown), each carrying a tooltip that encodes tribal knowledge about why a gate fails the way it does.

Why this screen exists

Bee returns plenty of data through /health, /status, /wallet, /redistributionstate, and a handful of other endpoints. The problem is calibration: a value of storageRadius = 7 on a node with committedDepth = 8 looks broken until you know that storageRadius decreases only on the 30-minute reserve worker tick (bee#5428). Without that context, operators stare at it for ten minutes wondering what they did wrong.

S1 is the screen that hands you that calibration up front.

The eleven gates

  #    Gate                         What's checked                                                                          Source
  1    API reachable                /health returns 200 within timeout                                                      HealthSnapshot.last_ping
  2    Chain RPC                    Block tip vs chain tip from /chainstate (Δ ≤ a few blocks is healthy)                   ChainState.block / chain_tip
  3    Wallet funded                BZZ balance > 0 AND native balance > 0 from /wallet                                     Wallet.bzz_balance / native_token_balance
  4    Warmup complete              is_warming_up = false from /status                                                      Status.is_warming_up
  5    Peers                        Connected count from /health                                                            HealthSnapshot.connected_peers
  6    Reserve                      reserve_size_within_radius vs 65,536 (Bee's reserve target at depth)                    RedistributionState.reserve_size_within_radius
  7    Bin saturation               Per-bin connected counts vs the bee-go SaturationPeers=8 constant for relevant bins     Topology.bins[].connected
  8    Healthy for redistribution   is_healthy = true from /redistributionstate                                             RedistributionState.is_healthy
  9    Not frozen                   is_frozen = false from /redistributionstate                                             RedistributionState.is_frozen
  10   Sufficient funds to play     has_sufficient_funds = true from /redistributionstate                                   RedistributionState.has_sufficient_funds
  11   Stamp TTL (v1.4.0+)          Worst-batch TTL across usable batches from /stamps. Pass when all usable batches have TTL > 7d; Warn when any drops under the 7d planning threshold; Fail when any drops under the 24h urgent threshold. Pending batches (usable=false) and nodes with zero usable batches show Unknown — operators on a fresh node would be surprised by a green stamp gate when no batches exist.   StampsSnapshot.batches[].batch_ttl

The status ladder

  Status    Glyph   Meaning
  Pass      ✓       Gate is satisfied. Move on.
  Warn      ⚠       Something off but not blocking — bin saturation flickering, chain RPC lagging by Δ +1 block. Keep an eye, no action required.
  Fail      ✗       Real problem requiring action. Read the tooltip on the next line.
  Unknown   ·       Snapshot hasn't loaded yet (cold start) OR the relevant endpoint returned no data.

Status is rendered both as a glyph and a colour (green / yellow / red / dim), so colourblind operators or --ascii users still see the ladder via the glyphs.

Reading a gate

Each gate occupies one line, plus an optional tooltip continuation under it:

 ⚠  Bin saturation               2 starving: bin 4, bin 5
        └─ manually `connect` more peers or wait — kademlia fills bins gradually

The first column is the status glyph. The middle is the gate label, padded to align. The right column is the value — the specific number / string driving the status. The continuation line (└─ in the default theme) is the why — a one-sentence explanation of what to do or what it means.

Tooltips only appear when there's something useful to say. A green gate with Pass status doesn't need one.

Common scenarios

"Why is my Reserve gate failing?"

Look at the value. If it reads 12,345 chunks (in-radius: 12,345) · radius 8, your reserve is filling but hasn't reached the 65,536 chunk target Bee uses at depth. This is normal during warmup — wait. The Warmup screen (S5) tracks this explicitly.

"Bin saturation says Starving but I just connected to 12 peers"

The gate looks at per-bin counts, not total peer count. You may have 100 connected peers all sitting in bin 0; the bins near your kademlia depth (where chunks actually replicate) might still have 3-4 peers each. Tab to S6 Peers and look at the bin saturation strip — that's the canonical view.

"Chain RPC shows Δ +5"

Your local Bee thinks the chain tip is 5 blocks ahead of the last block it processed. Small lags (Δ +1, Δ +2) flicker constantly and are normal. Sustained lag (Δ +5 for several minutes) means your Gnosis RPC is slow or dropping responses. Check the upstream RPC; Bee can't fix what RPC sends it.
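
The raw numbers behind the Δ are one query away; field names here are as recent Bee versions expose them on /chainstate:

curl -s http://localhost:1633/chainstate | jq '{block, chainTip}'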

"Wallet funded is failing"

If BZZ is zero, you can't issue postage stamps and uploads won't work. If native is zero, you can't pay gas — chequebook operations and stake / redistribution will all stall. Top up the operator wallet from a faucet (testnet) or your treasury (mainnet).
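
Checking the raw balances outside the cockpit is similarly cheap (recent Bee versions expose both on /wallet):

curl -s http://localhost:1633/wallet | jq '{bzzBalance, nativeTokenBalance}'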

"Healthy for redistribution = Fail but Not frozen = Pass"

is_healthy looks at multiple internal preconditions (reserve filled, depth stable, recent samples). A node can be unfrozen but still un-healthy during the first post-warmup window. Wait one or two redistribution rounds (~5 minutes); if it stays un-healthy, drop down to S4 Lottery which has a six-state stake card with the actual reasoning tree.

Snapshot cadence

S1 polls four endpoints at 2-second intervals:

  • /status — warmup, peer count
  • /wallet — BZZ + native balances
  • /chainstate — block + chain tip
  • /redistributionstate — frozen / healthy / funds

The 2 s cadence is fast enough that operator-visible state changes feel live, slow enough not to hammer Bee. Per-bin data for the saturation gate comes from the /topology poller (5 s cadence) on the shared watch hub. The Stamp TTL gate reads the S2 Stamps watch (5 s cadence) — TTL counts down in seconds, so a slower poll is fine.

Webhook alerts (v1.4.0+)

When [alerts].webhook_url is set in config.toml, bee-tui diffs the gate states between ticks and POSTs a Slack / Discord-compatible payload on every transition worth pinging on (per-gate Pass↔Fail and Pass↔Warn flips, with Unknown silenced so cold-start doesn't fire noise). Each alert carries the gate label, the from / to status, and the why-tooltip — so the receiving channel gets enough context to triage without opening the cockpit.

A per-gate debounce window ([alerts].debounce_secs) suppresses thrash when a gate flickers around its threshold. The top bar shows alerts ● whenever a webhook is configured, so operators see at a glance whether outbound pinging is on. See config.md for the full block.

Keys

S1 has no screen-specific keys. The global keymap (Tab, ?, :, q) covers everything.

S2 — Stamps + bucket drill

Postage batch table with the volume + duration framing the Bee community is moving toward (bee#4992 is retiring depth + amount), plus a per-batch drill that surfaces which bucket is about to overflow.

Why this screen exists

Bee's /stamps endpoint exposes a utilization field that operators routinely misread. It's documented in OpenAPI as "the average usage of the batch" — but the implementation stores MaxBucketCount: the peak fill across all 2^bucket_depth buckets. A batch with 1,023 buckets at 0 chunks each and one bucket at 64 chunks reads utilization = 64, not the ~0.06 average the wording implies.

Operators see "utilization 14 %" and think they have headroom. Then their next upload fails with ErrBucketFull because the worst bucket is actually at 95 %.

S2 puts the worst-bucket fill bar front and centre. The drill goes deeper: it shows the full distribution, so two batches with the same headline utilization reveal whether the load is concentrated in one bucket or spread across many.

The list view

 LABEL                BATCH        VOLUME      WORST BUCKET                TTL         STATUS
 prod-mainnet         abc123de…    16.0 GiB    ▇▇▇▇▇▇░░  78% (50/64)       47d 12h     I ✓
 spillover            def456ab…    16.0 GiB    ▇▇▇▇▇▇▇▇  98% (63/64)       12d  3h     I ⚠ skewed
        └─ worst bucket 98% > safe headroom — dilute or stop using.
 fresh-buy            789bc123…    16.0 GiB    ░░░░░░░░   0% (0/64)         1d  0h     I ⏳ pending
        └─ waiting on chain confirmation (~10 blocks).
  Column         Meaning
  ▶              Cursor — marks the row Enter would drill into
  LABEL          Operator-set label, or (unlabeled)
  BATCH          First 8 hex chars of the batch ID
  VOLUME         Theoretical capacity = 2^depth × 4 KiB
  WORST BUCKET   Fill bar + percentage + utilization / BucketUpperBound raw count
  TTL            Days + hours remaining at current paid balance
  I/M            I = immutable, M = mutable
  STATUS         Five-state ladder (see below)

The status ladder

  Status     Glyph   When
  Pending    ⏳      usable = false — chain hasn't confirmed the batch yet (~10 blocks).
  Healthy    ✓       Worst bucket < 80 %, batch usable, TTL > 0.
  Skewed     ⚠       Worst bucket ≥ 80 % — above the safe headroom line. Dilute or stop using.
  Critical   ✗       Worst bucket ≥ 95 %. The very next upload may fail.
  Expired    ✗       batch_ttl ≤ 0 — paid balance exhausted. Topup or stop using.

Immutable vs mutable — bee#5334

The I/M column matters more than it looks. Immutable batches reject upload when a bucket overflows (ErrBucketFull from Bee). Mutable batches silently overwrite the oldest chunks in the full bucket. The Critical tooltip splits accordingly:

  • Immutable: "immutable batch will REJECT next upload at this bucket."
  • Mutable: "mutable batch will silently overwrite oldest chunks."

If you're using mutable batches and the cockpit shows Critical, your data is probably still on the network — but newer uploads to that bucket are dropping older ones. There's no warning from Bee.

The drill (Enter on a row)

Pressing Enter fires GET /stamps/<id>/buckets and renders the result as a histogram + worst-N table:

  depth 22   bucket-depth 16   per-bucket cap 64   65,536 buckets
  total chunks 421 / 4,194,304   worst bucket 98%

  FILL %       COUNT   DISTRIBUTION
  0 %          65,400  ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇
  1 – 19 %         88  ▇▇▇▇▇
  20 – 49 %        24  ▇▇
  50 – 79 %        12  ▇
  80 – 99 %         8
  100 %             4

  WORST BUCKETS
  #3         64 / 64    100%
  #17        63 / 64    98%
  #101       60 / 64    93%
  ...

Reading the histogram

The six bins are sorted least-to-most full. The bar widths are scaled to the largest bin, so the operator's eye locks onto the densest range. Bin colours follow the fill:

  • Pass (green): 0–79 %
  • Warn (yellow): 80–99 %
  • Fail (red): 100 %

If your batch is failing uploads, the red bin (100%) tells you exactly how many buckets are saturated. If that count is small (1-4), the load is concentrated and a :dilute would help — diluting bumps the batch depth, doubling every bucket's capacity and halving its fill percentage. If it's large (50+), the batch is genuinely full and no dilute will save it; cut a new batch.
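
For reference, the dilute itself is a single Bee API call, PATCH /stamps/dilute/<batchID>/<newDepth>, with the new depth strictly greater than the current one. A sketch with placeholders; on older Bee versions the call lives on the debug-API port instead:

BATCH=<full-64-hex-batch-id>    # the 8-char prefix from the table isn't enough
curl -s -X PATCH "http://localhost:1633/stamps/dilute/${BATCH}/23"   # e.g. depth 22 -> 23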

The worst-N table

Up to 10 entries, sorted by collisions descending, ties broken by bucket-id ascending (stable across polls). Zero-count buckets are filtered out. If your batch has fewer than 10 non-zero buckets, the table shows whatever's there.

The bucket IDs themselves are deterministic — bucket i holds chunks whose first bucket_depth bits hash to i. This isn't actionable for the operator (you can't choose which bucket a chunk lands in), but knowing it explains why saturation is uneven: bucket selection is hash-driven, not load-balanced.

Common scenarios

"Worst bucket 95 % but I haven't uploaded much"

You probably uploaded a structured dataset — say, a directory of files with similar names. Mantaray packs related entries into the same chunks; if their hashes happen to share the same bucket_depth prefix, they all hit the same bucket. The drill will show one or two saturated buckets with the rest near-empty. Solution: dilute the batch, or for very skewed cases, cut a new batch and restart the upload.

"All buckets are around 60 %, batch reads 60 % utilization"

You've been uploading random / well-distributed data. The batch is genuinely 60 % full. Watch the worst-bucket value; once it crosses 80 %, plan a dilute or topup.

"Pending for more than 10 minutes"

Batches confirm after Bee sees the batch-create transaction land on chain. If the operator wallet has insufficient gas, the transaction stays in the mempool. Tab to S8 API → pending transactions; if the buy is there with pending > 5min, top up native balance.

"TTL is dropping faster than expected"

batch_ttl is a function of paid_balance / current_price. If Bee's current_price (from /chainstate) goes up, every existing batch's TTL drops proportionally. This is normal network repricing — you didn't lose money, the batch's remaining lifetime just got shorter. Topup if you need it to last longer.
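
To watch the repricing effect directly, the raw TTLs are one query away (field names per Bee's /stamps response, TTL reported in seconds):

curl -s http://localhost:1633/stamps | jq -r '.stamps[] | "\(.batchID[0:8])  ttl \(.batchTTL)s"'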

Keys

  Key        Effect
  ↑↓ / j k   Move row selection
  Enter      Drill into selected batch
  Esc        Close drill
  ?          Toggle help overlay

Snapshot cadence

S2 polls /stamps every 5 s — slow-changing data (TTL drifts at chain rate, utilization grows at upload rate). The drill fires /stamps/<id>/buckets on demand and is not refreshed automatically — close + re-open the drill to refresh, or wait for the next list-view tick.

S3 — SWAP / cheques

Three stacked panes covering the chequebook (off-chain accounting layer Bee uses to settle inter-peer payments) and its on-chain counterpart, settlements.

Why this screen exists

Bee's pricing protocol means every chunk forwarded between peers gets paid for in BZZ. Most of that payment doesn't go on-chain — peers exchange cheques off-chain and only cash them in periodically. This means at any moment:

  • Your chequebook balance holds total + available BZZ
  • Peers have received cheques from you that haven't been cashed yet (uncashed debt)
  • You've received cheques from peers, also uncashed
  • Net per peer = received − sent

S3 surfaces all four numbers so operators can answer "do I need to cash out?" and "is any one peer way out of balance?".

SWAP / CHEQUES   contract 0xCE3EE0201A1A8296E8bC2BE9f912eC21708fd615

The contract address is the on-chain chequebook — useful for pasting into a block explorer. It's surfaced via bee-rs 1.5's chequebook_address endpoint and only shown once it's fetched (silently absent during the first second after launch).

Pane 1 — Chequebook card

  Chequebook  ✓  available BZZ 8.0000  /  total BZZ 10.0000  (80%)

Three states for the card:

  Status      When                       What it means
  Healthy ✓   available / total ≥ 50 %   Plenty of headroom. Operations work.
  Tight ⚠     available / total < 50 %   Uncashed debt is eating into headroom. Cashing out may be wise — see Pane 2.
  Empty ✗     total = 0                  Chequebook hasn't been funded. Cheque-based settlement is unavailable; only time-based pseudo-settlement works.
  Unknown ·   snapshot not loaded        Cold start — wait.

The percentage in parens is available / total rounded.

Pane 2 — Last received cheques

  PEER          PAYOUT          ISSUED
▸ cccccc…cccc   BZZ 1.5000      8412930
    peer 0xcccccc8e2f1a40d7a0bf6e1c0a8a2c91e3b…
  bbbbbb…bbbb   BZZ 0.7500      8412901
  aaaaaa…aaaa   never           —

Sort: payout descending, with peers that have never sent us a cheque (never) sinking to the bottom. Absence is signal too — peers we've never been paid by are visible so the operator can see the split.

If you want to cash out, this is the table to look at. The PAYOUT column is the cumulative sent-to-us amount; cashing moves it from off-chain to on-chain.

The cursored row (▸) prints a peer 0x<full> continuation line so the full peer address is reachable for copy without scrolling away (added in v1.9.1 — early versions only showed the truncated cccccc…cccc form, which was insufficient when you actually needed to paste it into a block explorer).

Pane 3 — Per-peer settlements

  PEER          RECV         SENT         NET
▸ bbbbbb…bbbb   BZZ 8.0000   BZZ 1.5000   +6.5000
    peer 0xbbbbbb4c9e7a31f5d2c08e914a72bef0a3b…
  cccccc…cccc   BZZ 0.4000   BZZ 0.9000   -0.5000
  ddddd…dddd   BZZ 2.1000   BZZ 1.9000   +0.2000  ⚠

Sort: |net| descending so the most out-of-balance peer is at the top. A ⚠ flag marks rows where |net| > 0.5 BZZ — that's where cashout pressure builds up first. The cursored row gets the same peer 0x<full> continuation treatment as Pane 2 (v1.9.1).

The + / - signs on net read at a glance:

  • + = peer owes us (we forwarded their chunks; they paid via cheque)
  • - = we owe peer (they forwarded our chunks; we paid via cheque)

A persistent positive net with one peer and a high payout in Pane 2 = cash that cheque. A persistent negative net = you're sending more chunks than you're storing for them; might mean your chequebook funding is the bottleneck on uploads.

Time-based settlements

Bee 2.7+ also does time-based pseudo-settlement (refresh-rate based, not cheque-based). The header line shows the totals:

  time-settlements   total received BZZ 12.5  ·  total sent BZZ 11.2

These don't show up per-peer in Pane 3 — they're aggregated at the top of the snapshot.

Market tile (v1.4.0+, opt-in)

Setting [economics].enable_market_tile = true in config.toml appends a fourth tile to the screen with cost-context numbers the chequebook itself doesn't carry:

  Market   xBZZ ≈ $0.4321   ·   gas 12.3 base + 1.0 tip = 13.3 gwei

  • The xBZZ price comes from a public token service (no key, no auth). Cached for 60 seconds.
  • The basefee and tip read the configured Gnosis JSON-RPC endpoint ([economics].gnosis_rpc_url, required for the gas half of the tile). Same 60 s cadence.

The tile is always visible when enabled — no Unknown ladder — because the source feeds are external and a transient miss shouldn't blank the screen. Stale numbers render in dim; fresh numbers in info. The two underlying verbs :price and :basefee print the same numbers on demand, useful for a quick glance without flipping a config knob.

Common scenarios

"Tight chequebook"

Look at Pane 2's top peer. If their PAYOUT is > 0, you've already received cheques from them — cashing those out moves the BZZ from "uncashed debt" to "available chequebook balance". The cashout is on-chain (gas costs), so don't do it for tiny amounts.

"All my settlements are negative"

You're forwarding more chunks than you're storing, and paying peers via cheques to do so. This is normal for low-radius nodes (you're closer to roots of the kademlia tree than to leaves). If it's bothering you, increase your radius / depth.

"One peer is way out of balance, +5 BZZ"

That peer has been paying you reliably. Look at their cheque in Pane 2 — if it's a single big payout, it's a normal infrequent-but-bulk pattern. If it's many small ones, they're a high-volume forward partner.

"Total received BZZ is huge but available is tiny"

Most of the received BZZ is uncashed cheques sitting in Pane 2. Cash some out (see the next page on commands — there's no in-cockpit cashout, but you can curl POST /chequebook/cashout/<peer>).
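
That cashout looks like this (the peer address is a placeholder; it's an on-chain transaction, so save it for amounts that justify the gas):

PEER=<64-hex-peer-overlay>
curl -s -X POST "http://localhost:1633/chequebook/cashout/${PEER}"
# poll the same path with GET afterwards to watch the cashout transaction status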

Snapshot cadence

S3 polls four endpoints at 30 s — chequebook + settlement state changes at chain rate, no point hammering:

  • /chequebook/balance
  • /chequebook/cheque (last received per peer)
  • /settlements
  • /timesettlements
  • /chequebook/address (once-ish — header data)

Keys

S3 has no screen-specific keys (no drill yet — peer drill on this screen would duplicate S6). Use S6's peer drill if you want per-peer cheque + settlement detail with ping RTT included.

S4 — Lottery / redistribution

Three panes covering the storage incentives game (the redistribution lottery): round timeline, anchor summary, and a six-state stake card. Plus an on-demand rchash benchmark.

Why this screen exists

Bee earns BZZ through the redistribution lottery — every 152 blocks, eligible nodes commit a hash of a sample of their reserve, reveal it, and (if they win the round) claim the reward. The mechanics span four scattered RedistributionState booleans (is_frozen, is_healthy, has_sufficient_funds, is_fully_synced), the staked amount, and the per-round LastWonRound / LastPlayedRound / LastSelectedRound / LastFrozenRound anchors.

When an operator asks "why am I not earning rewards?", neither /redistributionstate nor /stake alone answers. S4 reduces it all to a single screen with explicit reasoning trees.

Pane 1 — Round timeline

ROUND 4127   block 234,512  ·  in round 87/152
  commit  ████████████░░░░░░░░░░░░  blocks 1-38
  reveal  ████████████████████████  blocks 39-76
  claim   ████████████░░░░░░░░░░░░  blocks 77-114
  idle                              blocks 115-152

The 152-block round is split into three on-chain phases (plus an idle tail) per pkg/storageincentives/agent.go:

  • Commit (blocks 1-38): submit a hash of your reserve sample
  • Reveal (blocks 39-76): reveal the sample
  • Claim (blocks 77-114): if won, claim the reward
  • Idle (blocks 115-152): wait for next round

The progress bar shows where the current round is. Whether you committed / revealed depends on your stake state + RedistributionState booleans — see the stake card below.

Pane 2 — Anchor summary

ANCHORS
  Last won            round 4115     12 rounds ago
  Last played         round 4126     this round
  Last selected       round 4126     this round
  Last frozen         round —        never

Four anchors with human Δ strings:

  • Last won: the most recent round you claimed a reward
  • Last played: the most recent round you committed a hash
  • Last selected: the most recent round Bee said the network selected your sample (precondition for winning)
  • Last frozen: the most recent round you were frozen out (penalty for misbehaviour)

The Δ string ("12 rounds ago", "never", "this round") calibrates the cadence. A fresh node should be playing every round once warm; if Last played is many rounds behind Last selected, you're missing commits.

Pane 3 — Stake card

The most operator-relevant pane. Six states, each with an explicit reason:

  State               When                                               What to do
  Healthy ✓           Stake > 0, not frozen, healthy, sufficient funds   Nothing. You're playing rounds correctly.
  Unstaked ·          Stake = 0                                          Run bee stake deposit <amount> to enter the lottery.
  Frozen ✗            is_frozen = true                                   Penalty round. Wait it out (variable duration; check Last frozen anchor for the round you got frozen).
  InsufficientGas ⚠   has_sufficient_funds = false                       Native balance too low to play. Top up the operator wallet.
  Unhealthy ⚠         is_healthy = false, other booleans OK              Reserve isn't filled / depth not stable / fully synced still false. Most common during warmup; see S5.
  Unknown ·           Snapshot not loaded                                Cold start.

The reasoning tree fires the first match top-to-bottom, so "InsufficientGas + Unhealthy" reads as InsufficientGas (more actionable).

Pane 4 — Rchash benchmark (on-demand)

Press r to fire GET /rchash/<depth>/<anchor1>/<anchor2> where:

  • depth = current storage_radius
  • anchor1, anchor2 = deterministic so repeat measurements compare cleanly

The result shows the duration vs the 95-second commit window deadline:

RCHASH BENCHMARK
  duration   3.4s
  hash       0xabcd12ef…
    hash 0xabcd12ef3a4b5c6d7e8f9a0b1c2d3e4f5a6b7c8d9e0f1a2b3c4d5e6f7a8b9c0d
  budget     ✓ well under 95s commit deadline

The hash 0x<full> continuation line below the truncated form is from v1.9.1 — before that, only the 8-char prefix was visible, which made copying the full hash for a Bee bug report or block-explorer search a non-starter. The truncated form stays in the table column for visual scan; the full hex is one line below.

If duration approaches or exceeds 95 s, your reserve is too slow to commit in time. The lottery will silently skip your node every round. Possible causes:

  • Reserve is on a slow disk (HDD, network-attached storage)
  • Bee is competing with other I/O (database, video)
  • Storage node has very high committedDepth

Lifecycle is owned by an internal mpsc inside the Lottery component, not a global Action — so r doesn't pollute other screens, and a benchmark already in flight is no-op'd by re-pressing r.

The "why am I not earning rewards?" decision tree

  1. Stake card says Unstaked → deposit stake
  2. Stake card says InsufficientGas → top up native balance
  3. Stake card says Frozen → wait it out
  4. Stake card says Unhealthy → check S1 for which gate is failing; if Reserve isn't filled, see S5 Warmup
  5. Stake card says Healthy but Last won is many rounds behind Last played → press r and check the rchash duration; if it's near 95 s, your reserve is too slow
  6. Healthy + good rchash + still not winning → the lottery is stochastic; some rounds you don't get selected. Watch Last selected vs Last won — if Selected is recent but Won is old, the network reveals didn't include your sample (rare but possible).

Snapshot cadence

S4 polls two streams:

  • /redistributionstate (existing 2 s health stream — shared)
  • /stake (30 s, low-rate)

The rchash benchmark is on-demand only.

Keys

  Key   Effect
  r     Fire / re-fire rchash benchmark
  ?     Toggle help overlay

No selection cursor (yet) — the screen is mostly cards, not a list.

S5 — Warmup checklist

Five-step ladder showing where a new node is in the warmup process. Each step has a clear "done" criterion and a detail string that surfaces the current value against the target — so operators don't just know "Reserve fill is in progress" but "12,345 / 65,536 chunks (19 %)".

Why this screen exists

A fresh Bee node won't earn rewards for the first ~30 minutes. That's normal: the lottery only includes nodes that pass is_warming_up = false plus a handful of internal checks (reserve filled to depth, kademlia depth stable, sample worker healthy). But Bee returns one boolean — is_warming_up — and the operator has no way to see which of the underlying preconditions is the holdup.

S5 unrolls the boolean. Five steps, each with its own target, elapsed-time tracking, and a one-line detail. If a node is stuck at "Reserve fill 14 %" 25 minutes in, you know the issue isn't lottery code — it's that chunks aren't arriving fast enough to fill the reserve. (Slow disk, slow network, low peer count, tiny radius — the rest of the cockpit will tell you which.)

The five steps

  #   Step                      Target                                          Source
  1   Postage snapshot loaded   /stamps returned ≥ 1 batch                      StampsSnapshot.batches
  2   Peer bootstrap            connected_peers ≥ PEER_BOOTSTRAP_TARGET         Status.connected_peers
  3   Kademlia depth stable     Depth unchanged across the observation window   Topology.depth + internal stability tracker
  4   Reserve fill              reserve_size_within_radius ≥ 65,536             Status.reserve_size_within_radius
  5   Stabilization             is_warming_up = false                           Status.is_warming_up

The order is roughly chronological — postage loads almost immediately, peer bootstrap takes seconds, depth settles in ~1–2 minutes, reserve fill is the slow step (10–30 min on a healthy mainnet node), and the final stabilization flag flips shortly after reserve hits target.

Step state ladder

Each row carries one of four states:

  State        Glyph     Meaning
  Done         ✓         Step satisfied. Move on.
  InProgress   ▒ (N %)   Step is partway done; N % shows the current fraction toward target.
  Pending      ⏳        Step hasn't moved yet (e.g. reserve is still 0 chunks).
  Unknown      ·         The relevant snapshot hasn't loaded yet. Cold start.

Reading a row

  ✓  Postage snapshot loaded         3 batch(es)
  ▒  Peer bootstrap                  47 connected (target ≥ 64)              74%
  ▒  Kademlia depth stable           depth 8 (still settling)                50%
  ▒  Reserve fill                    12,345 / 65,536 in-radius chunks        19%
  ⏳  Stabilization                    Bee still reports is_warming_up=true

The detail column is the value — Bee's actual numbers, not a paraphrase. You can compare run-to-run, screenshot it for support threads, and not worry that the cockpit is hiding information. The right edge has a percentage progress bar where applicable.

The header line shows the elapsed wall-clock time since the cockpit first observed is_warming_up = true:

WARMUP CHECKLIST   elapsed 14m 23s

That elapsed counter is captured at first observation and frozen the moment warmup completes — so once the node finishes warming, the checklist stays useful as a record of how long warmup took, with all five rows green.

Common scenarios

"Reserve fill stuck at single-digit %"

The reserve only fills as peers push relevant chunks to your node. If reserve is climbing slowly (or not at all):

  • Drop to S6 Peers and check the bin saturation strip. If bins near your kademlia depth are red ("Starving"), you don't have enough peers near your address space to receive chunks.
  • Check S1 Health gates 7 (Bin saturation) and 5 (Peers). Both should be green for reserve to fill at a normal rate.
  • A skewed dataset on the network can also cause uneven fill. This usually self-corrects within an hour.

"Peer bootstrap stuck at 12 / 64"

Either the node hasn't found bootnodes (check S7 Network for a public address and /addresses connectivity), or it's NAT-trapped. AutoNAT will report Private on S7. Operators behind double-NAT typically stall here.

"Kademlia depth bouncing 7 → 8 → 7"

Normal during the first 60–90 seconds. The "stability" detector waits for the depth to hold for an observation window before calling it stable. If it's bouncing for >10 minutes, check S6 Peers — depth instability is usually peer churn.

"Postage snapshot loaded says no batches"

Bee will warm up without postage, but you can't upload anything until you buy a batch. Tab to S2 Stamps, run bee postage buy <amount> <depth> from a separate shell, wait ~10 blocks for confirmation, and the row will go green.
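
If you'd rather hit the API than the CLI, a batch purchase is a single POST. Amount and depth below are placeholders, and on older Bee versions the call lives on the debug-API port:

# POST /stamps/<amount>/<depth> returns the new batchID once the tx is submitted
curl -s -X POST "http://localhost:1633/stamps/100000000/22"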

"Stabilization says complete but Reserve says 47 %"

Bee's is_warming_up flag flips once the minimum preconditions are satisfied — it doesn't actually require a full reserve. Reserve will keep filling while the lottery is already enabled. This means S4 Lottery's stake card may go healthy before reserve is full; that's fine.

Snapshot cadence

S5 piggy-backs on the streams S1 already runs — no extra HTTP calls:

  • /status (2 s) — warmup, peers, reserve
  • /topology (5 s) — depth + stability tracking
  • /stamps (5 s) — first batch detection

The depth-stability tracker is internal to the Warmup component; it watches the topology snapshots and only flips the step to Done after the value has held for the observation window.

Keys

S5 has no screen-specific keys. The global keymap (Tab, ?, :, q) covers everything.

S6 — Peers + bin saturation + drill

The screen most operators end up living on. A 32-row bin strip showing kademlia health at a glance, a peer table sorted by bin / latency, and a per-peer drill that fans out four endpoints in parallel.

Why this screen exists

Bee's /topology returns 32 bins of peer data, with per-bin connectedPeers, disconnectedPeers, and a fair number of per-peer metric fields (latency EWMA, session direction, reachability). The relevant numbers are scattered across 4-deep JSON nesting and most of them are noise on any given day — operators want three things:

  1. Are the bins near my depth saturated? This determines whether the reserve is fillable and whether forwarding can work. See the saturation strip.
  2. Are individual peers healthy? Latency, session direction, reachability — the peer table.
  3. What's one specific peer up to? Balance, ping, settlements, cheques — the drill.

Bee's own /topology is too dense for any of these. S6 is the cockpit's heaviest pre-render: it computes bin saturation, sorts peers, and aggregates the four-way drill into one pane.

Header — saturation rollup

PEERS / TOPOLOGY
  ✗ STARVING 2 of 9 relevant bins · worst bin 5 (3/8)

A single-glance summary of the bin-strip state, so an operator who pulls up S6 sees the alert state without having to scan all 32 rows. Healthy node:

PEERS / TOPOLOGY
  ✓ all 9 relevant bins healthy

Components:

  • X of N relevant bins — X is the count of Starving bins; N is the number of bins at or below depth + 4 (far bins don't count because their emptiness isn't actionable).
  • worst bin K (M/8) — the lowest-connected starving bin; ties broken by the lowest bin number (closer to the network root). 8 is the bee-go saturation threshold; M is current connections.
  • · N over-saturated — appended when any bin exceeds 18 connections. Not an alert (Bee trims surplus on its own) but worth surfacing.

Pane 1 — Bin saturation strip

BIN SATURATION   depth 8 · 142 connected (3 light)

  bin  pop   connected   status
  0    23    14           ✓
  1    18    11           ✓
  2    14     9           ✓
  ...
  7    11     7           ✗ STARVING        ← below depth, only 7 peers
  8    14    11           ✓                 ← at depth, healthy
  9     5     3           —                  ← far from depth, naturally sparse
  ...

The strip is one row per bin, 0..31. For each row:

  • pop = total population (connected + disconnected)
  • connected = currently connected
  • status = the four-state classification below

Saturation classification

The thresholds are pulled directly from bee-go's pkg/topology/kademlia/kademlia.go:

  Status     Glyph        When
  Healthy    ✓            connected ∈ [8, 18]
  Starving   ✗ STARVING   connected < 8 AND the bin is relevant (see the relevance rule below)
  Over       ⚠ over       connected > 18 (Bee will trim oldest entries; harmless)
  Empty      —            connected == 0 AND the bin is not relevant (far from depth)

A bin is relevant if bin ≤ depth + FAR_BIN_RELAXATION (currently 4). Far bins with low population are normal — the network is simply sparse out there — so we don't flag them.

The headline question this strip answers: do my relevant bins have ≥ 8 peers each? If yes, your kademlia health is fine and reserve fills will work. If no, you're starving.

Pane 2 — Peer table

  PEER          BIN   DIR   LATENCY   HEALTH   REACHABILITY
▶ aaaa…aaaa     8     in    12ms      ✓        Public
  bbbb…bbbb     8     out   45ms      ✓        Private
  cccc…cccc     7     in    8ms       ⚠        Public
  dddd…dddd    14     in    23ms      ✓        Public

Sort: by bin ascending, then by latency ascending within a bin. The cursor marks the row that will drill into.

  Column         Meaning
  PEER           Short overlay address (first 4 + last 4 hex)
  BIN            Kademlia bin (0–31)
  DIR            in (we accepted their dial) / out (we dialed them) / ? (no metric yet)
  LATENCY        Bee's EWMA latency value, formatted as Xms (— if not yet measured)
  HEALTH         ✓ healthy, ⚠ un-healthy, from per-peer metrics
  REACHABILITY   Bee's per-peer AutoNAT string (Public / Private / empty)

The table is scrollable: j/k/↑↓/PgUp/PgDn/Home move the cursor and the body scrolls under a pinned header. A right-edge scrollbar shows your position.

Pane 3 — Peer drill (Enter on a row)

Pressing Enter fires four endpoints in parallel for the selected peer:

  • GET /peers/<overlay>/balance → settlement balance
  • GET /pingpong/<overlay> → live RTT
  • GET /settlements/<overlay> → received + sent BZZ
  • GET /chequebook/cheque/<overlay> → last cheques

PEER  aaaa…aaaa   bin 8

  Balance              +0.0042 BZZ           (peer owes us)
  Ping (live)          5.0018ms
  Settlement received  BZZ 2.4500
  Settlement sent      BZZ 1.7800
  Last received cheque BZZ 1.5000
  Last sent cheque     —

Each row is rendered independently — if pingpong 404s (peer disconnected mid-fetch) but the other three succeed, the drill still shows three rows + an inline error on Ping. Partial failure is the rule, not the exception, when peers churn.

The four fetches use tokio::join!, so the drill window opens in roughly the time of the slowest endpoint rather than the sum of all four.
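A sketch of that fan-out shape, using stub async functions in place of the real bee-rs client calls (the stand-in functions and row labels are illustrative; requires tokio):

// Hypothetical stand-ins for the four drill endpoints; the real client
// calls GET /peers/{overlay}/balance, /pingpong/{overlay}, and so on.
async fn balance(_overlay: &str) -> Result<String, String> { Ok("+0.0042 BZZ".into()) }
async fn pingpong(_overlay: &str) -> Result<String, String> { Err("404: peer gone".into()) }
async fn settlements(_overlay: &str) -> Result<String, String> { Ok("recv 2.45 / sent 1.78".into()) }
async fn cheque(_overlay: &str) -> Result<String, String> { Ok("last recv 1.50".into()) }

#[tokio::main]
async fn main() {
    let overlay = "aaaa…aaaa";
    // All four requests start at once; the drill renders once the slowest returns.
    let (bal, ping, settle, chq) = tokio::join!(
        balance(overlay),
        pingpong(overlay),
        settlements(overlay),
        cheque(overlay),
    );
    // Each row renders independently: an Err becomes an inline error, not a blank pane.
    for (label, row) in [("Balance", bal), ("Ping", ping), ("Settlements", settle), ("Cheque", chq)] {
        match row {
            Ok(v) => println!("{label:<12} {v}"),
            Err(e) => println!("{label:<12} error: {e}"),
        }
    }
}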

Esc closes the drill and restores the peer table.

The bin saturation thresholds — bee-go constants

The numbers 8 and 18 aren't cockpit decisions. They're hardcoded in bee-go:

// pkg/topology/kademlia/kademlia.go
const SaturationPeers     = 8
const OverSaturationPeers = 18

S6 mirrors them so the cockpit's "Starving" verdict is the same verdict Bee makes internally when deciding whether to keep dialing peers in a bin. If the strip says Starving, Bee itself is also unsatisfied with that bin and will keep dialing when given the chance.

Common scenarios

"Bin 8 is starving but bin 0 has 30 peers"

Normal for a new node. Bins close to bin 0 (peers furthest from your address) saturate first because there are simply more of them. Bins near your depth (where chunks live) take longer because the global address density is lower out there. Wait. If it's been 30+ minutes and depth bins still have <4 peers each, your node may not be reaching the bootnode set — check S7 Network.

"Multiple bins say Starving below depth"

Reserve fill will be slow or stuck. Check the connectivity basics first:

  • S1 Health gate 5 (Peers) — total connected count
  • S7 Network — reachability + advertised underlays
  • /connect endpoint (via curl) — manually dial a known good bootnode

If you're behind NAT (S7 says Private), expect bin starvation on inbound bins unless you set up port forwarding or a relay.

"Drill shows ping 200ms+"

That peer is genuinely far away or congested. Bee will route around them; they'll get cycled out as Bee's EWMA latency favours faster peers. No action needed.

"Drill 'last received cheque' shows BZZ 5+"

You haven't cashed it. That much uncashed value with one peer is unusual — either they're a major forwarding partner (good) or your chequebook hasn't been topped up enough to settle on-chain (action: check S3 Swap).

"Per-peer reachability is empty for everyone"

Older Bee builds didn't populate per-peer reachability. The cockpit shows a blank column rather than guess. Upgrade Bee or just rely on the global reachability on S7.

Snapshot cadence

S6 piggy-backs entirely on the shared /topology stream (5 s cadence). No dedicated S6 polling. The drill fires four endpoints on demand and is not refreshed automatically — close + re-open to get a fresh fan-out.

Keys

  Key           Effect
  ↑↓ / j k      Move cursor in peer table
  PgUp / PgDn   Page through peers
  Home          Jump to first peer
  Enter         Drill into selected peer (4 endpoints in parallel)
  Esc           Close drill
  ?             Toggle help overlay

S7 — Network / NAT

Reachability + advertised addresses, in one screen. Answers the "I have peers but I'm unreachable" question (bee#4194) that operators hit when AutoNAT silently flips them to Private mid-session.

Why this screen exists

Bee tells you whether a peer is connected. It doesn't tell you whether you are reachable from outside. A node behind NAT can have 100 connected peers (all outbound) and still be useless to the network — chunks won't be pushed to it because no one can dial it.

The data to answer this is in /addresses (advertised multiaddrs) and the AutoNAT reachability / networkAvailability fields on /topology. S7 surfaces both with a stability window so transient flap (common on symmetric NAT) doesn't trigger false alarms.

Header — overlay + ethereum

NETWORK   overlay aaaa…aaaa  ethereum 0xCE3…fd615

Just identifiers. The overlay is the kademlia address; the ethereum address is the operator wallet. Both shortened to first 4 + last 4 hex.

Pane 1 — Underlays (advertised addresses)

  /ip4/198.51.100.42/tcp/1634/p2p/16Uiu2…       ← Public, IPv4
  /ip4/192.168.1.5/tcp/1634/p2p/16Uiu2…         ← Private (dimmed)
  /ip4/127.0.0.1/tcp/1634/p2p/16Uiu2…           ← Private (loopback, dimmed)
  /ip6/2a01::1/tcp/1634/p2p/16Uiu2…             ← Public, IPv6

Every multiaddr Bee returns from /addresses is shown. Classification:

  Kind      Style         Examples
  Public    normal text   Routable IPv4 / IPv6
  Private   dimmed        RFC 1918 (10.*, 172.16.*, 192.168.*), link-local, loopback
  Unknown   normal        DNS multiaddrs, exotic transports the cockpit doesn't classify

The dim treatment makes it obvious which underlays are actually advertised to the network (vs. the laundry list of LAN addresses every Bee node spits out).

If your underlay list shows only private addresses, you're NAT-trapped — Bee literally has no public address to give to peers, and they can't dial you back.
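A rough sketch of the public/private split, looking only at the IPv4/IPv6 literal inside the multiaddr and leaning on std's address predicates; the real classifier may treat more cases (IPv6 ULA, DNS names) explicitly:

use std::net::{Ipv4Addr, Ipv6Addr};

#[derive(Debug, PartialEq)]
enum Kind { Public, Private, Unknown }

// Pull the address literal out of "/ip4/<addr>/tcp/..." or "/ip6/<addr>/...".
fn classify_underlay(multiaddr: &str) -> Kind {
    let mut parts = multiaddr.split('/').skip(1); // the leading '/' yields an empty first piece
    match (parts.next(), parts.next()) {
        (Some("ip4"), Some(addr)) => match addr.parse::<Ipv4Addr>() {
            Ok(ip) if ip.is_private() || ip.is_loopback() || ip.is_link_local() => Kind::Private,
            Ok(_) => Kind::Public,
            Err(_) => Kind::Unknown,
        },
        (Some("ip6"), Some(addr)) => match addr.parse::<Ipv6Addr>() {
            Ok(ip) if ip.is_loopback() => Kind::Private,
            Ok(_) => Kind::Public, // good enough for a sketch
            Err(_) => Kind::Unknown,
        },
        _ => Kind::Unknown, // dns4/dns6 and exotic transports stay unclassified
    }
}

fn main() {
    assert_eq!(classify_underlay("/ip4/192.168.1.5/tcp/1634/p2p/16Uiu2"), Kind::Private);
    assert_eq!(classify_underlay("/ip4/198.51.100.42/tcp/1634/p2p/16Uiu2"), Kind::Public);
    assert_eq!(classify_underlay("/dns4/node.example.org/tcp/1634"), Kind::Unknown);
}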

Pane 2 — Inbound vs outbound

  Inbound  47        ← peers dialing in to you
  Outbound 95        ← peers you've dialed out to

Counted from each peer's session_connection_direction metric. The headline check: can peers dial me? If Inbound is 0 (or near zero) and Outbound is healthy, you're reachable in name only — chunks pushed to you won't arrive.

Inbound 0 on a public node usually means firewall (port 1634/tcp blocked) or the node restarted recently and hasn't been re-dialed yet. Wait 5–10 min; if Inbound stays at 0, debug the firewall.

Pane 3 — Reachability + availability

  Reachability         Public         (stable for 9m)
  Network availability Available

Two strings from AutoNAT:

  Field                  Source                         Meaning
  Reachability           topology.reachability          Public / Private / (unknown)
  Network availability   topology.networkAvailability   Available / Unavailable / (unknown)

The stability window

Reachability flickers. Under symmetric NAT (carrier-grade NAT, double NAT, weird home routers), AutoNAT can flip Public → Private → Public on a per-minute basis. If the cockpit just showed the latest value, you'd see it bouncing and chase a phantom problem.

The fix: track the timestamp of the last change. The pane shows "stable for Xm" — if the value just changed, this is "a few seconds"; if it's been stable for 9 minutes, that's the value you should trust.
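One way to implement that window, using std::time::Instant; the struct and field names are illustrative, not necessarily how bee-tui stores it:

use std::time::{Duration, Instant};

// Remembers the last reachability string and when it last changed.
struct Stability {
    value: String,
    changed_at: Instant,
}

impl Stability {
    fn new(initial: &str) -> Self {
        Self { value: initial.to_string(), changed_at: Instant::now() }
    }

    // Called on every /topology snapshot; resets the clock only on a real change.
    fn observe(&mut self, latest: &str) {
        if latest != self.value {
            self.value = latest.to_string();
            self.changed_at = Instant::now();
        }
    }

    // What the pane renders as "stable for Xm".
    fn stable_for(&self) -> Duration {
        self.changed_at.elapsed()
    }
}

fn main() {
    let mut s = Stability::new("Public");
    s.observe("Public");   // no change: the window keeps growing
    s.observe("Private");  // flip: the window resets
    println!("stable for {:?}", s.stable_for());
}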

Reachability ladder

  Status             Glyph      What it means
  Public             ✓ green    AutoNAT confirmed inbound dials work
  Private            ⚠ yellow   AutoNAT failed inbound dials. Operator action needed.
  (unknown)          · dim      AutoNAT hasn't reported yet (cold start or older Bee build)
  Other (verbatim)   dim        Bee surfaced a string we don't classify; shown raw

Common scenarios

"Reachability says Public but Inbound is 0"

AutoNAT thinks you're reachable but in practice no one is dialing you. Could be:

  • Firewall blocks 1634/tcp for external traffic but not for AutoNAT's own dialback flow (rare but seen on cloud-VPS security groups).
  • Recent restart — wait 5–10 min for the network to re-discover you.
  • Your Inbound count is just delayed (the metric updates per-session); refresh after a minute.

"Reachability bouncing every minute"

Symmetric NAT. The stability window will show "stable for a few seconds" repeatedly. Options:

  • Set up explicit port forwarding on your router (UPnP or a manual rule on TCP 1634)
  • Run Bee on a public VPS instead
  • Accept it — your node will still work, just only as an outbound participant; reserve fills will be slower

"Network availability says Unavailable"

Bee's libp2p layer can't reach the network at all. Check the underlays — if they're empty or all loopback, the node isn't binding to anything routable. Restart Bee with the correct --api-addr and --p2p-addr flags.

"Underlay list is empty"

/addresses returned nothing. Either the API wasn't ready yet (cold start, wait 30 s) or Bee's listening sockets failed to bind. Check bee process logs.

"Public IPv4 underlay shows but a different IPv4 is what people see"

Common on multi-homed hosts. Bee advertises whatever it can detect locally; if your real public IP comes via a NAT gateway, AutoNAT will figure it out and add a second underlay once the dialback succeeds. Until then, ignore the local-detected one.

What this screen doesn't show

  • External port-check — the cockpit doesn't dial you back from a 3rd-party endpoint. AutoNAT does this for free via dialback peers; the result feeds the Reachability field. If you want a manual check, use a port-checker service against your public IP + 1634.
  • Relay candidates — Bee doesn't expose its relay-pool state via API. There's no way for the cockpit to show "5 relay candidates available". Future Bee builds may expose this; the pane will grow accordingly.

Snapshot cadence

S7 piggy-backs on:

  • /topology (5 s) — reachability strings, peer directions
  • /addresses (60 s — slow data, only changes on bind change)

The reachability stability tracker is internal; it reads each topology snapshot and maintains a "last changed" timestamp.

Keys

S7 has no screen-specific keys. The global keymap (Tab, ?, :, q) covers everything.

S8 — RPC / API health

Three panes covering Bee's local API performance, Bee's view of the chain, and pending operator transactions. The screen that answers "is the local Bee API responsive?" — separately from "is the chain healthy?".

Why this screen exists

The original PLAN was Gnosis-RPC latency + remote chain tip. Bee doesn't expose its eth-RPC URL (intentionally — it's a private endpoint), and there's no remote chain-tip reference in the API. So S8 pivots to what we can measure:

  1. Bee API call stats — latency p50 / p99 + error rate from the live tracing capture. This is the more operator-relevant metric anyway; a slow Bee API tells you the local node is sluggish, regardless of what the underlying RPC is doing.
  2. Chain state — block / chain_tip / their delta from /chainstate. Bee's own view.
  3. Pending operator transactions — /transactions with hash, nonce, creation timestamp, description.

The "Bee doesn't expose its eth RPC URL or remote block height" gap is acknowledged inline, so operators see what isn't being measured rather than assuming silence equals success.

Header

RPC / API HEALTH    Bee endpoint http://localhost:1633
  Bee doesn't expose its eth RPC URL or remote chain tip;
  this view measures the local Bee API instead.

The endpoint URL is the configured Bee URL — same one shown in the top status bar. The disclaimer is fixed; it's not a warning, just a clarification that we measure what we can.

Pane 1 — Bee API call stats

  CALLS (last 100)
    p50 latency      45ms
    p99 latency     180ms
    error rate       0.0%
    sample size     100

Computed from the most recent 100 entries in the live LogCapture ring buffer (the same buffer that powers the persistent command tail at the bottom of every screen). Per-entry data:

  • elapsed_ms — captured by the bee-rs HTTP client tracing
  • status — HTTP response code

  Stat          How
  p50 / p99     Sort the window by elapsed, pick the median + 99th index
  Error rate    (count where status >= 400) / sample_size × 100
  Sample size   Number of entries with elapsed_ms set

Window is fixed at the last 100 entries (STATS_WINDOW), half the LogCapture ring buffer capacity (200), so the buffer has 2× headroom over the window. Raising the window above 200 wouldn't yield more data — entries older than the buffer capacity are already gone.
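The arithmetic is simple enough to sketch. Assume each captured entry carries elapsed_ms and an HTTP status; the struct and function names are illustrative:

struct Entry { elapsed_ms: u64, status: u16 }

// Percentile by sorted index over the most recent window, plus error rate (status >= 400).
fn call_stats(window: &[Entry]) -> Option<(u64, u64, f64)> {
    if window.is_empty() { return None; }
    let mut elapsed: Vec<u64> = window.iter().map(|e| e.elapsed_ms).collect();
    elapsed.sort_unstable();
    let idx = |p: f64| ((p * (elapsed.len() - 1) as f64).round() as usize).min(elapsed.len() - 1);
    let p50 = elapsed[idx(0.50)];
    let p99 = elapsed[idx(0.99)];
    let errors = window.iter().filter(|e| e.status >= 400).count();
    let error_rate = errors as f64 / window.len() as f64 * 100.0;
    Some((p50, p99, error_rate))
}

fn main() {
    let window: Vec<Entry> = (1u64..=100).map(|i| Entry { elapsed_ms: i, status: 200 }).collect();
    let (p50, p99, err) = call_stats(&window).unwrap();
    println!("p50 {p50}ms · p99 {p99}ms · error rate {err:.1}%");
}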

Reading the stats

  • p50 < 100ms, p99 < 500ms, error rate 0 % — healthy.
  • p99 climbing past 1 s — Bee is under load. Could be upload + heavy reserve activity + a slow disk all at once.
  • Error rate > 1 % — something's failing repeatedly. Look at the bottom log pane (the persistent HTTP tail underneath every screen) to see the actual error responses (most often 503 during warmup, or 401 if the auth token expired).

Pane 2 — Chain state

  CHAIN
    block         234,512        from /chainstate
    chain tip     234,514
    delta         +2 blocks      ✓ in sync
    total amount  150000000000000000
    current price 24000

Bee's view of the chain. delta = chain_tip - block. Small deltas (Δ +1 to Δ +3) flicker constantly — they're the normal indexing lag between Bee processing a block and the chain producing the next one. Sustained Δ ≥ +5 means Bee's RPC is slow or dropping responses.

  Field           Meaning
  block           Last block Bee has processed.
  chain tip       What Bee's RPC reports as the head.
  delta           Difference. Positive = Bee is behind.
  total amount    Sum of all postage stamps issued (BZZ in PLUR).
  current price   Bee's per-chunk-per-block stamp price (PLUR).

total_amount and current_price are the on-chain stamp parameters. They drive batch_ttl, so when the price moves, every batch's TTL moves with it. This is also surfaced on S2.

Pane 3 — Pending transactions

  PENDING TRANSACTIONS  (2)

  NONCE   HASH         TO            CREATED                DESCRIPTION
  47      0xabcd…ef    0x123…45      2026-05-07T08:12:03Z   stamp topup
  48      0x9876…12    0x123…45      2026-05-07T08:14:15Z   stake deposit

Operator-issued transactions that Bee has submitted but the chain hasn't confirmed yet. Sourced from /transactions.

  Column        Meaning
  NONCE         Operator wallet nonce
  HASH          First 6 + last 4 hex of the tx hash
  TO            First 4 + last 4 hex of the destination address
  CREATED       RFC 3339 timestamp from Bee, rendered verbatim
  DESCRIPTION   Operator-supplied description (empty for system txs)

If a transaction has been pending for > 5 min, gas was probably too low. You can re-broadcast or replace via POST /transactions/<hash> (cancel/resend). The cockpit doesn't do this for you — there's no in-cockpit cashout — but the data is here so you know where to look.

Common scenarios

"p99 spiked to 5 s"

Bee is overloaded. Likely culprits:

  • A big /stamps/<id>/buckets drill on a deep batch (just finished — the spike will fade).
  • A reserve worker tick (every 30 min, lots of disk I/O).
  • An upload that triggered chunk-pushing (S9 will show this).

If the spike doesn't fade in 5 min, check iotop on the host — slow disks are the usual culprit.

"Delta stuck at +5"

Bee's RPC is slow. Bee can't do anything about it; this is the upstream Gnosis RPC. Switch RPC providers if the issue persists. (Bee config is --blockchain-rpc-endpoint.)

"Pending transaction sitting at 10+ min"

Check the gas price. If the transaction was submitted with a gas price below the current chain floor, it'll sit in the mempool until it's evicted. Use a chain explorer to inspect the actual gas params — Bee's /transactions doesn't include them in detail.

"Total amount goes up but no batch I bought"

total_amount is the network-wide total, not yours. It goes up whenever anyone on the network buys postage. Use S2 Stamps for your local batches.

Snapshot cadence

  • /chainstate — 2 s (existing health stream)
  • /transactions — 5 s (cheap call, low-rate change)

The call stats are recomputed every Tick (60 fps tick budget, but the LogCapture itself only updates on actual HTTP events).

Keys

S8 has no screen-specific keys. The global keymap (Tab, ?, :, q) covers everything.

S9 — Tags / uploads

One row per Bee tag. Bee creates a tag for every upload (and exposes them via /tags); the row shows the lifecycle stage plus per-stage progress so operators see exactly where an upload is — splitting, pushing, syncing, or stalled.

Why this screen exists

Operators uploading large content (a 4 GiB tarball, a directory of files, a feed update) need to know when the upload is "done enough" to share. Bee's tag tracks six counters per upload:

  • total — total chunks declared up front
  • split — chunks produced by the splitter
  • seen — chunks the network already had (no push needed)
  • stored — chunks landed locally
  • sent — chunks pushed to the network
  • synced — chunks the network confirmed receipt for

Bee's /tags returns all of these, but operators routinely focus on the wrong one. synced == total is the only correct "done" check. stored == total only means you have the chunks; the network still needs them.

S9 surfaces all the stages so the lifecycle is visible at a glance.

The list view

  UID    LABEL              ADDRESS         STATUS       %     SYNCED / TOTAL
▸ 142    backup-2026-05     0xabcd…ef       ✓ synced    100   8,192 / 8,192
        ref 0xabcd2c1e9f7a3b5d2c8e0f4a76b1c9d2e3f4a5b6c7d8e9f0a1b2c3d4e5f
  143    site-publish       0xdeadb…ef      ▒ pushing    74   1,247 / 1,684
  144    streaming-feed     —               · pending     0       0 / 0
  145    deep-archive       0xc0ffee…00     ▒ syncing    91   3,421 / 3,765

Sorted: by uid descending, so the most recent upload is at the top. Sort key is stable across polls. Every row with an address gets a ref 0x<full> continuation line below it (v1.9.1+) so the full reference is reachable for copy without scrolling — the truncated 0xabcd…ef in the table is for visual scan only. Pending tags (no address yet) suppress the continuation.

  Column           Meaning
  UID              Bee's per-tag id
  LABEL            Operator-supplied label (or — if none)
  ADDRESS          Short reference for the upload root
  STATUS           Five-state lifecycle (see below)
  %                synced / total × 100, clamped to 0–100
  SYNCED / TOTAL   The raw counters Bee reported

The summary header above the table shows the rollup:

TAGS   total 4   active 2   synced chunks 12,860 / 13,641

active = tags currently in Splitting / Pushing / Syncing — "work in flight".

The status ladder

  Status      Glyph         When
  Pending     · pending     total <= 0 — Bee hasn't filled the chunk count yet (upload either hasn't started or used a streaming endpoint that doesn't pre-declare)
  Splitting   ▒ splitting   split < total — chunker is still slicing the input
  Pushing     ▒ pushing     All chunks split, pushing them out: sent < total
  Syncing     ▒ syncing     All pushed but waiting on receipts: synced < total
  Synced      ✓ synced      synced >= total > 0 — the upload is done

The three "in flight" states (Splitting / Pushing / Syncing) are coloured warn-yellow so they all read as "working, don't unplug yet". Pending is dimmed (cold). Synced is green.

Why seen matters but doesn't have a stage

seen counts chunks the network already had — Bee skips re-pushing them. A tag with high seen finishes faster because there's less network traffic. But it doesn't change the lifecycle — the tag still goes Splitting → Pushing → Syncing → Synced; it just spends less time in Pushing.

If you upload duplicate content (e.g. the same large file twice), the second tag's seen will be near total and the upload will complete almost instantly. This is normal.

Common scenarios

"Tag stuck at 99 % synced"

Bee waits for synced == total exactly. The last 1 % can take longer than the first 99 % because:

  • Some chunks are at deep replication depth where peers are sparse
  • A handful of receipts haven't come back yet (network jitter)
  • Your chequebook ran low during push and Bee paused

Wait 5 min. If still stuck, check S3 Swap for chequebook balance and S6 Peers for bin saturation.

"All my tags are Pending forever"

You used a streaming upload endpoint that doesn't pre-declare total. Tags from POST /chunks/stream (websocket) and similar may stay Pending. The upload still works; the tag just doesn't track progress meaningfully. Check the underlying upload's reference instead.

"Sent > Synced for a long time"

Normal. Sent means Bee pushed the chunk out; Synced means a peer responded with a storage receipt. The gap is the in-flight queue. If the gap is widening, push throughput is exceeding sync throughput — common on slow networks. Will catch up once you stop uploading.

"tags screen shows hundreds of old tags"

Bee keeps tags forever unless you delete them. If you've done a lot of uploads, the list grows. Use :tag-prune (when implemented) or DELETE /tags/<uid> directly to clean up.

"Address column is empty"

The tag was created but the upload finished or errored before producing a root reference. Probably a failed upload. Safe to ignore.

Snapshot cadence

S9 polls /tags every 5 s — fast enough for upload progress to feel live, slow enough not to hammer Bee while it's busy pushing chunks. The call is cheap (no per-tag fan-out, just the list).

Keys

  Key           Effect
  ↑↓ / j k      Scroll the table by one row
  PgUp / PgDn   Page through tags
  Home          Jump to top
  ?             Toggle help overlay

No selection cursor, no drill (yet) — the data per tag is small enough to fit in one row. A future drill could expose per-stage timing graphs but isn't in 1.0.

S10 — Pins

Earlier docs (and the file name s11-pins.md) called this S11. The screen is now the 10th tab in the strip — the file name is kept for stable links.

Sortable list of every reference Bee has pinned locally, with on-demand integrity checks. Promotes the :pins-check command's write-to-file output into a real screen so operators can browse their pin set, spot the unhealthy ones, and re-check a single pin without walking the whole graph.

Columns

  Column      What it shows
  REFERENCE   The 32-byte pin reference, shortened to prefix…suffix form
  TOTAL       Total chunks reachable from the pin (after a check; — until then)
  MISSING     Chunks that should be reachable but are missing locally
  INVALID     Chunks present but failing integrity validation
  STATUS      One of ? unchecked / · checking… / ✓ healthy / ✗ degraded / ✗ check failed: …

Header summary: N pinned ✓ X ✗ Y ? Z sort <mode>. The counts let an operator spot the alert state (red ✗) without scanning every row.

Keymap

  Key        What it does
  ↑↓ / j k   Move row selection
  Enter      Integrity-check the highlighted pin (single /pins/check?ref=… call)
  c          Integrity-check every pin currently unchecked
  s          Cycle sort: ref order → bad first → by size

Sort modes

  • ref order (default) — Bee's response order; matches curl /pins.
  • bad first — unhealthy → check-failed → unchecked → checking → healthy. Surfaces the rows that matter for an operator who suspects local chunk loss.
  • by size — largest pin first by total chunk count. Useful when figuring out which pin set dominates local reserve usage. Pins that haven't been checked yet count as size-0 and go to the bottom.

How this differs from :pins-check

:pins-check walks every pin sequentially and writes the full output to a temp file. It's the right tool when you want a one-shot integrity report you can email or attach to a support thread. Useful but slow on nodes with hundreds of pins.

S10 trades that bulk-walk for interactivity: pick the pin you care about, get its integrity in one call, see the result inline. The two commands are complementary — :pins-check for the audit, S10 for the fix-loop.

What's intentionally out of scope (v1)

  • No pin/unpin actions. Pinning is a write op; the cockpit stays read-mostly. Add pins with swarm-cli pin add and they'll appear here on the next /pins poll (≤ 30 s).
  • No automatic integrity polling. /pins/check walks the chunk graph — too expensive to run on a clock. Operators trigger it on demand.
  • No diff against a previous check. A pin that goes from healthy to degraded shows up the moment you re-check it; the cockpit doesn't keep a history. Use :pins-check for a point-in-time snapshot if you need to compare runs.

S11 — Manifests

Earlier docs (and the file name s12-manifests.md) called this S12. The screen is now the 11th tab — the file name is kept for stable links.

A Mantaray-tree browser. The first screen in bee-tui that gives operators X-ray vision into their data — not just their node. Type a Swarm reference into :manifest <ref> (or :inspect <ref>) and the cockpit fetches the chunk, parses it as a Mantaray manifest, and renders the tree here as a flat indented list.

How to load a manifest

:manifest <ref>      # always tries to render as a manifest
:inspect  <ref>      # auto-detects: manifest, raw chunk, or feed manifest

<ref> is a 64-hex-char Swarm reference (with or without 0x).

  • :manifest jumps to S11 immediately and starts an async GET /chunks/{ref} against the active node. If the chunk parses as a Mantaray manifest, the tree renders. If it doesn't, the screen shows error: <reason> so the operator can re-try with :inspect to learn what it is.
  • :inspect is the universal "what is this thing?" verb. It fetches the same chunk, then auto-detects:
    • Mantaray manifest → routes to S11 (same as :manifest)
    • Raw chunk → prints raw chunk, N bytes on the command-status line; doesn't switch screens.
    • Feed manifest → prints the feed-manifest fingerprint hint.

:inspect is non-destructive — at most one chunk fetch per invocation.

Layout

┌ MANIFEST  · 32-byte chunk · 12 forks · 4 leaves ─────────────────────────────┐
│   f8aa0f76…3e4d1abf                                                          │
│ ▼ (root)                                                                     │
│   ▶ images/                                                                  │
│   ▼ articles/                                                                │
│       · post-1.html         text/html        ee7f3a20…                       │
│       · post-2.html         text/html        9c4d9a80…                       │
│       ⌛ assets/                              loading…                        │
│   · index.html              text/html        a02ee188…                       │
│                                                                              │
│   selected: target ee7f3a201810c5e9…                                         │
│  Tab switch screen   ↑↓/jk select   ↵ expand/collapse   ? help   q quit      │
└──────────────────────────────────────────────────────────────────────────────┘

The header line shows the chunk size + fork/leaf summary. The second header line shows the full root reference for click-drag copy.

Tree glyphs:

  Glyph   Meaning
  ▼       Expanded fork (children visible)
  ▶       Collapsed fork with children
  ·       Leaf (TYPE_VALUE — points at a file target, no further forks)
  ⌛      Fork is loading (async fetch in flight)
  ✗       Fetch / parse failed

Each row carries: indent depth, glyph, path-segment label, content-type (when present in metadata), and the truncated target reference (for leaves) or fork self-address.

Lazy-load semantics

Pressing ↵ on a collapsed fork that has children either:

  • Toggles expansion (cheap) when the child node is already loaded.
  • Starts an async GET /chunks/{self_address} when it isn't. The row glyph flips to ⌛ until the response arrives; on failure it becomes ✗ error: <reason> (and you can retry with another ↵).

The walker only fetches forks the operator opens — large manifests (e.g. a 10k-page wiki) don't pre-load every chunk. The cost of exploring the tree scales with how much you actually look at.

The selected: line

The detail row above the footer renders the cursored row's identifier in plain text, so you can drag-select it in your terminal and copy without bee-tui needing a copy key:

  • For leaf rows: selected: target <target-ref-hex>
  • For fork rows: selected: chunk <self-address-hex>
  • For the root summary: (no copyable id on this row)

Copy that hex into a bee CLI invocation, a browser URL (http://<gateway>/bzz/<ref>/<path>), or another bee-tui verb.

Keymap

  Key     Action
  ↑ / k   Move cursor up
  ↓ / j   Move cursor down
  ↵       Toggle expand / load the cursored fork
  Tab     Cycle to the next screen
  :       Open the command bar

The ? overlay shows these alongside the global keys.

What it doesn't do

  • No encrypted-ref support (yet). 64-byte references with an obfuscation key suffix render as error: not a manifest — bee-rs's recursive walker would need to thread the key through the chunk decoder. Tracked for a v2.x follow-up.
  • No path-based addressing. Operators type a chunk reference, not <ref>/path/in/manifest. bee-rs's resolve_path lives in the runtime; surfacing it as :manifest <ref> <path> is a candidate enhancement.
  • No write side. S11 is strictly a read-only browser. Editing a manifest, re-uploading after a fix, or rewiring a fork lives in the deferred write tier.
  • No file-content preview. Leaf rows show the target reference but not the bytes the leaf points at. Use a separate bee / swarm-cli invocation, or the Bee gateway URL above, to fetch the file itself.

Trust anchor: where do these counts come from?

The fork count and leaf count in the header are derived purely from the loaded MantarayNodes — no Bee API call is needed once the root + currently-expanded children are in memory. If a fork has never been opened, its sub-tree is not counted. This is intentional: the walker only commits to fetching forks the operator actually navigates into, so the cost of opening S11 on a 10⁵-chunk manifest is one chunk fetch, not 10⁵.

When all forks in a sub-tree are expanded, the leaf count for that branch reflects every child fork's loaded MantarayNode.

S12 — Durability Watchlist

Earlier docs (and the file name s13-watchlist.md) called this S13. The screen is now the 12th tab — the file name is kept for stable links.

A running history of :durability-check results, plus the live state of any :watch-ref daemons. The operator-facing answer to the single most-feared question: is my data still alive?

How rows get here

Every invocation of :durability-check <ref> adds one row to S12. The verb walks the chunk graph rooted at <ref> and records the outcome:

:durability-check <ref>

Walker behaviour:

  • Fetches the root chunk via GET /chunks/{ref}.
  • If the root parses as a Mantaray manifest, recursively fetches every fork's self_address. Forks that carry a target reference are counted as leaves but their target's file content is not chunk-walked further (manifest topology only).
  • If the root doesn't parse as a manifest, the single-chunk fetch is the durability answer.
  • Hard cap: 10 000 chunks per walk. Operators with very large manifests get a partial answer marked truncated rather than a stuck cockpit.
  • BMT verification is on by default — every fetched chunk's content is keccak-hashed and compared against the requested reference. Mismatches land in the separate chunks_corrupt bucket. Opt-out via [durability].bmt_verify = false in config.

The rolling history is bounded to the most recent 50 rows; older rows are evicted from the back as new checks land.

Layout

┌  4 checks · 3 healthy · 1 unhealthy ─────────────────────────────────────────┐
│                                                                              │
│ ▸ OK         manifest  ee7f3a20  12 total · 0 lost · 0 errors · BMT · 412ms  4s ago
│   UNHEALTHY  manifest  9c4d9a80  18 total · 1 lost · 0 errors · 1 corrupt · BMT · scan: NOT seen · 1018ms  31s ago
│   OK         chunk     a02ee188  1 total · 0 lost · 0 errors · BMT · 87ms   2m ago
│   OK         manifest  f8aa0f76  120 total · 0 lost · 0 errors · BMT (truncated) · 8841ms  17m ago
│                                                                              │
│   selected: ee7f3a201810c5e9…3e4d1abf                                        │
│  Tab switch screen   ↑↓/jk select   ? help   q quit   :durability-check <ref> to record
└──────────────────────────────────────────────────────────────────────────────┘

Each row reports:

  Column             Meaning
  OK / UNHEALTHY     Green / red status pill — is_healthy() is true iff lost == 0 && errors == 0 && corrupt == 0
  manifest / chunk   Whether the root parsed as a Mantaray manifest
  short ref          First 8 hex chars of the reference; full hex is on the selected: line
  detail             <total> total · <lost> lost · <errors> errors · <corrupt> corrupt · BMT · scan: seen/NOT seen · <duration>ms (truncated)
  age                Wall-clock time since the check started

BMT appears in detail when the walk verified each chunk's content against its address; truncated appears when the walk stopped at the 10 000-chunk cap; the swarmscan segment appears only when [durability].swarmscan_check = true.

The four outcome buckets

S12 separates four counts with different operator implications:

  Bucket    Meaning                                                              Likely cause
  lost      GET /chunks/{ref} returned 404                                       Network truly dropped your data — check stamp TTL, peer reachability, batch utilisation
  errors    Anything else (timeout, 500, decode error)                           Flaky local node or transient network — retry usually fixes
  corrupt   Content fetched but BMT hash didn't match the requested reference    Bit-rot, swap-corrupted on-disk chunk, or hostile peer returning a different chunk
  (rest)    Successfully retrieved + verified                                    Healthy
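A sketch of how a single chunk fetch lands in one of those buckets, assuming the walker sees an optional HTTP status plus a yes/no BMT verification result; the real walker's types differ:

#[derive(Default)]
struct Outcome { total: u64, lost: u64, errors: u64, corrupt: u64 }

impl Outcome {
    // Same rule as the status pill: healthy iff nothing was lost, errored, or corrupt.
    fn is_healthy(&self) -> bool {
        self.lost == 0 && self.errors == 0 && self.corrupt == 0
    }

    // `verified` stands in for the BMT check: does the fetched content hash back to the ref?
    fn record(&mut self, status: Option<u16>, verified: bool) {
        self.total += 1;
        match status {
            Some(404) => self.lost += 1,                 // the network truly dropped it
            Some(s) if (200..300).contains(&s) => {
                if !verified { self.corrupt += 1 }       // fetched, but content doesn't match the address
            }
            _ => self.errors += 1,                       // timeout, 5xx, decode failure, ...
        }
    }
}

fn main() {
    let mut o = Outcome::default();
    o.record(Some(200), true);   // retrieved + verified
    o.record(Some(404), true);   // lost
    o.record(None, true);        // transport error
    assert!(!o.is_healthy());
    println!("{} total · {} lost · {} errors · {} corrupt", o.total, o.lost, o.errors, o.corrupt);
}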

Optional swarmscan cross-check

When [durability].swarmscan_check = true is set in the configuration, the walker — after the local walk completes — also probes a swarmscan-style indexer for the same reference:

[durability]
swarmscan_check = true
swarmscan_url   = "https://api.swarmscan.io/v1/chunks/{ref}"  # default

The probe replaces {ref} with the hex-encoded reference and expects a 200 (seen) or 404 (not seen). Anything else (timeout, non-200/404) renders as no answer (scan: segment is hidden).
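Roughly what the probe amounts to, sketched here with the reqwest crate for illustration (bee-tui's actual HTTP client and error handling may differ):

// Cargo.toml (sketch): reqwest = { version = "0.12", features = ["blocking"] }
fn swarmscan_seen(url_template: &str, reference: &str) -> Option<bool> {
    let url = url_template.replace("{ref}", reference);
    match reqwest::blocking::get(url.as_str()) {
        Ok(resp) if resp.status().as_u16() == 200 => Some(true),   // scan: seen
        Ok(resp) if resp.status().as_u16() == 404 => Some(false),  // scan: NOT seen
        _ => None, // timeout or unexpected status: the scan segment is simply hidden
    }
}

fn main() {
    let seen = swarmscan_seen("https://api.swarmscan.io/v1/chunks/{ref}", "<64-hex-ref>");
    println!("scan: {:?}", seen);
}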

This gives an independent network-side answer — "the indexer says the network sees this ref" — separate from "my local node was able to retrieve it." Useful when triaging:

  • Healthy + scan: seen → all good.
  • Healthy + scan: NOT seen → your local node has it cached; the network may have dropped the rest. Re-upload before your cache expires.
  • Unhealthy + scan: seen → your local node is the problem; the network has the ref. Restart, re-sync, or check connectivity.
  • Unhealthy + scan: NOT seen → genuine data loss. Re-upload from the source if you still have it.

Daemon mode (:watch-ref)

For a continuous answer, run :watch-ref as a daemon:

:watch-ref      <ref> [interval-seconds]   # default 60s, clamped 10..=86400
:watch-ref-stop [ref]                      # cancel one (or all if no arg)

:watch-ref re-runs :durability-check on a tokio interval and records each result on S12 — same row format as a manual :durability-check. Re-issuing for an already-watched ref cancels the prior daemon (clean restart). The cockpit's root cancellation token also fires on quit, so daemons clean up without operator action.

See :watch-ref daemon mode for the full verb reference.

Keymap

  Key     Action
  ↑ / k   Move cursor up
  ↓ / j   Move cursor down
  Tab     Cycle to the next screen
  :       Open the command bar

What S12 isn't

  • Not persisted across cockpit restarts. The history is an in-memory ring buffer; quitting bee-tui drops it. If you want durable history, redirect the verb's stdout from --once durability-check into a JSONL file from cron (the JSON shape is part of the v1.3.0 stable surface).
  • Not a fixer. S12 surfaces the diagnosis; remediation (:reupload, manifest re-binding, stamp top-up) lives in the deferred write tier.
  • Not a content checker. A manifest's leaves point at file content that is itself chunked; the walker only verifies the manifest topology + each chunk it visits, not the file content reachable through leaves. A leaf reporting "OK" means the Mantaray fork loaded cleanly; the file's individual chunks are a separate :durability-check away.
  • Not a CI gate. For automation, use --once durability-check — it exits 1 on unhealthy, 2 on usage error, and emits the same result shape as a JSON object via --json.

S13 — Feed Timeline

Earlier docs (and the file name s14-feed-timeline.md) called this S14. The screen is now the 13th tab — the file name is kept for stable links.

A scrollable history walk of a Swarm feed. Where v1.5's :feed-probe returns the latest update only, S13 walks backward from the latest index and shows each historical entry side-by-side: index, age, payload size, and (when reference-shaped) the embedded Swarm reference.

How to load

The screen has no auto-poll — it loads exactly when an operator issues the verb:

:feed-timeline <owner> <topic> [N]

<owner> and <topic> accept the same forms as :feed-probe: 20-byte hex address (0x-prefixed or bare) and either 64-hex literal or arbitrary string (keccak256-hashed via Topic::from_string).

[N] is optional — defaults to 50, hard-capped at 1000. For larger walks, drive :feed-probe from a shell loop instead; the cockpit's in-memory tar / mpsc plumbing isn't sized for multi-thousand-entry walks.

The first lookup hits Bee's /feeds/{owner}/{topic} to find the latest index — this can take 30-60 s on a fresh feed. The screen shows a spinner until that completes; the historical chunks then fetch in parallel (8-way bounded concurrency) so a 50-entry walk finishes in seconds once the latest-index probe returns.
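The bound is the standard buffered-stream pattern from the futures crate; a sketch with a stand-in fetch function (requires tokio + futures, and is not bee-tui's real walker):

use futures::stream::{self, StreamExt};

// Stand-in for fetching one historical feed entry by index.
async fn fetch_entry(index: u64) -> (u64, usize) {
    (index, 40) // pretend every entry carries a 40-byte payload
}

#[tokio::main]
async fn main() {
    let latest: u64 = 42;
    let count: u64 = 50u64.min(latest + 1);
    // Walk newest → oldest with at most 8 fetches in flight; buffered() preserves order.
    let entries: Vec<(u64, usize)> = stream::iter((0..count).map(|i| latest - i))
        .map(fetch_entry)
        .buffered(8)
        .collect()
        .await;
    println!("fetched {} entries, newest index {}", entries.len(), entries[0].0);
}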

Layout

┌ FEED TIMELINE  owner=0x12345678…  topic=ab12cd34…  latest=idx42  · 50 entries ─┐
│                                                                                │
│  INDEX     AGE      SIZE   TYPE      REF / ERROR                               │
│      42        3m     40   ref       e7f3a201cd…                               │
│      41       12m     40   ref       9b1c8a72f4…                               │
│      40       18m     20   raw       payload 12B                               │
│      39       28m      0   miss      [lost: 404 Not Found]                     │
│      38       45m     40   ref       12abcdef34…                               │
│      …                                                                         │
│  selected: ref=e7f3a201cd1f0e9b…                                               │
│  ↑↓/jk select   Tab switch screen   : command   q quit                         │
└────────────────────────────────────────────────────────────────────────────────┘

The cursor row is reverse-styled. Miss rows (chunk fetch failed or didn't unmarshal as a SOC) render dim, so gaps in the history are visible at a glance.

The selected-line detail at the bottom shows the full reference of the cursored row when present, or the raw payload size + Unix timestamp when the entry isn't reference-shaped.

Keymap

  Key           Action
  ↑ / k         Move cursor up
  ↓ / j         Move cursor down
  PgUp / PgDn   Jump 10 rows
  Tab           Cycle to the next screen
  :             Open the command bar (e.g. for :inspect <ref> on the cursored entry)

CI mode (--once feed-timeline)

bee-tui --once --json feed-timeline 0x1234… my-app/notifications 100

Emits structured JSON with owner, topic, latest_index, index_next, reached_requested, and an entries array of { index, timestamp_unix, payload_bytes, reference, error }. A snapshot-publish workflow can fetch 100 historical entries and gate on entries[0].index strictly advancing across runs, or on the error count not crossing a threshold.

What it doesn't do

  • No epoch-feed walk. v1.6 walks sequential feeds (indexes 0, 1, 2, …). Epoch feeds (Swarm's older lookup scheme) are not yet supported in the walker.
  • No live refresh. The walk is one-shot per verb invocation; there's no auto-poll. Re-run the verb to get a fresh snapshot.
  • No payload preview. Raw-feed entries surface their byte size only; if you want the contents, pass the entry's index back through :feed-probe or feed it to :inspect when reference-shaped.
  • No write side. :feed-timeline is read-only; updating a feed requires a private key + a stamp, both outside the cockpit's current write surface.

S14 — Pubsub watch

Earlier docs (and the file name s15-pubsub.md) called this S15. The screen is now the 14th (and last) tab — the file name is kept for stable links.

Live tail of PSS topic subscriptions and GSOC (owner, identifier) subscriptions, merged into a single chronological timeline. The receiver-side complement to v1.3's :gsoc-mine and :pss-target writer verbs: operators can finally see the messages those senders produce without leaving the cockpit.

How to start a subscription

The screen has no auto-load. Subscriptions are started by verb:

:pubsub-pss   <topic>
:pubsub-gsoc  <owner> <identifier>

<topic> accepts the same forms as :feed-probe:

  • 64 hex chars (with or without 0x) is the raw 32-byte topic.
  • Anything else is keccak256(utf8(s)), mirroring bee-js's Topic.fromString.

<owner> is a 20-byte Ethereum address (0x-prefixed or bare). <identifier> is a 32-byte SOC identifier (64 hex chars, 0x-prefixed or bare).
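The string form of <topic> is just Keccak-256 over the UTF-8 bytes. A sketch using the sha3 crate (parsing of the 64-hex literal form is elided):

use sha3::{Digest, Keccak256};

// "my-app/notifications" → 32-byte topic, mirroring bee-js's Topic.fromString.
fn topic_from_string(s: &str) -> [u8; 32] {
    let digest = Keccak256::digest(s.as_bytes());
    let mut topic = [0u8; 32];
    topic.copy_from_slice(&digest[..]);
    topic
}

fn main() {
    let topic = topic_from_string("my-app/notifications");
    let hex: String = topic.iter().map(|b| format!("{b:02x}")).collect();
    println!("topic 0x{hex}");
}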

Each subscription opens a WebSocket against Bee's /pss/subscribe/{topic} or /gsoc/subscribe/{soc-address} and forwards every delivered frame into the screen's ring buffer. The verb switches to S14 immediately so the operator sees the "0 messages" state until the first frame arrives.

Re-issuing for an already-watched (topic) or (owner, identifier) errors with a clear message — no silent duplicate sockets.

Layout

┌ PUBSUB WATCH  · 2 active subs · 17 messages ─────────────────────────────┐
│                                                                           │
│  TIME      KIND   CHANNEL       SIZE   PREVIEW                            │
│  10:14:32  PSS    abc1234567…    18    hello cockpit!                     │
│  10:14:31  GSOC   ee7f3a2018…    32    deadbeef…                          │
│  10:14:30  PSS    abc1234567…    42    {"event":"ping","seq":12}          │
│  ...                                                                      │
│                                                                           │
│  channel: 0xabc1234567890abcdef…fedcba0987654321 · 18 bytes               │
│  data: hello cockpit!                                                     │
│                                                                           │
│  ↑↓/jk select   c clear timeline   Tab switch screen   : command   q quit │
└───────────────────────────────────────────────────────────────────────────┘

The cursor row is reverse-styled. GSOC rows tint blue so PSS and GSOC are distinguishable at a glance even after the kind column scrolls offscreen.

The two-line detail strip shows the full channel hex and the smart-preview of the cursored row's payload (capped at 200 chars). "Smart" means: ASCII when ≥ 75 % of bytes are printable, hex otherwise. Empty payloads render as (empty).
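The heuristic is easy to sketch; the function name is illustrative:

fn smart_preview(payload: &[u8]) -> String {
    const CAP: usize = 200;
    if payload.is_empty() {
        return "(empty)".to_string();
    }
    let printable = payload.iter().filter(|b| b.is_ascii_graphic() || **b == b' ').count();
    let preview = if printable * 4 >= payload.len() * 3 {
        // ≥ 75 % printable: show it as (lossy) text
        String::from_utf8_lossy(payload).into_owned()
    } else {
        payload.iter().map(|b| format!("{b:02x}")).collect()
    };
    preview.chars().take(CAP).collect()
}

fn main() {
    println!("{}", smart_preview(b"hello cockpit!"));                // text preview
    println!("{}", smart_preview(&[0xde, 0xad, 0xbe, 0xef]));        // hex preview
    println!("{}", smart_preview(b""));                              // (empty)
}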

Keymap

  Key           Action
  ↑ / k         Move cursor up
  ↓ / j         Move cursor down
  PgUp / PgDn   Jump 10 rows
  c             Clear the timeline (subscriptions stay open)
  Tab           Cycle to the next screen
  :             Open the command bar

Stopping subscriptions

:pubsub-stop                        # cancels every active subscription
:pubsub-stop pss:abc1234567…        # cancels just the matching one
:pubsub-stop gsoc:0xabc…:def0…      # GSOC subs are keyed by owner:id

Sub-IDs are reported by the :pubsub-pss / :pubsub-gsoc "subscribed: …" line. The cockpit's root cancellation token also fires on quit, so operators don't need to remember to issue :pubsub-stop before exiting.

Filtering the timeline

:pubsub-filter <substring>          # show only matching rows
:pubsub-filter-clear                # remove the active filter

Case-insensitive substring match against the channel hex OR the smart-preview of the payload. The underlying ring still receives every message — filtering is presentation-only, so clearing the filter restores the full view without re-subscribing.

Persisting + replaying history (v1.8 / v1.9)

Set [pubsub].history_file in config.toml to write every delivered frame to a JSONL file:

[pubsub]
history_file = "/var/lib/bee-tui/pubsub.jsonl"
rotate_size_mb = 64        # roll over at 64 MiB (default; 0 disables)
keep_files     = 5         # retain .1 .. .5 (default)

Files are created with mode 0600 (owner-only). When the active file crosses rotate_size_mb, it's renamed to <path>.1 (older rotations shift to .2 .. .N; oldest beyond keep_files is unlinked) and a fresh empty file takes its place.
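A sketch of the size-based roll-over with plain std::fs renames; the real implementation's permission handling and error reporting are simplified away:

use std::fs;
use std::path::Path;

// Roll pubsub.jsonl → .1, shifting older rotations up and dropping the oldest.
fn rotate(path: &Path, rotate_bytes: u64, keep_files: u32) -> std::io::Result<()> {
    let Ok(meta) = fs::metadata(path) else { return Ok(()); };
    if rotate_bytes == 0 || meta.len() < rotate_bytes {
        return Ok(()); // rotation disabled, or the active file is still under the limit
    }
    let numbered = |n: u32| path.with_extension(format!("jsonl.{n}"));
    let _ = fs::remove_file(numbered(keep_files));          // oldest beyond keep_files is unlinked
    for n in (1..keep_files).rev() {
        let _ = fs::rename(numbered(n), numbered(n + 1));    // .N-1 → .N
    }
    fs::rename(path, numbered(1))?;                          // active file becomes .1
    fs::File::create(path)?;                                  // fresh, empty active file takes its place
    Ok(())
}

fn main() -> std::io::Result<()> {
    rotate(Path::new("/var/lib/bee-tui/pubsub.jsonl"), 64 * 1024 * 1024, 5)
}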

To browse a past session without re-subscribing:

:pubsub-replay <path>

Loads the file back into the S14 ring (oldest → newest, capped at 500 entries). Bad lines are skipped with a warn log; replay does not start any watchers.

What it doesn't do

  • No live "tail since T-30s". WebSocket subscriptions only deliver messages sent after the subscription opens — start the sub before the publisher does. (Past sessions can be loaded via :pubsub-replay; live ones cannot be rewound.)
  • No write side. Sending PSS / GSOC requires a stamp + private key, both outside the cockpit's current write surface. Use bee-cli or a dApp for that.
  • No --once mode. A live tail doesn't fit one-shot exit semantics; if you want to gate on "did this topic see N messages in T seconds", script it with a separate tool.

The bottom command-log pane

Naming note. This page is named s10-log.md for legacy reasons. In v0.1 the command log was the tenth screen (S10); since v0.9 it's been a persistent pane at the bottom of every screen, not a screen of its own. The current numbered screens are S1 Health through S14 Pubsub — all 14 of them tab-cycled through the screen strip. The log pane is always visible underneath. The file is kept at its old path so existing bookmarks resolve.

A lazygit-style append-only tail of every HTTP request the cockpit makes to Bee. The trust anchor and live tutorial: operators see the actual request behind every gauge they're watching, with method, path, status, and elapsed time.

Why this screen exists

Three reasons, in priority order:

  1. Trust — when the cockpit says "Bin saturation: 7 starving", an operator with a healthy paranoia wants to verify it's not a render bug. The log pane shows the literal GET /topology that produced the answer.
  2. Live tutorial — every cockpit gauge is fed by some Bee endpoint. New operators can use the pane as a "Bee API by example" — see what's polled, what's NDJSON-streamed, what fires only on user action.
  3. Debug aid — when something is failing (401, 503, connection refused), the failure shows up here in real-time. Way faster than attaching a debugger to Bee.

What's logged

Every HTTP call made via the bee-rs ApiClient is captured by the global LogCapture (installed at startup) and rendered in the pane. That includes:

  • Periodic polls (/health, /status, /wallet, /chainstate, /redistributionstate, /stamps, /topology, /tags, …)
  • On-demand drill fetches (/stamps/<id>/buckets, the per-peer drill fan-out, rchash, etc.)
  • Slash-command requests (:pins-check, :loggers, :set-logger)

Bearer tokens are never logged. The capture sees method, url, status, elapsed_ms, ts — never headers.

The display

 bee::http
  08:12:01.123  GET    /health                        200    34ms
  08:12:01.456  GET    /chainstate                    200    18ms
  08:12:03.001  GET    /redistributionstate           200    21ms
  08:12:03.456  GET    /wallet                        200    15ms
  08:12:05.001  GET    /status                        200    12ms
  08:12:05.222  GET    /tags                          200    45ms
  08:12:05.500  GET    /stamps/abc123…/buckets        200   2840ms
  08:12:05.700  GET    /pingpong/aaa…aaa              200     6ms
  08:12:05.701  GET    /peers/aaa…aaa/balance         200    11ms
  08:12:05.702  GET    /settlements/aaa…aaa           200    14ms
  08:12:05.703  GET    /chequebook/cheque/aaa…aaa     200    19ms

Columns:

  Column      Meaning
  Timestamp   Local time when the request started
  Method      GET (blue), POST (green), PUT (yellow), DELETE (red), PATCH (magenta), HEAD (cyan)
  Path        URL path, scheme + host stripped
  Status      HTTP status code, colour-coded
  Elapsed     Round-trip time in ms

Status colour coding

  • 2xx — green (success)
  • 3xx — info-blue (redirect; rare in Bee)
  • 4xx — warn-yellow (client error: 401 auth, 404 missing, 503 syncing)
  • 5xx — fail-red (server error)
  • no status — dim (request didn't complete; connection refused, timeout)

Path stripping

Scheme + host are dropped so the line stays readable on 80-col terminals. http://localhost:1633/health renders as /health. Query strings are kept (visible on /chunks/stream).
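A sketch of the strip, keeping the query string and avoiding a full URL parser:

// "http://localhost:1633/health" → "/health"; query strings survive.
fn strip_origin(url: &str) -> &str {
    let after_scheme = url.split_once("://").map_or(url, |(_, rest)| rest);
    match after_scheme.find('/') {
        Some(i) => &after_scheme[i..],
        None => "/",
    }
}

fn main() {
    assert_eq!(strip_origin("http://localhost:1633/health"), "/health");
    assert_eq!(strip_origin("http://localhost:1633/pins/check?ref=ee7f3a20"), "/pins/check?ref=ee7f3a20");
}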

How big is the buffer?

200 entries, ring buffer. Older entries fall off as new ones arrive. At a typical poll cadence (~10 calls/sec across streams + polls), you have ~20 s of recent history. That's enough to debug "what just happened" but not enough for long-term forensics.

If you need more history, use :diagnose — it dumps the entire current buffer plus snapshot state to $TMPDIR/bee-tui-diagnostic-<ts>.txt.

Reading patterns

"I just tabbed to a screen and saw 4 calls"

That's the screen activating its on-tab fetches. S2 fires /stamps. S3 fires the chequebook + settlements set. S6 fires /topology (already in the shared stream so often no new call). S8 fires /transactions.

"Same path repeating every 2 seconds"

That's a poller. The cadence per endpoint is documented in each screen's "Snapshot cadence" section.

"Path with ? query string"

WebSocket upgrades + on-demand commands. :pins-check fires /pins/check with optional reference query, etc.

"503 status repeating"

Bee is syncing. This is the cold-start "bee is syncing chunks, gauges will hydrate within ~10 minutes" pattern. See First run.

"401 status"

Auth token mismatch. Either:

  • The token in your config doesn't match Bee's --api-token
  • @env:VAR resolved to an empty string (unset env var)
  • Bee was restarted with a new token

Check S1 Health gate 1 (API reachable) and your config.

"PUT /loggers/..." with a long base64 path

That's :set-logger (or the legacy v1 endpoint). The base64 chunk is the URL-safe-encoded logger expression.

Common scenarios

"Cockpit feels slow"

Watch the Elapsed column. If most calls are <100ms, the slowness is in render, not Bee. If many calls are 500ms+, Bee is the bottleneck. Drop to S8 RPC / API health for the p50 / p99 over the last 100 calls.

"I want to trust the chequebook number"

Watch the log pane while looking at S3. You'll see GET /chequebook/balance returning a 200 every 30 seconds. Compare the cockpit's display with curl http://localhost:1633/chequebook/balance from a separate shell — they'll match.

"I'm writing my own Bee client and want to know what calls to make"

Watch the log pane while flipping through every screen. Every endpoint the cockpit uses is in the bee-rs ApiClient (mirror in bee-py / bee-go); seeing them in real time is faster than reading the OpenAPI spec.

What this screen doesn't show

  • WebSocket frames — the cockpit may subscribe to /chunks/stream, but the tail only shows the upgrade request, not individual frames.
  • Internal state changes — only HTTP calls. The cockpit's own snapshot diffs / cache invalidations don't appear here.
  • Bee server logs — these are Bee's internal logs, not cockpit logs. Use journalctl -u bee or whatever your Bee deployment uses.

Cadence

The log pane doesn't poll. It reads the process-wide live LogCapture and renders whatever's in the buffer at draw time (60 fps — but only repaints when entries change).

Keys

The log pane has no keys of its own. The global keymap (Tab, ?, :, q) covers everything.

If you want to export a slice of the log, use :diagnose which captures the full buffer to a file alongside the snapshot state.

The :command bar

A vim-style colon prompt for actions that don't fit on the keymap: jump to a screen by name, fire on-demand checks, switch profiles, export a diagnostic bundle.

Opening + closing

  Key         Effect
  :           Open the command bar (focus moves to a one-line prompt at the bottom)
  Esc         Close without running
  Enter       Run the command
  Backspace   Delete left

The screen behind the bar keeps refreshing — gauges don't freeze while you're typing.

Status line

After a command runs, the bottom line shows the result for ~3 seconds before fading:

  • Info (green) — → Health, diagnostic bundle exported to /tmp/...
  • Err (red) — unknown command: "...", usage: :set-logger <expr> <level> ...

If you missed the message, just re-run — the status sticks until the next command or the next 3 s tick.

Screen jumps

Every screen has a name; :<name> jumps there.

  Command           Screen
  :health           S1 — Health gates
  :stamps           S2 — Stamps + bucket drill
  :swap             S3 — SWAP / cheques
  :lottery          S4 — Lottery + rchash
  :warmup           S5 — Warmup checklist
  :peers            S6 — Peers + bin saturation
  :network          S7 — Network / NAT
  :api              S8 — RPC / API health
  :tags             S9 — Tags / uploads
  :pins             S10 — Pins
  :manifest <ref>   S11 — Manifests (preloads root + jumps)
  :watchlist        S12 — Watchlist
  :feedtimeline     S13 — Feed Timeline
  :pubsub           S14 — Pubsub watch

These are equivalent to pressing Tab until you reach the target screen, but faster on a 14-screen carousel.

Action commands

  Command                                            Page                  What it does
  :diagnose (alias :diag)                            diagnose              Dump the full snapshot + recent log buffer to a file
  :pins-check (alias :pins)                          pins-check            Run a full integrity check on every locally pinned reference
  :loggers                                           loggers               Snapshot the live logger registry to a file
  :set-logger <expr> <level>                         loggers               Change one logger's verbosity at runtime
  :topup-preview <batch> <amount>                    stamp-previews        Predict TTL + cost of topping up an existing batch
  :dilute-preview <batch> <new-depth>                stamp-previews        Predict capacity / TTL change of diluting a batch
  :extend-preview <batch> <duration>                 stamp-previews        Predict cost to gain N days/hours of TTL
  :buy-preview <depth> <amount>                      stamp-previews        Predict TTL / capacity / cost of a hypothetical fresh buy
  :buy-suggest <size> <duration>                     stamp-previews        Suggest the minimum (depth, amount) to cover a target
  :probe-upload <batch>                              probe-upload          Upload one synthetic 4 KiB chunk; report end-to-end latency
  :upload-file <path> <batch>                        upload-file           Upload a single local file via POST /bzz, return Swarm reference
  :upload-collection <dir> <batch>                   upload-collection     Recursive directory upload as a Swarm collection (tar POST /bzz); auto-detects index.html
  :feed-probe <owner> <topic>                        feed-probe            Latest update for a feed (read-only lookup)
  :feed-timeline <owner> <topic> [N]                 S13 — Feed Timeline   Walk a feed's history (newest first), open S13
  :watch-ref <ref> [interval]                        watch-ref             Re-run :durability-check on <ref> periodically (default 60 s)
  :watch-ref-stop [ref]                              watch-ref             Cancel one (or all) active :watch-ref daemons
  :pubsub-pss <topic>                                S14 — Pubsub          Subscribe to a PSS topic, surface frames in S14
  :pubsub-gsoc <owner> <id>                          S14 — Pubsub          Subscribe to a GSOC SOC, surface frames in S14
  :pubsub-stop [sub-id]                              S14 — Pubsub          Cancel one (or all) active pubsub subscriptions
  :pubsub-filter <substring>                         S14 — Pubsub          Show only S14 rows whose channel/preview contains substring
  :pubsub-filter-clear                               S14 — Pubsub          Remove the active S14 filter
  :pubsub-replay <path>                              S14 — Pubsub          Load a prior session's pubsub-history JSONL into S14
  :manifest <ref>                                                          Open a Mantaray manifest for browsing (preloads root + jumps to S11 Manifests)
  :inspect <ref>                                                           Universal "what is this thing?" — auto-detects manifest / raw chunk / feed manifest
  :durability-check <ref>                                                  Walk every chunk of <ref> and report retrieved / lost / corrupt / network-seen counts
  :plan-batch <prefix> [usage] [ttl] [extra-depth]   stamp-previews        Run beekeeper-stamper's Set algorithm read-only — outputs PlanAction (None/Topup/Dilute/Both)
  :check-version                                                           GitHub Releases API check; reports if a newer bee-tui is published
  :config-doctor                                                           Read-only audit of bee.yaml against swarm-desktop's migration rules
  :price                                             S3 — SWAP             xBZZ → USD lookup via public token service
  :basefee                                           S3 — SWAP             Gnosis Chain JSON-RPC basefee + tip lookup
  :grantees-list <ref>                                                     Read-only GET /grantee/{ref} for ACT grantee inspection
  :hash <path>                                                             Local Swarm-hash of a file via Mantaray (no Bee call)
  :cid <ref> [--type=manifest|feed]                                        Local Reference → CID conversion (no Bee call)
  :depth-table                                                             Print canonical depth → capacity table (no Bee call)
  :gsoc-mine <overlay> <identifier>                                        Local CPU work — find a PrivateKey whose SOC address matches <overlay>
  :pss-target <overlay>                                                    Extract the 4-hex-char target prefix Bee accepts on /pss/send
  :watchlist (jump)                                                        Jump to S12 Watchlist (history of :durability-check results)
  :context <name> (alias :ctx)                       context               Switch to a different node profile from your config
  :context                                           context               List configured profiles (no switch)
  :nodes                                             context               Open the node-picker overlay (also Ctrl+N)
  :quit (alias :q)                                                         Exit the cockpit

Why a colon prompt?

Two reasons:

  1. Discoverability without clutter. The cockpit can have ten screen-jumps + half a dozen action commands without each one needing its own keybinding. The keymap stays minimal (Tab, Enter, Esc, ?, :, q); rare commands live behind the colon.
  2. Familiarity. Anyone who's used vim, k9s, or lazygit has the muscle memory. The cockpit's job is to not require new muscle memory.

What's not on the bar

These actions deliberately don't have a :command form:

  • Cashing out cheques. Cashout is on-chain; it costs gas; you should think about whether to do it. The cockpit surfaces the data (S3 Pane 2) but won't trigger the on-chain transaction. Use curl POST /chequebook/cashout/<peer> if you really mean it.
  • Buying / topping up postage. Same reasoning. S2 shows TTL and worst-bucket; the :*-preview verbs (see stamp-previews) compute predicted TTL/cost without writing — but bee postage buy and bee postage topup themselves are operator decisions with funding consequences and stay outside the cockpit.
  • Stake deposit / withdraw. Same.
  • Connect / disconnect peers. Bee's kademlia handles this without operator help; manual connect is a debugging escape hatch.

The cockpit is a read-mostly observer. The mutating commands it does have are scoped to upload + diagnostic state, never chain-mutation:

  • :set-logger — changes a Bee logger level (no funds, no chain)
  • :probe-upload — uploads one synthetic 4 KiB chunk against a caller-supplied stamp to verify the upload path end-to-end
  • :upload-file / :upload-collection — real content uploads via POST /bzz; capped at 256 MiB (collections also at 10k entries). Stamp consumption is the operator's responsibility via the explicit <batch> argument.

There is intentionally no :reupload, :tx-bump, or :grantees-create verb yet — write tier verbs that consume stamps or mutate chain state warrant their own UX + confirmation pass. The current write surface stops at uploads.


:diagnose

Dump the cockpit's current snapshot + recent HTTP log to a text file. The thing you attach to a support thread.

:diagnose
:diag       (alias)

What it captures

Three sections, in this order:

  1. Profile — the active node's name + URL.
  2. Health gates — every S1 gate's status and value at the moment of capture. So if a reviewer asks "what was bin saturation showing?" you don't need to remember; it's in the bundle.
  3. Last 50 API calls — most recent entries from the live LogCapture buffer. Method, path, status, elapsed.

The output looks like:

# bee-tui diagnostic bundle
# generated UTC 2026-05-07T08:14:32Z

## profile
  name      prod-1
  endpoint  http://10.0.1.5:1633

## health gates
  ✓ API reachable                last_ping 34ms
  ✓ Chain RPC                    block 234,512 / tip 234,514 (Δ +2)
  ✓ Wallet funded                BZZ 12.50 · native 0.0421 ETH
  ⚠ Bin saturation               2 starving: bin 4, bin 5
  ...

## last API calls (path only — Bearer tokens, if any, live in headers and aren't captured)
  08:14:01.123 GET   /health                          200      34ms
  08:14:01.456 GET   /chainstate                      200      18ms
  ...

## generated by bee-tui 1.0.0

Where the file goes

$TMPDIR/bee-tui-diagnostic-<unix-timestamp>.txt. On Linux that's typically /tmp/. The cockpit prints the full path in the status line:

diagnostic bundle exported to /tmp/bee-tui-diagnostic-1715056472.txt

Each invocation gets its own timestamp; you can run :diagnose multiple times in a session and not overwrite the earlier capture.

What's NOT captured

This is the important part — the bundle is safe to share:

  • Bearer tokens. They live in HTTP Authorization headers; LogCapture only sees method + URL. Tokens never appear in the file.
  • Request / response bodies. Only metadata (status code, elapsed time) is in the buffer.
  • Wallet private keys. Bee doesn't expose them via the API and the cockpit never asks.
  • Anything from your config.toml beyond the active profile name + URL. The TOML file itself is not read into the bundle.

You can paste the bundle into a public GitHub issue, a support email, or a Discord help channel without redacting.

When to use it

  • Bug report: paste the bundle into the issue body. Reviewers see what your node actually looked like at the moment of the bug, not just your description.
  • Operator handoff: when transferring node ownership, capture a baseline "what does normal look like" bundle.
  • Before a risky operation: backup the current state. Doesn't roll back anything; just records what was true.

What to compare against

Two bundles, 5 minutes apart, are a quick diff for "did anything change?". Just diff them.
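
For example, assuming the two timestamps below are the ones your two :diagnose runs printed:

diff /tmp/bee-tui-diagnostic-1715056472.txt \
     /tmp/bee-tui-diagnostic-1715056772.txt
# unchanged gate lines drop out; anything that flipped shows up as a </> pair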

For longer-term comparison, save bundles per day; the elapsed columns let you see whether call latency drifted.


:pins-check

Run a full integrity check on every locally pinned reference. Bee streams one NDJSON record per pin; the cockpit dumps each one to a file as it arrives so you can tail -f it.

:pins-check
:pins         (alias)

What gets checked

Bee's GET /pins/check walks every locally pinned root reference and verifies, per pin:

  • total chunks in the manifest
  • missing — chunks the manifest references but local storage doesn't have
  • invalid — chunks present but failing hash verification

A pin is healthy when missing == 0 && invalid == 0.

Where the file goes

$TMPDIR/bee-tui-pins-check-<profile>-<unix-timestamp>.txt.

Per-profile filename: switching to a different :context mid-check won't conflict with another profile's parallel run. The original check runs to completion against the profile that started it.

The cockpit prints the path in the status line:

pins integrity check running → /tmp/bee-tui-pins-check-prod-1-1715056472.txt
                                (tail to watch progress)
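
To follow progress and count problems as the file grows, a minimal sketch against the example path above:

tail -f /tmp/bee-tui-pins-check-prod-1-1715056472.txt
# once the "# done." marker appears:
grep -c UNHEALTHY /tmp/bee-tui-pins-check-prod-1-1715056472.txt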

File format

A header followed by one line per pin, ending with a # done. marker:

# bee-tui :pins-check
# profile  prod-1
# endpoint http://10.0.1.5:1633
# started  2026-05-07T08:14:32Z

abcd1234…   total=8192   missing=0     invalid=0    healthy
def56789…   total=1684   missing=12    invalid=0    UNHEALTHY
9876fedc…   total=4096   missing=0     invalid=2    UNHEALTHY
ba98cdef…   total=64     missing=0     invalid=0    healthy
# done. 4 pins checked.

If the check itself errors out (server 500, connection lost), the last line is # error: <message> instead of # done..

The healthy / UNHEALTHY literal at the end of each line is for grep-ability:

grep UNHEALTHY ~/path/to/bundle.txt

…lists every reference that needs attention.

Why this command exists

Pinning is the only mechanism by which Bee guarantees your chunks stay locally available. If a chunk is missing from a pinned manifest, your local copy is gone and the network may not have it either (depending on network density). If chunks are invalid, your local storage has been corrupted — disk failure, partial write, etc.

Either case is silent until you check. :pins-check is the audit trail: run it, save the file, and you have a point-in-time integrity snapshot per pinned reference.

How long it takes

/pins/check walks every chunk on disk. For a node with:

  • A handful of small pins (< 10 GB each): seconds.
  • Hundreds of pins or large multi-GB pins: minutes.

The cockpit doesn't block — the check runs in the background, the file appends as Bee streams the response, and you can keep navigating screens. A second :pins-check while one is in flight just kicks off another (Bee does not serialise; the HTTP server handles them in parallel).

What to do with UNHEALTHY pins

For invalid chunks (corruption): your local storage is broken. Best move is to re-download the reference (it's still on the network if other nodes have it) and then re-pin. Long-term, check disk health (smartctl).

For missing chunks: similar — re-fetch from the network or accept the loss. Bee won't auto-heal pins; the operator has to either re-upload or re-pin from a known good source.

If a pin shows missing > 0 and the cockpit's S1 Reserve gate is also failing, your node is in a bad state — drop to S6 Peers + S7 Network to confirm connectivity is OK before re-fetching.

What this command doesn't do

  • Doesn't try to repair anything. Read-only check.
  • Doesn't unpin orphans. Local pins that point to partially-missing references stay pinned; you decide whether to remove them.
  • Doesn't verify network availability. "missing locally" is the only check; if a chunk is missing here but available on the network, Bee will lazily re-fetch it on the next download. The check just reports current local state.


:loggers and :set-logger

Inspect and mutate Bee's runtime logger registry. Useful when debugging a specific subsystem (push-sync, pricer, swap) without restarting the node.

:loggers

Snapshot the current logger registry to a file.

:loggers

Bee maintains a global registry of named loggers, each with its own verbosity. The list grows as Bee initialises modules — at steady state on a healthy node you'll see ~80–120 loggers covering pushsync, pullsync, swap, postage, storageincentives, etc.

Where the file goes

$TMPDIR/bee-tui-loggers-<profile>-<unix-timestamp>.txt. Like :pins-check, the filename is per-profile so parallel invocations across :context switches don't collide.

Sort order

Output is sorted by verbosity descending, then by logger name. Loud loggers float to the top so the operator immediately sees what's currently chatty:

# bee-tui :loggers
# profile  prod-1
# endpoint http://10.0.1.5:1633
# started  2026-05-07T08:14:32Z
# 96 loggers registered
# VERBOSITY  LOGGER
  all        node/pushsync
  debug      node/pricer
  info       node/api
  info       node/postage
  warning    node/swap
  warning    node/topology
  error      node/p2p
  none       node/pullsync
  ...
# done.

If :set-logger set push-sync to all an hour ago, you can run :loggers to confirm it's still there (and at what level).

:set-logger <expr> <level>

Change one logger's verbosity at runtime.

:set-logger node/pushsync debug
:set-logger node/swap     warning
:set-logger .             info        # all loggers

Arguments

| Arg | Allowed values | Description |
| --- | --- | --- |
| <expr> | logger name, or . for all | Path-style logger identifier as Bee emits them (node/pushsync, node/postage/listener, etc.). The literal . matches every registered logger — Bee broadcasts the level to all of them. |
| <level> | none, error, warning, info, debug, all | The verbosity. none silences entirely; all is the loudest. |

bee-rs validates the level client-side before any HTTP request goes out, so a typo errors immediately:

:set-logger node/swap warn
→ usage: :set-logger <expr> <level>  (level: none|error|warning|info|debug|all; expr: e.g. node/pushsync or '.' for all)

What happens under the hood

The cockpit fires:

PUT /loggers/<base64url(expr)>/<level>

bee-rs URL-safe-encodes <expr> and constructs the path. The result (success or error) is appended to a per-call log file:

$TMPDIR/bee-tui-set-logger-<profile>-<unix-timestamp>.txt

Containing:

# bee-tui :set-logger
# profile  prod-1
# endpoint http://10.0.1.5:1633
# expr     node/pushsync
# level    debug
# started  2026-05-07T08:14:32Z
# done. node/pushsync → debug accepted by Bee.

Verifying the change

After :set-logger, the cockpit's status line says:

set-logger "node/pushsync" → "debug" (PUT in-flight; check :loggers to verify)

Run :loggers to confirm the new level took effect. The PUT is fire-and-forget; the verification is a separate GET.

Why these commands exist

Bee's runtime logging is the way to debug specific subsystems. Without these commands, the operator would have to:

  1. Find the bee-rs (or curl) command for PUT /loggers/...
  2. Base64url-encode the logger expression
  3. Run the curl in a separate shell
  4. Run a second curl to verify

:loggers + :set-logger collapse this to one keystroke each, with the verification dump landing in a tail-able file.
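
For comparison, the manual flow the cockpit replaces looks roughly like this (a sketch, not a supported interface; base64url tooling and auth headers vary by setup):

# manual equivalent of :set-logger node/pushsync debug
EXPR=$(printf 'node/pushsync' | basenc --base64url | tr -d '=')
curl -X PUT "http://localhost:1633/loggers/$EXPR/debug"
# verify against the registry listing
curl -s "http://localhost:1633/loggers"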

The set-logger fix story: bee-rs set_logger_verbosity was silently broken in versions before 1.6 — it emitted PUT /loggers/{expr} (no verbosity in the path), which Bee accepted with a 200 but applied nothing. bee-rs 1.6 added the correct set_logger(expr, verbosity) and the cockpit uses that exclusively.

Common scenarios

"I want push-sync logs at debug level for 30 minutes"

:set-logger node/pushsync debug

Watch Bee's logs (journalctl / docker logs / stdout). When done:

:set-logger node/pushsync info

There's no auto-revert.

"What's currently at debug or louder?"

:loggers

…then grep -E 'all|debug' /tmp/bee-tui-loggers-....

"Quiet everything except errors"

:set-logger . error

The . expression hits every logger.


Stamp dry-run previews

Four read-only command-bar verbs that answer "what would happen if I…" questions about postage batches without issuing any chain-bound write. Useful when you want to plan a topup, dilute, or fresh buy and need the BZZ cost / TTL impact ahead of time.

| Verb | Args | Answers |
| --- | --- | --- |
| :topup-preview | <batch-prefix> <amount-plur-per-chunk> | new TTL + BZZ cost of adding this much per-chunk PLUR |
| :dilute-preview | <batch-prefix> <new-depth> | new capacity, halved TTL, depth delta (cost is always 0 BZZ — dilute redistributes the existing balance) |
| :extend-preview | <batch-prefix> <duration> | per-chunk PLUR + BZZ cost to gain that much TTL |
| :buy-preview | <depth> <amount-plur-per-chunk> | TTL, capacity, and BZZ cost of a hypothetical fresh batch |
| :buy-suggest | <size> <duration> | minimum (depth, amount) that covers the target — the inverse of :buy-preview |

<batch-prefix> is the 8-character hex prefix shown in the S2 table (a trailing … is allowed; bee-tui strips it). Ambiguous prefixes print the matches and ask for a longer prefix.

<duration> accepts 30d, 12h, 90m, 45s, or plain seconds.

<size> (for :buy-suggest) accepts 5GiB, 2TiB, 512MiB, 100MB, 4096B, or just plain bytes. Single-letter shorthands (5G, 2T, 100M, 4K) default to binary (powers of two) because Bee batch capacities are always 2^depth × 4 KiB. Decimal suffixes (GB, MB) get the SI 1000-based interpretation if you explicitly use them.

<amount-plur-per-chunk> is the per-chunk PLUR amount — the same field stored on the batch. 1 BZZ = 10¹⁶ PLUR; for reference, a per-chunk amount of ~414 720 000 PLUR works out to ≈ 0.04 BZZ on a depth-20 batch (amount × 2²⁰ ÷ 10¹⁶).

Worked examples

:topup-preview a1b2c3d4 100000000000
→ topup-preview a1b2c3d4…: +0.0419 BZZ (delta 100000000000 PLUR/chunk),
  TTL 47d 12h → 70d  6h
:dilute-preview a1b2c3d4 23
→ dilute-preview a1b2c3d4…: depth 22→23, capacity 16.0 GiB→32.0 GiB,
  TTL 47d 12h→23d 18h, cost 0 BZZ
:extend-preview a1b2c3d4 30d
→ extend-preview a1b2c3d4… +30d  0h: cost 0.0078 BZZ
  (1860000000000 PLUR/chunk), TTL 47d 12h → 77d 12h
:buy-preview 22 100000000000000
→ buy-preview depth=22 amount=100000000000000 PLUR/chunk:
  capacity 16.0 GiB, TTL 47d 12h, cost 41.9430 BZZ
:buy-suggest 5GiB 30d
→ buy-suggest 5.0 GiB / 30d  0h: depth=21 amount=518400000000 PLUR/chunk
  → capacity 8.0 GiB, TTL 30d  0h, cost 21.7268 BZZ

:buy-suggest is the inverse of :buy-preview. Operators usually think "I want 5 GiB for 30d" — not "depth=21, amount=5.18e11". The suggester rounds the required chunk count up to the next power of two, i.e. the smallest depth that fits (so the actual capacity is always ≥ your target, with the headroom shown verbatim), and rounds duration up in chain blocks (so actual TTL ≥ your target). Pass the suggested numbers to the real bee postage buy / swarm-cli stamp buy if you want to execute.

Why dry-run, not buy?

bee-tui issues no chain-bound writes by design (PLAN principle 3). Previews let operators get the predictive answers they normally have to leave the cockpit for (swarm-cli stamp buy --dry-run, calculate_bzz.sh) without bee-tui ever issuing a write. If you want to actually execute the buy, copy the numbers into swarm-cli or your scripted flow.

Formulas

Every formula matches the canonical math used across swarm-cli, beekeeper-stamper, gateway-proxy, and bee-scripts:

cost_bzz   = amount × 2^depth / 1e16
ttl_blocks = amount / current_price
ttl_secs   = ttl_blocks × 5  (Gnosis blocktime)
capacity   = 2^depth × 4 KiB

dilute(d → d+k):
  new_amount = old_amount / 2^k
  new_ttl    = old_ttl / 2^k
  new_cap    = capacity × 2^k
  cost       = 0

buy-suggest (target_bytes, target_secs):
  chunks_needed = ceil(target_bytes / 4096)
  depth         = max(17, ceil(log2(chunks_needed)))   # round up; clamp to Bee minimum
  amount        = ceil(target_secs / 5) × current_price # round up in blocks

current_price comes from S1's /chain-state poll — if the header still says "loading…" the preview will tell you the chain price isn't ready yet and to retry.
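
To sanity-check a preview by hand, the same math runs fine in a shell (the numbers below are illustrative, not live chain values):

depth=22
amount=100000000000000      # PLUR per chunk
price=24000                 # PLUR per chunk per block, from /chainstate
echo "cost_bzz  $(echo "scale=4; $amount * 2^$depth / 10^16" | bc)"
echo "ttl_secs  $(echo "$amount / $price * 5" | bc)"
echo "capacity  $(echo "2^$depth * 4096" | bc) bytes"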

:probe-upload

Uploads one synthetic 4 KiB chunk to Bee and reports the end-to-end latency. The cockpit is otherwise read-only — this is the deliberate exception.

:probe-upload <batch-prefix>

<batch-prefix> is the 8-character hex prefix shown in the S2 table (a trailing … is allowed; bee-tui strips it). The chosen batch must be usable and have batch_ttl > 0.

What it answers

"Can my node actually take a stamp + persist a chunk + return its reference, end-to-end?"

/readiness returning 200 means Bee's HTTP server is up. It does not mean the storage path works — a corrupted RocksDB, an exhausted disk, or a misconfigured stamp signer can all return a healthy /readiness while uploads fail. :probe-upload exercises the same path real uploads take.

Output

The verb returns immediately with an "in flight" notice; the actual outcome lands on the command bar when Bee responds.

:probe-upload a1b2c3d4
→ probe-upload to batch a1b2c3d4… in flight — result will replace this line

(a few hundred ms later …)
→ probe-upload OK in 245ms — batch a1b2c3d4…, ref e7f3a201…

On failure:

→ probe-upload FAILED after 312ms — batch a1b2c3d4…: 422 Unprocessable Entity

Cost

One stamped chunk on the chosen batch:

  • Bucket cost — one collision counted against whichever bucket the chunk address falls in. With a healthy batch (depth ≥ 22, utilization « bucket_capacity) this is invisible.
  • PLUR costcurrent_price PLUR per chunk per block, times the batch's remaining TTL in blocks. With typical amounts that's on the order of 1e-12 BZZ per probe — well under a millionth of a cent.

Each invocation generates a unique chunk (timestamp-randomised payload) so Bee's content-addressing dedup doesn't short-circuit the second probe and skew the latency reading.

When to use it

  • After a Bee restart, before resuming production uploads.
  • Diagnosing intermittent upload failures: run a few back-to-back, watch the latency distribution.
  • Verifying a stamp is actually usable end-to-end (the bucket the chunk lands in might already be saturated even when worst_bucket_pct looks fine — :probe-upload will tell you).

What it doesn't do

  • Doesn't verify retrieval. A future iteration may follow up with a GET /chunks/<ref> to measure full round-trip; for now the verb stops at upload success.
  • Doesn't run repeatedly. One call = one chunk. No built-in loop. If you need throughput / latency curves, drive it from bee-bench instead.
  • Doesn't pick a batch for you. Explicit <batch-prefix> is required so you always know which batch you stamped against.

:upload-file

Uploads a single local file via POST /bzz to a chosen postage batch and returns the resulting Swarm reference. Unlike :probe-upload, which posts a synthetic 4 KiB chunk to verify the upload path, :upload-file ships an actual operator-supplied file the same way swarm-cli upload would.

:upload-file <path> <batch-prefix>

<path> is a local file (directories are rejected; use :upload-collection for those). The file is capped at 256 MiB so the cockpit's event loop doesn't stall while reading it; for larger uploads use swarm-cli, where the upload runs out of process.

<batch-prefix> is the 8-character hex prefix shown in the S2 table (a trailing … is allowed; bee-tui strips it). The chosen batch must be usable and have batch_ttl > 0.

Content type

Inferred from the extension for common types (.html → text/html, .json, .png, .pdf, .tar.gz, .wasm, …). Anything not in the table is uploaded as application/octet-stream — Bee will still serve it on download but the Content-Type header on GET /bzz/<ref> will be the generic value. Override semantics are not exposed yet (no --content-type flag); if you need a specific MIME, rename the file or use swarm-cli.

Output

The verb returns immediately with an "in flight" notice; the actual outcome lands on the command bar when Bee responds.

:upload-file ./build/index.html a1b2c3d4
→ upload-file (12_345B) to batch a1b2c3d4… in flight — result will replace this line

(a few hundred ms later …)
→ upload-file OK in 312ms — 12345B → ref e7f3a201… (batch a1b2c3d4…)

On failure:

→ upload-file FAILED after 412ms — batch a1b2c3d4…: 413 Payload Too Large

CI mode (--once upload-file)

The same verb is available out of the TUI for snapshot-publish workflows:

bee-tui --once --json upload-file ./dist/site.html a1b2c3d4

Emits structured JSON including reference, size, content_type, and batch_id so a downstream step can pin the ref or post it to a release artefact.
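
A sketch of consuming that JSON in a later pipeline step (assumes the reference lands under .data.reference, the same field the upload-collection example further down relies on):

REF=$(bee-tui --once --json upload-file ./dist/site.html a1b2c3d4 | jq -r .data.reference)
echo "published ref=$REF"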

When to use it

  • Publishing a single file (a static page, a release asset, a PDF) without leaving the cockpit.
  • Pinning a known input with a known batch + known content type so the swarm hash is reproducible across runs.
  • Verifying a fresh batch is wired correctly by uploading a real file end-to-end (:probe-upload covers the chunk path; this covers the manifest path).

What it doesn't do

  • No directory upload. Single-file scope only; directories go through :upload-collection.
  • No retrieval check. Stops at upload success; pair with :inspect <ref> after if you want to verify the manifest is parseable.
  • No automatic stamp picking. Explicit <batch-prefix> is required so you always know which batch your upload was stamped against.

:upload-collection

Recursively walks a local directory and uploads it as a Swarm collection via POST /bzz (tar). The natural complement to :upload-file for publishing static sites, build-output bundles, and dApp distributions without leaving the cockpit.

:upload-collection <dir> <batch-prefix>

<dir> is a local directory. The walker skips:

  • Hidden entries — anything whose name starts with . (.git, .env, .DS_Store, etc.).
  • Symlinks — never followed, regardless of target. Defends against accidentally publishing files outside the collection root.
  • Non-UTF-8 names — Bee's manifest forks are UTF-8 only; silently dropped.

Caps mirror :upload-file's ceilings:

  • 256 MiB total across all entries.
  • 10 000 entries maximum.

Path normalisation: every entry's tar path is the relative path from <dir>, with forward slashes regardless of host OS. So ./dist/assets/logo.png becomes assets/logo.png in the manifest.

Default index

When the walked tree contains an index.html at the root (depth 1), it's auto-set as the collection's Swarm-Index-Document header — Bee then serves that file when a client requests GET /bzz/<ref>/. Nested index.html files inside subdirectories are uploaded as ordinary entries; no implicit index promotion.

Output

Returns immediately with an "in flight" notice including the entry count, total byte size, and default-index path; the actual outcome lands when Bee responds.

:upload-collection ./dist a1b2c3d4
→ upload-collection 47 files (3_241_092B) · default index=index.html to batch a1b2c3d4… in flight — result will replace this line

(several hundred ms later …)
→ upload-collection OK in 412ms — 47 files, 3241092B → ref e7f3a201… (batch a1b2c3d4…) · index=index.html

On failure:

→ upload-collection FAILED after 1240ms — ./dist → batch a1b2c3d4…: 413 Payload Too Large

CI mode (--once upload-collection)

bee-tui --once --json upload-collection ./dist a1b2c3d4

Emits structured JSON with reference, entry_count, total_bytes, default_index, and batch_id so a snapshot-publish workflow can pin the ref or post the URL without parsing the human line.

When to use it

  • Publishing a static-site / dApp distribution (the canonical dist/ directory) end-to-end from the cockpit.
  • Pinning a reproducible swarm hash for a directory tree — walking is deterministic (sorted entries, no time-of-day inputs) so two runs over identical content produce identical references.
  • Verifying a fresh batch is wired correctly by uploading a small directory end-to-end (the manifest path, with forks).

What it doesn't do

  • No recursive symlink follow. If you need symlink targets uploaded, materialise them locally (cp -L) first.
  • No explicit index override. v1.5 ships the auto-detect-index.html path only. A future iteration may add --index <path> for cases where the entry file is named differently.
  • No retrieval check. Stops at upload success; pair with :inspect <ref> after if you want to verify the manifest is parseable.
  • No automatic stamp picking. Explicit <batch-prefix> is required so you always know which batch the upload was stamped against.

:feed-probe

Single-shot lookup of the latest update of a Swarm feed. Read-only (no chain interaction, no stamp consumption); the natural counterpart to bee-tui's existing :gsoc-mine and :pss-target verbs which serve the writer side.

:feed-probe <owner> <topic>

<owner> is a 20-byte Ethereum address — 0x-prefixed or bare 40-hex (case-insensitive).

<topic> accepts two forms, picked by heuristic:

  • 64 hex chars (with or without 0x) is treated as the raw 32-byte topic.
  • Anything else is keccak256(utf8(s)), mirroring bee-js's Topic.fromString and bee-cli's topic-from-string. Operators rarely think in raw 32-byte topics; they think in "my-app/notifications".

Output

The verb returns an "in flight" notice; Bee's /feeds/{owner}/{topic} lookup can take 30-60 s on a fresh feed (epoch index walk), so the result lands asynchronously on the command bar.

:feed-probe 0x1234… my-app/notifications
→ feed-probe owner=12345678 in flight — result will replace this line (first lookup can take 30-60s)

(several seconds later …)
→ feed-probe owner=12345678 · index=42 · ts=1762000000 (3m) · ref=e7f3a201… (4123ms)

For raw feeds whose payload isn't a 32 / 64-byte reference, the tail shows payload=<n>B instead of ref=....

CI mode (--once feed-probe)

bee-tui --once --json feed-probe 0x1234… my-app/notifications

Emits structured JSON with owner, topic, topic_was_string, topic_string, index, index_next, timestamp_unix, payload_bytes, and reference. A snapshot-publish workflow can poll a known feed and gate on index advancing across runs:

PREV=$(cat /tmp/last-feed-index)
NEXT=$(bee-tui --once --json feed-probe $OWNER $TOPIC | jq -r .data.index)
if [[ "$NEXT" == "$PREV" ]]; then
  echo "feed didn't advance — alert"
  exit 1
fi

When to use it

  • Confirming a writer-side workflow actually published an update (smoke test after :upload-file + a separate update_feed call).
  • CI gates that should fail when an upstream feed stops advancing (broken publisher, out-of-funds signer, etc.).
  • Investigating "is this feed alive?" without firing up a full bee-cli or bee-js setup.

What it doesn't do

  • No history walk — only the latest update is fetched. For history, use the Feed Timeline screen or the feed-timeline --once verb.
  • No payload decoding — when reference_hex is None the verb just reports the byte size; if you want the contents, pass it through :inspect or download_data separately.
  • No write side. :feed-probe is read-only; updating a feed requires a private key + a stamp, both outside the cockpit's current write surface.

:watch-ref / :watch-ref-stop

Daemon mode for :durability-check. Runs the chunk-graph walk on a reference periodically and feeds each result into the S12 Watchlist — the same screen single-shot :durability-check already populates. Useful for "watch this ref overnight" workflows where you want to know the moment a chunk goes missing.

:watch-ref       <ref> [interval-secs]
:watch-ref-stop  [ref]

Starting a daemon

<ref> is a 64-character hex Swarm reference (32 bytes, with or without the 0x prefix).

[interval-secs] is optional, defaults to 60 s, and is clamped to the inclusive range 10..=86_400 (10 s to one day). Below 10 s the per-chunk fetch storm crowds out other cockpit polling; above one day the cockpit's tick cadence makes the daemon nearly indistinguishable from a manual re-run.

:watch-ref e7f3a201cd1f0e9b… 300
→ watch-ref e7f3a201 started — re-checking every 300s; results in S12 Watchlist

Each iteration runs the full BMT-verified durability walk shipped in v1.5; new chunks_corrupt counts surface in the S13 row alongside chunks_lost / chunks_errors.

Re-issuing :watch-ref for a ref already being watched cancels the prior daemon and starts a fresh one — convenient for changing the interval without an explicit stop:

:watch-ref e7f3a201cd1f0e9b… 60
→ watch-ref e7f3a201 started — re-checking every 60s; results in S12 Watchlist

Stopping a daemon

:watch-ref-stop                   # cancels every active daemon
:watch-ref-stop e7f3a201cd1f0…    # cancels just the one watching this ref

A daemon's tokio task observes the cancel on its next iteration boundary — up to interval-secs later if a check is in flight or the sleep is mid-window. The cockpit's hashmap entry (and the "X active daemon(s)" count in :watch-ref-stop with no arg) is updated immediately.

The cockpit's root cancellation token also fires on quit, so operators don't need to remember to issue :watch-ref-stop before exiting.

Output

The verb itself is synchronous (just spawns the loop). Each periodic check's result lands in the S12 Watchlist row history the same way a manual :durability-check does — newest first, ring-buffered to the screen's row cap.

When to use it

  • Overnight monitoring of a known ref. Pair with [alerts] to get a webhook when the durability gate flips on the last iteration's outcome (v1.4 alerting + v1.6 watch-ref are designed to compose).
  • Verifying a freshly published ref propagates. Set a 30 s interval after :upload-collection and watch the lost count converge to zero as the network catches up.
  • Catching transient peer churn. A single :durability-check may report errors=1 from a flaky peer; a daemon at 5 min intervals shows whether the issue is persistent.

What it doesn't do

  • No state persistence. Daemons live in App memory only; cockpit restart drops them. Re-issue from a startup script if you want them restored.
  • No swarmscan cross-check yet. The original v1.6 plan mentioned a swarmscan probe ("does the network see this ref independent of my local node"); deferred to v1.7. Each iteration today asks only the local Bee node.
  • No per-ref interval override after start. Re-issue :watch-ref <ref> <new-interval> to swap; the prior daemon is cancelled before the new one starts.

:context / :nodes — multi-node switching

Switch the cockpit's active node profile without restarting. The screen layout stays the same; the data behind it re-points to a different Bee endpoint.

:context              # list known profiles (no switch)
:context <name>       # switch to <name>
:ctx <name>           # alias
:nodes                # open the picker overlay (also Ctrl+N)

The picker overlay (added v1.10.0)

Ctrl+N (or :nodes) opens a centred list of every [[nodes]] entry from config.toml. The cursor lands on the active node; ↑/↓ (or j/k) move it, Enter switches, Esc or Ctrl+N closes without switching. The active node and the default = true entry each carry their own marker. The picker is just a thin wrapper around the switch flow described below — same teardown, same rebuild, same status-line confirmation.

Listing profiles

With no argument, :context lists every profile from your config.toml:

usage: :context <name>  (known: prod-1, prod-2, lab)

(The "usage" wording is intentional — there's no read-only mode for the command; :context always wants either a target or to tell you it doesn't have one.)

Switching

Switching is a clean re-point:

  1. Cancel every watcher subscribed to the old hub
  2. Build a new ApiClient against the named node
  3. Spawn a fresh BeeWatch hub against the new client
  4. Rebuild the screen list against the new watch receivers

The current screen index is preserved — if you were on S6 Peers, you stay on S6 Peers, just looking at a different node's peers.

The cockpit's status line confirms:

switched to context prod-2 (http://10.0.1.6:1633)

What's preserved across a switch

  • Current screen — your Tab cursor doesn't reset.
  • Help overlay state — if ? was open, it stays open.
  • Theme + ASCII fallback — UI prefs are config-level, not profile-level.

What's lost across a switch

A switch is intentionally treated as "fresh slate" — the same way it would be on app restart. Everything per-screen that wasn't pulled from the new hub gets reset:

  • Lottery rchash benchmark history — the in-flight or completed bench from the old node is gone. Press r again to benchmark the new node.
  • Network reachability stability timer — the "stable for 9m" counter restarts at 0 because we have no signal yet from the new node.
  • Selection cursors in S2 / S6 — reset to row 0; the underlying batches / peers are different.
  • Drill panes — any open S2 bucket drill or S6 peer drill is closed.
  • Command status line — replaced with the switch confirmation.

The watch streams (S1's 2-second polls, S6's topology, etc.) re-hydrate within their normal cadence — typically the cockpit feels live within 5 seconds of the switch.

Why a switch isn't a restart

Two reasons to keep the cockpit alive across a switch:

  1. Speed. A full restart re-parses the config, re-installs the global tracing capture, re-bootstraps the terminal — 2–3 seconds of dead time. A :context switch is sub-second.
  2. Continuity. Operators frequently want to compare nodes side-by-side ("prod-1 has 142 peers; what does prod-2 have?"). The screen index preservation makes that trivial: switch, look, switch back.

What does NOT switch

  • The default = true profile in your config.toml. :context is a runtime-only override; the next launch starts on the default again. There's no "remember my last profile" persistence — by design, the default node is the one most likely to need attention, so launches snap there.

Tokens across a switch

Each profile carries its own token (or @env:VAR). The old token is dropped along with the old ApiClient; the new token is loaded from the new profile's config. Tokens never cross profiles.

Common scenarios

"Quick comparison"

:context prod-1
[look at S1 Health]
:context prod-2
[same screen, different node]
:context prod-1

"Lab → prod"

:context lab          # default on launch
:context prod-1       # promote to production node

The lab token never reaches prod (different profile, different token).

"Switch failed"

context switch failed: no node configured with name "prd-1"

Typo. The original profile is still active; the failed switch is a no-op (no partial teardown happens before the lookup).

"I have one [[nodes]] entry, no default = true set"

The cockpit refuses to start with a clear error. Add default = true to your single entry. See Configuration.


--once CI mode

Run a single verb without launching the TUI. Designed for CI pipelines, cron jobs, monitoring scripts, and any situation where you want a one-shot answer with a clean exit code instead of a full-screen cockpit.

bee-tui --once <verb> [args…] [--json]

The whole TUI runtime — App, screens, ratatui, supervisor, watch hub — is bypassed. Only what the verb actually needs is built: pure-local verbs touch nothing; Bee-API verbs build a one-shot ApiClient from your active node profile and call Bee directly.

Output

By default, one human-readable line on stdout:

$ bee-tui --once readiness
readiness OK · status=ok · radius=8 · in [1,30]

With --json, a single JSON object on stdout:

$ bee-tui --once readiness --json
{"verb":"readiness","status":"ok","message":"readiness OK · status=ok · radius=8 · in [1,30]","data":{"status":"ok","radius":8}}

JSON shape (stable since v1.3 — see "Stability contract" below):

| Field | Meaning |
| --- | --- |
| verb | Echo of the requested verb |
| status | One of "ok", "unhealthy", "error", "usage_error" |
| message | Same one-liner the non-JSON form would have printed |
| data | Verb-specific structured fields (object). Omitted for verbs that have nothing structured to add. |

Exit codes

| Code | When |
| --- | --- |
| 0 | Verb succeeded and the answer was healthy / OK |
| 1 | Verb completed but the answer is unhealthy, the gate failed, or the network said no |
| 2 | Usage error — unknown verb, bad args, missing config |

The split between 1 and 2 matters in CI: code 1 is "the node says no" (alert your on-call); code 2 is "the script is wrong" (fix your YAML).
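
A minimal sketch of honouring that split in a wrapper script (the messages are placeholders):

bee-tui --once readiness
case $? in
  0) echo "node ready" ;;
  1) echo "node says no: wake the on-call" ; exit 1 ;;
  2) echo "bad invocation: fix the pipeline, not the node" ; exit 2 ;;
esac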

Tracing

Tracing / logging is not initialised in --once mode. Stdout is reserved for the human line or JSON object — nothing else is written there. Stderr stays clean unless the verb itself prints to it. This keeps bee-tui --once safe to use inside $(), pipes, and structured-output parsers.

The 24 verbs

Pure-local (5 — no Bee call required)

| Verb | What it does |
| --- | --- |
| hash <path> | Local Swarm-hash of a file via Mantaray (bee::manifest::MerkleTree::root). Same answer as swarm-cli hash. |
| cid <ref> [manifest|feed] | Local Reference → CID conversion. Type defaults to manifest when omitted. |
| depth-table | Print the canonical depth → capacity table. Reference data, no inputs. |
| pss-target <overlay> | Extract the 4-hex-char target prefix Bee accepts on /pss/send from a full overlay address. |
| gsoc-mine <overlay> <identifier> | Local CPU work — find a PrivateKey whose SOC address has the target prefix. |

These do not touch the Bee API; you can run them on a build agent that has no Bee node.
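
For example, hashing a build artefact on an agent with no Bee node (the path is illustrative):

bee-tui --once hash ./dist/app.wasm
bee-tui --once hash ./dist/app.wasm --json   # same answer, machine-readable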

Bee API (9 — connects to your active node)

| Verb | What it does | Failure code |
| --- | --- | --- |
| readiness | Gateway-proxy-style smoke test: status == ok && radius in [1,30]. The canonical "ready for traffic?" check. | 1 if unhealthy |
| version-check | Reports Bee's /health API version vs. the bee-rs client's compiled-against version. | 1 on mismatch |
| inspect <ref> | Universal "what is this thing?" — fetches one chunk and detects manifest / raw / feed. | 1 if not retrievable |
| durability-check <ref> | Walks the chunk graph, reports total/lost/errors/corrupt with optional BMT verify + swarmscan cross-check. See S12 Watchlist for the full model. | 1 if any chunk is lost / errored / corrupt |
| upload-file <path> <batch> | Single-file POST /bzz; 256 MiB cap; ext-based content-type guess. Emits {"reference":"...","tag":N}. | 1 on upload failure |
| upload-collection <dir> <batch> | Recursive directory upload as a Swarm collection (tar POST /bzz); auto-detects index.html. Caps: 256 MiB / 10 000 entries. | 1 on upload failure |
| feed-probe <owner> <topic> | Read-only /feeds/{owner}/{topic} lookup of the latest update. | 1 on lookup failure |
| feed-timeline <owner> <topic> [N] | Walks a feed's history (newest first). Default 50 entries; hard cap 1000. | 1 on walk failure |
| grantees-list <ref> | Read-only GET /grantee/{ref} for ACT grantee inspection. Emits {"count":N,"grantees":[…]}. | 1 if the grantee list is empty (treat as missing) |

Stamp economics (10 — fetches chain state + stamps then runs pure math)

| Verb | What it does |
| --- | --- |
| buy-preview <depth> <amount> | Predict TTL / capacity / cost of a hypothetical fresh batch buy. |
| buy-suggest <size> <duration> | Suggest the minimum (depth, amount) to cover a target size + TTL. |
| topup-preview <batch> <amount> | Predict TTL + cost of topping up an existing batch. |
| dilute-preview <batch> <new-depth> | Predict the capacity / TTL change of diluting a batch. |
| extend-preview <batch> <duration> | Predict the cost to gain N days/hours of TTL on an existing batch. |
| plan-batch <prefix> [usage] [ttl] [extra-depth] | Run beekeeper-stamper's Set algorithm read-only. Outputs a PlanAction: None, Topup, Dilute, or TopupThenDilute. Defaults: usage 0.85, TTL 24h, extra-depth +2. Exits 1 when an action is recommended — making this a CI gate signal: "is this batch about to need attention?" |
| check-version | GitHub releases API call; reports if a newer bee-tui is published. |
| config-doctor [path] | Read-only audit of bee.yaml against swarm-desktop's migration rule set. |
| price | xBZZ → USD lookup via a public token service. |
| basefee | Gnosis Chain JSON-RPC basefee + tip. |

The preview verbs hit the Bee API once to fetch their inputs; check-version, config-doctor, price, and basefee go to their own sources (GitHub, the local bee.yaml, a token-price service, Gnosis RPC). All of them then do their work in pure Rust — no chain mutation, no stamp purchase, no upload.

Stability contract

The --once surface is part of bee-tui's semver-stable surface since v1.3:

  • The three exit codes (0 ok, 1 unhealthy/failed, 2 usage) are pinned. New failure modes get one of the existing codes, never a new one.
  • The JSON shape {verb, status, message, data} is pinned. Future minor versions may grow data with new keys, but existing keys won't be renamed or removed without a v2.0.0 bump.
  • Existing verbs won't be removed. New verbs may appear in minor versions.

If you script against --once in CI, you can pin a minor version and trust the surface won't break under you.

Examples

Smoke-test a Bee node from CI

- name: Bee readiness gate
  run: bee-tui --once readiness

Exit 0 → all good. Exit 1 → fail the build.

Watch a stamp in CI

- name: Stamp plan-batch gate
  run: bee-tui --once plan-batch ee7f3a20

Exits 1 when the batch needs Topup, Dilute, or both. Hook a Slack notification onto the failure to nudge the operator before the batch hits the cliff.

Periodic durability check from cron

*/30 * * * * /usr/local/bin/bee-tui --once durability-check $REF --json \
  | jq -c '. + {timestamp:now}' >> /var/log/bee-durability.jsonl

The JSONL is append-only, parseable by anything that speaks JSON, and re-runnable without parsing TUI output.
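
To pull the runs that need attention back out later, one option is to filter on the top-level status field from the stable JSON shape above:

jq -c 'select(.status != "ok")' /var/log/bee-durability.jsonl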

Upload from a build pipeline

- name: Publish site to Swarm
  run: |
    REF=$(bee-tui --once upload-collection ./public $BATCH --json | jq -r .data.reference)
    echo "site_ref=$REF" >> $GITHUB_OUTPUT

The data.reference field is part of the v1.5 stable surface.


Keymap cheatsheet

Every key the cockpit handles, in one place. The in-app ? overlay is the canonical source — this page mirrors it for offline reference.

Global (works everywhere)

| Key | Effect |
| --- | --- |
| Tab | Next screen |
| Shift+Tab | Previous screen |
| 1 – 9 | Jump to S1 – S9 |
| 0 | Jump to S10 (Pins) |
| Alt+1 – Alt+4 | Jump to S11 – S14 (Manifest, Watchlist, FeedTimeline, Pubsub) |
| Ctrl+N | Open node picker (also :nodes) |
| [ / ] | Previous / next tab on the bottom log pane (Errors / Warn / Info / Debug / Bee HTTP / bee::http / Cockpit). Persisted across launches. |
| + / - | Grow / shrink the bottom log pane height by one line. Clamped to 4..24. Persisted across launches. |
| Shift+↑ / Shift+↓ | Scroll the active log tab back / forward by one line. Pauses auto-tail; the title shows a paused N ↑ indicator. |
| Shift+PgUp / Shift+PgDn | Same, ten lines at a time. |
| Shift+End | Resume auto-tail (snap back to the latest entries). |
| ? | Toggle help overlay |
| : | Open command bar |
| qq | Quit — double-tap within ~1.5 s. First q shows a footer hint; second q confirms. :q also works for an unguarded quit. |
| Ctrl+C / Ctrl+D | Quit immediately. Escape hatch if the cockpit ever stops responding to qq. |
| Esc | Close help / drill / command bar / cancel current input. Also cancels a pending q (so you can back out without committing). |

Screen-specific keys

S1 / S3 / S5 / S7 / S8 are read-only — they have no screen-specific keys.

S2 — Stamps + bucket drill

| Key | Effect |
| --- | --- |
| ↑↓ / j k | Move row selection |
| Enter | Drill into selected batch (bucket histogram + worst-N) |
| Esc | Close drill |

S4 — Lottery + rchash

| Key | Effect |
| --- | --- |
| r | Fire / re-fire rchash benchmark |

S6 — Peers + bin saturation + drill

| Key | Effect |
| --- | --- |
| ↑↓ / j k | Move cursor in peer table |
| PgUp / PgDn | Page through peers |
| Home | Jump to first peer |
| Enter | Drill into selected peer (4 endpoints in parallel) |
| Esc | Close drill |

S9 — Tags / uploads

| Key | Effect |
| --- | --- |
| ↑↓ / j k | Scroll one row |
| PgUp / PgDn | Scroll ten rows |
| Home | Back to top |

S11 — Pins

| Key | Effect |
| --- | --- |
| ↑↓ / j k | Move cursor through the pinned-reference list |
| Enter | Drill into selected pin (pin detail) |
| Esc | Close drill |

S12 — Manifests

| Key | Effect |
| --- | --- |
| ↑↓ / j k | Move cursor through the Mantaray tree |
| Enter | Toggle expand / load the cursored fork (lazy fetch) |

The cursored row's reference (target hex, or fork self-address) is rendered on a selected: detail line above the footer for terminal-native click-drag copy — there's no explicit copy key.

S13 — Watchlist

| Key | Effect |
| --- | --- |
| ↑↓ / j k | Move cursor through :watch-ref daemons |

S14 — Feed Timeline

| Key | Effect |
| --- | --- |
| ↑↓ / j k | Move cursor through the feed update history |
| PgUp / PgDn | Page ten entries |

S15 — Pubsub watch

| Key | Effect |
| --- | --- |
| ↑↓ / j k | Move cursor through the merged PSS / GSOC timeline |
| PgUp / PgDn | Page ten entries |
| c | Clear the timeline (subscriptions stay open) |

The command bar

: opens it. Once open:

| Key | Effect |
| --- | --- |
| Enter | Run the typed command |
| Esc | Close without running |
| Backspace | Delete left |
| any printable | Append to command buffer |

See The :command bar for what each command does.

Conventions

  • The cockpit prefers vim-style keys (j/k, :command, Esc-to-close) but every nav key has an arrow-key + named-key alias. You don't have to know vim.
  • No Ctrl+ chords for normal navigation. The cockpit reserves Ctrl-keys for terminal escape sequences (Ctrl+C exits via SIGINT, etc.). All screen actions are single keystrokes.
  • Esc is universal close. Whatever's most-recently opened — drill / help / command bar — is what Esc closes. The hierarchy is: command bar > help overlay > drill > nothing.

Why qq instead of just q

A bee-tui session is something operators leave running in the background while doing other work. A single q was found to be too easy to misclick — especially when navigating in from another shell. The double-tap guard means a stray keystroke costs you a footer hint, not a session.

If you really want unguarded quit, use :q from the command bar. Ctrl+C and Ctrl+D are also unguarded — they remain the canonical "I want out now" escape hatches and bypass the double-tap entirely.

Discovering keys

Open ? on any screen. The overlay shows the global keymap plus the keys for the current screen. So pressing ? on S6 lists peer-drill keys; pressing ? on S9 lists scroll keys. No memorisation needed.

The node picker overlay

Ctrl+N (or :nodes) opens a centred overlay listing every [[nodes]] entry from config.toml. The cursor lands on the currently active node:

| Key | Effect |
| --- | --- |
| ↑↓ / j k | Move cursor through configured nodes |
| Enter | Switch to the cursored node (rebuilds API client + watch hub; no-op if cursor is already on the active node) |
| Esc / Ctrl+N | Close without switching |

The active node and the default = true entry each carry their own marker. After switching, the metadata line at the top of the cockpit updates to show the new profile and endpoint; any :watch-ref daemons and pubsub subscriptions that were running against the previous node are cancelled (they don't follow the context — re-issue the verbs against the new node if you want them there too).

The help overlay

? opens a centred overlay with two pages:

| Key | Effect |
| --- | --- |
| ? | Toggle the overlay |
| Tab / Shift+Tab | Switch between Keys and Verbs pages |
| Esc / ? / q | Close |

The Keys page mirrors this cheatsheet (global keys + the screen-specific block for whichever screen is active). The Verbs page lists every :verb grouped by category (navigate, inspect, stamps & economics, uploads, durability, pubsub, mining, diagnostics, cockpit) so the entire surface is discoverable without leaving the cockpit.

What's not bound

The cockpit deliberately leaves these unbound:

  • Up/down arrow for screen jumpTab (or the digit keys) is the screen-jump path. Arrow keys are reserved for in-screen navigation.
  • / for search — there's no global text search yet. Most screens are too short to need one, and where they aren't (S6 peers, S9 tags), you can scroll with j/k/PgDn/Home.

Theme & accessibility

The cockpit ships two themes (default, mono) and a glyph fallback (ascii) so it works on terminals that don't render Unicode or colour cleanly — colourblind operators, screen readers, recording tools, SSH chains that mangle terminal escapes, Windows pre-Win11.

Themes

default (vibrant)

The default. Status uses semantic colour:

  • Green for Pass / healthy
  • Yellow for Warn / in-progress
  • Red for Fail / critical
  • Blue for Info / accent
  • Dim grey for Unknown / muted

This is what most operators see. It works on every modern terminal (iTerm2, Alacritty, Kitty, Wezterm, Konsole, GNOME Terminal, Windows Terminal on Win11+).

mono (greyscale)

Same layout, no colour. Status is conveyed only through glyphs and intensity. Useful when:

  • Recording the terminal to a video / GIF (colour-corrupting recorders preserve glyphs)
  • Piping through tools that strip ANSI colour
  • The terminal's colour palette is corrupted by a theme override
  • Personal preference for a calm aesthetic

Set in config:

[ui]
theme = "mono"

Or via CLI:

bee-tui --no-color
NO_COLOR=1 bee-tui    # equivalent — per <https://no-color.org>

The NO_COLOR environment variable is the standard cross-tool convention; the cockpit honours it to slot into existing setups without per-tool config.

ASCII fallback

Independent from the theme. Replaces Unicode glyphs with ASCII equivalents:

| Unicode | ASCII | Used for |
| --- | --- | --- |
| ✓ | OK | Pass status |
| ⚠ | ! | Warn status |
| ✗ | X | Fail status |
| · | . | Unknown / dim |
| ▶ | > | Selection cursor |
| ▇ | # | Filled bar segment |
| ░ | - | Empty bar segment |
| ▒ | = | In-progress fill |
| ⏳ | * | Pending |
| └─ | `- | Continuation (tree branch) |
| — | - | Em dash / "never" |

Set in config:

[ui]
ascii_fallback = true

Or via CLI:

bee-tui --ascii

When to use ascii:

  • Older Windows Terminal (pre-Win11) renders most Unicode glyphs as ?
  • Screen readers — Unicode geometric shapes are read aloud unpredictably; ASCII letters are stable
  • SSH through a non-UTF8 terminal multiplexer
  • Some vim integrations / tmux configurations

Resolution order

Multiple sources can configure theme + glyphs. The resolution is (highest priority first):

  1. --ascii flag → ASCII glyphs (regardless of config)
  2. --no-color flag OR NO_COLOR env (any non-empty value) → mono palette
  3. [ui].ascii_fallback = true from config → ASCII glyphs
  4. [ui].theme = "..." from config → palette
  5. Built-in defaults: default theme, Unicode glyphs

Accessibility checklist

The cockpit's design rules:

  • Status is conveyed redundantly. Every Pass / Warn / Fail uses both a glyph (✓ / ⚠ / ✗ or OK / ! / X) and a colour. No information is lost in mono or under ASCII.
  • No flashing or animation tied to status. The only movement is the spinner glyph for cold-start "loading…" rows, and that's a 4-frame cycle at low frequency.
  • No keystrokes require modifier keys for navigation. Tab, j/k, ↑↓, Esc, Enter, ? — every primary action is a single key.
  • Focus is single-screen at a time. No tabbing between panes within a screen; the whole screen is the unit.

Slot-based palette

For developers — the theme system uses slots (semantic roles) rather than direct colour assignments:

| Slot | Default | Mono | What it carries |
| --- | --- | --- | --- |
| pass | green | white | Healthy / success |
| warn | yellow | white-dim | In-progress / cautionary |
| fail | red | white-dim | Failure / critical |
| info | blue | white | Accent / informational |
| accent | magenta | white | Headers / titles |
| dim | grey | grey | Muted / unknown |
| text | white | white | Body text |

Components reference slots (theme::active().fail), never raw colours. Adding a new theme is a matter of mapping the slots to a new palette — see Adding a screen for the extension hook.

Glyph slots

Same idea, for symbols. 12 slots:

pass, warn, fail, bullet (·), spinner (4 frames),
selection (▶), bar_filled (▇), bar_empty (░),
bar_partial (▒), pending (⏳), continuation (└─),
em_dash (—)

The Unicode and ASCII variants are constructed via Glyphs::unicode() and Glyphs::ascii(). Code that wants to detect the active mode does it with a content equality check (active().glyphs.pass == Glyphs::unicode().pass) since pointer equality on string literals isn't reliable across optimisation boundaries.

Reporting accessibility bugs

If a screen is unreadable in mono / ASCII / a specific terminal, file an issue with:

  • Terminal + version (echo $TERM output is helpful)
  • The cockpit invocation (with what flags)
  • A screenshot or copy of the rendered output

We treat accessibility bugs as P1 — the cockpit is for operators, and operators don't always work on a colour-capable Linux laptop.

Prometheus metrics

bee-tui can expose a Prometheus /metrics endpoint with the gauges the cockpit screens already compute. The point isn't to duplicate Bee's own /metrics — Bee exposes plenty of infrastructure counters — it's to make bee-tui's unique synthesised gauges machine-readable so a Grafana board can graph them alongside Bee's:

  • Per-batch worst-bucket fill — predicts upload-failure before Bee's API admits anything is wrong.
  • Predictive stamp economics — depth, capacity bytes, TTL seconds per batch (same math as :*-preview).
  • Pending-tx age — operator-relevant signal that Bee surfaces only as a creation timestamp.
  • Depth-vs-radius gapcommitted_depth - storage_radius; positive means the node hosts chunks beyond its storage radius (chunk-loss risk during shrinkage).
  • bee-tui's own request stats — p50/p99/error-rate over the recent client-side log-capture window. Distinguishes "Bee is slow" from "the network between bee-tui and Bee is slow".

Enable it

In config.toml:

[metrics]
enabled = true
addr    = "127.0.0.1:9101"   # default; only opt into 0.0.0.0 if you mean it

Off by default — exposing an HTTP listener should be a deliberate operator opt-in, even on localhost.

Scrape config

Standard Prometheus drop-in:

scrape_configs:
  - job_name: bee-tui
    static_configs:
      - targets: ['127.0.0.1:9101']
    scrape_interval: 30s

bee-tui re-renders the metrics on every scrape, reading the latest snapshots from the same watch channels the screens use — so the values match what the operator sees in the cockpit at the moment of the scrape.
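
A quick spot-check that the exporter is up and the per-batch gauges are present (the address matches the default bind above):

curl -s http://127.0.0.1:9101/metrics | grep -E '^bee_tui_(up|stamp_worst_bucket_ratio)'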

Metric reference

All metrics are namespaced bee_tui_. Gauges unless noted.

Liveness + identity

| Metric | Labels | Description |
| --- | --- | --- |
| bee_tui_up | | Always 1 if the scrape responds |
| bee_tui_info | version, overlay, bee_mode | Always 1; metadata via labels |
| bee_tui_resource_loaded | resource | 1 if that resource's last poll succeeded (health / stamps / swap / lottery / topology / network / transactions) |

Status (/status)

| Metric | Description |
| --- | --- |
| bee_tui_status_connected_peers | Status.connectedPeers |
| bee_tui_status_neighborhood_size | Status.neighborhoodSize |
| bee_tui_status_reserve_size_chunks | Status.reserveSize |
| bee_tui_status_reserve_size_within_radius_chunks | Status.reserveSizeWithinRadius |
| bee_tui_status_storage_radius | Status.storageRadius |
| bee_tui_status_committed_depth | Status.committedDepth |
| bee_tui_status_depth_radius_gap | committedDepth - storageRadius (synthesised) |
| bee_tui_status_is_reachable | 0 / 1 |
| bee_tui_status_is_warming_up | 0 / 1 |
| bee_tui_status_last_synced_block | Status.lastSyncedBlock |
| bee_tui_status_proximity | Status.proximity |
| bee_tui_status_batch_commitment | Status.batchCommitment |
| bee_tui_status_pullsync_rate_per_second | Status.pullsyncRate (chunks/sec) |

Chain (/chain-state)

| Metric | Description |
| --- | --- |
| bee_tui_chain_block | Local block height |
| bee_tui_chain_tip | Highest block observed |
| bee_tui_chain_lag_blocks | tip - block (synthesised) |
| bee_tui_chain_current_price_plur | Per-chunk PLUR/block price |

Postage (/stamps)

bee_tui_stamps_count is unlabelled; the per-batch metrics carry {batch_id, label} so a Grafana panel can graph them per batch.

| Metric | Description |
| --- | --- |
| bee_tui_stamps_count | Total batches |
| bee_tui_stamp_worst_bucket_ratio | Worst-bucket fill 0..1 (S2's worst-bucket %) |
| bee_tui_stamp_ttl_seconds | Predicted TTL |
| bee_tui_stamp_depth | Batch depth |
| bee_tui_stamp_capacity_bytes | 2^depth × 4096 |
| bee_tui_stamp_immutable | 0 / 1 |
| bee_tui_stamp_usable | 0 / 1 (chain-confirmed) |

Pending transactions

| Metric | Description |
| --- | --- |
| bee_tui_pending_tx_count | Number of pending Bee transactions |
| bee_tui_pending_tx_oldest_age_seconds | Age of the oldest pending tx |

bee-tui's own client-side requests

Same window as the S8 RPC/API screen.

| Metric | Description |
| --- | --- |
| bee_tui_self_request_sample_size | Entries contributing to the percentile math |
| bee_tui_self_request_latency_p50_seconds | Median latency (omitted when no samples) |
| bee_tui_self_request_latency_p99_seconds | 99th-percentile latency |
| bee_tui_self_request_error_ratio | Fraction 0..1 with status ≥ 400 |

SWAP / Lottery / Topology / Network

| Metric | Description |
| --- | --- |
| bee_tui_swap_chequebook_total_plur | Total chequebook balance (PLUR) |
| bee_tui_swap_chequebook_available_plur | Uncashed balance (PLUR) |
| bee_tui_lottery_staked_plur | Currently staked BZZ in PLUR |
| bee_tui_topology_population | Peers known across all bins |
| bee_tui_topology_connected | Currently connected peers |
| bee_tui_topology_depth | Kademlia depth |
| bee_tui_topology_radius | Nearest-neighbour low watermark |
| bee_tui_network_underlay_count | Underlay multiaddr count from /addresses |

Wire format

Content-Type: text/plain; version=0.0.4; charset=utf-8 — the standard Prometheus text exposition format. Each metric family emits a # HELP and # TYPE line followed by one sample line. Label values are escaped per the Prometheus spec (\\, \", \n).

Security notes

  • Default bind is 127.0.0.1. If you set addr = "0.0.0.0:...", you've opted into reachability from any interface — put a firewall in front.
  • The endpoint exposes batch IDs and the node's overlay address. These are public on-chain values but worth knowing if you proxy the endpoint through a reverse proxy you don't control.
  • No authentication. Prometheus's standard answer is to bind scrapers behind a private network or use mTLS at the proxy layer.
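
If the scraper lives on another host and you'd rather keep the default localhost bind, one option is an SSH tunnel from the Prometheus side (hostnames are placeholders):

ssh -N -L 9101:127.0.0.1:9101 operator@bee-host
# Prometheus then scrapes localhost:9101 on its own side of the tunnel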

Operator FAQ

Questions that come up most in support threads, with the shortest accurate answer for each. Each answer points to the relevant screen + page for deeper context.

Health & gates (S1)

Why is my Reserve gate failing during warmup?

It's normal during the first 10–30 minutes. Reserve fills to 65,536 chunks at depth, and chunks only arrive as peers push them to you. See S5 Warmup; the reserve-fill row tracks this explicitly.

If it's still failing 60+ minutes in, your bins are starving (S6) or you're NAT-trapped (S7).

What does Bin saturation = STARVING mean?

Some kademlia bin near your depth has fewer than 8 connected peers — bee-go's hardcoded saturation threshold. Bee won't forward / receive chunks well in that bin. See S6. Usually self-resolves within 30 min on a public node.

Chain RPC shows Δ +5

Bee thinks the chain tip is 5 blocks ahead of what it's processed. Small lags flicker; sustained Δ ≥ +5 means slow RPC. Bee can't fix it — switch your --blockchain-rpc-endpoint.

Wallet funded gate is failing

Either BZZ is 0 (can't issue stamps) or native is 0 (can't pay gas). Top up the operator wallet. From a faucet on testnet; from your treasury on mainnet.

Stamps (S2)

What does ⏳ pending mean on a stamp?

usable = false — the on-chain batch-buy transaction hasn't been confirmed yet (~10 blocks on Gnosis ≈ 2 min). If it sits pending > 10 min, your operator wallet is out of gas; check S8 pending transactions. See S2.

My batch utilization says 14 % but uploads fail

Bee's utilization field is the peak bucket count, not the average. A batch with 1023 empty buckets and one 95 % bucket reads utilization = 14 % while the worst bucket is about to overflow. The cockpit's "WORST BUCKET" column shows the truth. Drill in (Enter) for the histogram.

Should I use immutable or mutable batches?

Default to immutable. Mutable silently overwrites old chunks when a bucket fills — you lose data without warning. Immutable rejects the upload, which is annoying but obvious. See S2 § Immutable vs mutable.

TTL is dropping faster than expected

batch_ttl = paid_balance / current_price. When network price goes up, every batch's remaining time shrinks proportionally. You didn't lose money; the batch just got shorter. Top up if needed.

SWAP / cheques (S3)

"Tight chequebook" — what now?

Cash out the largest received cheque (S3 Pane 2 top row). That moves uncashed BZZ into your available chequebook balance. There's no in-cockpit cashout — use POST /chequebook/cashout/<peer> via curl. Don't cash tiny amounts (gas eats them).
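
For example, with the endpoint named above (substitute the peer overlay from Pane 2, and add -H "Authorization: Bearer <token>" if your node requires auth):

curl -X POST http://localhost:1633/chequebook/cashout/<peer-overlay>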

All my settlements are negative

You're forwarding more chunks than you store. Normal for low-radius nodes near the kademlia roots. Increase your radius / depth if it bothers you. See S3 § Common scenarios.

Total received BZZ is huge but available is tiny

Most of received is uncashed cheques sitting in S3 Pane 2. Cash them out.

Lottery (S4)

Why am I not earning rewards?

Walk the decision tree on S4. TL;DR:

  1. Stake card says Unstaked → deposit stake
  2. Stake card says InsufficientGas → top up native
  3. Stake card says Frozen → wait it out
  4. Stake card says Unhealthy → see S5 Warmup
  5. Healthy + bad rchash → reserve too slow; check disk
  6. Healthy + good rchash + still unlucky → it's stochastic; wait

What's a normal rchash duration?

Below 10 s on a healthy node with SSD storage. If it's approaching the 95-second commit deadline, your node will silently miss every round. Slow disk, network-attached storage, or competing I/O are the usual causes.

My stake card says Unhealthy but I'm not frozen

Bee's is_healthy checks multiple internal preconditions (reserve, depth, samples). A node can be unfrozen but still unhealthy during warmup or if reserve drops. Wait one or two rounds; if persistent, drop to S5.

Network (S7)

Why is my reachability flickering Public ↔ Private?

Symmetric NAT. AutoNAT can't pin you down. The "stable for Xm" counter on S7 will keep resetting. Either set up port forwarding (TCP 1634), run on a public VPS, or accept that you're outbound-only. See S7.

I have 142 peers but Inbound shows 0

You're outbound-only — peers can't dial you back. Firewall or NAT. Even with high outbound, chunks won't arrive properly. Fix the firewall / NAT.

What's a "Private" underlay?

An advertised multiaddr that's RFC 1918 (10.*, 172.16.*–172.31.*, 192.168.*), link-local, or loopback. Bee advertises everything it binds to; private addresses are dimmed because peers outside your LAN can't dial them.

Transactions (S8)

How do I clear a stuck transaction?

The cockpit doesn't do it for you. From outside:

curl -X POST http://localhost:1633/transactions/<hash>/cancel
# or to bump gas + resend:
curl -X POST http://localhost:1633/transactions/<hash>

Add -H "Authorization: Bearer <token>" if your node has auth. Check S8 again to confirm it's gone.

p99 latency spiked to 5 s

Bee is busy. Reserve worker tick (every 30 min), large upload pushing chunks, or slow disk. If it stays high, run iotop and check disk health. See S8.

Tags / uploads (S9)

Why is my tag stuck at 99 % synced?

Last 1 % is the slow tail — receipts haven't all come back. Wait 5 min. If still stuck, check chequebook (S3) — Bee pauses pushing when it can't pay forwarders.

My tag says Pending forever

You used a streaming endpoint that doesn't pre-declare chunk count. The upload still works; the tag just doesn't track meaningfully. See S9.

Operations

How do I switch between nodes?

Two ways:

  • Ctrl+N (or :nodes, v1.10+) — opens a picker overlay listing every [[nodes]] entry; ↑/↓ to select, Enter to switch, Esc to cancel. The active row and the default = true row each carry their own marker.
  • :context <name> — typed switch (alias :ctx). Same flow under the hood.

Define the nodes in config.toml. See :context.
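
A minimal sketch of two profiles — the field names other than default are illustrative here; config.md has the authoritative schema:

[[nodes]]
name = "mainnet-01"
api = "http://127.0.0.1:1633"
default = true

[[nodes]]
name = "testnet-01"
api = "http://192.168.1.40:1633"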

Can bee-tui start Bee for me, or only talk to a running one?

Both. By default bee-tui connects to whatever [[nodes]] profile is active — that's the "talk to a running Bee" path and it works against local or remote nodes. Set [bee].bin and [bee].config in config.toml (or pass --bee-bin / --bee-config on the CLI) and bee-tui will spawn that binary, wait for its API to come up, then open the cockpit on top. The wrapper sits over the connect path; it isn't a separate mode.
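
A sketch using the spawn-wrapper keys named above (both paths are placeholders):

[bee]
bin = "/usr/local/bin/bee"       # binary bee-tui spawns at startup
config = "/home/bee/bee.yaml"    # Bee config handed to that process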

How do I turn on webhook alerts for unhealthy gates?

Set [alerts].webhook_url in config.toml to a Slack-compatible incoming webhook URL. Optionally tune [alerts].debounce_secs (default 60). bee-tui will POST on every gate transition worth pinging on. The top bar shows alerts ● whenever it's configured. See config.md and S1 § Webhook alerts.
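
For example (the webhook URL is a placeholder; debounce_secs is optional):

[alerts]
webhook_url = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"
debounce_secs = 120   # optional; default 60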

How do I run :durability-check continuously?

:watch-ref <ref> [interval-seconds] runs the same check as a daemon (default cadence 60 s, clamp 10..86400). The top bar chip watch N confirms how many are running. :watch-ref-stop <ref> cancels one; :watch-ref-stop with no arg cancels all. The S12 Watchlist screen shows results as they arrive.
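
For example (the reference is a placeholder; the interval is in seconds, clamped to 10..86400):

:watch-ref <swarm-reference> 300    # re-check every 5 minutes
:watch-ref-stop <swarm-reference>   # cancel just this daemon
:watch-ref-stop                     # cancel all of them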

Can I use bee-tui from CI / cron without the TUI?

bee-tui --once <verb> [args] [--json] runs a single verb, prints one line (or JSON), and exits with 0 (ok), 1 (unhealthy / failed), or 2 (usage error). 24 verbs available — readiness, inspect, durability-check, plan-batch, buy-preview, etc. See --once. The exit codes + JSON shape are part of the semver-stable surface, so CI gates that depend on them won't break across minor upgrades.
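
A cron/CI sketch using two of the verbs listed above — the surrounding shell is illustrative, only the exit-code contract is guaranteed:

# nightly gate — exit 1 from --once fails the job, exit 2 means bad usage
bee-tui --once readiness || exit 1
bee-tui --once durability-check <swarm-reference> --json > durability.json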

Where does the diagnostic bundle go?

$TMPDIR/bee-tui-diagnostic-<timestamp>.txt. The status line prints the full path. Bearer tokens are NEVER captured — safe to share. See :diagnose.

Can I run :pins-check while the cockpit is busy?

Yes. It runs in the background and streams to a file. You can keep navigating screens. The cockpit won't slow down. See :pins-check.

My terminal won't render Unicode

bee-tui --ascii or set [ui].ascii_fallback = true in config. See Theme & accessibility.

How do I see what HTTP calls the cockpit is making?

The bottom log pane underneath every screen — a live tail of every request, with method / path / status / elapsed. See the log-pane page (file kept at its old s10-log.md path for stable links).

Things the cockpit deliberately won't do

"Why can't I cash out cheques from the cockpit?"

Cashing is on-chain and costs gas. The cockpit is read-mostly — mutating endpoints that move money are intentionally off the keymap so you don't fat-finger them. Use curl + the /chequebook/cashout/<peer> endpoint when you mean it.

"Why can't I buy stamps from the cockpit?"

Same. Stamp purchase is on-chain, costs gas, has knobs (depth, amount) you should think about. The cockpit shows existing stamps' state; the buy itself is bee postage buy or curl.

"Why can't I configure logging persistently?"

:set-logger is runtime-only. Bee's persistent logging config is in its config.yaml; the cockpit doesn't write to it. By design — you might want push-sync at debug for 30 min, not forever.

"Why no in-cockpit connect <peer>?"

Bee's kademlia handles peer dialing automatically. Manual connect is a debugging escape hatch, not normal operator behaviour. If you need it, use curl.

Where to ask the question this FAQ doesn't answer

  • GitHub issues for bee-tui: cockpit bugs, doc errors, feature requests
  • Swarm Discord #node-operators: Bee questions that aren't cockpit-specific
  • bee-rs / bee-py / bee-go for client questions

Architecture

How the cockpit is wired internally. For developers / contributors / anyone reading the source. The design rules optimise for predictable rendering, clean shutdown, and testability — in that order.

The two-layer model

┌──────────────────────────────────────────────┐
│  COMPONENTS (per-screen)                     │
│   - hold a watch::Receiver<T>                │
│   - implement view_for(snap) -> View         │
│   - render the View into ratatui widgets    │
└──────────────────────────────────────────────┘
              ▲
              │ tokio::sync::watch
              │
┌──────────────────────────────────────────────┐
│  WATCH HUB (BeeWatch)                        │
│   - one tokio task per Bee endpoint          │
│   - each task owns a watch::Sender<T>        │
│   - all tasks under a CancellationToken      │
└──────────────────────────────────────────────┘
              ▲
              │
        ApiClient (bee-rs)

The watch hub is the single source of truth for live data. Each component is a pure renderer that takes the latest snapshot and computes a View struct, which gets rendered.

The watch hub (src/watch/)

BeeWatch::start(api, root_cancel) spawns one tokio task per resource. Each task:

  • Holds an Arc<ApiClient>
  • Owns a tokio::sync::watch::Sender<T> for its resource
  • Polls the relevant Bee endpoint at a fixed cadence
  • Calls tx.send(new_snapshot) on each tick

Resources currently watched (with cadence):

Resource | Endpoint(s) | Cadence
Health | /status, /wallet, /chainstate, /redistributionstate | 2 s
Topology | /topology | 5 s
Stamps | /stamps | 5 s
Swap | /chequebook/balance, /chequebook/cheque, /settlements, /timesettlements, /chequebook/address | 30 s
Lottery | /redistributionstate, /stake | 30 s
Tags | /tags | 5 s
Network | /addresses | 60 s
Transactions | /transactions | 5 s
Economics oracle (v1.4.0, opt-in) | xBZZ→USD price service + Gnosis JSON-RPC basefee | 60 s

The economics-oracle poller is gated by [economics].enable_market_tile and is the only watcher that talks to non-Bee endpoints; it lives in src/economics_oracle.rs rather than src/watch/ because the failure modes (third-party rate-limit, RPC outage) are unrelated to Bee health and shouldn't poison the shared hub.
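
The opt-in is a single config key — a minimal sketch, assuming the key named above is a boolean toggle:

[economics]
enable_market_tile = true   # spawns the xBZZ→USD / basefee poller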

Beyond the hub there are two per-verb daemon families that spawn under root_cancel but aren't part of the watch loop: :watch-ref tokio loops (tracked in App::watch_refs: HashMap<ref, CancellationToken>, v1.6) and PSS / GSOC pubsub subscriptions (App::pubsub_subs: HashMap<sub_id, CancellationToken>, v1.7). The top-bar awareness chips (v1.10) read len() on each map so the operator sees how many are running.

Cadences are tuned for the rate at which each resource actually changes. Stamps utilization grows at upload rate — 5 s is plenty. Settlement cheques change at chain rate — 30 s. Underlay addresses essentially never change — 60 s. Hammering Bee at 1 s for everything would burn CPU on both sides.

Cancellation

Every watcher inherits from a single tokio_util::sync::CancellationToken called root_cancel, owned by App. On quit:

  1. App::run() flips should_quit = true
  2. App::run() calls root_cancel.cancel()
  3. Every watcher task's loop sees the cancellation and exits
  4. The terminal is restored
  5. Process exits cleanly

:context <name> is the same pattern, scoped: the active BeeWatch::shutdown() cancels its children, a new BeeWatch::start(new_api, &self.root_cancel) spawns under the same root, and component receivers are rebuilt. Since v1.9.1, switch_context also drains the per-verb daemon maps (pubsub_subs, watch_refs) and resets alert_state — without that, daemons spawned against the previous node kept pumping wrong-node messages into the rebuilt screens, and stale gate-transition memory could fire spurious webhooks (or suppress real ones) right after the switch.

This means no watcher or daemon task can outlive the cockpit. Drill spawns are the one exception — they're fire-and-forget rather than tied to root_cancel — but their late results are dropped once the owning component or its channel is gone, so nothing completes silently into a stale screen (see Drill panes).

Components (src/components/)

One file per screen. Each file:

pub struct MyScreen {
    rx: watch::Receiver<MySnapshot>,
    snapshot: MySnapshot,
    // screen-local state (cursor, drill, etc.)
}

impl MyScreen {
    pub fn view_for(snap: &MySnapshot) -> MyView {
        // pure: snap → view, no I/O
    }
}

impl Component for MyScreen {
    fn update(&mut self, action: Action) -> Result<Option<Action>> {
        if matches!(action, Action::Tick) {
            self.snapshot = self.rx.borrow().clone();
        }
        Ok(None)
    }

    fn draw(&mut self, frame: &mut Frame, area: Rect) -> Result<()> {
        let view = Self::view_for(&self.snapshot);
        // render view into ratatui widgets
    }
}

The view_for separation is the cockpit's testability trick: tests/sN_*.rs files load fixture snapshots, call view_for, and assert against insta snapshots — without launching a TUI.

Drill panes (src/components/peers.rs, stamps.rs)

Drills are fire-and-forget spawns inside a component, not new watchers in the hub. The pattern:

enum DrillState {
    Idle,
    Loading { ... },
    Loaded { view: ... },
}

struct MyComponent {
    drill: DrillState,
    drill_rx: mpsc::UnboundedReceiver<DrillResult>,
    drill_tx: mpsc::UnboundedSender<DrillResult>,
    // ...
}

When the user presses Enter:

  1. Spawn a tokio task that fans out 4 endpoint fetches in parallel via tokio::join!
  2. Send the aggregate result down drill_tx
  3. On next Tick, drain drill_rx and update drill state
  4. Render reads drill

A second Enter while a drill is loading is a no-op (we just re-target the same Loading state). Esc clears drill to Idle and ignores any late results.

See Drill panes for the full pattern.

Pure-fn rendering for testability

Every screen has a view_for (or compute_*_view) function that takes a snapshot and produces a View struct of display-ready data: pre-formatted strings, classified statuses, sorted rows. The Component::draw method only turns View into ratatui widgets.

This means snapshot tests don't need a TUI:

#[test]
fn s2_critical_immutable_batch() {
    let snap = StampsSnapshot {
        batches: vec![/* fixture */],
        ..Default::default()
    };
    let view = Stamps::view_for(&snap);
    insta::assert_yaml_snapshot!(view);
}

The tests/sN_*.rs files are entirely TUI-free. They run in CI in <1 s each. When adding behaviour, write the test against view_for first — the renderer follows.

Action / Tick loop (src/action.rs, src/app.rs)

The cockpit has a single Action enum that drives every component:

pub enum Action {
    Tick,
    Render,
    Quit,
    Suspend,
    Resume,
    ClearScreen,
    Resize(u16, u16),
    // ...
}

App::run() is a simple loop:

loop {
    handle terminal events → push Actions onto a channel
    handle cancellation → break
    drain action channel → dispatch to components
    render
}

Components return Option<Action> from update() — a follow-up action that gets pushed back onto the channel. This is the only inter-component communication path; there are no direct mutable references between components. (The shared data lives in the watch hub, not in components.)

Theme system (src/theme.rs)

A global Theme (palette + glyphs) installed once at startup via theme::install_with_overrides(...). Components read it via theme::active(). Hot-reload isn't supported by design — the cost of supporting it (locking, redraw on change) outweighs the benefit (set the theme once and forget).

See Theme & accessibility for the slot-based palette + glyphs design.

API client (src/api/)

A thin ApiClient wrapper over bee-rs. Holds:

  • The Bee endpoint URL
  • The Bearer token (resolved from @env:VAR at startup)
  • The profile name

The wrapper is Arc<ApiClient> and gets cloned into every watcher task and drill spawn. :context switching builds a new Arc<ApiClient> and rebuilds the screen list against it; old fan-out spawns die with the old root cancel.

Logging (src/logging.rs, src/log_capture.rs)

tracing + a process-wide LogCapture ring buffer (capacity 200). Every bee-rs HTTP call emits a structured event captured here. S10 (the command log) renders the buffer; :diagnose dumps the last 50 entries; S8's call stats compute p50/p99 over the most recent 100.

Tokens are never in the buffer — only method, url, status, elapsed_ms, ts. Headers (where Bearer lives) are not captured.
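
A minimal sketch of the ring-buffer idea — the field and type names here are illustrative, not the real src/log_capture.rs API:

use std::collections::VecDeque;
use std::sync::Mutex;

struct LogEntry {
    method: String,
    url: String,
    status: u16,
    elapsed_ms: u64,
    ts: u64,
}

struct LogCapture {
    entries: Mutex<VecDeque<LogEntry>>,
    capacity: usize, // 200 in the cockpit
}

impl LogCapture {
    fn push(&self, entry: LogEntry) {
        let mut q = self.entries.lock().unwrap();
        if q.len() == self.capacity {
            // oldest entry falls off — it's a ring, not a log file
            let _ = q.pop_front();
        }
        q.push_back(entry);
    }
}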

Where to read for more depth

  • The docs/PLAN.md (in the repo) is the canonical pre-implementation design doc — § 6 has the watch-hub design in full
  • The tests/sN_*.rs files show how each view_for is tested — useful when adding a new screen
  • The src/components/peers.rs file is the most complex component (bin saturation strip + scrollable peer table + 4-way drill); it's the canonical example of "everything the cockpit can do"

Adding a screen

A practical walkthrough of adding a new screen to the cockpit. The workflow is the same one every existing screen followed: snapshot type → watcher → component → pure view fn → insta tests → wire into App.

The example in this page is hypothetical — adding an "S15 — Settlements forensics" screen on top of the current 14. Real index 10 is already Manifest, 11 is Watchlist, 12 is FeedTimeline, 13 is Pubsub; a new screen would slot in at 14. The illustrative code below uses index 14 accordingly.

1. Define the snapshot type

In src/watch/mod.rs, add a struct holding everything one poll of your endpoint produces:

#[derive(Debug, Clone, Default)]
pub struct SettlementsForensicsSnapshot {
    pub last_update: Option<Instant>,
    pub settlements: Vec<Settlement>,
    pub total_received: String,
    pub total_sent: String,
}

The last_update: Option<Instant> field is the convention for "did we ever poll yet?" — components use it to distinguish cold-start (Unknown) from "loaded but empty".

2. Add it to BeeWatch

Spawn a watcher task. The pattern (in src/watch/):

impl BeeWatch {
    pub fn settlements_forensics(&self) -> watch::Receiver<SettlementsForensicsSnapshot> {
        self.settlements_forensics_rx.clone()
    }
}

fn spawn_settlements_forensics_watcher(
    api: Arc<ApiClient>,
    cancel: CancellationToken,
) -> watch::Receiver<SettlementsForensicsSnapshot> {
    let (tx, rx) = watch::channel(SettlementsForensicsSnapshot::default());
    tokio::spawn(async move {
        let bee = api.bee();
        let mut interval = tokio::time::interval(Duration::from_secs(30));
        loop {
            tokio::select! {
                _ = cancel.cancelled() => break,
                _ = interval.tick() => {
                    if let Ok(s) = bee.debug().settlements().await {
                        let _ = tx.send(SettlementsForensicsSnapshot {
                            last_update: Some(Instant::now()),
                            settlements: s.peers,
                            total_received: format_bzz(s.total_received),
                            total_sent: format_bzz(s.total_sent),
                        });
                    }
                }
            }
        }
    });
    rx
}

Pick the cadence based on how fast the data actually changes. Settlement state changes at chain rate — 30 s is plenty.

Wire spawn_settlements_forensics_watcher into BeeWatch::start and store the receiver on BeeWatch.

3. Define the View struct

In a new file src/components/settlements_forensics.rs:

#[derive(Debug, Clone, PartialEq, Eq)]
pub struct SettlementsForensicsView {
    pub rows: Vec<SettlementRow>,
    pub totals: SettlementsTotals,
    pub status: SettlementsStatus,
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub struct SettlementRow {
    pub peer_short: String,
    pub received: String,
    pub sent: String,
    pub net: String,
    pub net_sign: NetSign,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum NetSign { Positive, Negative, Zero }

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum SettlementsStatus {
    Unknown, // cold start
    Healthy,
    Skewed,  // some peer's |net| > threshold
}

The View carries display-ready data: pre-formatted strings, classified statuses, sort order. The renderer should never have to re-compute "is this row skewed" — the view fn already did it.

4. Write the pure view_for fn

pub fn view_for(snap: &SettlementsForensicsSnapshot) -> SettlementsForensicsView {
    if snap.last_update.is_none() {
        return SettlementsForensicsView {
            rows: vec![],
            totals: SettlementsTotals::default(),
            status: SettlementsStatus::Unknown,
        };
    }

    let mut rows: Vec<SettlementRow> = snap.settlements
        .iter()
        .map(SettlementRow::from)
        .collect();
    rows.sort_by_key(|r| Reverse(r.abs_net_plur()));

    let any_skewed = rows.iter().any(|r| r.is_skewed());

    SettlementsForensicsView {
        rows,
        totals: SettlementsTotals { /* ... */ },
        status: if any_skewed { Skewed } else { Healthy },
    }
}

Pure: takes &Snapshot, returns View. No I/O, no references to global state, no theme calls. This is the testable surface.

5. Write insta snapshot tests

In tests/s15_settlements_forensics.rs:

use bee_tui::components::settlements_forensics::*;
use bee_tui::watch::SettlementsForensicsSnapshot;
use std::time::Instant;

fn fixture(/* parameters */) -> SettlementsForensicsSnapshot {
    SettlementsForensicsSnapshot {
        last_update: Some(Instant::now()),
        settlements: vec![/* fixture data */],
        total_received: "BZZ 12.5".into(),
        total_sent: "BZZ 11.2".into(),
    }
}

#[test]
fn cold_start_is_unknown() {
    let snap = SettlementsForensicsSnapshot::default();
    let view = view_for(&snap);
    assert_eq!(view.status, SettlementsStatus::Unknown);
}

#[test]
fn skewed_when_one_peer_is_far_out_of_balance() {
    let snap = fixture(/* ... */);
    let view = view_for(&snap);
    insta::assert_yaml_snapshot!(view);
}

Run cargo test --test s15_settlements_forensics and use cargo insta review to accept the new snapshots. The snapshots become the contract — any future change that alters the View needs explicit re-acceptance.

6. Implement the Component

pub struct SettlementsForensics {
    rx: watch::Receiver<SettlementsForensicsSnapshot>,
    snapshot: SettlementsForensicsSnapshot,
    selected: usize,
    scroll_offset: usize,
}

impl SettlementsForensics {
    pub fn new(rx: watch::Receiver<SettlementsForensicsSnapshot>) -> Self {
        let snapshot = rx.borrow().clone();
        Self { rx, snapshot, selected: 0, scroll_offset: 0 }
    }
}

impl Component for SettlementsForensics {
    fn update(&mut self, action: Action) -> Result<Option<Action>> {
        match action {
            Action::Tick => self.snapshot = self.rx.borrow().clone(),
            // handle screen-specific keys here
            _ => {}
        }
        Ok(None)
    }

    fn draw(&mut self, frame: &mut Frame, area: Rect) -> Result<()> {
        let view = view_for(&self.snapshot);
        // render view into ratatui widgets
        Ok(())
    }
}

7. Wire into App

In src/app.rs:

const SCREEN_NAMES: &[&str] = &[
    "Health", "Stamps", "Swap", "Lottery", "Peers",
    "Network", "Warmup", "API", "Tags", "Pins",
    "Manifest", "Watchlist", "FeedTimeline", "Pubsub",
    "Settlements",  // NEW — index 14
];

fn build_screens(
    api: &Arc<ApiClient>,
    watch: &BeeWatch,
    market_rx: Option<watch::Receiver<crate::economics_oracle::EconomicsSnapshot>>,
) -> Vec<Box<dyn Component>> {
    // ...existing 14 screens...
    let settlements_forensics = SettlementsForensics::new(
        watch.settlements_forensics(),
    );
    vec![
        // ...existing...
        Box::new(settlements_forensics),
    ]
}

If your screen has screen-specific keys, add them to screen_keymap():

fn screen_keymap(active_screen: usize) -> &'static [(&'static str, &'static str)] {
    match active_screen {
        // ...existing...
        14 => &[
            ("↑↓ / j k", "scroll one row"),
            ("PgUp / PgDn", "scroll ten rows"),
        ],
        _ => &[],
    }
}

If your screen needs a verb category (so it appears under the right heading in the v1.10 paged help overlay), update verb_category() in src/app.rs too — the test verb_category_covers_every_known_command will fail loudly if you add a new KNOWN_COMMANDS entry without categorising it.

8. (Optional) Add a :settlements jump command

In the command bar handler in src/app.rs, the SCREEN_NAMES table makes :settlements automatically work — any name in the list becomes a valid screen-jump command. So nothing to add.

9. Add a screens entry in mdBook

Edit docs/book/src/SUMMARY.md:

- [S15 — Settlements forensics](./screens/s15-settlements.md)

Then write docs/book/src/screens/s15-settlements.md following the existing pattern: "Why this screen exists → data shape → status semantics → common scenarios → snapshot cadence → keys".

Checklist

Before opening a PR:

  • Watcher task respects cancel.cancelled() so it shuts down cleanly
  • Watcher cadence is appropriate (don't poll faster than data actually changes)
  • view_for is pure (no theme::active() calls; let the renderer do colour)
  • insta tests cover cold-start (Unknown) + healthy + at least one degraded state
  • cargo fmt && cargo clippy --all-targets --all-features -- -D warnings clean
  • mdBook page added to SUMMARY.md
  • If your screen has interactive keys, they're listed in screen_keymap() so the ? overlay finds them

Things to not do

  • Don't poll inside a Component. Components are pure renderers. Move polling to the watch hub.
  • Don't share mutable state between Components. Use the watch hub if multiple screens need the same data.
  • Don't compute layout / colour inside view_for. That belongs in draw. The View is data, the renderer is presentation.
  • Don't skip insta tests even if the screen "looks simple". The investment pays off the first time someone refactors the cockpit's wiring.

Drill panes

The cockpit has two on-demand drill panes: the S2 stamp bucket drill (Enter on a batch row) and the S6 peer drill (Enter on a peer row). Both share the same state-machine + async fan-out pattern. This page documents that pattern so future drills (S9 tag drill, S3 peer-cheque drill, etc.) can be added consistently.

The state machine

pub enum DrillState {
    Idle,
    Loading { /* selection identifier */ },
    Loaded   { view: DrillView },
    Failed   { error: String },     // S2 only — S6 has per-row failures
}

Four states, one transition diagram:

                ↵ pressed
   Idle ─────────────────────► Loading
    ▲                              │
    │                              │ async fetch completes
    │ Esc                          ▼
    └──────────── Loaded ──────────┘
                  Failed
  • Idle — regular table is rendered, no drill UI.
  • Loading — drill pane is rendered with a spinner; data is in-flight.
  • Loaded — drill pane is rendered with the result.
  • Failed (S2 only) — drill pane shows an error message. S6 takes a different approach: each of the four endpoints can fail independently, so failure is per-row, not pane.

Esc always returns to Idle. Pressing Enter from Loaded re-fires the fetch (useful for the rchash benchmark; less common for the drill panes).

The async fan-out

S6's drill is the canonical example — four endpoints in parallel:

fn start_peer_drill(&self, peer_overlay: String, bin: Option<u8>) {
    let api = self.client.clone();
    let tx = self.drill_tx.clone();
    tokio::spawn(async move {
        let bee = api.bee();
        let debug = bee.debug();
        let (balance, cheques, settlement, ping) = tokio::join!(
            debug.peer_balance(&peer_overlay),
            debug.peer_cheques(&peer_overlay),
            debug.peer_settlement(&peer_overlay),
            debug.pingpong(&peer_overlay),
        );
        let fetch = PeerDrillFetch {
            balance: balance.map_err(|e| e.to_string()),
            cheques: cheques.map_err(|e| e.to_string()),
            settlement: settlement.map_err(|e| e.to_string()),
            ping: ping.map_err(|e| e.to_string()),
        };
        let _ = tx.send((peer_overlay, fetch));
    });
}

Note: each endpoint result is converted to Result<T, String> before being sent down the channel. This is critical — it means the receiving side doesn't need to handle each endpoint's specific error type, and the aggregated PeerDrillFetch can be passed to a pure compute_peer_drill_view(...) for testability.

Why mpsc, not oneshot?

Even though only one drill is in flight at a time conceptually, we use mpsc::UnboundedReceiver rather than oneshot. Reason: the user can press Esc and then Enter again quickly, kicking off a new fetch before the old one completes. The new spawn sends down the same tx; the receiver drains all of them.

Late results from cancelled drills are dropped silently:

fn pull_drill_results(&mut self) {
    while let Ok((peer, fetch)) = self.drill_rx.try_recv() {
        // Only consume if this matches the currently loading peer
        let pending_peer = match &self.drill {
            DrillState::Loading { peer, .. } => peer.clone(),
            _ => continue,  // drop late result
        };
        if peer != pending_peer { continue; }
        let bin = match &self.drill {
            DrillState::Loading { bin, .. } => *bin,
            _ => None,
        };
        let view = Self::compute_peer_drill_view(&peer, bin, &fetch);
        self.drill = DrillState::Loaded { view };
    }
}

The continue on drop is intentional. We don't log "dropped a stale drill result" — it's part of normal flow.

The pure compute fn

Like screens themselves, drills have a pure compute_*_drill_view(...) that takes the fetch result and produces a DrillView:

pub fn compute_peer_drill_view(
    peer: &str,
    bin: Option<u8>,
    fetch: &PeerDrillFetch,
) -> PeerDrillView {
    PeerDrillView {
        peer_overlay: peer.into(),
        bin,
        balance:   fetch.balance.as_ref()
            .map(|b| format_balance(b))
            .map_err(|e| e.clone()).into(),
        ping:      fetch.ping.clone().into(),
        // ... other fields
    }
}

This is the snapshot-test surface: feed it a fixture PeerDrillFetch (mix of Ok and Err per field) and assert the View renders as expected. See tests/s6_peers_drill.rs for the canonical fixture set.
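
A sketch of one such fixture test — the fixture_* helpers and the test name are hypothetical; tests/s6_peers_drill.rs has the real set:

#[test]
fn drill_view_with_failed_ping() {
    // three endpoints succeed, pingpong times out — the view should still render
    let fetch = PeerDrillFetch {
        balance: Ok(fixture_balance()),
        cheques: Ok(fixture_cheques()),
        settlement: Ok(fixture_settlement()),
        ping: Err("request timed out".to_string()),
    };
    let view = compute_peer_drill_view("1a2b…overlay", Some(7), &fetch);
    insta::assert_yaml_snapshot!(view);
}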

Cancellation semantics

Drill spawns are not tied to the root_cancel explicitly — they're fire-and-forget. They will always complete (or error out via the underlying HTTP timeout). The cockpit doesn't care; late results land on a closed channel (silently dropped) or get filtered by the "matches current selection" check above.

The exception: when :context switches profiles, the component itself is rebuilt. The new component has a fresh mpsc::channel; old in-flight spawns send to the old tx, which is dropped, and their results vanish. Clean by design.

Adding a new drill

Three pieces:

  1. A DrillState enum in your component file with the variants you need. Reuse Idle | Loading | Loaded if the failure mode is per-pane; if it's per-row (like S6's four endpoints), make DrillField<T> like S6 does.
  2. An async spawn function that does the fetch and sends the result down an internal mpsc. Use tokio::join! to fan out parallel fetches when possible.
  3. A pure compute_*_drill_view(...) fn that takes the fetch result and produces a DrillView. Test it with insta snapshots covering: cold load (Loading), happy path (Loaded), partial failure (where applicable).

What to not do

  • Don't put the drill fetch inside update() — async doesn't work cleanly there, and you'll block tick handling. Always tokio::spawn.
  • Don't make the drill auto-refresh. Drills are on-demand by design; auto-refresh would burn API calls on data the operator may have already left.
  • Don't make the drill block the main pane — the underlying screen should keep refreshing while the drill is open. The drill is an overlay, not a modal lock.
  • Don't share drill_rx between components. Each component owns its own channel. Drills are component-local state.

Examples in the codebase

File | Drill type | Endpoints
src/components/stamps.rs | Bucket histogram | GET /stamps/<id>/buckets (single)
src/components/peers.rs | Per-peer | peer_balance, pingpong, peer_settlement, peer_cheques (4 in parallel)
src/components/lottery.rs | rchash benchmark | GET /rchash/<depth>/<a1>/<a2> (single, with timing)

The Lottery rchash isn't strictly a "drill" by name but it follows the same pattern: state machine, async fan-out, pure compute fn.

See also