Radish alpha
rad:z3gqcJUoA1n9HaHKufZs5FCSGazv5
Radicle Heartwood Protocol & Stack
simulation: Manual simulation with Timoni & K8s
Open · ade opened 17 days ago

In anticipation of `cargo test`-based simulation tests, this patch introduces a Timoni module and justfile to manually deploy a basic Radicle network inside of K8s, as defined in `instances/network.cue`.

The Timoni module defines a set of roles (peer, seed, bootstrap) that can be arranged into topologies for deployment in a K8s system. Images are provided by Radicle Garden via quay.io.

Talos is the K8s cluster manager, provisioning on QEMU by default.

The network.cue topology definition creates a v1.8.0 peer and bootstrap node that can be manually interacted with inside the simulation network.
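
Once deployed, the nodes can be poked at directly with `kubectl`. For example (a sketch; pod names follow the StatefulSet `<instance>-<ordinal>` convention, and `rad node status` is assumed available in the container image):

```shell
# Confirm the peer generated an identity
$ kubectl exec peer-v1-8-0-0 -c node -- rad self

# Check whether the node is running
$ kubectl exec peer-v1-8-0-0 -c node -- rad node status
```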

17 files changed +1022 -38 b482845e 48b68c97
modified .codespellrc
@@ -1,5 +1,6 @@
+
# See: https://github.com/codespell-project/codespell#using-a-config-file
[codespell]
-
skip = .git*,*.lock,.codespellrc,target,.jj,.direnv
+
skip = .git*,*.lock,.codespellrc,target,.jj,.direnv,simulation/modules/radicle-node/cue.mod/*
check-hidden = true
ignore-words-list = ser,set,noes
-
dictionary = .codespell-dictionary.txt,-

\ No newline at end of file
+
dictionary = .codespell-dictionary.txt,-
modified .typos.toml
@@ -17,3 +17,8 @@ extend-ignore-re = [
[type.codespell]
check-file = false
extend-glob = [".codespellrc"]
+

+
[files]
+
extend-exclude = [
+
    "simulation/modules/radicle-node/cue.mod"
+
]
added simulation/.gitignore
@@ -0,0 +1,4 @@
+
controlplane.yaml
+
worker.yaml
+
/modules/radicle-node/cue.mod/pkg/
+
/modules/radicle-node/cue.mod/gen/
modified simulation/README.md
@@ -1,15 +1,135 @@
# Simulation Environment

-
A suite of tools to create simulated Radicle networks to run tests in:
+
A suite of tools to create simulated Radicle networks to run tests in.

-
- **Talos**: A lightweight, immutable Linux operating system built specifically to run Kubernetes.
-
  It can run locally on your machine (via QEMU or Docker) or as a baremetal OS (amongst other deploy options).
-
- **Kubernetes (K8s)**: The orchestrator that runs the Radicle nodes in isolated pods and manages their networking and storage.
-
- **Timoni** & **CUE**: The configuration engine.
-
  Instead of writing YAML, we use CUE files to define network topologies.
-
  Timoni translates these into Kubernetes instructions.
-
- **Cargo test**: The test runner.
-
  Write tests in Rust that will execute over the provisioned networks.
+
This environment provisions a Kubernetes cluster, deploys a configurable topology of `radicle-node` instances, and provides a foundation for running cross-version, cross-platform, and adverse network tests.
+

+
## Prerequisites
+

+
To run the simulation environment, you need the following tools installed on your system:
+

+
- **[just](https://just.systems/)**: A command runner (replaces `make`).
+
- **[talosctl](https://talos.dev/docs/v1.12/learn-more/talosctl/)**: CLI for creating and managing Talos Linux clusters.
+
- **[kubectl](https://kubernetes.io/docs/tasks/tools/)**: CLI for interacting with Kubernetes.
+
- **[timoni](https://timoni.sh/)**: A package manager for Kubernetes, powered by CUE.
+
- **[cue](https://cuelang.org/)**: (Optional) Useful for debugging and formatting CUE files.
+
- **[QEMU](https://www.qemu.org/download/)** or **[Docker](https://www.docker.com/)**: Required by Talos to provision the local cluster nodes. (Defaults to `qemu`).
+

+
## Getting Started
+

+
The environment is managed entirely via `just`. From the `simulation` directory, you can run:
+

+
```shell
+
# Start the complete simulation (creates cluster, configures K8s, and deploys the network)
+
$ just start
+

+
# Note: To use Docker instead of QEMU, override the provisioner:
+
$ PROVISIONER=docker just start
+

+
# Inspect the cluster and see running pods
+
$ just show-cluster
+

+
# Tear down the network workloads (deletes pods and storage, keeps the cluster running)
+
$ just delete
+

+
# Destroy the entire Talos cluster and clean up your kubeconfig
+
$ just destroy
+
```
+

+
Run `just` by itself to see a list of all available commands.
+

+
## Architecture Overview
+

+
Here is how the different tools interact to build the simulation:
+

+
1. **Just (`justfile`)**: Acts as the orchestrator. It runs the bash scripts required to bootstrap the environment, verify tools are installed, and execute the correct CLI commands in sequence.
+
2. **Talos (`talosctl`)**: A lightweight, immutable Linux operating system built specifically to run Kubernetes. `just` uses `talosctl` to spin up a local K8s cluster inside QEMU or Docker.
+
3. **Kubernetes (`kubectl`)**: The orchestrator that runs the Radicle nodes in isolated pods. It manages their networking (DNS resolution between nodes) and persistent storage.
+
4. **Timoni & CUE**: The configuration engine. Instead of writing verbose YAML, we use CUE files to define network topologies. Timoni reads these CUE files, transpiles them into Kubernetes object definitions (StatefulSets, Services, ConfigMaps), and applies them to the cluster.
+
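As a sketch of how steps 3 and 4 fit together when driven by hand (assuming the cluster from `just start` already exists; `timoni build` renders the manifests without applying them):

```shell
# Render the CUE topology into Kubernetes YAML without applying it
$ timoni build radicle-network modules/radicle-node -f instances/network.cue

# Apply the topology and watch the pods come up
$ timoni apply radicle-network modules/radicle-node -f instances/network.cue
$ kubectl get pods -l app=radicle-node --watch
```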

+
## Defining a Topology
+

+
Network topologies are defined in `instances/network.cue`. This file dictates how many nodes exist, what roles they play (e.g., `bootstrap`, `peer`), and how they connect to each other.
+

+
Here is an example of how the topology is structured:
+

+
```cue
+
package main
+

+
// ...
+

+
// Declare instances to deploy
+
values: {
+
	topology: {
+
		// A bootstrap node
+
		"bootstrap-v1-8-0": {
+
			role:          "bootstrap"
+
			version:       "1.8.0"
+
			replicas:      1
+
			nodeIdSeed:    "bootstrap-0" // Deterministically generates the NID in #BootstrapNIDs
+
			radicleConfig: #BaseBootstrapSeedConfig
+
		}
+
		
+
		// A peer node that connects to the bootstrap node
+
		"seed-v1-8-0": {
+
			role:          "seed"
+
			version:       "1.8.0"
+
			replicas:      1
+
			radicleConfig: #BasePeerConfig & {
+
				preferredSeeds: [
+
					// Uses a helper to format the K8s internal DNS address
+
					(#SeedAddress & {nid: #BootstrapNIDs["bootstrap-0"], name: "bootstrap-v1-8-0"}).out,
+
				]
+
			}
+
		}
+
	}
+
}
+
```
+

+
When you run `just start-network`, Timoni reads this file, merges it with the module definitions in `modules/radicle-node`, and deploys the resulting pods to Kubernetes.
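
Scaling the topology out is a matter of editing this file. For example, a hypothetical second peer group running an older release (reusing the same helpers) could be added alongside the entries above:

```cue
// Inside values.topology: a second, older peer group
// to exercise cross-version behaviour.
"peer-v1-7-1": {
	role:          "peer"
	version:       "1.7.1"
	replicas:      2
	radicleConfig: #BasePeerConfig & {
		preferredSeeds: [
			(#SeedAddress & {nid: #BootstrapNIDs["bootstrap-0"], name: "bootstrap-v1-8-0"}).out,
		]
	}
}
```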
+

+
## Helpful Commands
+

+
**Execute a command inside a node:**
+

+
```bash
+
$ kubectl exec peer-v1-8-0-0 -c node -- rad self
+

+
$ kubectl exec peer-v1-8-0-0 -it -c node -- sh
+
```
+

+
**Follow Radicle node events (from the sidecar):**
+

+
```bash
+
$ kubectl logs peer-v1-8-0-0 -c events -f
+
```
+

+
**View standard node logs:**
+

+
```bash
+
$ kubectl logs -f bootstrap-v1-8-0-0 -c node
+
```
+

+
**Describe a pod (useful for debugging `CrashLoopBackOff` errors):**
+

+
```bash
+
$ kubectl describe pod bootstrap-v1-8-0-0
+
```
+

+
**Watch all cluster events in real-time:**
+

+
```bash
+
$ kubectl get events --watch
+
```
+

+
## Deterministic Keys
+

+
To ensure nodes can reliably connect to each other across restarts, the `bootstrap` nodes are configured with deterministic Node IDs (NIDs).
+

+
- `bootstrap-0`: `did:key:z6MkhJ3cwzpAoNjFnJXWETSPHcDyw2HuBVEhgkyTfbjQHY1B`
+
- `bootstrap-1`: `did:key:z6MkjcaeSHhQVJU1UeXpnHHZ6mp67zDfQYNMDotHGxbrk7Nj`
+
- `bootstrap-2`: `did:key:z6MkjNGhuJvdp2noidRMLqco4jFnNNSWzCxSZH5nJV1pGrwQ`
+
- `bootstrap-3`: `did:key:z6MkpEsXUMSnmyfwdEVkAKijTxGy9WKmNoHWpoxxLM6bbz9M`
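
These NIDs fall out of the module's seed-hashing scheme: the start script SHA-256 hashes `NODE_ID_SEED` into `RAD_KEYGEN_SEED` before running `rad auth`. A minimal sketch of the same derivation outside the cluster (assuming a local `rad` install):

```shell
# Hash the seed string into the 32-byte hex seed rad expects
$ export RAD_KEYGEN_SEED=$(echo "bootstrap-0" | sha256sum | tr -d "\n *-")

# Generate the identity; this should reproduce the bootstrap-0 DID above
$ RAD_PASSPHRASE="" rad auth --alias bootstrap-0
```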

## Why?

@@ -24,27 +144,6 @@ However we can only run them on the currently checked out version of `heartwood`
The simulation environment is intended to remedy these gaps and more.
See the [Goals] section for more info.

-
## Overview
-

-
The Garden team currently deploys containerised versions of `radicle-node` into [Quay.io](https://quay.io/repository/radicle_garden/radicle-node?tab=tags&tag=latest).
-
We can utilise these containers inside of K8s configuration files to compose sets of pods.
-
These pods encapsulate `radicle-node` processes in different configurations, e.g. peer, seed or bootstrap.
-
Also, they might run different versions of `heartwood` (to facilitate cross-version testing),
-
and on different platforms (to facilitate cross-platform testing).
-
Each of these 'sets of pods' configuration will be considered a network topology, and defined in [CUE](https://cuelang.org/).
-
It allows us to write type safe configuration definitions instead of YAML.
-
We will then use [Timoni](https://timoni.sh/) to transpile these CUE defined network topologies into [K8s object definition files](https://kubernetes.io/docs/concepts/overview/working-with-objects/) and deploy them.
-
[Talos](https://talos.dev) will be used to run the K8s pods on; so we can easily switch between locally deployed, via QEMU or Docker, to baremetal on SBC's like Raspberry Pi's, or remotely in cloud environments.
-
Then with some glue and orchestration code we can utilise the `cargo test` runner to provision a network topology, run tests over it and tear it down again.
-
Finally we can insert observability systems into K8s so we can inspect and compare metrics and logs from different test runs.
-

-
This will give us the following workflow for constructing test scenarios:
-

-
1. Define a network topology of `radicle-node`'s on some platform(s) in CUE.
-
2. Write tests that interact with the `radicle-nodes` in Rust.
-
3. Run the tests.
-
4. Inspect / Debug via observability systems.
-

## Constraints

### Non-Goals:
@@ -56,8 +155,8 @@ This will give us the following workflow for constructing test scenarios:

### Goals:

-
- [ ] Isolation between simulations and main network.
-
- [ ] Different node versions within a simulation.
+
- [X] Isolation between simulations and main network.
+
- [X] Different node versions within a simulation.
- [ ] Cross platform ([Windows](https://github.com/dockur/windows), Linux & [MacOS](https://github.com/dockur/macos)).
- [ ] Realistic load generation.
- [ ] Invariant assertion across simulation network.
@@ -66,7 +165,7 @@ This will give us the following workflow for constructing test scenarios:
- [ ] Realtime Observability.
- [ ] CI/CD Integration.
- [ ] Cross simulation comparative insights e.g. CPU pressure change from version A to version B.
-
- [ ] Flexibility to define network topologies.
+
- [X] Flexibility to define network topologies.
- [ ] Easy to construct and run new simulations.
- [ ] Reproducible starting state.
- [ ] Adverse network emulation e.g. dropped packets, network delays...
@@ -74,9 +173,9 @@ This will give us the following workflow for constructing test scenarios:
## Plan

- [ ] Migrate existing [simulation environment repo](https://app.radicle.xyz/nodes/iris.radicle.xyz/rad%3Az2CzknCvAq9jSCpKdyjMppbvGmxyZ) into `heartwood`.
-
  1. [ ] `radicle-node` timoni module.
+
  1. [X] `radicle-node` timoni module.
  2. [ ] `radicle-node` custom container builder.
-
  3. [ ] `instances` topology definition files.
+
  3. [X] `instances` topology definition files.
  4. [ ] `sim-tests` rust crate.
-
  5. [ ] `Makefile`.
+
  5. [X] `justfile` orchestration.
  6. [ ] `observability` definition files.
added simulation/instances/network.cue
@@ -0,0 +1,80 @@
+
package main
+

+
// Pre-calculated NIDs.
+
#BootstrapNIDs: {
+
	"bootstrap-0": "z6MkhJ3cwzpAoNjFnJXWETSPHcDyw2HuBVEhgkyTfbjQHY1B"
+
	"bootstrap-1": "z6MkjcaeSHhQVJU1UeXpnHHZ6mp67zDfQYNMDotHGxbrk7Nj"
+
	"bootstrap-2": "z6MkjNGhuJvdp2noidRMLqco4jFnNNSWzCxSZH5nJV1pGrwQ"
+
	"bootstrap-3": "z6MkpEsXUMSnmyfwdEVkAKijTxGy9WKmNoHWpoxxLM6bbz9M"
+
}
+

+
// Shared configs
+
#SeedAddress: {
+
	nid:   string
+
	name:  string
+
	role:  string | *"bootstrap"
+
	index: int | *0
+
	out:   "\(nid)@\(name)-\(index).\(role).default.svc.cluster.local:8776"
+
}
+

+
#BaseBootstrapSeedConfig: {
+
	node: {
+
		listen: ["0.0.0.0:8776"]
+
		seedingPolicy: {
+
			default: "allow"
+
			scope:   "all"
+
		}
+
		...
+
	}
+
	...
+
}
+

+
#BasePeerConfig: {
+
	node: {
+
		listen: []
+
		peers: type: "dynamic"
+
		connect: []
+
		externalAddresses: []
+
		log:   "INFO"
+
		relay: "auto"
+
		limits: {
+
			routingMaxSize:   1000
+
			routingMaxAge:    604800
+
			gossipMaxAge:     1209600
+
			fetchConcurrency: 1
+
			maxOpenFiles:     4096
+
			rate: {
+
				inbound: {fillRate: 5.0, capacity: 1024}
+
				outbound: {fillRate: 10.0, capacity: 2048}
+
			}
+
			connection: {inbound: 128, outbound: 16}
+
			fetchPackReceive: "500.0 MiB"
+
		}
+
		seedingPolicy: default: "block"
+
		...
+
	}
+
	...
+
}
+

+
values: {
+
	topology: {
+
		// Instances
+
		"bootstrap-v1-8-0": {
+
			role:          "bootstrap"
+
			version:       "1.8.0"
+
			replicas:      1
+
			nodeIdSeed:    "bootstrap-0"
+
			radicleConfig: #BaseBootstrapSeedConfig
+
		}
+
		"peer-v1-8-0": {
+
			role:          "peer"
+
			version:       "1.8.0"
+
			replicas:      1
+
			radicleConfig: #BasePeerConfig & {
+
				preferredSeeds: [
+
					(#SeedAddress & {nid: #BootstrapNIDs["bootstrap-0"], name: "bootstrap-v1-8-0"}).out,
+
				]
+
			}
+
		}
+
	}
+
}
added simulation/justfile
@@ -0,0 +1,151 @@
+
provisioner := env_var_or_default("PROVISIONER", "qemu")
+
cluster_name := "radicle-" + provisioner
+
clusters_dir := env_var("HOME") + "/.talos/clusters"
+
radicle_node_module := "modules/radicle-node"
+
module_pkg := "cue.mod/pkg"
+
module_gen := "cue.mod/gen"
+
kubectl_context := `kubectl config current-context 2>/dev/null || echo "none"`
+

+
SUCCESS := "✅ " + GREEN + BOLD
+
CHECK := "🔄 " + BOLD
+
WARN := "⚠️ " + YELLOW + BOLD
+
ERROR := "❌ " + RED + BOLD
+
HINT := "💡 " + BOLD
+

+
default:
+
    @just --list
+

+
# Setup and start the complete simulation environment
+
[group('start')]
+
[group('setup')]
+
start: setup start-network
+
    @echo ""
+
    @echo "{{SUCCESS}}Simulation started!{{NORMAL}}"
+
    @echo ""
+

+
# Setup cluster and dependencies
+
[group('setup')]
+
setup: configure-cluster
+
    @echo "{{SUCCESS}}Setup complete{{NORMAL}}"
+

+
# Create the Talos cluster if it doesn't exist
+
[private]
+
create-cluster: (verify-tool "talosctl")
+
    #!/usr/bin/env bash
+
    set -e
+
    if [ ! -d "{{clusters_dir}}/{{cluster_name}}" ]; then
+
        echo "{{CHECK}}Creating Talos cluster '{{cluster_name}}' using {{provisioner}}...{{NORMAL}}"
+
        mkdir -p "{{clusters_dir}}"
+
        if [ "{{provisioner}}" = "qemu" ]; then
+
            sudo --preserve-env=HOME talosctl cluster create --name={{cluster_name}} {{provisioner}} --config-patch-controlplanes '{"cluster": {"allowSchedulingOnControlPlanes": true}}'
+
        else
+
            talosctl cluster create --name={{cluster_name}} {{provisioner}} --config-patch-controlplanes '{"cluster": {"allowSchedulingOnControlPlanes": true}}'
+
        fi
+
    else
+
        echo "{{SUCCESS}}Cluster '{{cluster_name}}' already exists.{{NORMAL}}"
+
    fi
+

+
# Configure the Kubernetes cluster
+
[private]
+
configure-cluster: (verify-tool "kubectl") create-cluster
+
    @echo "{{CHECK}}Configuring cluster...{{NORMAL}}"
+
    # Add local-path storage system
+
    @kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
+
    # Relax security on namespaces
+
    @kubectl label namespace local-path-storage pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/warn=privileged pod-security.kubernetes.io/audit=privileged --overwrite
+
    @kubectl label namespace default pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/warn=privileged pod-security.kubernetes.io/audit=privileged --overwrite
+
    # Set default storage class
+
    @kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
+

+
# Start the simulation network only
+
[group('start')]
+
start-network: (verify-tool "timoni") vendor-timoni-dependencies
+
    @echo "{{CHECK}}Starting simulation network...{{NORMAL}}"
+
    @timoni apply radicle-network {{radicle_node_module}} -f instances/network.cue
+
    @just show-cluster
+

+
# Vendor Timoni dependencies
+
[private]
+
vendor-timoni-dependencies: (verify-tool "timoni")
+
    #!/usr/bin/env bash
+
    set -e
+
    cd {{radicle_node_module}}
+
    if [ ! -d "{{module_pkg}}" ]; then
+
        echo "{{CHECK}}Fetching Timoni pkg files...{{NORMAL}}"
+
        timoni artifact pull oci://ghcr.io/stefanprodan/timoni/schemas -o cue.mod/pkg
+
    fi
+
    if [ ! -d "{{module_gen}}" ]; then
+
        echo "{{CHECK}}Fetching Timoni k8s gen files...{{NORMAL}}"
+
        timoni mod vendor k8s
+
    fi
+

+
# Show cluster status
+
[group('inspect')]
+
show-cluster: (verify-tool "kubectl") (verify-tool "talosctl")
+
    @echo "Cluster: {{cluster_name}}"
+
    @echo "Context: {{kubectl_context}}"
+
    @talosctl cluster show --name {{cluster_name}} --provisioner {{provisioner}} || true
+
    @kubectl get pods -o wide
+

+
# Delete simulation pods and resources
+
[group('delete')]
+
delete: delete-pods delete-pvc
+
    @echo "{{SUCCESS}}Simulation cleaned up{{NORMAL}}"
+

+
# Delete pods only
+
[group('delete')]
+
delete-pods: (verify-tool "kubectl")
+
    @echo "{{CHECK}}Deleting pods...{{NORMAL}}"
+
    @kubectl delete pods -l app=radicle-node --wait=false
+

+
# Delete storage volumes
+
[group('delete')]
+
delete-pvc: (verify-tool "kubectl")
+
    @echo "{{CHECK}}Deleting storage volumes...{{NORMAL}}"
+
    @kubectl delete pvc -l app=radicle-node --wait=false
+

+
# Destroy the Talos cluster and clean up kubeconfig
+
[group('delete')]
+
destroy: (verify-tool "kubectl") (verify-tool "talosctl") show-cluster
+
    #!/usr/bin/env bash
+
    set -e
+
    echo ""
+
    echo -n "Are you sure you want to destroy the cluster and remove kubeconfig entries? [y/N] "
+
    read answer
+
    if [ "${answer:-N}" != "y" ]; then
+
        echo "Aborted."
+
        exit 1
+
    fi
+
    
+
    echo "{{CHECK}}Destroying talos cluster '{{cluster_name}}'...{{NORMAL}}"
+
    if [ "{{provisioner}}" = "qemu" ]; then
+
        sudo --preserve-env=HOME talosctl cluster destroy --name {{cluster_name}} --provisioner {{provisioner}}
+
    else
+
        talosctl cluster destroy --name {{cluster_name}} --provisioner {{provisioner}}
+
    fi
+
    
+
    echo "{{CHECK}}Removing kube config entries...{{NORMAL}}"
+
    CONTEXT=$(kubectl config current-context 2>/dev/null || echo "")
+
    if [ -n "$CONTEXT" ]; then
+
        CLUSTER=$(echo "$CONTEXT" | cut -d '@' -f 2)
+
        kubectl config delete-context "$CONTEXT" || true
+
        kubectl config delete-cluster "$CLUSTER" || true
+
        kubectl config unset "users.$CONTEXT" || true
+
    fi
+
    echo "{{WARN}}Make sure you remove the '{{cluster_name}}' entry from: ~/.talos/config{{NORMAL}}"
+
    echo "{{SUCCESS}}Cluster destroyed.{{NORMAL}}"
+

+
# Check if required tools are in PATH.
+
[private]
+
verify-tool tool package_name="":
+
    #!/usr/bin/env bash
+
    set -e
+
    if ! command -v {{tool}} >/dev/null 2>&1; then
+
        PKG="{{package_name}}"
+
        if [ -z "$PKG" ]; then
+
            PKG="{{tool}}"
+
        fi
+
        echo "{{ERROR}}Missing required tool: {{tool + NORMAL}}"
+
        echo "{{HINT}}Use your systems package manager to install '$PKG'.{{NORMAL}}"
+
        exit 1
+
    fi
added simulation/modules/radicle-node/README.md
@@ -0,0 +1,96 @@
+
# radicle-node
+

+
A [timoni.sh](https://timoni.sh) module for deploying radicle-node to Kubernetes clusters.
+

+
## Prerequisites
+

+
* [Timoni](https://timoni.sh/) installed.
+
* A valid `KUBECONFIG` pointing to your Kubernetes cluster.
+
* A default `StorageClass` available in your cluster (used by the StatefulSet for persistent storage).
+

+
## Install
+

+
To create an instance using the default values:
+

+
```shell
+
timoni -n default apply radicle-node oci://<container-registry-url>
+
```
+

+
To change the [default configuration](#configuration),
+
create one or more `values.cue` files and apply them to the instance.
+

+
For example, create a file `my-values.cue` with the following content:
+

+
```cue
+
values: {
+
	topology: {
+
		"my-seed-node": {
+
			role: "seed"
+
			replicas: 1
+
			storage: size: "5Gi"
+
			sidecars: events: true
+
		}
+
	}
+
}
+
```
+

+
And apply the values with:
+

+
```shell
+
timoni -n default apply radicle-node oci://<container-registry-url> \
+
--values ./my-values.cue
+
```
+

+
## Uninstall
+

+
To uninstall an instance and delete all its Kubernetes resources:
+

+
```shell
+
timoni -n default delete radicle-node
+
```
+

+
## Module Structure
+

+
This Timoni module is organized into several CUE files that define the schema, templates, and deployment logic:
+

+
* **`timoni.cue`**: The entry point of the module. It defines the user-supplied values schema and the Timoni workflow (how to build, validate, and apply the Kubernetes resources).
+
* **`templates/config.cue`**: Contains the core `#Config` and `#NodeGroup` schemas. It defines default values (including startup scripts) and the `#Instance` logic that iterates over the defined `topology` to generate the required Kubernetes objects.
+
* **`templates/statefulset.cue`**: Defines the Kubernetes `StatefulSet` template, configuring the Radicle node container, init containers for configuration prep, optional sidecars (like the events logger), and persistent volume claims.
+
* **`templates/configmap.cue`**: Defines the Kubernetes `ConfigMap` template used to inject the `config.json` into the Radicle node pods.
+
* **`templates/service.cue`**: Defines a Kubernetes Headless `Service` template. This is created per "role" to allow direct pod-to-pod DNS resolution for Radicle's gossip network.
+
* **`debug_tool.cue`**: Provides CUE CLI commands (`build` and `ls`) to render and inspect the generated Kubernetes manifests locally without applying them to a cluster.
+
* **`timoni.ignore`**: Lists files and directories that should be excluded when packaging or applying the module.
+

+
## Configuration
+

+
### Topology Configuration
+

+
This module uses a `topology` map to define one or more groups of Radicle nodes. Each group generates its own `StatefulSet` and `ConfigMap`, and groups sharing the same `role` will share a headless `Service`.
+

+
| Key                               | Type     | Default                                 | Description                                                                 |
+
|-----------------------------------|----------|-----------------------------------------|-----------------------------------------------------------------------------|
+
| `topology: [name]: role`          | `string` | **Required**                            | The role of the node group (e.g. `seed`, `peer`). Used for DNS resolution. |
+
| `topology: [name]: replicas`      | `int`    | `1`                                     | Number of pod replicas for this group.                                      |
+
| `topology: [name]: repository`    | `string` | `quay.io/radicle_garden/radicle-node`   | Container image repository.                                                 |
+
| `topology: [name]: version`       | `string` | `latest`                                | Container image tag.                                                        |
+
| `topology: [name]: storage`       | `object` | `{className: "local-path", size: "1Gi"}`| Persistent Volume Claim configuration for `/home/radicle/.radicle`.         |
+
| `topology: [name]: sidecars`      | `object` | `{events: true}`                        | Toggles sidecar containers (e.g. the events logger).                       |
+
| `topology: [name]: radicleConfig` | `object` | `{node: {network: "test", ...}}`        | The contents of the `config.json` injected into the node.                   |
+
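Several of these keys combined in one hypothetical group (names and sizes here are illustrative):

```cue
values: {
	topology: {
		"pinned-peer": {
			role:       "peer"
			replicas:   2
			repository: "quay.io/radicle_garden/radicle-node"
			version:    "1.8.0"
			storage: {className: "local-path", size: "2Gi"}
			sidecars: events: false
			radicleConfig: node: network: "test"
		}
	}
}
```

Since services are named after the role (see `templates/service.cue`), this group is reachable via the `peer` headless service.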

+
## Node Identity / Secrets
+

+
The node's identity (cryptographic keys) is generated automatically on startup if it doesn't exist. You can control this generation deterministically using the `NODE_ID_SEED` environment variable.
+

+
If `NODE_ID_SEED` is provided in the configuration, the startup script hashes it to generate a 32-byte seed for `rad auth`. If omitted, the pod's hostname is hashed instead. This ensures that a restarted pod regenerates the same identity, while an explicitly passed seed also persists identities across complete cluster teardowns.
+
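To sanity-check determinism, compare the identity across a pod restart (a sketch using the `my-seed-node` group from the install example; `rad self` prints the node's DID and alias):

```shell
# Record the DID, force a restart, then compare
$ kubectl exec my-seed-node-0 -c node -- rad self
$ kubectl delete pod my-seed-node-0
$ kubectl wait --for=condition=Ready pod/my-seed-node-0
$ kubectl exec my-seed-node-0 -c node -- rad self
```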

+
## Debugging
+

+
You can render and inspect the generated Kubernetes manifests locally without applying them to a cluster using the included CUE debug tool:
+

+
```shell
+
# Print a summary of the resources that will be created
+
cue cmd -t debug -t name=my-node -t namespace=default -t mv=1.0.0 -t kv=1.28.0 ls
+

+
# Output the full multi-doc YAML
+
cue cmd -t debug -t name=my-node -t namespace=default -t mv=1.0.0 -t kv=1.28.0 build
+
```
added simulation/modules/radicle-node/cue.mod/module.cue
@@ -0,0 +1,2 @@
+
module: "timoni.sh/radicle-node"
+
language: version: "v0.15.0"
added simulation/modules/radicle-node/debug_tool.cue
@@ -0,0 +1,36 @@
+
package main
+

+
import (
+
	"list"
+
	"tool/cli"
+
	"encoding/yaml"
+
	"text/tabwriter"
+
)
+

+
// This module's timoni.cue only defines the 'app' apply step (no 'test' step).
_resources: list.Concat([timoni.apply.app])
+

+
// The build command generates the Kubernetes manifests and prints the multi-docs YAML to stdout.
+
// Example 'cue cmd -t debug -t name=test -t namespace=test -t mv=1.0.0 -t kv=1.28.0 build'.
+
command: build: {
+
	task: print: cli.Print & {
+
		text: yaml.MarshalStream(_resources)
+
	}
+
}
+

+
// The ls command prints a table with the Kubernetes resources kind, namespace, name and version.
+
// Example 'cue cmd -t debug -t name=test -t namespace=test -t mv=1.0.0 -t kv=1.28.0 ls'.
+
command: ls: {
+
	task: print: cli.Print & {
+
		text: tabwriter.Write([
+
			"RESOURCE \tAPI VERSION",
+
			for r in _resources {
+
				if r.metadata.namespace == _|_ {
+
					"\(r.kind)/\(r.metadata.name) \t\(r.apiVersion)"
+
				}
+
				if r.metadata.namespace != _|_ {
+
					"\(r.kind)/\(r.metadata.namespace)/\(r.metadata.name)  \t\(r.apiVersion)"
+
				}
+
			},
+
		])
+
	}
+
}
added simulation/modules/radicle-node/debug_values.cue
@@ -0,0 +1,30 @@
+
@if(debug)
+

+
package main
+

+
// Values used by debug_tool.cue.
+
// Debug example 'cue cmd -t debug -t name=test -t namespace=test -t mv=1.0.0 -t kv=1.28.0 build'.
+
values: {
+
	podAnnotations: "cluster-autoscaler.kubernetes.io/safe-to-evict": "true"
+
	message: "Hello Debug"
+
	image: {
+
		repository: "docker.io/nginx"
+
		tag:        "1-alpine"
+
		digest:     ""
+
	}
+
	test: {
+
		enabled: true
+
		image: {
+
			repository: "docker.io/curlimages/curl"
+
			tag:        "latest"
+
			digest:     ""
+
		}
+
	}
+
	affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: [{
+
		matchExpressions: [{
+
			key:      "kubernetes.io/os"
+
			operator: "In"
+
			values: ["linux"]
+
		}]
+
	}]
+
}
added simulation/modules/radicle-node/templates/config.cue
@@ -0,0 +1,207 @@
+
package templates
+

+
import (
+
	timoniv1 "timoni.sh/core/v1alpha1"
+
)
+

+
#NodeGroup: {
+
	role:       string
+
	replicas:   int | *1
+
	repository: string | *"quay.io/radicle_garden/radicle-node"
+
	pullPolicy: string | *"IfNotPresent"
+
	version:    string | *"latest"
+
	nodeIdSeed: string | *""
+

+
	storage: {
+
		className: string | *"local-path"
+
		size:      string | *"1Gi"
+
	}
+

+
	resources: {...}
+

+
	sidecars: {
+
		events: bool | *true
+
	}
+

+
	scripts: {
+
		init: string | *"""
+
			#!/bin/sh
+
			set -e
+

+
			KUBE_CONFIG_DIR=/tmp/config-source
+
			RAD_HOME=/home/radicle/.radicle
+
			RAD_CONFIG=${RAD_HOME}/config.json
+

+
			echo "[INIT] Hostname: $(hostname)"
+
			# --- STANDARD INIT LOGIC (User 11011) ---
+
			mkdir -p "${RAD_HOME}"
+

+
			if [ -f "${KUBE_CONFIG_DIR}/config.json" ]; then
+
			  cp "${KUBE_CONFIG_DIR}/config.json" "${RAD_CONFIG}"
+
			  echo "[INIT] Config copied successfully."
+
			else
+
			  echo "[INIT] ERROR: Source config not found."
+
			  exit 1
+
			fi
+
			"""
+
		start: string | *"""
+
			#!/bin/sh
+
			set -e
+

+
			RAD_HOME=/home/radicle/.radicle
+
			RAD_ALIAS=$(hostname)
+
			RAD_KEY=${RAD_HOME}/keys/radicle
+
			RAD_CONFIG=${RAD_HOME}/config.json
+

+
			# Configure the external address by prepending the pod's hostname.
+
			# We only do this for seeds and bootstraps to ensure proper routing.
+
			configure_external_address() {
+
			  # Extract the first external address, stripping JSON formatting
+
			  EXT_ADDRESS=$(rad config | jq -r '.node.externalAddresses[0]')
+
			  
+
			  if [ -n "$EXT_ADDRESS" ]; then
+
			    # Check if it already starts with the pod's hostname to prevent stuttering
+
			    case "$EXT_ADDRESS" in
+
			      ${RAD_ALIAS}.*)
+
			        echo "[START] External address already correct: ${EXT_ADDRESS}"
+
			        ;;
+
			      *)
+
			        rad config remove node.externalAddresses "${EXT_ADDRESS}"
+
			        NEW_ADDRESS="${RAD_ALIAS}.${EXT_ADDRESS}"
+
			        rad config push node.externalAddresses "${NEW_ADDRESS}"
+
			        echo "[START] Node's external address updated to: ${NEW_ADDRESS}"
+
			        ;;
+
			    esac
+
			  fi
+
			}
+

+
			#
+
			# Generate keys
+
			#
+
			if [ ! -f "${RAD_KEY}" ]; then
+
			  echo "[START] Generating identity for: ${RAD_ALIAS}..."
+
			  # We move the config out of the way so 'rad auth' doesn't complain
+
			  if [ -f "${RAD_CONFIG}" ]; then
+
			     mv "${RAD_CONFIG}" "${RAD_CONFIG}.bak"
+
			  fi
+

+
			  # RAD_KEYGEN_SEED requires a 32-byte hex string.
+
			  # We hash either the injected NODE_ID_SEED or the hostname to generate it.
+
			  if [ -n "${NODE_ID_SEED}" ]; then
+
			    export RAD_KEYGEN_SEED=$(echo "${NODE_ID_SEED}" | sha256sum | tr -d "\\n *-")
+
			  else
+
			    export RAD_KEYGEN_SEED=$(hostname | sha256sum | tr -d "\\n *-")
+
			  fi
+

+
			  rad auth --alias "${RAD_ALIAS}"
+

+
			  if [ -f "${RAD_CONFIG}.bak" ]; then
+
			     mv "${RAD_CONFIG}.bak" "${RAD_CONFIG}"
+
			  fi
+
			  echo "[START] Identity generated"
+
			fi
+

+
			#
+
			# Update config settings
+
			#
+
			echo "[START] Node's alias set to: $(rad config set node.alias "${RAD_ALIAS}")"
+

+
			if [ "\(role)" = "seed" ] || [ "\(role)" = "bootstrap" ]; then
+
			  configure_external_address
+
			fi
+

+
			#
+
			# Start node
+
			#
+
			echo "[START] Starting Radicle node..."
+
			exec rad node start --foreground
+
			"""
+
		events: string | *"""
+
			#!/bin/sh
+

+
			RAD_HOME=/home/radicle/.radicle
+
			RAD_NODE_SOCKET=${RAD_HOME}/node/control.sock
+

+
			echo "[EVENTS] Waiting for node socket..."
+
			while [ ! -S ${RAD_NODE_SOCKET} ]; do
+
			  sleep 1
+
			done
+
			echo "[EVENTS] Socket found. Streaming events..."
+
			exec rad node events
+
			"""
+
	}
+

+
	radicleConfig: {
+
		node: {
+
			// Automatically generate the base external address using the role.
+
			// The start.sh script will dynamically prepend the pod's hostname to this at boot.
+
			externalAddresses: [...string] | *["\(role).default.svc.cluster.local:8776"]
+
			// Set network to "test" by default.
+
			network: "test"
+
			...
+
		}
+
		...
+
	}
+
}
+

+
// Config defines the schema and defaults for the Instance values.
+
#Config: {
+
	kubeVersion!: string
+
	clusterVersion: timoniv1.#SemVer & {#Version: kubeVersion, #Minimum: "1.20.0"}
+
	moduleVersion!: string
+
	metadata: timoniv1.#Metadata & {#Version: moduleVersion}
+
	metadata: labels: timoniv1.#Labels
+
	metadata: annotations?: timoniv1.#Annotations
+

+
	// The topology map is merged with the #NodeGroup schema
+
	topology: [string]: #NodeGroup
+

+
	// Helper to generate metadata with a specific name
+
	#Meta: {
+
		name: string
+
		out: {
+
			"name": name
+
			namespace: metadata.namespace
+
			if metadata.annotations != _|_ {
+
				annotations: metadata.annotations
+
			}
+
		}
+
	}
+
}
+

+
// Instance takes the config values and outputs the Kubernetes objects.
+
#Instance: {
+
	config: #Config
+

+
	// Extract unique roles to create headless services
+
	let _roles = {
+
		for name, group in config.topology {
+
			"\(group.role)": true
+
		}
+
	}
+

+
	objects: {
+
		// Generate one Headless Service per role (e.g. seed, peer, bootstrap)
+
		for roleName, _ in _roles {
+
			"svc-\(roleName)": #Service & {
+
				#config: config
+
				#role:   roleName
+
			}
+
		}
+

+
		// Generate a StatefulSet and ConfigMap for each group in the topology
+
		for name, group in config.topology {
+
			"cm-\(name)": #ConfigMap & {
+
				#config: config
+
				#name:   name
+
				#group:  group
+
			}
+
			"sts-\(name)": #StatefulSet & {
+
				#config: config
+
				#name:   name
+
				#group:  group
+
				#cmName: name + "-config"
+
			}
+
		}
+
	}
+
}
added simulation/modules/radicle-node/templates/configmap.cue
@@ -0,0 +1,18 @@
+
package templates
+

+
import (
+
	"encoding/json"
+
	corev1 "k8s.io/api/core/v1"
+
)
+

+
#ConfigMap: corev1.#ConfigMap & {
+
	#config: #Config
+
	#name:   string
+
	#group:  #NodeGroup
+
	apiVersion: "v1"
+
	kind:       "ConfigMap"
+
	metadata:   (#config.#Meta & {name: #name + "-config"}).out
+
	data: {
+
		"config.json": json.Marshal(#group.radicleConfig)
+
	}
+
}
added simulation/modules/radicle-node/templates/service.cue
@@ -0,0 +1,27 @@
+
package templates
+

+
import (
+
	corev1 "k8s.io/api/core/v1"
+
)
+

+
#Service: corev1.#Service & {
+
	#config: #Config
+
	#role:   string
+
	apiVersion: "v1"
+
	kind:       "Service"
+
	metadata:   (#config.#Meta & {name: #role}).out
+
	spec: corev1.#ServiceSpec & {
+
		clusterIP: "None" // Headless service for direct pod DNS resolution
+
		selector: {
+
			"app":  "radicle-node"
+
			"role": #role
+
		}
+
		ports: [
+
			{
+
				name:       "gossip"
+
				port:       8776
+
				targetPort: 8776
+
			},
+
		]
+
	}
+
}
added simulation/modules/radicle-node/templates/statefulset.cue
@@ -0,0 +1,147 @@
+
package templates
+

+
import (
+
	appsv1 "k8s.io/api/apps/v1"
+
	corev1 "k8s.io/api/core/v1"
+
)
+

+
#StatefulSet: appsv1.#StatefulSet & {
+
	#config: #Config
+
	#name:   string
+
	#group:  #NodeGroup
+
	#cmName: string
+
	apiVersion: "apps/v1"
+
	kind:       "StatefulSet"
+
	metadata:   (#config.#Meta & {name: #name}).out
+
	spec: appsv1.#StatefulSetSpec & {
+
		serviceName: #group.role
+
		replicas:    #group.replicas
+
		selector: matchLabels: {
+
			"app":      "radicle-node"
+
			"instance": #name
+
		}
+
		template: {
+
			metadata: labels: {
+
				"app":      "radicle-node"
+
				"role":     #group.role
+
				"instance": #name
+
			}
+
			spec: corev1.#PodSpec & {
+
				securityContext: {
+
					fsGroup: 11011
+
					seccompProfile: type: "RuntimeDefault"
+
					runAsNonRoot: true
+
					runAsUser:    11011
+
					runAsGroup:   11011
+
				}
+
				initContainers: [
+
					{
+
						name:  "config-prep"
+
						image: "busybox"
+
						command: ["sh", "-c"]
+
						args: [#group.scripts.init]
+
						volumeMounts: [
+
							{
+
								name:      "config-template"
+
								mountPath: "/tmp/config-source"
+
							},
+
							{
+
								name:      "radicle-home"
+
								mountPath: "/home/radicle/.radicle"
+
							},
+
						]
+
						securityContext: {
+
							runAsUser:                11011
+
							runAsNonRoot:             true
+
							allowPrivilegeEscalation: false
+
							capabilities: drop: ["ALL"]
+
							seccompProfile: type: "RuntimeDefault"
+
						}
+
					},
+
				]
+
				containers: [
+
					{
+
						name:            "node"
+
						image:           "\(#group.repository):\(#group.version)"
+
						imagePullPolicy: #group.pullPolicy
+
						command: ["/bin/sh", "-c"]
+
						args: [#group.scripts.start]
+
						env: [
+
							{
+
								name:  "RAD_PASSPHRASE"
+
								value: ""
+
							},
+
							{
+
								name:  "NODE_ID_SEED"
+
								value: #group.nodeIdSeed
+
							},
+
						]
+
						securityContext: {
+
							allowPrivilegeEscalation: false
+
							capabilities: drop: ["ALL"]
+
							privileged:             false
+
							readOnlyRootFilesystem: false
+
						}
+
						ports: [
+
							{
+
								containerPort: 8776
+
								name:          "gossip"
+
							},
+
						]
+
						volumeMounts: [
+
							{
+
								name:      "radicle-home"
+
								mountPath: "/home/radicle/.radicle"
+
							},
+
						]
+
					},
+
					if #group.sidecars.events {
+
						{
+
							name:  "events"
+
							image: "\(#group.repository):\(#group.version)"
+
							command: ["/bin/sh", "-c"]
+
							args: [#group.scripts.events]
+
							securityContext: {
+
								runAsNonRoot:             true
+
								runAsUser:                11011
+
								runAsGroup:               11011
+
								allowPrivilegeEscalation: false
+
								capabilities: drop: ["ALL"]
+
								readOnlyRootFilesystem: false
+
							}
+
							volumeMounts: [
+
								{
+
									name:      "radicle-home"
+
									mountPath: "/home/radicle/.radicle"
+
								},
+
							]
+
						}
+
					},
+
				]
+
				volumes: [
+
					{
+
						name: "config-template"
+
						configMap: name: #cmName
+
					},
+
				]
+
			}
+
		}
+
		volumeClaimTemplates: [
+
			{
+
				metadata: {
+
					name: "radicle-home"
+
					labels: {
+
						"app":      "radicle-node"
+
						"role":     #group.role
+
						"instance": #name
+
					}
+
				}
+
				spec: {
+
					storageClassName: #group.storage.className
+
					accessModes: ["ReadWriteOnce"]
+
					resources: requests: storage: #group.storage.size
+
				}
+
			},
+
		]
+
	}
+
}
added simulation/modules/radicle-node/timoni.cue
@@ -0,0 +1,42 @@
+
// Code generated by timoni.
+
// Note that this file is required and should contain
+
// the values schema and the timoni workflow.
+

+
package main
+

+
import (
+
	templates "timoni.sh/radicle-node/templates"
+
)
+

+
// Define the schema for the user-supplied values.
+
// At runtime, Timoni injects the supplied values
+
// and validates them according to the Config schema.
+
values: templates.#Config
+

+
// Define how Timoni should build, validate and
+
// apply the Kubernetes resources.
+
timoni: {
+
	apiVersion: "v1alpha1"
+

+
	// Define the instance that outputs the Kubernetes resources.
+
	// At runtime, Timoni builds the instance and validates
+
	// the resulting resources according to their Kubernetes schema.
+
	instance: templates.#Instance & {
+
		// The user-supplied values are merged with the
+
		// default values at runtime by Timoni.
+
		config: values
+
		// These values are injected at runtime by Timoni.
+
		config: {
+
			metadata: {
+
				name:      string @tag(name)
+
				namespace: string @tag(namespace)
+
			}
+
			moduleVersion: string @tag(mv, var=moduleVersion)
+
			kubeVersion:   string @tag(kv, var=kubeVersion)
+
		}
+
	}
+

+
	// Pass Kubernetes resources outputted by the instance
+
	// to Timoni's multi-step apply.
+
	apply: app: [for obj in instance.objects {obj}]
+
}
added simulation/modules/radicle-node/timoni.ignore
@@ -0,0 +1,14 @@
+
# VCS
+
.git/
+
.gitignore
+
.gitmodules
+
.gitattributes
+

+
# Go
+
vendor/
+
go.mod
+
go.sum
+

+
# CUE
+
*_tool.cue
+
debug_values.cue
added simulation/modules/radicle-node/values.cue
@@ -0,0 +1,25 @@
+
// Code generated by timoni.
+
// Note that this file must have no imports and all values must be concrete.
+

+
@if(!debug)
+

+
package main
+

+
// Defaults
+
values: {
+
	// The topology map allows defining multiple node groups with different versions
+
	topology: [string]: {
+
		//
+
		// Open enum for allowed image versions (allows custom local builds via `string`)
+
		// This is kept here so the update-image-tags.sh script can easily modify it.
+
		//
+
		version: "1.2.0" | "1.4.0" | "1.5.0" | "1.5.0-" | "1.5.0-amd64" | "1.5.0-arm64" | "1.6.0" | "1.6.1" | "1.7.0" | "1.7.1" | "1.8.0" | "latest" | "main" | "production" | "sqlite-patch" | string | *"latest"
+
		...
+
	}
+

+
	// Provide a default node so that a basic install works out-of-the-box
+
	topology: "default-node": {
+
		role: "seed"
+
		replicas: 1
+
	}
+
}