Radicle Heartwood Protocol & Stack
simulation: Introduce particle CUE module for radicle-node
Adrian Duke committed 22 days ago
commit dd310e93252b19d61c8c28509f10f8c75290a4ee
parent ef452e82d1810a7d6e4899497fd134342ad7bc10
11 files changed +636 -0
added simulation/modules/radicle-node/README.md
@@ -0,0 +1,96 @@
# radicle-node

A [timoni.sh](https://timoni.sh) module for deploying radicle-node to Kubernetes clusters.

## Prerequisites

* [Timoni](https://timoni.sh/) installed.
* A valid `KUBECONFIG` pointing to your Kubernetes cluster.
* A default `StorageClass` available in your cluster (used by the StatefulSet for persistent storage).

## Install

To create an instance using the default values:

```shell
timoni -n default apply radicle-node oci://<container-registry-url>
```

To change the [default configuration](#configuration), create one or more `values.cue` files and apply them to the instance.

For example, create a file `my-values.cue` with the following content:

```cue
values: {
	topology: {
		"my-seed-node": {
			role: "seed"
			replicas: 1
			storage: size: "5Gi"
			sidecars: events: true
		}
	}
}
```

And apply the values with:

```shell
timoni -n default apply radicle-node oci://<container-registry-url> \
  --values ./my-values.cue
```

## Uninstall

To uninstall an instance and delete all its Kubernetes resources:

```shell
timoni -n default delete radicle-node
```

## Module Structure

This Timoni module is organized into several CUE files that define the schema, templates, and deployment logic:

* **`timoni.cue`**: The entry point of the module. It defines the user-supplied values schema and the Timoni workflow (how to build, validate, and apply the Kubernetes resources).
* **`templates/config.cue`**: Contains the core `#Config` and `#NodeGroup` schemas. It defines default values (including startup scripts) and the `#Instance` logic that iterates over the defined `topology` to generate the required Kubernetes objects.
* **`templates/statefulset.cue`**: Defines the Kubernetes `StatefulSet` template, configuring the Radicle node container, init containers for configuration prep, optional sidecars (like the events logger), and persistent volume claims.
* **`templates/configmap.cue`**: Defines the Kubernetes `ConfigMap` template used to inject the `config.json` into the Radicle node pods.
* **`templates/service.cue`**: Defines a Kubernetes headless `Service` template, created once per "role" to allow direct pod-to-pod DNS resolution for Radicle's gossip network.
* **`debug_tool.cue`**: Provides CUE CLI commands (`build` and `ls`) to render and inspect the generated Kubernetes manifests locally without applying them to a cluster.
* **`timoni.ignore`**: Lists files and directories that should be excluded when packaging or applying the module.

## Configuration

### Topology Configuration

This module uses a `topology` map to define one or more groups of Radicle nodes. Each group generates its own `StatefulSet` and `ConfigMap`, and groups sharing the same `role` share a headless `Service`.

| Key                               | Type     | Default                                  | Description                                                                |
|-----------------------------------|----------|------------------------------------------|----------------------------------------------------------------------------|
| `topology: [name]: role`          | `string` | **Required**                             | The role of the node group (e.g. `seed`, `peer`). Used for DNS resolution. |
| `topology: [name]: replicas`      | `int`    | `1`                                      | Number of pod replicas for this group.                                     |
| `topology: [name]: repository`    | `string` | `quay.io/radicle_garden/radicle-node`    | Container image repository.                                                |
| `topology: [name]: version`       | `string` | `latest`                                 | Container image tag.                                                       |
| `topology: [name]: storage`       | `object` | `{className: "local-path", size: "1Gi"}` | Persistent Volume Claim configuration for `/home/radicle/.radicle`.        |
| `topology: [name]: sidecars`      | `object` | `{events: true}`                         | Toggles sidecar containers (e.g. the events logger).                       |
| `topology: [name]: radicleConfig` | `object` | `{node: {network: "test", ...}}`         | The contents of the `config.json` injected into the node.                  |
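Building on the keys above, a hypothetical `my-values.cue` defining two groups could look like this (the group names and the pinned version are illustrative, not defaults):

```cue
values: {
	topology: {
		// Two replicas pinned to a released image, with larger storage.
		"seed-nodes": {
			role:     "seed"
			replicas: 2
			version:  "1.8.0"
			storage: {
				className: "local-path"
				size:      "5Gi"
			}
		}
		// A single peer without the events sidecar; omitted keys fall
		// back to the #NodeGroup defaults.
		"peer-node": {
			role:     "peer"
			sidecars: events: false
		}
	}
}
```

Because the two groups use different roles, each role gets its own headless `Service`, while each group still gets its own `StatefulSet` and `ConfigMap`.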
## Node Identity / Secrets

The node's identity (its cryptographic keys) is generated automatically on startup if it doesn't already exist. You can make this generation deterministic via the `NODE_ID_SEED` environment variable.

If `NODE_ID_SEED` is provided in the configuration, the startup script hashes it to derive the 32-byte seed used by `rad auth`. If it is omitted, the pod's hostname is hashed instead. Either way a restarted pod regenerates the same identity; supplying an explicit seed additionally preserves identities across a complete cluster teardown.
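The derivation can be reproduced outside the cluster. This sketch mirrors the hashing in the module's start script (the seed value is a made-up example):

```shell
# Hash a candidate NODE_ID_SEED the same way the start script does.
# sha256sum prints "<hex>  -" for stdin input; tr strips the trailing
# "  -" and the newline, leaving a 64-char hex string (32 bytes)
# suitable for RAD_KEYGEN_SEED.
NODE_ID_SEED="my-seed-node"
RAD_KEYGEN_SEED=$(echo "${NODE_ID_SEED}" | sha256sum | tr -d "\n *-")
echo "${RAD_KEYGEN_SEED}"
```

The same input always yields the same seed, which is what makes identity regeneration deterministic.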
## Debugging

You can render and inspect the generated Kubernetes manifests locally, without applying them to a cluster, using the included CUE debug tool:

```shell
# Print a summary of the resources that will be created
cue cmd -t debug -t name=my-node -t namespace=default -t mv=1.0.0 -t kv=1.28.0 ls

# Output the full multi-doc YAML
cue cmd -t debug -t name=my-node -t namespace=default -t mv=1.0.0 -t kv=1.28.0 build
```
added simulation/modules/radicle-node/cue.mod/module.cue
@@ -0,0 +1,2 @@
module: "timoni.sh/radicle-node"
language: version: "v0.15.0"
added simulation/modules/radicle-node/debug_tool.cue
@@ -0,0 +1,36 @@
package main

import (
	"list"
	"tool/cli"
	"encoding/yaml"
	"text/tabwriter"
)

// Only the 'app' apply step is defined in timoni.cue for this module.
_resources: list.Concat([timoni.apply.app])

// The build command generates the Kubernetes manifests and prints the multi-doc YAML to stdout.
// Example: 'cue cmd -t debug -t name=test -t namespace=test -t mv=1.0.0 -t kv=1.28.0 build'.
command: build: {
	task: print: cli.Print & {
		text: yaml.MarshalStream(_resources)
	}
}

// The ls command prints a table with the Kubernetes resources' kind, namespace, name and version.
// Example: 'cue cmd -t debug -t name=test -t namespace=test -t mv=1.0.0 -t kv=1.28.0 ls'.
command: ls: {
	task: print: cli.Print & {
		text: tabwriter.Write([
			"RESOURCE \tAPI VERSION",
			for r in _resources {
				if r.metadata.namespace == _|_ {
					"\(r.kind)/\(r.metadata.name) \t\(r.apiVersion)"
				}
				if r.metadata.namespace != _|_ {
					"\(r.kind)/\(r.metadata.namespace)/\(r.metadata.name)  \t\(r.apiVersion)"
				}
			},
		])
	}
}
added simulation/modules/radicle-node/debug_values.cue
@@ -0,0 +1,30 @@
@if(debug)

package main

// Values used by debug_tool.cue.
// These must conform to the module's #Config schema (topology of node groups).
// Debug example: 'cue cmd -t debug -t name=test -t namespace=test -t mv=1.0.0 -t kv=1.28.0 build'.
values: {
	topology: {
		"debug-node": {
			role:     "seed"
			replicas: 1
			storage: size: "1Gi"
			sidecars: events: true
		}
	}
}
added simulation/modules/radicle-node/templates/config.cue
@@ -0,0 +1,199 @@
package templates

import (
	timoniv1 "timoni.sh/core/v1alpha1"
)

#NodeGroup: {
	role:       string
	replicas:   int | *1
	repository: string | *"quay.io/radicle_garden/radicle-node"
	pullPolicy: string | *"IfNotPresent"
	version:    string | *"latest"
	nodeIdSeed: string | *""

	storage: {
		className: string | *"local-path"
		size:      string | *"1Gi"
	}

	resources: {...}

	sidecars: {
		events: bool | *true
	}

	scripts: {
		init: string | *"""
			#!/bin/sh
			set -e

			KUBE_CONFIG_DIR=/tmp/config-source
			RAD_HOME=/home/radicle/.radicle
			RAD_CONFIG=${RAD_HOME}/config.json

			echo "[INIT] Hostname: $(hostname)"
			# --- STANDARD INIT LOGIC (User 11011) ---
			mkdir -p "${RAD_HOME}"

			if [ -f "${KUBE_CONFIG_DIR}/config.json" ]; then
			  cp "${KUBE_CONFIG_DIR}/config.json" "${RAD_CONFIG}"
			  echo "[INIT] Config copied successfully."
			else
			  echo "[INIT] ERROR: Source config not found."
			  exit 1
			fi
			"""
		start: string | *"""
			#!/bin/sh
			set -e

			RAD_HOME=/home/radicle/.radicle
			RAD_ALIAS=$(hostname)
			RAD_KEY=${RAD_HOME}/keys/radicle
			RAD_CONFIG=${RAD_HOME}/config.json

			#
			# Generate keys
			#
			if [ ! -f "${RAD_KEY}" ]; then
			  echo "[START] Generating identity for: ${RAD_ALIAS}..."
			  # We move the config out of the way so 'rad auth' doesn't complain
			  if [ -f "${RAD_CONFIG}" ]; then
			     mv "${RAD_CONFIG}" "${RAD_CONFIG}.bak"
			  fi

			  # RAD_KEYGEN_SEED requires a 32-byte hex string.
			  # We hash either the injected NODE_ID_SEED or the hostname to generate it.
			  if [ -n "${NODE_ID_SEED}" ]; then
			    export RAD_KEYGEN_SEED=$(echo "${NODE_ID_SEED}" | sha256sum | tr -d "\\n *-")
			  else
			    export RAD_KEYGEN_SEED=$(hostname | sha256sum | tr -d "\\n *-")
			  fi

			  rad auth --alias "${RAD_ALIAS}"

			  if [ -f "${RAD_CONFIG}.bak" ]; then
			     mv "${RAD_CONFIG}.bak" "${RAD_CONFIG}"
			  fi
			  echo "[START] Identity generated"
			fi

			#
			# Update config settings
			#
			echo "[START] Node's alias set to: $(rad config set node.alias "${RAD_ALIAS}")"

			# Extract the first external address, stripping JSON formatting
			EXT_ADDRESS=$(rad config get node.externalAddresses | tr -d '[]" \\n' | cut -d',' -f1)

			if [ -n "$EXT_ADDRESS" ]; then
			  # Check if it already starts with the pod's hostname to prevent stuttering
			  case "$EXT_ADDRESS" in
			    ${RAD_ALIAS}.*)
			      echo "[START] External address already correct: ${EXT_ADDRESS}"
			      ;;
			    *)
			      rad config remove node.externalAddresses "${EXT_ADDRESS}"
			      NEW_ADDRESS="${RAD_ALIAS}.${EXT_ADDRESS}"
			      rad config push node.externalAddresses "${NEW_ADDRESS}"
			      echo "[START] Node's external address updated to: ${NEW_ADDRESS}"
			      ;;
			  esac
			fi

			#
			# Start node
			#
			echo "[START] Starting Radicle node..."
			exec rad node start --foreground
			"""
		events: string | *"""
			#!/bin/sh

			RAD_HOME=/home/radicle/.radicle
			RAD_NODE_SOCKET=${RAD_HOME}/node/control.sock

			echo "[EVENTS] Waiting for node socket..."
			while [ ! -S ${RAD_NODE_SOCKET} ]; do
			  sleep 1
			done
			echo "[EVENTS] Socket found. Streaming events..."
			exec rad node events
			"""
	}

	radicleConfig: {
		node: {
			// Automatically generate the base external address using the role.
			// The start.sh script will dynamically prepend the pod's hostname to this at boot.
			externalAddresses: [...string] | *["\(role).default.svc.cluster.local:8776"]
			// Set network to "test" by default.
			network: "test"
			...
		}
		...
	}
}

// Config defines the schema and defaults for the Instance values.
#Config: {
	kubeVersion!: string
	clusterVersion: timoniv1.#SemVer & {#Version: kubeVersion, #Minimum: "1.20.0"}
	moduleVersion!: string
	metadata: timoniv1.#Metadata & {#Version: moduleVersion}
	metadata: labels: timoniv1.#Labels
	metadata: annotations?: timoniv1.#Annotations

	// The topology map is merged with the #NodeGroup schema
	topology: [string]: #NodeGroup

	// Helper to generate metadata with a specific name
	#Meta: {
		name: string
		out: {
			"name":    name
			namespace: metadata.namespace
			if metadata.annotations != _|_ {
				annotations: metadata.annotations
			}
		}
	}
}

// Instance takes the config values and outputs the Kubernetes objects.
#Instance: {
	config: #Config

	// Extract unique roles to create headless services
	let _roles = {
		for name, group in config.topology {
			"\(group.role)": true
		}
	}

	objects: {
		// Generate one headless Service per role (e.g. seed, peer, bootstrap)
		for roleName, _ in _roles {
			"svc-\(roleName)": #Service & {
				#config: config
				#role:   roleName
			}
		}

		// Generate a StatefulSet and ConfigMap for each group in the topology
		for name, group in config.topology {
			"cm-\(name)": #ConfigMap & {
				#config: config
				#name:   name
				#group:  group
			}
			"sts-\(name)": #StatefulSet & {
				#config: config
				#name:   name
				#group:  group
				#cmName: name + "-config"
			}
		}
	}
}
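To make the `#Instance` iteration concrete: with a hypothetical topology of two groups that share the `seed` role, one Service is emitted for the shared role, plus a ConfigMap and StatefulSet per group:

```cue
// Hypothetical input:
values: topology: {
	"seed-a": role: "seed"
	"seed-b": role: "seed"
}

// Resulting keys in #Instance.objects:
//   "svc-seed"                  (one Service for the shared role)
//   "cm-seed-a", "sts-seed-a"   (per-group ConfigMap and StatefulSet)
//   "cm-seed-b", "sts-seed-b"
```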
added simulation/modules/radicle-node/templates/configmap.cue
@@ -0,0 +1,18 @@
package templates

import (
	"encoding/json"
	corev1 "k8s.io/api/core/v1"
)

#ConfigMap: corev1.#ConfigMap & {
	#config: #Config
	#name:   string
	#group:  #NodeGroup
	apiVersion: "v1"
	kind:       "ConfigMap"
	metadata:   (#config.#Meta & {name: #name + "-config"}).out
	data: {
		"config.json": json.Marshal(#group.radicleConfig)
	}
}
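With the `#NodeGroup` defaults, the ConfigMap rendered for a hypothetical group named `my-seed-node` with role `seed`, installed in the `default` namespace, would look roughly like this (the key order in the marshalled JSON may differ):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-seed-node-config
  namespace: default
data:
  config.json: '{"node":{"externalAddresses":["seed.default.svc.cluster.local:8776"],"network":"test"}}'
```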
added simulation/modules/radicle-node/templates/service.cue
@@ -0,0 +1,27 @@
package templates

import (
	corev1 "k8s.io/api/core/v1"
)

#Service: corev1.#Service & {
	#config: #Config
	#role:   string
	apiVersion: "v1"
	kind:       "Service"
	metadata:   (#config.#Meta & {name: #role}).out
	spec: corev1.#ServiceSpec & {
		clusterIP: "None" // Headless service for direct pod DNS resolution
		selector: {
			"app":  "radicle-node"
			"role": #role
		}
		ports: [
			{
				name:       "gossip"
				port:       8776
				targetPort: 8776
			},
		]
	}
}
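Combined with the StatefulSet's `serviceName` (set to the group's role), the headless Service gives every pod a stable DNS name that other nodes can dial on the gossip port, following the usual Kubernetes pattern:

```
<pod-name>.<role>.<namespace>.svc.cluster.local:8776

# e.g. for a hypothetical instance "my-seed-node" with role "seed":
my-seed-node-0.seed.default.svc.cluster.local:8776
```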
added simulation/modules/radicle-node/templates/statefulset.cue
@@ -0,0 +1,147 @@
package templates

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

#StatefulSet: appsv1.#StatefulSet & {
	#config: #Config
	#name:   string
	#group:  #NodeGroup
	#cmName: string
	apiVersion: "apps/v1"
	kind:       "StatefulSet"
	metadata:   (#config.#Meta & {name: #name}).out
	spec: appsv1.#StatefulSetSpec & {
		serviceName: #group.role
		replicas:    #group.replicas
		selector: matchLabels: {
			"app":      "radicle-node"
			"instance": #name
		}
		template: {
			metadata: labels: {
				"app":      "radicle-node"
				"role":     #group.role
				"instance": #name
			}
			spec: corev1.#PodSpec & {
				securityContext: {
					fsGroup: 11011
					seccompProfile: type: "RuntimeDefault"
					runAsNonRoot: true
					runAsUser:    11011
					runAsGroup:   11011
				}
				initContainers: [
					{
						name:  "config-prep"
						image: "busybox"
						command: ["sh", "-c"]
						args: [#group.scripts.init]
						volumeMounts: [
							{
								name:      "config-template"
								mountPath: "/tmp/config-source"
							},
							{
								name:      "radicle-home"
								mountPath: "/home/radicle/.radicle"
							},
						]
						securityContext: {
							runAsUser:                11011
							runAsNonRoot:             true
							allowPrivilegeEscalation: false
							capabilities: drop: ["ALL"]
							seccompProfile: type: "RuntimeDefault"
						}
					},
				]
				containers: [
					{
						name:            "node"
						image:           "\(#group.repository):\(#group.version)"
						imagePullPolicy: #group.pullPolicy
						command: ["/bin/sh", "-c"]
						args: [#group.scripts.start]
						env: [
							{
								name:  "RAD_PASSPHRASE"
								value: ""
							},
							{
								name:  "NODE_ID_SEED"
								value: #group.nodeIdSeed
							},
						]
						securityContext: {
							allowPrivilegeEscalation: false
							capabilities: drop: ["ALL"]
							privileged:             false
							readOnlyRootFilesystem: false
						}
						ports: [
							{
								containerPort: 8776
								name:          "gossip"
							},
						]
						volumeMounts: [
							{
								name:      "radicle-home"
								mountPath: "/home/radicle/.radicle"
							},
						]
					},
					if #group.sidecars.events {
						{
							name:  "events"
							image: "\(#group.repository):\(#group.version)"
							command: ["/bin/sh", "-c"]
							args: [#group.scripts.events]
							securityContext: {
								runAsNonRoot:             true
								runAsUser:                11011
								runAsGroup:               11011
								allowPrivilegeEscalation: false
								capabilities: drop: ["ALL"]
								readOnlyRootFilesystem: false
							}
							volumeMounts: [
								{
									name:      "radicle-home"
									mountPath: "/home/radicle/.radicle"
								},
							]
						}
					},
				]
				volumes: [
					{
						name: "config-template"
						configMap: name: #cmName
					},
				]
			}
		}
		volumeClaimTemplates: [
			{
				metadata: {
					name: "radicle-home"
					labels: {
						"app":      "radicle-node"
						"role":     #group.role
						"instance": #name
					}
				}
				spec: {
					storageClassName: #group.storage.className
					accessModes: ["ReadWriteOnce"]
					resources: requests: storage: #group.storage.size
				}
			},
		]
	}
}
added simulation/modules/radicle-node/timoni.cue
@@ -0,0 +1,42 @@
// Code generated by timoni.
// Note that this file is required and should contain
// the values schema and the timoni workflow.

package main

import (
	templates "timoni.sh/radicle-node/templates"
)

// Define the schema for the user-supplied values.
// At runtime, Timoni injects the supplied values
// and validates them according to the Config schema.
values: templates.#Config

// Define how Timoni should build, validate and
// apply the Kubernetes resources.
timoni: {
	apiVersion: "v1alpha1"

	// Define the instance that outputs the Kubernetes resources.
	// At runtime, Timoni builds the instance and validates
	// the resulting resources according to their Kubernetes schema.
	instance: templates.#Instance & {
		// The user-supplied values are merged with the
		// default values at runtime by Timoni.
		config: values
		// These values are injected at runtime by Timoni.
		config: {
			metadata: {
				name:      string @tag(name)
				namespace: string @tag(namespace)
			}
			moduleVersion: string @tag(mv, var=moduleVersion)
			kubeVersion:   string @tag(kv, var=kubeVersion)
		}
	}

	// Pass the Kubernetes resources outputted by the instance
	// to Timoni's multi-step apply.
	apply: app: [for obj in instance.objects {obj}]
}
added simulation/modules/radicle-node/timoni.ignore
@@ -0,0 +1,14 @@
# VCS
.git/
.gitignore
.gitmodules
.gitattributes

# Go
vendor/
go.mod
go.sum

# CUE
*_tool.cue
debug_values.cue
added simulation/modules/radicle-node/values.cue
@@ -0,0 +1,25 @@
// Code generated by timoni.
// Note that this file must have no imports and all values must be concrete.

@if(!debug)

package main

// Defaults
values: {
	// The topology map allows defining multiple node groups with different versions
	topology: [string]: {
		//
		// Open enum for allowed image versions (allows custom local builds via `string`)
		// This is kept here so the update-image-tags.sh script can easily modify it.
		//
		version: "1.2.0" | "1.4.0" | "1.5.0" | "1.5.0-" | "1.5.0-amd64" | "1.5.0-arm64" | "1.6.0" | "1.6.1" | "1.7.0" | "1.7.1" | "1.8.0" | "latest" | "main" | "production" | "sqlite-patch" | string | *"latest"
		...
	}

	// Provide a default node so that a basic install works out-of-the-box
	topology: "default-node": {
		role:     "seed"
		replicas: 1
	}
}