rad:z3gqcJUoA1n9HaHKufZs5FCSGazv5
Radicle Heartwood Protocol & Stack
Introduce Canonical Reference Rules
Merged fintohaps opened 10 months ago

Introduce canonical reference rules via a payload entry in the identity document. The payload is identified by `xyz.radicle.crefs` and currently contains one key, `rules`, whose value is the set of rules. Each rule is identified by a reference pattern string and is composed of `allow` and `threshold` values. The canonical reference rules are now used to check for canonical updates. If no rules are available, then the `threshold`, the `delegates`, and the project's `defaultBranch` are used to construct a single fallback rule for the default branch. Note that if rules are present but contain no rule for the default branch, the canonical reference for the default branch will not be computed.
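The resolution described above can be sketched as follows. This is an illustrative approximation, not the Rust implementation from this patch: the function name `canonical_updates`, its arguments, and the use of Python's `fnmatch` for pattern matching (whose `*` also crosses `/`, unlike the git-style globs matched with `fast-glob` in the actual code) are all assumptions made for the sketch.

```python
from fnmatch import fnmatch


def canonical_updates(rules, delegate_refs, default_branch=None, threshold=1):
    """For each ref published by a delegate, find a matching rule and pick a
    commit as canonical only if at least `threshold` delegates agree on it."""
    if not rules and default_branch:
        # Fallback: with no rules present, synthesize a single rule for the
        # project's default branch from the identity document's threshold.
        rules = {
            "refs/heads/" + default_branch: {"allow": "delegates", "threshold": threshold}
        }

    canonical = {}
    all_refs = {ref for heads in delegate_refs.values() for ref in heads}
    for ref in sorted(all_refs):
        # First rule whose pattern matches the reference applies.
        rule = next((r for pat, r in rules.items() if fnmatch(ref, pat)), None)
        if rule is None:
            continue  # no matching rule: no canonical version is computed
        # Each delegate publishing this ref votes for the commit it points at.
        votes = {}
        for heads in delegate_refs.values():
            if ref in heads:
                votes[heads[ref]] = votes.get(heads[ref], 0) + 1
        commit, count = max(votes.items(), key=lambda kv: kv[1])
        if count >= rule["threshold"]:
            canonical[ref] = commit  # quorum met: the ref becomes canonical
    return canonical
```

Under this model, a push that does not reach the threshold simply leaves the canonical reference unchanged, which matches the `threshold not met` warnings in the examples of this patch.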

32 files changed +2577 -413 fb8681f5 408d4f27
modified CHANGELOG.md
@@ -7,6 +7,17 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

+- Introduce canonical reference rules via a payload entry in the identity
+  document. The payload is identified by `xyz.radicle.crefs`, and the payload
+  currently contains one key `rules`, which is followed by the set of rules. For
+  each rule, there is a reference pattern string to identify the rule, which in
+  turn is composed of the `allow` and `threshold` values. The canonical
+  reference rules are now used to check for canonical updates. The rule for the
+  `defaultBranch` of an `xyz.radicle.project` is synthesized from the identity
+  document fields: `threshold` and `delegates`. This means that a rule for that
+  reference is not allowed within the rule set. This is checked when performing
+  a `rad id update`.
+
## Release Highlights

## Deprecations
modified Cargo.lock
@@ -754,6 +754,12 @@ dependencies = [
]

[[package]]
+name = "fast-glob"
+version = "0.3.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "3afcf4effa2c44390b9912544582d5af29e10dc4c816c5dbebf748e1c7416faa"
+
+[[package]]
name = "faster-hex"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
@@ -2353,6 +2359,7 @@ dependencies = [
 "crossbeam-channel",
 "cyphernet",
 "emojis",
+ "fast-glob",
 "fastrand",
 "git2",
 "jsonschema",
modified crates/radicle-cli/examples/git/git-push-amend.md
@@ -21,7 +21,7 @@ $ git commit --amend -m "Neue Änderungen" --allow-empty -q

``` ~alice (stderr)
$ git push rad master -f
-✓ Canonical head updated to 9170c8795d3a78f0381a0ffafb20ea69fb0f5b6b
+✓ Canonical head for refs/heads/master updated to 9170c8795d3a78f0381a0ffafb20ea69fb0f5b6b
✓ Synced with 1 seed(s)
To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi
 + fb25886...9170c87 master -> master (forced update)
added crates/radicle-cli/examples/git/git-push-canonical-tags.md
@@ -0,0 +1,213 @@
+In this example, we will show how we can make other references become canonical.
+To illustrate, we will use Git tags as an example. The storage of the repository
+should look something like this by the end of the example:
+
+~~~
+storage/z6cFWeWpnZNHh9rUW8phgA3b5yGt/refs
+├── heads
+│   └── main
+├── namespaces
+│   ├── z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi
+│   │   └── refs
+│   │       ├── cobs
+│   │       │   └── xyz.radicle.id
+│   │       │       └── 865c48204bd7bb7f088b8db90ffdccb48cfa0a50
+│   │       ├── heads
+│   │       │   └── master
+│   │       ├── tags
+│   │       │   ├── v1.0-hotfix
+│   │       │   └── v1.0
+│   │       └── rad
+│   │           ├── id
+│   │           └── sigrefs
+│   └── z6Mkt67GdsW7715MEfRuP4pSZxJRJh6kj6Y48WRqVv4N1tRk
+│       └── refs
+│           ├── heads
+│           │   └── master
+│           ├── tags
+│           │   ├── v1.0-hotfix
+│           │   └── v1.0
+│           └── rad
+│               ├── id
+│               └── sigrefs
+├── rad
+│   └── id
+└── tags
+    ├── v1.0-hotfix
+    └── v1.0
+~~~
+
+Note that there are now tags under `refs/tags`.
+
+To start, Alice will add a new payload to the repository identity. The
+identifier for this payload is `xyz.radicle.crefs`. It contains a single field
+with the key `rules`, and the value for this key is a set of rules. In this
+case, we will have two rules: one for `refs/tags/*` and one for `refs/tags/qa/*`
+(see RIP-0004 for more information on the rules).
+
+``` ~alice
+$ rad id update --title "Add canonical reference rules" --payload xyz.radicle.crefs rules '{ "refs/tags/*": { "threshold": 1, "allow": "delegates" }, "refs/tags/qa/*": { "threshold": 1, "allow": "delegates" }}'
+✓ Identity revision [..] created
+╭────────────────────────────────────────────────────────────────────────╮
+│ Title    Add canonical reference rules                                 │
+│ Revision c3349f07bfe6a82bbeb2989d2de4a918408f9831                      │
+│ Blob     85fa09e2de93b825d5231778dbb34143004a4bca                      │
+│ Author   did:key:z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi      │
+│ State    accepted                                                      │
+│ Quorum   yes                                                           │
+├────────────────────────────────────────────────────────────────────────┤
+│ ✓ did:key:z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi alice (you) │
+╰────────────────────────────────────────────────────────────────────────╯
+
+@@ -1,13 +1,25 @@
+ {
+   "payload": {
++    "xyz.radicle.crefs": {
++      "rules": {
++        "refs/tags/*": {
++          "allow": "delegates",
++          "threshold": 1
++        },
++        "refs/tags/qa/*": {
++          "allow": "delegates",
++          "threshold": 1
++        }
++      }
++    },
+     "xyz.radicle.project": {
+       "defaultBranch": "master",
+       "description": "Radicle Heartwood Protocol & Stack",
+       "name": "heartwood"
+     }
+   },
+   "delegates": [
+     "did:key:z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi"
+   ],
+   "threshold": 1
+ }
+```
+
+Now, Alice will create a tag and push it:
+
+``` ~alice
+$ git tag v1.0-hotfix
+```
+
+``` ~alice (stderr)
+$ git push rad --tags
+✓ Canonical head for refs/tags/v1.0-hotfix updated to f2de534b5e81d7c6e2dcaf58c3dd91573c0a0354
+✓ Synced with 1 seed(s)
+To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi
+ * [new tag]         v1.0-hotfix -> v1.0-hotfix
+```
+
+Notice that the output included a message about a canonical reference being
+updated:
+
+~~~
+✓ Canonical head for refs/tags/v1.0-hotfix updated to f2de534b5e81d7c6e2dcaf58c3dd91573c0a0354
+~~~
+
+On the other side, Bob performs a fetch and now has the tags locally:
+
+``` ~bob (stderr)
+$ cd heartwood
+$ git fetch rad
+From rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji
+ * [new tag]         v1.0-hotfix -> rad/tags/v1.0-hotfix
+ * [new tag]         v1.0-hotfix -> v1.0-hotfix
+```
+
+In the next portion of this example, we want to show that using a `threshold` of
+`2` requires both delegates. To do this, Bob creates a `master` reference, Alice
+adds him as a remote, and adds him to the identity delegates, as well as setting
+the `threshold` to `2` for the `refs/tags/*` rule:
+
+``` ~bob
+$ rad remote add z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi --name alice
+✓ Follow policy updated for z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi (alice)
+Fetching rad:z42hL2jL4XNk6K8oHQaSWfMgCL7ji from the network, found 1 potential seed(s).
+✓ Target met: 1 seed(s)
+✓ Remote alice added
+✓ Remote-tracking branch alice/master created for z6MknSL…StBU8Vi
+$ git push rad master
+```
+
+``` ~alice
+$ rad remote add z6Mkt67GdsW7715MEfRuP4pSZxJRJh6kj6Y48WRqVv4N1tRk --name bob
+✓ Follow policy updated for z6Mkt67GdsW7715MEfRuP4pSZxJRJh6kj6Y48WRqVv4N1tRk (bob)
+Fetching rad:z42hL2jL4XNk6K8oHQaSWfMgCL7ji from the network, found 1 potential seed(s).
+✓ Target met: 1 seed(s)
+✓ Remote bob added
+✓ Remote-tracking branch bob/master created for z6Mkt67…v4N1tRk
+$ rad id update --title "Add Bob" --delegate did:key:z6Mkt67GdsW7715MEfRuP4pSZxJRJh6kj6Y48WRqVv4N1tRk --no-confirm -q
+27ab0d77a95581c59ca9d30e679ceb06a9f758db
+$ rad id update --title "Update canonical reference rules" --payload xyz.radicle.crefs rules '{ "refs/tags/*": { "threshold": 2, "allow": "delegates" }, "refs/tags/qa/*": { "threshold": 1, "allow": "delegates" } }' -q
+dace164ba43fa51802697ec28d0b1965a9d7808b
+```
+
+**Note:** here we have to specify all the rules again to update the `threshold`.
+In reality, you can use `rad id update --edit` and edit the payload in your
+editor instead.
+
+``` ~bob
+$ rad sync -f
+Fetching rad:z42hL2jL4XNk6K8oHQaSWfMgCL7ji from the network, found 1 potential seed(s).
+✓ Target met: 1 seed(s)
+🌱 Fetched from z6MknSL…StBU8Vi
+$ rad id accept dace164ba43fa51802697ec28d0b1965a9d7808b -q
+```
+
+When Bob creates a new tag and pushes it, we see that there's a warning that
+no quorum was found for the new tag:
+
+``` ~bob (stderr)
+$ git tag v2.0
+$ git push rad --tags
+warn: could not determine commit for canonical reference 'refs/tags/v2.0', no commit with at least 2 vote(s) found (threshold not met)
+warn: it is recommended to find a commit to agree upon
+✓ Synced with 1 seed(s)
+To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6Mkt67GdsW7715MEfRuP4pSZxJRJh6kj6Y48WRqVv4N1tRk
+ * [new tag]         v1.0-hotfix -> v1.0-hotfix
+ * [new tag]         v2.0 -> v2.0
+```
+
+Alice can then fetch and check out the new tag, create one on her side, and push
+it:
+
+``` ~alice (stderr)
+$ git fetch bob
+From rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6Mkt67GdsW7715MEfRuP4pSZxJRJh6kj6Y48WRqVv4N1tRk
+ * [new tag]         v1.0-hotfix -> bob/tags/v1.0-hotfix
+ * [new tag]         v2.0        -> bob/tags/v2.0
+```
+
+``` ~alice
+$ git checkout bob/tags/v2.0
+$ git tag v2.0
+```
+
+``` ~alice (stderr)
+$ git push rad --tags
+✓ Canonical head for refs/tags/v2.0 updated to f2de534b5e81d7c6e2dcaf58c3dd91573c0a0354
+✓ Synced with 1 seed(s)
+To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi
+ * [new tag]         v2.0 -> v2.0
+```
+
+Now that both delegates have pushed this tag, we can see that the tag was made
+canonical.
+
+For the final portion of the example, we will show that both delegates aren't
+required for pushing tags that match the rule `refs/tags/qa/*`. To show this,
+Bob will create a tag and push it, and we should see that the canonical
+reference is created:
+
+``` ~bob (stderr)
+$ git tag qa/v2.1
+$ git push rad --tags
+✓ Canonical head for refs/tags/qa/v2.1 updated to f2de534b5e81d7c6e2dcaf58c3dd91573c0a0354
+✓ Synced with 1 seed(s)
+To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6Mkt67GdsW7715MEfRuP4pSZxJRJh6kj6Y48WRqVv4N1tRk
+ * [new tag]         qa/v2.1 -> qa/v2.1
+```
modified crates/radicle-cli/examples/git/git-push-converge.md
@@ -91,8 +91,7 @@ commit:

``` ~alice (stderr)
$ git push rad -f
-warn: could not determine canonical tip for `refs/heads/master`
-warn: no commit found with at least 3 vote(s) (threshold not met)
+warn: could not determine commit for canonical reference 'refs/heads/master', no commit with at least 3 vote(s) found (threshold not met)
warn: it is recommended to find a commit to agree upon
✓ Synced with 2 seed(s)
To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi
@@ -117,7 +116,7 @@ become the canonical `master`.

``` ~bob (stderr)
$ git push rad
-✓ Canonical head updated to 3a75f66dd0020c9a0355cc6ec21f15de989e2001
+✓ Canonical head for refs/heads/master updated to 3a75f66dd0020c9a0355cc6ec21f15de989e2001
✓ Synced with 2 seed(s)
To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6Mkt67GdsW7715MEfRuP4pSZxJRJh6kj6Y48WRqVv4N1tRk
   2a37862..0f9bd80  master -> master
@@ -138,7 +137,7 @@ HEAD is now at 0f9bd80 Merge remote-tracking branch 'eve/master'

``` ~eve (stderr)
$ git push rad
-✓ Canonical head updated to 0f9bd8035c04b3f73f5408e73e8454879b20800b
+✓ Canonical head for refs/heads/master updated to 0f9bd8035c04b3f73f5408e73e8454879b20800b
✓ Synced with 2 seed(s)
To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6Mkux1aUQD2voWWukVb5nNUR7thrHveQG4pDQua8nVhib7Z
   3a75f66..0f9bd80  master -> master
modified crates/radicle-cli/examples/git/git-push-diverge.md
@@ -44,7 +44,7 @@ integrate Bob's changes before pushing ours:

``` ~alice (stderr) (fail) RAD_HINT=1
$ git push rad
-hint: you are attempting to push a commit that would cause your upstream to diverge from the canonical head
+hint: you are attempting to push a commit that would cause your upstream to diverge from the canonical reference refs/heads/master
hint: to integrate the remote changes, run `git pull --rebase` and try again
error: refusing to update branch to commit that is not a descendant of canonical head
error: failed to push some refs to 'rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi'
@@ -62,7 +62,7 @@ f2de534 Second commit
```
``` ~alice RAD_SOCKET=/dev/null (stderr)
$ git push rad
-✓ Canonical head updated to f6cff86594495e9beccfeda7c20173e55c1dd9fc
+✓ Canonical head for refs/heads/master updated to f6cff86594495e9beccfeda7c20173e55c1dd9fc
To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi
   f2de534..f6cff86  master -> master
```
@@ -75,7 +75,7 @@ $ git reset --hard HEAD^ -q
```
``` ~alice RAD_SOCKET=/dev/null (stderr)
$ git push -f
-✓ Canonical head updated to 319a7dc3b195368ded4b099f8c90bbb80addccd3
+✓ Canonical head for refs/heads/master updated to 319a7dc3b195368ded4b099f8c90bbb80addccd3
To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi
 + f6cff86...319a7dc master -> master (forced update)
```
modified crates/radicle-cli/examples/git/git-push-rollback.md
@@ -35,7 +35,7 @@ Fast-forward

``` ~alice (stderr)
$ git push rad
-✓ Canonical head updated to 319a7dc3b195368ded4b099f8c90bbb80addccd3
+✓ Canonical head for refs/heads/master updated to 319a7dc3b195368ded4b099f8c90bbb80addccd3
✓ Synced with 1 seed(s)
To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi
   f2de534..319a7dc  master -> master
@@ -54,7 +54,7 @@ push and the new canonical head becomes the previous commit again:

``` ~alice (stderr)
$ git push rad -f
-✓ Canonical head updated to f2de534b5e81d7c6e2dcaf58c3dd91573c0a0354
+✓ Canonical head for refs/heads/master updated to f2de534b5e81d7c6e2dcaf58c3dd91573c0a0354
✓ Synced with 1 seed(s)
To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi
 + 319a7dc...f2de534 master -> master (forced update)
modified crates/radicle-cli/examples/rad-id-private.md
@@ -79,7 +79,8 @@ then the command will fail since there is no longer an allow list to work with:

``` (fails)
$ rad id update --visibility public --allow did:key:z6Mkt67GdsW7715MEfRuP4pSZxJRJh6kj6Y48WRqVv4N1tRk
-✗ Error: `--allow` and `--disallow` cannot be used with `--visibility public`
+✗ Error: `--allow` and `--disallow` should only be used for private repositories
+✗ Hint: use `--visibility private` to make the repository private, or perhaps you meant to use `--delegate`/`--rescind`
```

Let's change the repository to `public`:
modified crates/radicle-cli/examples/rad-id-threshold.md
@@ -64,7 +64,7 @@ $ git commit -v -m "Define power requirements"

``` ~alice (stderr) RAD_SOCKET=/dev/null
$ git push rad master
-✓ Canonical head updated to 3e674d1a1df90807e934f9ae5da2591dd6848a33
+✓ Canonical head for refs/heads/master updated to 3e674d1a1df90807e934f9ae5da2591dd6848a33
To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi
   f2de534..3e674d1  master -> master
```
modified crates/radicle-cli/examples/rad-merge-after-update.md
@@ -16,7 +16,7 @@ $ git commit --amend --allow-empty -q -m "Amended change"
$ git checkout master -q
$ git merge feature/1 -q
$ git push rad master
-✓ Canonical head updated to 954bcdb5008447ce294a61a21d7eb87afbe7f4a6
+✓ Canonical head for refs/heads/master updated to 954bcdb5008447ce294a61a21d7eb87afbe7f4a6
To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi
   f2de534..954bcdb  master -> master
```
modified crates/radicle-cli/examples/rad-merge-no-ff.md
@@ -37,7 +37,7 @@ Finally, we push master and expect the patch to be merged.
``` (stderr) RAD_SOCKET=/dev/null
$ git push rad master
✓ Patch 696ec5508494692899337afe6713fe1796d0315c merged
-✓ Canonical head updated to 737a10cfa29111afeb0d43cf3545cee386b939ec
+✓ Canonical head for refs/heads/master updated to 737a10cfa29111afeb0d43cf3545cee386b939ec
To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi
   f2de534..737a10c  master -> master
```
modified crates/radicle-cli/examples/rad-merge-via-push.md
@@ -70,7 +70,7 @@ When we push to `rad/master`, we automatically merge the patches:
$ git push rad master
✓ Patch 356f73863a8920455ff6e77cd9c805d68910551b merged
✓ Patch 696ec5508494692899337afe6713fe1796d0315c merged
-✓ Canonical head updated to d6399c71702b40bae00825b3c444478d06b4e91c
+✓ Canonical head for refs/heads/master updated to d6399c71702b40bae00825b3c444478d06b4e91c
To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi
   f2de534..d6399c7  master -> master
```
@@ -148,7 +148,7 @@ the first patch, even though they were pushed together.
$ git reset --hard HEAD^
$ git push -f rad
! Patch 356f73863a8920455ff6e77cd9c805d68910551b reverted at revision 356f738
-✓ Canonical head updated to 20aa5dde6210796c3a2f04079b42316a31d02689
+✓ Canonical head for refs/heads/master updated to 20aa5dde6210796c3a2f04079b42316a31d02689
To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi
 + d6399c7...20aa5dd master -> master (forced update)
```
modified crates/radicle-cli/examples/rad-patch-merge-draft.md
@@ -14,7 +14,7 @@ $ git checkout master -q
$ git merge feature/1
$ git push rad master
✓ Patch 8dfb4dcafc4346158c8160410dd3f2b0616ad4fe merged
-✓ Canonical head updated to 20aa5dde6210796c3a2f04079b42316a31d02689
+✓ Canonical head for refs/heads/master updated to 20aa5dde6210796c3a2f04079b42316a31d02689
To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi
   f2de534..20aa5dd  master -> master
```
modified crates/radicle-cli/examples/rad-patch-open-explore.md
@@ -38,7 +38,7 @@ $ git checkout master -q
$ git merge changes -q
$ git push rad master
✓ Patch acab0ec777a97d013f30be5d5d1aec32562ecb02 merged
-✓ Canonical head updated to b2b6432af93f8fe188e32d400263021b602cfec8
+✓ Canonical head for refs/heads/master updated to b2b6432af93f8fe188e32d400263021b602cfec8
✓ Synced with 1 seed(s)

  https://app.radicle.xyz/nodes/[..]/rad:z3yXbb1sR6UG6ixxV2YF9jUP7ABra/tree/b2b6432af93f8fe188e32d400263021b602cfec8
modified crates/radicle-cli/examples/rad-patch-revert-merge.md
@@ -12,7 +12,7 @@ Switched to branch 'master'
$ git merge feature/1
$ git push rad master
✓ Patch 696ec5508494692899337afe6713fe1796d0315c merged
-✓ Canonical head updated to 20aa5dde6210796c3a2f04079b42316a31d02689
+✓ Canonical head for refs/heads/master updated to 20aa5dde6210796c3a2f04079b42316a31d02689
To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi
   f2de534..20aa5dd  master -> master
```
@@ -50,7 +50,7 @@ When pushing, notice that we're told our patch is reverted.
``` (stderr) RAD_SOCKET=/dev/null
$ git push rad master --force
! Patch 696ec5508494692899337afe6713fe1796d0315c reverted at revision 696ec55
-✓ Canonical head updated to f2de534b5e81d7c6e2dcaf58c3dd91573c0a0354
+✓ Canonical head for refs/heads/master updated to f2de534b5e81d7c6e2dcaf58c3dd91573c0a0354
To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi
 + 20aa5dd...f2de534 master -> master (forced update)
```
modified crates/radicle-cli/examples/workflow/5-patching-maintainer.md
@@ -92,7 +92,7 @@ Fast-forward
``` (stderr)
$ git push rad master
✓ Patch e4934b6d9dbe01ce3c7fbb5b77a80d5f1dacdc46 merged at revision 9d62420
-✓ Canonical head updated to f567f695d25b4e8fb63b5f5ad2a584529826e908
+✓ Canonical head for refs/heads/master updated to f567f695d25b4e8fb63b5f5ad2a584529826e908
✓ Synced with 1 seed(s)
To rad://z42hL2jL4XNk6K8oHQaSWfMgCL7ji/z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi
   f2de534..f567f69  master -> master
modified crates/radicle-cli/src/commands/id.rs
@@ -1,16 +1,16 @@
use std::collections::BTreeSet;
-use std::str::FromStr;
use std::{ffi::OsString, io};

use anyhow::{anyhow, Context};

use radicle::cob::identity::{self, IdentityMut, Revision, RevisionId};
-use radicle::identity::{doc, Doc, Identity, PayloadError, RawDoc, Visibility};
+use radicle::identity::doc::update;
+use radicle::identity::doc::update::EditVisibility;
+use radicle::identity::{doc, Doc, Identity, RawDoc};
use radicle::node::device::Device;
use radicle::node::NodeId;
use radicle::prelude::{Did, RepoId};
-use radicle::storage::refs;
-use radicle::storage::{ReadRepository, ReadStorage as _, WriteRepository};
+use radicle::storage::{ReadStorage as _, WriteRepository};
use radicle::{cob, crypto, Profile};
use radicle_surf::diff::Diff;
use radicle_term::Element;
@@ -88,29 +88,6 @@ pub enum Operation {
    ListRevisions,
}

-#[derive(Clone, Copy, Debug, Default, PartialEq, Eq)]
-pub enum EditVisibility {
-    #[default]
-    Public,
-    Private,
-}
-
-#[derive(thiserror::Error, Debug)]
-#[error("'{0}' is not a valid visibility type")]
-pub struct EditVisibilityParseError(String);
-
-impl FromStr for EditVisibility {
-    type Err = EditVisibilityParseError;
-
-    fn from_str(s: &str) -> Result<Self, Self::Err> {
-        match s {
-            "public" => Ok(EditVisibility::Public),
-            "private" => Ok(EditVisibility::Private),
-            _ => Err(EditVisibilityParseError(s.to_owned())),
-        }
-    }
-}
-
#[derive(Default, PartialEq, Eq)]
pub enum OperationName {
    Accept,
@@ -398,76 +375,38 @@ pub fn run(options: Options, ctx: impl term::Context) -> anyhow::Result<()> {
                let mut proposal = current.doc.clone().edit();
                proposal.threshold = threshold.unwrap_or(proposal.threshold);

-                if !allow.is_disjoint(&disallow) {
-                    let overlap = allow
-                        .intersection(&disallow)
-                        .map(Did::to_string)
-                        .collect::<Vec<_>>();
-                    anyhow::bail!("`--allow` and `--disallow` must not overlap: {overlap:?}")
-                }
-
-                match (&mut proposal.visibility, visibility) {
-                    (Visibility::Public, None | Some(EditVisibility::Public)) if !allow.is_empty() || !disallow.is_empty() => {
-                        return Err(Error::WithHint {
+                let proposal = match visibility {
+                    Some(edit) => update::visibility(proposal, edit),
+                    None => proposal,
+                };
+                let proposal = match update::privacy_allow_list(proposal, allow, disallow) {
+                    Ok(proposal) => proposal,
+                    Err(e) => match e {
+                        update::error::PrivacyAllowList::Overlapping(overlap) =>                     anyhow::bail!("`--allow` and `--disallow` must not overlap: {overlap:?}"),
+                        update::error::PrivacyAllowList::PublicVisibility =>                         return Err(Error::WithHint {
                            err:
                            anyhow!("`--allow` and `--disallow` should only be used for private repositories"),
                            hint: "use `--visibility private` to make the repository private, or perhaps you meant to use `--delegate`/`--rescind`",
                        }.into())
                    }
-                    (Visibility::Public, None | Some(EditVisibility::Public)) => { /* no-op */ },
-                    (Visibility::Private { allow: existing }, None | Some(EditVisibility::Private)) => {
-                        for did in allow {
-                            existing.insert(did);
-                        }
-                        for did in disallow {
-                            existing.remove(&did);
+                };
+                let threshold = proposal.threshold;
+                let proposal = match update::delegates(proposal, delegates, rescind, &repo)? {
+                    Ok(proposal) => proposal,
+                    Err(errs) => {
+                        term::error(format!("failed to verify delegates for {rid}"));
+                        term::error(format!(
+                            "the threshold of {} delegates cannot be met..",
+                            threshold
+                        ));
+                        for e in errs {
+                            print_delegate_verification_error(&e);
                        }
+                        anyhow::bail!("fatal: refusing to update identity document");
                    }
-                    (Visibility::Public, Some(EditVisibility::Private)) => {
-                        // We ignore disallow since only allowing matters and the sets are disjoint.
-                        proposal.visibility = Visibility::Private { allow };
-                    }
-                    (Visibility::Private { .. }, Some(EditVisibility::Public)) if !allow.is_empty() || !disallow.is_empty() => {
-                        anyhow::bail!("`--allow` and `--disallow` cannot be used with `--visibility public`")
-                    }
-                    (Visibility::Private { .. }, Some(EditVisibility::Public)) => {
-                        proposal.visibility = Visibility::Public;
-                    }
-                }
-                proposal.delegates = proposal
-                    .delegates
-                    .into_iter()
-                    .chain(delegates)
-                    .filter(|d| !rescind.contains(d))
-                    .collect::<Vec<_>>();
-                if let Some(errs) = verify_delegates(&proposal, &repo)? {
-                    term::error(format!("failed to verify delegates for {rid}"));
-                    term::error(format!(
-                        "the threshold of {} delegates cannot be met..",
-                        proposal.threshold
-                    ));
-                    for e in errs {
-                        e.print();
-                    }
-                    anyhow::bail!("fatal: refusing to update identity document");
-                }
+                };

-                for (id, key, val) in payload {
-                    if let Some(ref mut payload) = proposal.payload.get_mut(&id) {
-                        if let Some(obj) = payload.as_object_mut() {
-                            if val.is_null() {
-                                obj.remove(&key);
-                            } else {
-                                obj.insert(key, val);
-                            }
-                        } else {
-                            anyhow::bail!("payload `{id}` is not a map");
-                        }
-                    } else {
-                        anyhow::bail!("payload `{id}` not found in identity document");
-                    }
-                }
-                proposal
+                update::payload(proposal, payload)?
            };

            // If `--edit` is specified, the document can also be edited via a text edit.
@@ -489,11 +428,7 @@ pub fn run(options: Options, ctx: impl term::Context) -> anyhow::Result<()> {
                proposal
            };

-            // Verify that the project payload can still be parsed into the `Project` type.
-            if let Err(PayloadError::Json(e)) = proposal.project() {
-                anyhow::bail!("failed to verify `xyz.radicle.project`, {e}");
-            }
-            let proposal = proposal.verified()?;
+            let proposal = update::verify(proposal)?;
            if proposal == current.doc {
                if !options.quiet {
                    term::print(term::format::italic(
@@ -793,62 +728,19 @@ fn print_diff(
    Ok(())
}

-#[derive(Clone)]
-enum VerificationError {
-    MissingDefaultBranch {
-        branch: radicle::git::RefString,
-        did: Did,
-    },
-    MissingDelegate {
-        did: Did,
-    },
-}
-
-impl VerificationError {
-    fn print(&self) {
-        match self {
-            VerificationError::MissingDefaultBranch { branch, did } => term::error(format!(
-                "missing {} for {} in local storage",
-                term::format::secondary(branch),
-                term::format::did(did)
-            )),
-            VerificationError::MissingDelegate { did } => {
-                term::error(format!("the delegate {did} is missing"));
-                term::hint(format!(
-                    "run `rad follow {did}` to follow this missing peer"
-                ));
-            }
-        }
-    }
-}
-
-fn verify_delegates<S>(
-    proposal: &RawDoc,
-    repo: &S,
-) -> anyhow::Result<Option<Vec<VerificationError>>>
-where
-    S: ReadRepository,
-{
-    let dids = &proposal.delegates;
-    let threshold = proposal.threshold;
-    let (canonical, _) = repo.canonical_head()?;
-    let mut missing = Vec::with_capacity(dids.len());
-
-    for did in dids {
-        match refs::SignedRefsAt::load((*did).into(), repo)? {
-            None => {
-                missing.push(VerificationError::MissingDelegate { did: *did });
-            }
-            Some(refs::SignedRefsAt { sigrefs, .. }) => {
-                if sigrefs.get(&canonical).is_none() {
-                    missing.push(VerificationError::MissingDefaultBranch {
-                        branch: canonical.to_ref_string(),
-                        did: *did,
-                    });
-                }
-            }
+fn print_delegate_verification_error(err: &update::error::DelegateVerification) {
+    use update::error::DelegateVerification::*;
+    match err {
+        MissingDefaultBranch { branch, did } => term::error(format!(
+            "missing {} for {} in local storage",
+            term::format::secondary(branch),
+            term::format::did(did)
+        )),
+        MissingDelegate { did } => {
+            term::error(format!("the delegate {did} is missing"));
+            term::hint(format!(
+                "run `rad follow {did}` to follow this missing peer"
+            ));
        }
    }
-
-    Ok((dids.len() - missing.len() < threshold).then_some(missing))
}
modified crates/radicle-cli/tests/commands.rs
@@ -3101,6 +3101,48 @@ fn git_tag() {
}

#[test]
+fn git_push_canonical_tags() {
+    let mut environment = Environment::new();
+    let alice = environment.node(Config::test(Alias::new("alice")));
+    let bob = environment.node(Config::test(Alias::new("bob")));
+    let working = environment.tmp().join("working");
+
+    let rid = RepoId::from_str("z42hL2jL4XNk6K8oHQaSWfMgCL7ji").unwrap();
+    fixtures::repository(working.join("alice"));
+
+    test(
+        "examples/rad-init.md",
+        working.join("alice"),
+        Some(&alice.home),
+        [],
+    )
+    .unwrap();
+
+    let alice = alice.spawn();
+    let mut bob = bob.spawn();
+
+    bob.connect(&alice).converge([&alice]);
+    bob.clone(rid, working.join("bob")).unwrap();
+    formula(
+        &environment.tmp(),
+        "examples/git/git-push-canonical-tags.md",
+    )
+    .unwrap()
+    .home(
+        "alice",
+        working.join("alice"),
+        [("RAD_HOME", alice.home.path().display())],
+    )
+    .home(
+        "bob",
+        working.join("bob"),
+        [("RAD_HOME", bob.home.path().display())],
+    )
+    .run()
+    .unwrap();
+}
+
+#[test]
fn rad_workflow() {
    let mut environment = Environment::new();
    let alice = environment.node(Config::test(Alias::new("alice")));
modified crates/radicle-node/src/worker/fetch.rs
@@ -7,15 +7,18 @@ use localtime::LocalTime;

use radicle::cob::TypedId;
use radicle::crypto::PublicKey;
+
use radicle::identity::crefs::GetCanonicalRefs as _;
use radicle::identity::DocAt;
use radicle::prelude::NodeId;
use radicle::prelude::RepoId;
+
use radicle::storage::git::Repository;
use radicle::storage::refs::RefsAt;
use radicle::storage::{
    ReadRepository, ReadStorage as _, RefUpdate, RemoteRepository, RepositoryError,
    WriteRepository as _,
};
use radicle::{cob, git, node, Storage};
+
use radicle_fetch::git::refs::Applied;
use radicle_fetch::{Allowed, BlockList, FetchLimit};

use super::channels::ChannelsFlush;
@@ -89,8 +92,6 @@ impl Handle {
        remote: PublicKey,
        refs_at: Option<Vec<RefsAt>>,
    ) -> Result<FetchResult, error::Fetch> {
-
        use git::canonical::QuorumError::{Diverging, NoCandidates};
-

        let (result, clone, notifs) = match self {
            Self::Clone { mut handle, tmp } => {
                log::debug!(target: "worker", "{} cloning from {remote}", handle.local());
@@ -145,15 +146,19 @@ impl Handle {
                            log::trace!(target: "worker", "Set HEAD to {}", head.new);
                        }
                    }
-
                    Err(RepositoryError::Quorum(Diverging(e))) => {
-
                        log::warn!(target: "worker", "Fetch could not set HEAD: {e}")
+
                    Err(RepositoryError::Quorum(radicle::git::canonical::QuorumError::Git(e))) => {
+
                        return Err(e.into())
                    }
-
                    Err(RepositoryError::Quorum(NoCandidates(e))) => {
+
                    Err(RepositoryError::Quorum(e)) => {
                        log::warn!(target: "worker", "Fetch could not set HEAD: {e}")
                    }
                    Err(e) => return Err(e.into()),
                }

+
                if let Err(e) = set_canonical_refs(&repo, &applied) {
+
                    log::warn!(target: "worker", "Failed to set canonical references: {e}");
+
                }
+

                // Notifications are only posted for pulls, not clones.
                if let Some(mut store) = notifs {
                    // Only create notifications for repos that we have
@@ -377,3 +382,71 @@ where

    Ok(())
}
+

+
fn set_canonical_refs(repo: &Repository, applied: &Applied) -> Result<(), error::Canonical> {
+
    let identity = repo.identity()?;
+
    let rules = match identity
+
        .canonical_refs()?
+
        .map(|crefs| crefs.rules().clone())
+
        .filter(|rules| !rules.is_empty())
+
    {
+
        None => return Ok(()),
+
        Some(rules) => rules,
+
    };
+

+
    for update in applied.updated.iter() {
+
        let name = match update {
+
            RefUpdate::Updated { name, .. } | RefUpdate::Created { name, .. } => name,
+
            _ => {
+
                log::trace!(target: "worker", "Skipping update {update}");
+
                continue;
+
            }
+
        };
+
        let Some(name) = name.clone().into_qualified() else {
+
            log::warn!(target: "worker", "Skipping update for canonical reference '{name}' because it is not qualified.");
+
            continue;
+
        };
+
        let Some(name) = name.to_namespaced() else {
+
            log::warn!(target: "worker", "Skipping update for canonical reference '{name}' because it is not namespaced.");
+
            continue;
+
        };
+

+
        let name = name.strip_namespace();
+

+
        let canonical = match rules.canonical(name.clone(), repo) {
+
            Ok(Some(canonical)) => canonical,
+
            Ok(None) => continue,
+
            Err(e) => {
+
                log::warn!(target: "worker", "Failed to get canonical reference rule for {name}: {e}");
+
                continue;
+
            }
+
        };
+

+
        match canonical.quorum(&repo.backend) {
+
            Err(err) => {
+
                log::warn!(
+
                    target: "worker",
+
                    "Failed to calculate canonical reference: {}",
+
                    err,
+
                );
+
                continue;
+
            }
+
            Ok((refname, oid)) => {
+
                if let Err(e) = repo.backend.reference(
+
                    refname.clone().as_str(),
+
                    *oid,
+
                    true,
+
                    "set-canonical-reference from fetch (radicle)",
+
                ) {
+
                    log::warn!(
+
                        target: "worker",
+
                        "Failed to set canonical reference {}->{}: {e}",
+
                        refname,
+
                        oid
+
                    );
+
                }
+
            }
+
        }
+
    }
+
    Ok(())
+
}
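
The vote-counting core of `Canonical::quorum`, which `set_canonical_refs` relies on above, can be sketched in isolation. This is a simplified illustration, not the crate's implementation: it skips the ancestry walk and the longest-candidate resolution that the real quorum performs against the git graph via libgit2.

```rust
use std::collections::BTreeMap;

// Hedged sketch of the candidate-counting step in `Canonical::quorum`:
// every delegate tip casts one vote for its commit, and only commits whose
// vote count meets the rule's threshold remain candidates. The real code
// also counts votes for ancestors and then resolves the longest surviving
// candidate, returning `QuorumError::Diverging` when two candidates fork.
fn candidates(tips: &[(&str, &str)], threshold: usize) -> Vec<String> {
    let mut votes: BTreeMap<&str, usize> = BTreeMap::new();
    for (_did, oid) in tips {
        *votes.entry(*oid).or_insert(0) += 1;
    }
    // Keep commits which pass the threshold, mirroring `candidates.retain`.
    votes
        .into_iter()
        .filter(|(_, n)| *n >= threshold)
        .map(|(oid, _)| oid.to_string())
        .collect()
}

fn main() {
    // Two of three delegates agree on `c2`: with threshold 2, only `c2` survives.
    let tips = [("alice", "c2"), ("bob", "c2"), ("eve", "b2")];
    assert_eq!(candidates(&tips, 2), vec!["c2".to_string()]);
    // Threshold 3 cannot be met: no candidates, i.e. `QuorumError::NoCandidates`.
    assert!(candidates(&tips, 3).is_empty());
}
```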
modified crates/radicle-node/src/worker/fetch/error.rs
@@ -65,3 +65,11 @@ pub enum Handle {
    #[error(transparent)]
    Repository(#[from] radicle::storage::RepositoryError),
}
+

+
#[derive(Debug, Error)]
+
pub enum Canonical {
+
    #[error(transparent)]
+
    Identity(#[from] radicle::storage::RepositoryError),
+
    #[error(transparent)]
+
    CanonicalRefs(#[from] radicle::identity::doc::CanonicalRefsError),
+
}
modified crates/radicle-remote-helper/src/push.rs
@@ -5,6 +5,8 @@ use std::path::{Path, PathBuf};
use std::str::FromStr;
use std::{assert_eq, io};

+
use radicle::identity::crefs::GetCanonicalRefs as _;
+
use radicle::identity::doc::CanonicalRefsError;
use radicle::node::device::Device;
use thiserror::Error;

@@ -15,8 +17,7 @@ use radicle::cob::patch::cache::Patches as _;
use radicle::crypto;
use radicle::explorer::ExplorerResource;
use radicle::git::canonical;
-
use radicle::git::canonical::Canonical;
-
use radicle::identity::Did;
+
use radicle::identity::{CanonicalRefs, Did};
use radicle::node;
use radicle::node::{Handle, NodeId};
use radicle::storage;
@@ -116,6 +117,8 @@ pub enum Error {
    /// Quorum error.
    #[error(transparent)]
    Quorum(#[from] radicle::git::canonical::QuorumError),
+
    #[error(transparent)]
+
    CanonicalRefs(#[from] radicle::identity::doc::CanonicalRefsError),
}

/// Push command.
@@ -203,6 +206,10 @@ pub fn run(
        }
    }
    let delegates = stored.delegates()?;
+
    let identity = stored.identity()?;
+
    let project = identity.project()?;
+
    let canonical_ref = git::refs::branch(project.default_branch());
+
    let mut set_canonical_refs: Vec<(git::Qualified, git::Oid)> = Vec::with_capacity(specs.len());

    // For each refspec, push a ref or delete a ref.
    for spec in specs {
@@ -262,8 +269,11 @@ pub fn run(
                        )
                    } else {
                        let identity = stored.identity()?;
-
                        let project = identity.project()?;
-
                        let canonical_ref = git::refs::branch(project.default_branch());
+
                        let crefs = identity.canonical_refs_or_default(|| {
+
                            let rule = identity.doc().default_branch_rule()?;
+
                            Ok::<_, CanonicalRefsError>(CanonicalRefs::from_iter([rule]))
+
                        })?;
+
                        let rules = crefs.rules();
                        let me = Did::from(nid);

                        // If we're trying to update the canonical head, make sure
@@ -272,66 +282,51 @@ pub fn run(
                        //
                        // Note that we *do* allow rolling back to a previous commit on the
                        // canonical branch.
-
                        if dst == canonical_ref && delegates.contains(&me) && delegates.len() > 1 {
-
                            let head = working.find_reference(src.as_str())?;
-
                            let head = head.peel_to_commit()?.id();
-

-
                            let mut canonical = Canonical::default_branch(
-
                                stored,
-
                                &project,
-
                                identity.delegates().as_ref(),
-
                                identity.threshold(),
-
                            )?;
-
                            let converges = canonical::converges(
-
                                canonical
-
                                    .tips()
-
                                    .filter_map(|(did, tip)| (*did != me).then_some(tip)),
-
                                head.into(),
-
                                &working,
-
                            )?;
-
                            if converges {
-
                                canonical.modify_vote(me, head.into());
-
                            }
+
                        if let Some(mut canonical) = rules.canonical(dst.clone(), stored)? {
+
                            if canonical.is_allowed(&me) {
+
                                let head = working.find_reference(src.as_str())?;
+
                                let head = head.peel_to_commit()?.id();
+
                                let converges =
+
                                    canonical.converges(&working, (&me, &head.into()))?;
+

+
                                // If `canonical` is empty then we're creating a new reference.
+
                                // If we're the only delegate then we need to modify our vote.
+
                                if converges || canonical.has_no_tips() || canonical.is_only(&me) {
+
                                    canonical.modify_vote(me, head.into());
+
                                }

-
                            match canonical.quorum(&working) {
-
                                Ok(canonical_oid) => {
-
                                    // Canonical head is an ancestor of head.
-
                                    let is_ff = head == *canonical_oid
-
                                        || working.graph_descendant_of(head, *canonical_oid)?;
-

-
                                    if !is_ff && !converges {
-
                                        if hints {
-
                                            hint(
-
                                                "you are attempting to push a commit that would cause \
-
                                                 your upstream to diverge from the canonical head",
-
                                            );
-
                                            hint(
-
                                                "to integrate the remote changes, run `git pull --rebase` \
-
                                                 and try again",
-
                                            );
+
                                match canonical.quorum(&working) {
+
                                    Ok((dst, canonical_oid)) => {
+
                                        // Canonical head is an ancestor of head.
+
                                        let is_ff = head == *canonical_oid
+
                                            || working.graph_descendant_of(head, *canonical_oid)?;
+

+
                                        if !is_ff && !converges {
+
                                            if hints {
+
                                                hint(
+
                                                    format!("you are attempting to push a commit that would cause \
+
                                                    your upstream to diverge from the canonical reference {dst}"),
+
                                                );
+
                                                hint(
+
                                                    "to integrate the remote changes, run `git pull --rebase` \
+
                                                    and try again",
+
                                                );
+
                                            }
+
                                            return Err(Error::HeadsDiverge(
+
                                                head.into(),
+
                                                canonical_oid,
+
                                            ));
                                        }
-
                                        return Err(Error::HeadsDiverge(
-
                                            head.into(),
-
                                            canonical_oid,
-
                                        ));
+
                                        set_canonical_refs
+
                                            .push((dst.clone().to_owned(), canonical_oid));
+
                                    }
+
                                    Err(canonical::QuorumError::Git(e)) => return Err(e.into()),
+
                                    Err(e) => {
+
                                        warn(e.to_string());
+
                                        warn("it is recommended to find a commit to agree upon");
                                    }
                                }
-
                                Err(canonical::QuorumError::Diverging(e)) => {
-
                                    warn(format!(
-
                                        "could not determine canonical tip for `{canonical_ref}`"
-
                                    ));
-
                                    warn(e.to_string());
-
                                    warn("it is recommended to find a commit to agree upon");
-
                                }
-
                                Err(canonical::QuorumError::NoCandidates(e)) => {
-
                                    warn(format!(
-
                                        "could not determine canonical tip for `{canonical_ref}`"
-
                                    ));
-
                                    warn(e.to_string());
-
                                    warn("it is recommended to find a commit to agree upon");
-
                                }
-
                                Err(e) => return Err(e.into()),
-
                            };
+
                            }
                        }
                        push(src, &dst, *force, &nid, &working, stored, patches, &signer)
                    }
@@ -354,16 +349,50 @@ pub fn run(
    if !ok.is_empty() {
        let _ = stored.sign_refs(&signer)?;

-
        // N.b. if an error occurs then there may be no quorum
-
        if let Ok(head) = stored.set_head() {
-
            if head.is_updated() {
+
        for (refname, oid) in &set_canonical_refs {
+
            let print_update = || {
                eprintln!(
-
                    "{} Canonical head updated to {}",
+
                    "{} Canonical head for {} updated to {}",
                    term::format::positive("✓"),
-
                    term::format::secondary(head.new),
-
                );
+
                    term::format::secondary(refname),
+
                    term::format::secondary(oid),
+
                )
+
            };
+

+
            // N.b. special case for handling the canonical ref, since it
+
            // updates the HEAD symbolic reference
+
            if *refname == canonical_ref
+
                && stored
+
                    .set_head()
+
                    .map(|head| head.is_updated())
+
                    .unwrap_or(false)
+
            {
+
                print_update();
+
                continue;
            }
-
        };
+

+
            match stored.backend.refname_to_id(refname.as_str()) {
+
                Ok(new) if new != **oid => {
+
                    stored.backend.reference(
+
                        refname.as_str(),
+
                        **oid,
+
                        true,
+
                        "set-canonical-reference from git-push (radicle)",
+
                    )?;
+
                    print_update();
+
                }
+
                Err(e) if e.code() == git::raw::ErrorCode::NotFound => {
+
                    stored.backend.reference(
+
                        refname.as_str(),
+
                        **oid,
+
                        true,
+
                        "set-canonical-reference from git-push (radicle)",
+
                    )?;
+
                    print_update();
+
                }
+
                _ => {}
+
            }
+
        }

        if !opts.no_sync {
            if profile.policies()?.is_seeding(&stored.id)? {
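
The `canonical_refs_or_default` fallback used in `push.rs` above can be sketched as follows. All type and field names here are illustrative stand-ins, assumed from the description of the change rather than taken from the crate's API: when no `xyz.radicle.crefs` payload is present, a single rule for the project's `defaultBranch` is synthesized from the identity document's `delegates` and `threshold`.

```rust
// Hypothetical sketch of the default-branch fallback: if the identity
// document carries no canonical reference rules, synthesize one rule for
// the default branch from the document's delegates and threshold.
#[derive(Debug, PartialEq)]
struct Rule {
    pattern: String,
    allow: Vec<String>,
    threshold: usize,
}

fn rules_or_default(
    crefs: Option<Vec<Rule>>,
    default_branch: &str,
    delegates: &[String],
    threshold: usize,
) -> Vec<Rule> {
    crefs.unwrap_or_else(|| {
        vec![Rule {
            pattern: format!("refs/heads/{default_branch}"),
            allow: delegates.to_vec(),
            threshold,
        }]
    })
}

fn main() {
    let delegates = vec!["did:key:alice".to_string(), "did:key:bob".to_string()];
    // No `crefs` payload: a default-branch rule is synthesized.
    let rules = rules_or_default(None, "master", &delegates, 2);
    assert_eq!(rules[0].pattern, "refs/heads/master");
    assert_eq!(rules[0].threshold, 2);
}
```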
modified crates/radicle/Cargo.toml
@@ -22,6 +22,7 @@ chrono = { workspace = true, features = ["clock"], optional = true }
colored = { workspace = true, optional = true }
crossbeam-channel = { workspace = true }
cyphernet = { workspace = true, features = ["tor", "dns", "p2p-ed25519"] }
+
fast-glob = { version = "0.3.2" }
fastrand = { workspace = true }
git2 = { workspace = true, features = ["vendored-libgit2"] }
libc = { workspace = true }
modified crates/radicle/src/git/canonical.rs
@@ -1,124 +1,65 @@
+
pub mod rules;
+
pub use rules::{MatchedRule, RawRule, Rules, ValidRule};
+

use std::collections::BTreeMap;
-
use std::fmt;

-
use nonempty::NonEmpty;
use raw::Repository;
use thiserror::Error;

use crate::prelude::Did;
-
use crate::prelude::Project;
use crate::storage::ReadRepository;

use super::raw;
-
use super::{lit, Oid, Qualified};
+
use super::{Oid, Qualified};

/// A collection of [`Did`]s and their [`Oid`]s that is the tip for a given
/// reference for that [`Did`].
///
-
/// The general construction of `Canonical` is by using the
-
/// [`Canonical::reference`] constructor. For the default branch of a
-
/// [`Project`], use [`Canonical::default_branch`].
+
/// `Canonical` is generally constructed using the [`Canonical::new`]
+
/// constructor.
///
/// `Canonical` can then be used for performing calculations about the
/// canonicity of the reference, most importantly the [`Canonical::quorum`].
+
///
+
/// References to the refname and the matched rule are kept, as they
+
/// are very handy for generating error messages.
#[derive(Debug)]
-
pub struct Canonical {
+
pub struct Canonical<'a, 'b> {
+
    refname: Qualified<'a>,
+
    rule: &'b ValidRule,
    tips: BTreeMap<Did, Oid>,
-
    threshold: usize,
}

/// Error that can occur when calculating the [`Canonical::quorum`].
#[derive(Debug, Error)]
pub enum QuorumError {
    /// Could not determine a quorum [`Oid`], due to diverging tips.
-
    #[error("could not determine canonical reference tip, {0}")]
-
    Diverging(Diverging),
+
    #[error("could not determine commit for canonical reference '{refname}', found diverging commits {longest} and {head}, with base commit {base} and threshold {threshold}")]
+
    Diverging {
+
        refname: String,
+
        threshold: usize,
+
        base: Oid,
+
        longest: Oid,
+
        head: Oid,
+
    },
    /// Could not determine a base candidate from the given set of delegates.
-
    #[error("could not determine canonical reference tip, {0}")]
-
    NoCandidates(NoCandidates),
+
    #[error("could not determine commit for canonical reference '{refname}', no commit with at least {threshold} vote(s) found (threshold not met)")]
+
    NoCandidates { refname: String, threshold: usize },
    /// An error occurred from [`git2`].
    #[error(transparent)]
    Git(#[from] git2::Error),
}

-
/// No candidates were found for the [`Canonical::quorum`] calculation.
-
///
-
/// The [`fmt::Display`] is used in [`QuorumError`], to provide information on
-
/// the threshold and delegates in the calculation.
-
#[derive(Debug)]
-
pub struct NoCandidates {
-
    threshold: usize,
-
}
-

-
impl fmt::Display for NoCandidates {
-
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-
        let NoCandidates { threshold } = self;
-
        write!(
-
            f,
-
            "no commit found with at least {threshold} vote(s) (threshold not met)"
-
        )
-
    }
-
}
-

-
/// Diverging commits were found during the [`Canonical::quorum`] calculation.
-
///
-
/// The [`fmt::Display`] is used in [`QuorumError`], to provide information on
-
/// the threshold, base commit, and the two diverging commits, in the
-
/// calculation.
-
#[derive(Debug)]
-
pub struct Diverging {
-
    threshold: usize,
-
    base: Oid,
-
    longest: Oid,
-
    head: Oid,
-
}
-

-
impl fmt::Display for Diverging {
-
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-
        let Diverging {
-
            threshold,
-
            base,
-
            longest,
-
            head,
-
        } = self;
-
        write!(f, "found diverging commits {longest} and {head}, with base commit {base} and threshold {threshold}")
-
    }
-
}
-

-
impl Canonical {
-
    /// Construct the set of canonical tips of the `Project::default_branch` for
-
    /// the given `delegates`.
-
    pub fn default_branch<S>(
-
        repo: &S,
-
        project: &Project,
-
        delegates: &NonEmpty<Did>,
-
        threshold: usize,
-
    ) -> Result<Self, raw::Error>
-
    where
-
        S: ReadRepository,
-
    {
-
        Self::reference(
-
            repo,
-
            &lit::refs_heads(project.default_branch()).into(),
-
            delegates,
-
            threshold,
-
        )
-
    }
-

-
    /// Construct the set of canonical tips given for the given `delegates` and
-
    /// the reference `name`.
-
    pub fn reference<S>(
-
        repo: &S,
-
        refname: &Qualified,
-
        delegates: &NonEmpty<Did>,
-
        threshold: usize,
-
    ) -> Result<Self, raw::Error>
+
impl<'a, 'b> Canonical<'a, 'b> {
+
    /// Construct the set of canonical tips for the given `rule` and
+
    /// the reference `refname`.
+
    pub fn new<S>(repo: &S, refname: Qualified<'a>, rule: &'b ValidRule) -> Result<Self, raw::Error>
    where
        S: ReadRepository,
    {
        let mut tips = BTreeMap::new();
-
        for delegate in delegates.iter() {
-
            match repo.reference_oid(delegate, refname) {
+
        for delegate in rule.allowed().iter() {
+
            match repo.reference_oid(delegate, &refname) {
                Ok(tip) => {
                    tips.insert(*delegate, tip);
                }
@@ -132,7 +73,11 @@ impl Canonical {
                Err(e) => return Err(e),
            }
        }
-
        Ok(Canonical { tips, threshold })
+
        Ok(Canonical {
+
            refname,
+
            tips,
+
            rule,
+
        })
    }

    /// Return the set of [`Did`]s and their [`Oid`] tip.
@@ -140,36 +85,18 @@ impl Canonical {
        self.tips.iter()
    }

-
    /// Returns `true` is there were no tips found for any of the delegates for
+
    /// Returns `true` if there were no tips found for any of the DIDs for
    /// the given reference.
    ///
    /// N.b. this may be the case when a new reference is being created.
-
    pub fn is_empty(&self) -> bool {
+
    pub fn has_no_tips(&self) -> bool {
        self.tips.is_empty()
    }
-
}

-
/// Check that a given `target` converges with any of the provided `tips`.
-
///
-
/// It converges if the `target` is either equal to, ahead of, or behind any of
-
/// the tips.
-
pub fn converges<'a>(
-
    tips: impl Iterator<Item = &'a Oid>,
-
    target: Oid,
-
    repo: &Repository,
-
) -> Result<bool, raw::Error> {
-
    for tip in tips {
-
        match repo.graph_ahead_behind(*target, **tip)? {
-
            (0, 0) => return Ok(true),
-
            (ahead, behind) if ahead > 0 && behind == 0 => return Ok(true),
-
            (ahead, behind) if behind > 0 && ahead == 0 => return Ok(true),
-
            (_, _) => {}
-
        }
+
    pub fn refname(&self) -> &Qualified {
+
        &self.refname
    }
-
    Ok(false)
-
}

-
impl Canonical {
    /// In some cases, we allow the vote to be modified. For example, when the
    /// `did` is pushing a new commit, we may want to see if the new commit will
    /// reach a quorum.
@@ -177,6 +104,41 @@ impl Canonical {
        self.tips.insert(did, new);
    }

+
    /// Check that the provided `did` is part of the set of allowed
+
    /// DIDs of the matching rule.
+
    pub fn is_allowed(&self, did: &Did) -> bool {
+
        self.rule.allowed().contains(did)
+
    }
+

+
    /// Check that the provided `did` is the only DID in the set of allowed
+
    /// DIDs of the matching rule.
+
    pub fn is_only(&self, did: &Did) -> bool {
+
        self.rule.allowed().is_only(did)
+
    }
+

+
    /// Checks that setting the given candidate tip would converge with at least
+
    /// one other known tip.
+
    ///
+
    /// It converges if the candidate Oid is either equal to, ahead of, or behind any of
+
    /// the tips.
+
    pub fn converges(
+
        &self,
+
        repo: &Repository,
+
        candidate: (&Did, &Oid),
+
    ) -> Result<bool, raw::Error> {
+
        for tip in self
+
            .tips
+
            .iter()
+
            .filter_map(|(did, tip)| (did != candidate.0).then_some(tip))
+
        {
+
            let (ahead, behind) = repo.graph_ahead_behind(**candidate.1, **tip)?;
+
            if ahead * behind == 0 {
+
                return Ok(true);
+
            }
+
        }
+
        Ok(false)
+
    }
+

    /// Computes the quorum or "canonical" tip based on the tips, of `Canonical`,
    /// and the threshold. This can be described as the latest commit that is
    /// included in at least `threshold` histories. In case there are multiple tips
@@ -184,7 +146,7 @@ impl Canonical {
    ///
    /// Also returns an error if `heads` is empty or `threshold` cannot be
    /// satisfied with the number of heads given.
-
    pub fn quorum(self, repo: &raw::Repository) -> Result<Oid, QuorumError> {
+
    pub fn quorum(self, repo: &raw::Repository) -> Result<(Qualified<'a>, Oid), QuorumError> {
        let mut candidates = BTreeMap::<_, usize>::new();

        // Build a list of candidate commits and count how many "votes" each of them has.
@@ -209,14 +171,12 @@ impl Canonical {
            }
        }
        // Keep commits which pass the threshold.
-
        candidates.retain(|_, votes| *votes >= self.threshold);
+
        candidates.retain(|_, votes| *votes >= self.threshold());

-
        let (mut longest, _) =
-
            candidates
-
                .pop_first()
-
                .ok_or(QuorumError::NoCandidates(NoCandidates {
-
                    threshold: self.threshold,
-
                }))?;
+
        let (mut longest, _) = candidates.pop_first().ok_or(QuorumError::NoCandidates {
+
            refname: self.refname.to_string(),
+
            threshold: self.threshold(),
+
        })?;

        // Now that all scores are calculated, figure out what is the longest branch
        // that passes the threshold. In case of divergence, return an error.
@@ -250,15 +210,20 @@ impl Canonical {
                //            o (base)
                //            |
                //
-
                return Err(QuorumError::Diverging(Diverging {
-
                    threshold: self.threshold,
+
                return Err(QuorumError::Diverging {
+
                    refname: self.refname.to_string(),
+
                    threshold: self.threshold(),
                    base: base.into(),
                    longest,
                    head: *head,
-
                }));
+
                });
            }
        }
-
        Ok((*longest).into())
+
        Ok((self.refname, (*longest).into()))
+
    }
+

+
    fn threshold(&self) -> usize {
+
        (*self.rule.threshold()).into()
    }
}
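
The convergence predicate introduced as `Canonical::converges` reduces to a one-line check on libgit2's `graph_ahead_behind` counts; a minimal sketch, with the git plumbing stripped away:

```rust
// A candidate converges with a tip when it is equal to it, strictly ahead
// of it, or strictly behind it — exactly the cases where one side of the
// (ahead, behind) pair is zero, hence the `ahead * behind == 0` check.
fn converges(ahead: usize, behind: usize) -> bool {
    ahead * behind == 0
}

fn main() {
    assert!(converges(0, 0)); // same commit
    assert!(converges(3, 0)); // candidate is a fast-forward of the tip
    assert!(converges(0, 2)); // candidate rolls back behind the tip
    assert!(!converges(1, 1)); // histories have diverged
}
```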

@@ -278,7 +243,7 @@ mod tests {
        threshold: usize,
        repo: &git::raw::Repository,
    ) -> Result<Oid, QuorumError> {
-
        let tips = heads
+
        let tips: BTreeMap<Did, Oid> = heads
            .iter()
            .enumerate()
            .map(|(i, head)| {
@@ -287,7 +252,24 @@ mod tests {
                (did, (*head).into())
            })
            .collect();
-
        Canonical { tips, threshold }.quorum(repo)
+

+
        let refname =
+
            git::refs::branch(git_ext::ref_format::RefStr::try_from_str("master").unwrap());
+

+
        let rule: RawRule = crate::git::canonical::rules::Rule::new(
+
            crate::git::canonical::rules::Allowed::Delegates,
+
            threshold,
+
        );
+
        let delegates = crate::identity::doc::Delegates::new(tips.keys().cloned()).unwrap();
+
        let rule = rule.validate(&mut || delegates.clone()).unwrap();
+

+
        Canonical {
+
            refname,
+
            tips,
+
            rule: &rule,
+
        }
+
        .quorum(repo)
+
        .map(|(_, oid)| oid)
    }

    #[test]
@@ -349,9 +331,6 @@ mod tests {
        assert_eq!(quorum(&[*c0], 1, &repo).unwrap(), c0);
        assert_eq!(quorum(&[*c1], 1, &repo).unwrap(), c1);
        assert_eq!(quorum(&[*c2], 1, &repo).unwrap(), c2);
-
        assert_eq!(quorum(&[*c0], 0, &repo).unwrap(), c0);
-
        assert_matches!(quorum(&[], 0, &repo), Err(QuorumError::NoCandidates(_)));
-
        assert_matches!(quorum(&[*c0], 2, &repo), Err(QuorumError::NoCandidates(_)));

        //  C1
        //  |
@@ -377,23 +356,23 @@ mod tests {
        //  C0
        assert_matches!(
            quorum(&[*c1, *c2, *b2], 1, &repo),
-
            Err(QuorumError::Diverging(_))
+
            Err(QuorumError::Diverging { .. })
        );
        assert_matches!(
            quorum(&[*c2, *b2], 1, &repo),
-
            Err(QuorumError::Diverging(_))
+
            Err(QuorumError::Diverging { .. })
        );
        assert_matches!(
            quorum(&[*b2, *c2], 1, &repo),
-
            Err(QuorumError::Diverging(_))
+
            Err(QuorumError::Diverging { .. })
        );
        assert_matches!(
            quorum(&[*c2, *b2], 2, &repo),
-
            Err(QuorumError::NoCandidates(_))
+
            Err(QuorumError::NoCandidates { .. })
        );
        assert_matches!(
            quorum(&[*b2, *c2], 2, &repo),
-
            Err(QuorumError::NoCandidates(_))
+
            Err(QuorumError::NoCandidates { .. })
        );
        assert_eq!(quorum(&[*c1, *c2, *b2], 2, &repo).unwrap(), c1);
        assert_eq!(quorum(&[*c1, *c2, *b2], 3, &repo).unwrap(), c1);
@@ -401,7 +380,7 @@ mod tests {
        assert_eq!(quorum(&[*b2, *c2, *c2], 2, &repo).unwrap(), c2);
        assert_matches!(
            quorum(&[*b2, *b2, *c2, *c2], 2, &repo),
-
            Err(QuorumError::Diverging(_))
+
            Err(QuorumError::Diverging { .. })
        );

        // B2 C2 C3
@@ -412,15 +391,15 @@ mod tests {
        assert_eq!(quorum(&[*b2, *c2, *c2], 2, &repo).unwrap(), c2);
        assert_matches!(
            quorum(&[*b2, *c2, *c2], 3, &repo),
-
            Err(QuorumError::NoCandidates(_))
+
            Err(QuorumError::NoCandidates { .. })
        );
        assert_matches!(
            quorum(&[*b2, *c2, *b2, *c2], 3, &repo),
-
            Err(QuorumError::NoCandidates(_))
+
            Err(QuorumError::NoCandidates { .. })
        );
        assert_matches!(
            quorum(&[*c3, *b2, *c2, *b2, *c2, *c3], 3, &repo),
-
            Err(QuorumError::NoCandidates(_))
+
            Err(QuorumError::NoCandidates { .. })
        );

        //  B2 C2
@@ -430,19 +409,19 @@ mod tests {
        //   C0
        assert_matches!(
            quorum(&[*c2, *b2, *a1], 1, &repo),
-
            Err(QuorumError::Diverging(_))
+
            Err(QuorumError::Diverging { .. })
        );
        assert_matches!(
            quorum(&[*c2, *b2, *a1], 2, &repo),
-
            Err(QuorumError::NoCandidates(_))
+
            Err(QuorumError::NoCandidates { .. })
        );
        assert_matches!(
            quorum(&[*c2, *b2, *a1], 3, &repo),
-
            Err(QuorumError::NoCandidates(_))
+
            Err(QuorumError::NoCandidates { .. })
        );
        assert_matches!(
            quorum(&[*c1, *c2, *b2, *a1], 4, &repo),
-
            Err(QuorumError::NoCandidates(_))
+
            Err(QuorumError::NoCandidates { .. })
        );
        assert_eq!(quorum(&[*c0, *c1, *c2, *b2, *a1], 2, &repo).unwrap(), c1,);
        assert_eq!(quorum(&[*c0, *c1, *c2, *b2, *a1], 3, &repo).unwrap(), c1,);
@@ -450,23 +429,23 @@ mod tests {
        assert_eq!(quorum(&[*c0, *c1, *c2, *b2, *a1], 4, &repo).unwrap(), c0,);
        assert_matches!(
            quorum(&[*a1, *a1, *c2, *c2, *c1], 2, &repo),
-
            Err(QuorumError::Diverging(_))
+
            Err(QuorumError::Diverging { .. })
        );
        assert_matches!(
            quorum(&[*a1, *a1, *c2, *c2, *c1], 1, &repo),
-
            Err(QuorumError::Diverging(_))
+
            Err(QuorumError::Diverging { .. })
        );
        assert_matches!(
            quorum(&[*a1, *a1, *c2], 1, &repo),
-
            Err(QuorumError::Diverging(_))
+
            Err(QuorumError::Diverging { .. })
        );
        assert_matches!(
            quorum(&[*b2, *b2, *c2, *c2], 1, &repo),
-
            Err(QuorumError::Diverging(_))
+
            Err(QuorumError::Diverging { .. })
        );
        assert_matches!(
            quorum(&[*b2, *b2, *c2, *c2, *a1], 1, &repo),
-
            Err(QuorumError::Diverging(_))
+
            Err(QuorumError::Diverging { .. })
        );

        //    M2  M1
@@ -479,27 +458,27 @@ mod tests {
        assert_eq!(quorum(&[*m1], 1, &repo).unwrap(), m1);
        assert_matches!(
            quorum(&[*m1, *m2], 1, &repo),
-
            Err(QuorumError::Diverging(_))
+
            Err(QuorumError::Diverging { .. })
        );
        assert_matches!(
            quorum(&[*m2, *m1], 1, &repo),
-
            Err(QuorumError::Diverging(_))
+
            Err(QuorumError::Diverging { .. })
        );
        assert_matches!(
            quorum(&[*m1, *m2], 2, &repo),
-
            Err(QuorumError::NoCandidates(_))
+
            Err(QuorumError::NoCandidates { .. })
        );
        assert_matches!(
            quorum(&[*m1, *m2, *c2], 1, &repo),
-
            Err(QuorumError::Diverging(_))
+
            Err(QuorumError::Diverging { .. })
        );
        assert_matches!(
            quorum(&[*m1, *a1], 1, &repo),
-
            Err(QuorumError::Diverging(_))
+
            Err(QuorumError::Diverging { .. })
        );
        assert_matches!(
            quorum(&[*m1, *a1], 2, &repo),
-
            Err(QuorumError::NoCandidates(_))
+
            Err(QuorumError::NoCandidates { .. })
        );
        assert_eq!(quorum(&[*m1, *m2, *b2, *c1], 4, &repo).unwrap(), c1);
        assert_eq!(quorum(&[*m1, *m1, *b2], 2, &repo).unwrap(), m1);
@@ -538,11 +517,11 @@ mod tests {
        //      C0
        assert_matches!(
            quorum(&[*m1, *m2], 1, &repo),
-
            Err(QuorumError::Diverging(_))
+
            Err(QuorumError::Diverging { .. })
        );
        assert_matches!(
            quorum(&[*m1, *m2], 2, &repo),
-
            Err(QuorumError::NoCandidates(_))
+
            Err(QuorumError::NoCandidates { .. })
        );

        let m3 = fixtures::commit("M3", &[*c2, *c1], &repo);
@@ -554,27 +533,27 @@ mod tests {
        //      C0
        assert_matches!(
            quorum(&[*m1, *m3], 1, &repo),
-
            Err(QuorumError::Diverging(_))
+
            Err(QuorumError::Diverging { .. })
        );
        assert_matches!(
            quorum(&[*m1, *m3], 2, &repo),
-
            Err(QuorumError::NoCandidates(_))
+
            Err(QuorumError::NoCandidates { .. })
        );
        assert_matches!(
            quorum(&[*m3, *m1], 1, &repo),
-
            Err(QuorumError::Diverging(_))
+
            Err(QuorumError::Diverging { .. })
        );
        assert_matches!(
            quorum(&[*m3, *m1], 2, &repo),
-
            Err(QuorumError::NoCandidates(_))
+
            Err(QuorumError::NoCandidates { .. })
        );
        assert_matches!(
            quorum(&[*m3, *m2], 1, &repo),
-
            Err(QuorumError::Diverging(_))
+
            Err(QuorumError::Diverging { .. })
        );
        assert_matches!(
            quorum(&[*m3, *m2], 2, &repo),
-
            Err(QuorumError::NoCandidates(_))
+
            Err(QuorumError::NoCandidates { .. })
        );
    }
}
added crates/radicle/src/git/canonical/rules.rs
@@ -0,0 +1,1218 @@
+
//! Implementation of RIP-0004 Canonical References
+
//!
+
//! [`RawRules`] is intended to be deserialized and then validated into a set of
+
//! [`Rules`]. These can then be used to see if a [`Qualified`] reference
+
//! matches any of the rules, using [`Rules::matches`]. A [`Canonical`] can then
+
//! be constructed with the first matched rule, and used to calculate the
+
//! [`Canonical::quorum`].
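As described in the PR summary, these rules live under the `xyz.radicle.crefs` payload of the identity document, keyed by reference pattern. An illustrative minimal payload (the pattern, `"delegates"` token, and threshold below are example values, not defaults mandated by the code):

```json
{
  "xyz.radicle.crefs": {
    "rules": {
      "refs/heads/main": {
        "allow": "delegates",
        "threshold": 1
      }
    }
  }
}
```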
+

+
use core::fmt;
+
use std::cmp::Ordering;
+
use std::collections::BTreeMap;
+
use std::sync::LazyLock;
+

+
use nonempty::NonEmpty;
+
use serde::{Deserialize, Serialize};
+
use serde_json as json;
+
use thiserror::Error;
+

+
use crate::git;
+
use crate::git::canonical::Canonical;
+
use crate::git::fmt::{refname, RefString};
+
use crate::git::refspec::QualifiedPattern;
+
use crate::git::Qualified;
+
use crate::identity::{doc, Did};
+
use crate::storage::git::Repository;
+

+
const ASTERISK: char = '*';
+

+
static REFS_RAD: LazyLock<RefString> = LazyLock::new(|| refname!("refs/rad"));
+

+
/// Private trait to ensure that not just any `Rule` can be deserialized.
+
/// Implementations are provided for `Allowed` and `usize` so that `RawRule`s
+
/// can be deserialized, while `ValidRule`s cannot – preventing deserialization
+
/// bugs for that type.
+
trait Sealed {}
+
impl Sealed for Allowed {}
+
impl Sealed for usize {}
+

+
/// A `Pattern` is a `QualifiedPattern` reference that disallows any
+
/// references under the `refs/rad` hierarchy.
+
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize)]
+
#[serde(into = "QualifiedPattern", try_from = "QualifiedPattern")]
+
pub struct Pattern(QualifiedPattern<'static>);
+

+
impl fmt::Display for Pattern {
+
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+
        f.write_str(self.0.as_str())
+
    }
+
}
+

+
impl From<Pattern> for QualifiedPattern<'static> {
+
    fn from(Pattern(pattern): Pattern) -> Self {
+
        pattern
+
    }
+
}
+

+
impl<'a> TryFrom<QualifiedPattern<'a>> for Pattern {
+
    type Error = PatternError;
+

+
    fn try_from(pattern: QualifiedPattern<'a>) -> Result<Self, Self::Error> {
+
        if pattern.starts_with(REFS_RAD.as_str()) {
+
            Err(PatternError::ProtectedRef {
+
                prefix: (*REFS_RAD).clone(),
+
                pattern: pattern.to_owned(),
+
            })
+
        } else {
+
            Ok(Self(pattern.to_owned()))
+
        }
+
    }
+
}
+

+
impl<'a> TryFrom<Qualified<'a>> for Pattern {
+
    type Error = PatternError;
+

+
    fn try_from(name: Qualified<'a>) -> Result<Self, Self::Error> {
+
        Self::try_from(QualifiedPattern::from(name))
+
    }
+
}
+

+
impl Pattern {
+
    /// Check if the `refname` matches the rule's pattern.
+
    pub fn matches(&self, refname: &Qualified) -> bool {
+
        // N.b. Git's refspecs do not quite match with glob-star semantics. A
+
        // single `*` in a refspec is expected to match all references under
+
        // that namespace, even if they are further down the hierarchy.
+
        // Thus, the following rules are applied:
+
        //
+
        //   - a trailing `*` changes to `**/*`
+
        //   - a `*` in between path components changes to `**`
+
        let spec = match self.0.as_str().split_once(ASTERISK) {
+
            None => self.0.to_string(),
+
            // Expand `refs/tags/*` to `refs/tags/**/*`
+
            Some((prefix, "")) => {
+
                let mut spec = prefix.to_string();
+
                spec.push_str("**/*");
+
                spec
+
            }
+
            // Expand `refs/tags/*/v1.0` to `refs/tags/**/v1.0`
+
            Some((prefix, suffix)) => {
+
                let mut spec = prefix.to_string();
+
                spec.push_str("**");
+
                spec.push_str(suffix);
+
                spec
+
            }
+
        };
+
        fast_glob::glob_match(&spec, refname.as_str())
+
    }
+
}
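The refspec-to-glob expansion performed by `Pattern::matches` can be sketched as a standalone function. This is a simplified illustration of the rewrite rules in the comment above (the real implementation then hands the expanded spec to `fast_glob::glob_match`; only the first `*` is rewritten, matching `split_once`):

```rust
/// Expand a Git refspec-style pattern into glob-star form:
/// a trailing `*` becomes `**/*`, and a `*` between path
/// components becomes `**`.
fn expand(spec: &str) -> String {
    match spec.split_once('*') {
        // No asterisk: the pattern is matched literally.
        None => spec.to_string(),
        // Trailing `*`: match the whole hierarchy below the prefix.
        Some((prefix, "")) => format!("{prefix}**/*"),
        // Infix `*`: match any number of components in between.
        Some((prefix, suffix)) => format!("{prefix}**{suffix}"),
    }
}

fn main() {
    assert_eq!(expand("refs/tags/*"), "refs/tags/**/*");
    assert_eq!(expand("refs/tags/*/v1.0"), "refs/tags/**/v1.0");
    assert_eq!(expand("refs/heads/main"), "refs/heads/main");
}
```

This is why `refs/tags/*` matches `refs/tags/a/b` under these rules, which plain glob-star semantics for a single `*` would not.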
+

+
impl AsRef<QualifiedPattern<'static>> for Pattern {
+
    fn as_ref(&self) -> &QualifiedPattern<'static> {
+
        &self.0
+
    }
+
}
+

+
impl PartialOrd for Pattern {
+
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
+
        Some(self.cmp(other))
+
    }
+
}
+

+
/// Patterns are ordered by their specificity.
+
///
+
/// This is heavily influenced by the evaluation priority of Rules. For a
+
/// candidate reference name, we want the rule associated with the most specific
+
/// pattern to apply, i.e. to take priority over all other rules with less
+
/// specific patterns.
+
///
+
/// For two patterns `φ` and `ψ`, we say that "`φ` is more specific than `ψ`", denoted
+
/// `φ < ψ` if:
+
///
+
///  1. The number of components in `φ` is larger than the number of components
+
///     in `ψ`. (Note that the number of components is equal to the number of
+
///     occurrences of the symbol '/' in the pattern, plus 1).
+
///     The justification is that refnames might be interpreted as a hierarchy
+
///     where a match on more components would mean a match at a lower level in
+
///     the hierarchy, thus being more specific.
+
///     Imagine a refname hierarchy that maps to a corporate hierarchy.
+
///     The pattern "department-1" matches all refnames that are administered
+
///     by a particular department, and thus is not very specific.
+
///     To contrast, the pattern "department-1/team-a/project-i/nice-feature"
+
///     is very specific as it matches all refnames that relate to the
+
///     development of a particular feature for a particular project by a
+
///     particular team.
+
///     Note that this would also apply when the connection between `φ` and `ψ`
+
///     is not as obvious, e.g. also `a/b/c/d/* < */x`.
+
///
+
/// (Note that for the following items, one may assume that `φ` and `ψ` have the
+
/// same number of components.)
+
///
+
///  2. If path component i of `φ`, denoted `φ[i]`, is more specific than path
+
///     component i of `ψ`, denoted `ψ[i]`. This is the case if:
+
///      a. `φ[i]` does not contain an asterisk and `ψ[i]` contains an asterisk,
+
///         i.e. the symbol `*`, e.g. `a < * and abc < a*`.
+
///         Note that this is important to capture specificity across
+
///         components, i.e. to conclude that `a/b/* < a/*/c`.
+
///      b. Both `φ[i]` and `ψ[i]` contain an asterisk.
+
///          A. The asterisk in `φ[i]` is further right than the asterisk in `ψ[i]`,
+
///             e.g. `aa* < a*`.
+
///          B. The asterisks in `φ[i]` and `ψ[i]` are equally far to the right,
+
///             and `φ[i]` is longer than `ψ[i]`, e.g. `a*b < a*`.
+
///
+
///  3. Otherwise, fall back to a lexicographic ordering.
+
///
+
/// Some examples (justification in parentheses):
+
///
+
/// ```text, no_run
+
/// refs/tags/release/candidates/* <(1.)   refs/tags/release/* <(1.) refs/tags/*
+
/// refs/tags/v1.0                 <(2.a.) refs/tags/*
+
/// refs/heads/*                   <(3.)   refs/tags/*
+
/// refs/heads/main                <(3.)   refs/tags/v1.0
+
/// ```
+
impl Ord for Pattern {
+
    fn cmp(&self, other: &Self) -> Ordering {
+
        #[derive(Debug, Clone, Copy)]
+
        #[repr(i8)]
+
        enum ComponentOrdering {
+
            MatchLength(Ordering),
+
            Lexicographic(Ordering),
+
        }
+

+
        impl ComponentOrdering {
+
            fn merge(&mut self, other: Self) {
+
                *self = match (*self, other) {
+
                    (Self::Lexicographic(Ordering::Equal), Self::Lexicographic(other)) => {
+
                        Self::Lexicographic(other)
+
                    }
+
                    (Self::Lexicographic(_), Self::MatchLength(other)) => Self::MatchLength(other),
+
                    (Self::MatchLength(Ordering::Equal), Self::MatchLength(other)) => {
+
                        Self::MatchLength(other)
+
                    }
+
                    (clone, _) => clone,
+
                }
+
            }
+
        }
+

+
        impl From<ComponentOrdering> for Ordering {
+
            fn from(value: ComponentOrdering) -> Self {
+
                match value {
+
                    ComponentOrdering::MatchLength(ordering) => ordering,
+
                    ComponentOrdering::Lexicographic(ordering) => ordering,
+
                }
+
            }
+
        }
+

+
        impl Default for ComponentOrdering {
+
            /// The weakest value of Self, which will be absorbed by any
+
            /// other in [`ComponentOrdering::merge`].
+
            fn default() -> Self {
+
                Self::Lexicographic(Ordering::Equal)
+
            }
+
        }
+

+
        use git::refspec::Component;
+

+
        fn cmp_component(lhs: Component<'_>, rhs: Component<'_>) -> ComponentOrdering {
+
            let (l, r) = (lhs.as_str(), rhs.as_str());
+
            match (l.find(ASTERISK), r.find(ASTERISK)) {
+
                // (2.a.)
+
                (Some(_), None) => ComponentOrdering::MatchLength(Ordering::Greater),
+
                // (2.a.)
+
                (None, Some(_)) => ComponentOrdering::MatchLength(Ordering::Less),
+
                (Some(li), Some(ri)) => {
+
                    if li != ri {
+
                        // (2.b.A)
+
                        ComponentOrdering::MatchLength(li.cmp(&ri).reverse())
+
                    } else if l.len() != r.len() {
+
                        // (2.b.B)
+
                        ComponentOrdering::MatchLength(l.len().cmp(&r.len()).reverse())
+
                    } else {
+
                        // (3.)
+
                        ComponentOrdering::Lexicographic(l.cmp(r))
+
                    }
+
                }
+
                // (3.)
+
                (None, None) => ComponentOrdering::Lexicographic(l.cmp(r)),
+
            }
+
        }
+

+
        let mut result = ComponentOrdering::default();
+
        let mut lhs = self.0.components();
+
        let mut rhs = other.0.components();
+
        loop {
+
            match (lhs.next(), rhs.next()) {
+
                (None, Some(_)) => return Ordering::Greater, // (1.)
+
                (Some(_), None) => return Ordering::Less,    // (1.)
+
                (Some(lhs), Some(rhs)) => {
+
                    result.merge(cmp_component(lhs, rhs));
+
                }
+
                (None, None) => return result.into(),
+
            }
+
        }
+
    }
+
}
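The specificity ordering documented above can be illustrated with a simplified comparator. This sketch implements only rules 1, 2.a, and 3 (component count, asterisk presence, lexicographic fallback) and omits the 2.b tie-breaks on asterisk position and component length, so it is not a drop-in replacement for the real `Ord` impl:

```rust
use std::cmp::Ordering;

/// Simplified specificity comparison: `Ordering::Less` means the
/// first pattern is more specific. Rules 2.b.A/B are omitted.
fn more_specific(a: &str, b: &str) -> Ordering {
    // Rule 1: more components means more specific (reversed cmp).
    let (ca, cb) = (a.split('/').count(), b.split('/').count());
    if ca != cb {
        return cb.cmp(&ca);
    }
    // Rule 2.a: a literal component beats a wildcard component,
    // and this decision dominates any lexicographic difference.
    for (x, y) in a.split('/').zip(b.split('/')) {
        match (x.contains('*'), y.contains('*')) {
            (false, true) => return Ordering::Less,
            (true, false) => return Ordering::Greater,
            _ => {}
        }
    }
    // Rule 3: fall back to lexicographic ordering.
    a.cmp(b)
}

fn main() {
    // Rule 1: deeper hierarchy wins.
    assert_eq!(more_specific("refs/tags/release/*", "refs/tags/*"), Ordering::Less);
    // Rule 2.a: literal beats wildcard.
    assert_eq!(more_specific("refs/tags/v1.0", "refs/tags/*"), Ordering::Less);
    // Rule 3: lexicographic fallback.
    assert_eq!(more_specific("refs/heads/*", "refs/tags/*"), Ordering::Less);
}
```

Because `Rules` stores its entries in a `BTreeMap` under this ordering, the first match returned by iteration is the most specific applicable rule.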
+

+
/// A [`Rule`] that can be serialized and deserialized safely.
+
///
+
/// Should be converted to a [`ValidRule`] via [`Rule::validate`].
+
pub type RawRule = Rule<Allowed, usize>;
+

+
impl RawRule {
+
    /// Validate the `Rule` into a form that can be used for calculating
+
    /// canonical references.
+
    ///
+
    /// The `resolve` callback allows the caller to specify the DIDs of the
+
    /// identity document, in the case that the allowed value is
+
    /// [`Allowed::Delegates`].
+
    pub fn validate<R>(self, resolve: &mut R) -> Result<ValidRule, ValidationError>
+
    where
+
        R: Fn() -> doc::Delegates,
+
    {
+
        let Self {
+
            allow: delegates,
+
            threshold,
+
            ..
+
        } = self;
+
        let allow = match &delegates {
+
            Allowed::Delegates => ResolvedDelegates::Delegates(resolve()),
+
            Allowed::Set(delegates) => {
+
                let valid =
+
                    doc::Delegates::new(delegates.clone()).map_err(ValidationError::from)?;
+
                ResolvedDelegates::Set(valid)
+
            }
+
        };
+
        let threshold = doc::Threshold::new(threshold, &allow)?;
+
        Ok(Rule {
+
            allow,
+
            threshold,
+
            extensions: self.extensions,
+
        })
+
    }
+
}
+

+
/// A set of `RawRule`s that can be serialized and deserialized.
+
#[derive(Clone, Debug, Default, PartialEq, Eq, Serialize, Deserialize)]
+
pub struct RawRules {
+
    /// The set of rules, keyed by the reference pattern each rule applies to.
+
    ///
+
    /// Note that a pattern can be fully qualified, e.g. `refs/heads/qa`, as
+
    /// well as a wild-card, e.g. `refs/tags/*`.
+
    #[serde(flatten)]
+
    pub rules: BTreeMap<Pattern, RawRule>,
+
}
+

+
impl RawRules {
+
    /// Returns an iterator over the [`Pattern`] and [`RawRule`] in the set of
+
    /// rules.
+
    pub fn iter(&self) -> impl Iterator<Item = (&Pattern, &RawRule)> {
+
        self.rules.iter()
+
    }
+

+
    /// Add a new [`RawRule`] to the set of rules.
+
    ///
+
    /// Returns the replaced rule, if it existed.
+
    pub fn insert(&mut self, pattern: Pattern, rule: RawRule) -> Option<RawRule> {
+
        self.rules.insert(pattern, rule)
+
    }
+

+
    /// Remove the rule that matches the `pattern` parameter.
+
    ///
+
    /// Returns the rule if it existed.
+
    pub fn remove(&mut self, pattern: &Pattern) -> Option<RawRule> {
+
        self.rules.remove(pattern)
+
    }
+

+
    /// Check to see if there is an exact match for `refname` in the rules.
+
    pub fn exact_match(&self, refname: &Qualified) -> bool {
+
        let refname = refname.as_str();
+
        self.rules
+
            .iter()
+
            .any(|(pattern, _)| pattern.0.as_str() == refname)
+
    }
+

+
    /// Check if the `refname` matches any existing rules, including glob
+
    /// matches.
+
    pub fn matches<'a, 'b>(
+
        &self,
+
        refname: &Qualified<'b>,
+
    ) -> impl Iterator<Item = (&Pattern, &RawRule)> + use<'a, '_, 'b> {
+
        let refname = refname.clone();
+
        self.rules
+
            .iter()
+
            .filter(move |(pattern, _)| pattern.matches(&refname))
+
    }
+
}
+

+
impl Extend<(Pattern, RawRule)> for RawRules {
+
    fn extend<T: IntoIterator<Item = (Pattern, RawRule)>>(&mut self, iter: T) {
+
        self.rules.extend(iter)
+
    }
+
}
+

+
impl From<BTreeMap<Pattern, RawRule>> for RawRules {
+
    fn from(rules: BTreeMap<Pattern, RawRule>) -> Self {
+
        RawRules { rules }
+
    }
+
}
+

+
impl FromIterator<(Pattern, RawRule)> for RawRules {
+
    fn from_iter<T: IntoIterator<Item = (Pattern, RawRule)>>(iter: T) -> Self {
+
        iter.into_iter().collect::<BTreeMap<_, _>>().into()
+
    }
+
}
+

+
impl IntoIterator for RawRules {
+
    type Item = (Pattern, RawRule);
+
    type IntoIter = std::collections::btree_map::IntoIter<Pattern, RawRule>;
+

+
    fn into_iter(self) -> Self::IntoIter {
+
        self.rules.into_iter()
+
    }
+
}
+

+
/// A [`Rule`] that has been validated. See [`Rules`] and [`Rules::matches`] for
+
/// its main usage.
+
///
+
/// N.b. a `ValidRule` can be serialized, however, it cannot be deserialized.
+
/// This is due to the fact that the `allow` field may have a value of
+
/// `delegates`. In those cases the value needs to be looked up via the identity
+
/// document and validated.
+
pub type ValidRule = Rule<ResolvedDelegates, doc::Threshold>;
+

+
impl ValidRule {
+
    /// Initialize a `ValidRule` for the default branch, given by `name`. The
+
    /// rule will contain the single `did` as the allowed DID, and use a
+
    /// threshold of `1`.
+
    ///
+
    /// Note that the serialization of the rule will use the `delegates` token
+
    /// for the rule. E.g.
+
    /// ```json
+
    /// {
+
    ///   "pattern": "refs/heads/main",
+
    ///   "allow": ["did:key:z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi"],
+
    ///   "threshold": 1
+
    /// }
+
    /// ```
+
    ///
+
    /// # Errors
+
    ///
+
    /// If the `name` reference begins with `refs/rad`.
+
    pub fn default_branch(did: Did, name: &git::RefStr) -> Result<(Pattern, Self), PatternError> {
+
        let pattern = Pattern::try_from(git::refs::branch(name).to_owned())?;
+
        let rule = Self {
+
            allow: ResolvedDelegates::Delegates(doc::Delegates::from(did)),
+
            // N.B. this needs to be the minimum since we only have one
+
            // delegate.
+
            threshold: doc::Threshold::MIN,
+
            extensions: json::Map::new(),
+
        };
+
        Ok((pattern, rule))
+
    }
+
}
+

+
impl From<ValidRule> for RawRule {
+
    fn from(rule: ValidRule) -> Self {
+
        let Rule {
+
            allow,
+
            threshold,
+
            extensions,
+
        } = rule;
+
        Self {
+
            allow: allow.into(),
+
            threshold: threshold.into(),
+
            extensions,
+
        }
+
    }
+
}
+

+
/// A representation of a set of allowed DIDs.
+
///
+
/// `Allowed` is used in a `RawRule`.
+
#[derive(Clone, Debug, Default, PartialEq, Eq, Serialize, Deserialize)]
+
pub enum Allowed {
+
    /// Pointer to the identity document's set of delegates.
+
    #[serde(rename = "delegates")]
+
    #[default]
+
    Delegates,
+
    /// Explicit list of allowed DIDs.
+
    ///
+
    /// The elements of the list of allowed DIDs will be made unique, i.e.
+
    /// duplicate DIDs will be discarded.
+
    ///
+
    /// # Validation
+
    ///
+
    /// The list of allowed DIDs, `allowed`, must satisfy:
+
    /// ```text
+
    /// 1 <= allowed.len() <= 255
+
    /// ```
+
    #[serde(untagged)]
+
    Set(NonEmpty<Did>),
+
}
+

+
impl From<NonEmpty<Did>> for Allowed {
+
    fn from(dids: NonEmpty<Did>) -> Self {
+
        Self::Set(dids)
+
    }
+
}
+

+
impl From<Did> for Allowed {
+
    fn from(did: Did) -> Self {
+
        Self::Set(NonEmpty::new(did))
+
    }
+
}
+

+
/// A marker `enum` that is used in a [`ValidRule`].
+
///
+
/// It ensures that a rule that has been deserialized, resolving the `delegates`
+
/// token to a set of DIDs, is still serialized back to the `delegates` token –
+
/// as opposed to serializing it to the set of DIDs.
+
///
+
/// The variants mirror the [`Allowed::Delegates`] and [`Allowed::Set`]
+
/// variants.
+
#[derive(Clone, Debug, PartialEq, Eq, Serialize)]
+
#[serde(into = "Allowed")]
+
pub enum ResolvedDelegates {
+
    Delegates(doc::Delegates),
+
    Set(doc::Delegates),
+
}
+

+
impl From<ResolvedDelegates> for Allowed {
+
    fn from(ds: ResolvedDelegates) -> Self {
+
        match ds {
+
            ResolvedDelegates::Delegates(_) => Self::Delegates,
+
            ResolvedDelegates::Set(ds) => Self::Set(ds.into()),
+
        }
+
    }
+
}
+

+
impl std::ops::Deref for ResolvedDelegates {
+
    type Target = doc::Delegates;
+

+
    fn deref(&self) -> &Self::Target {
+
        match self {
+
            ResolvedDelegates::Delegates(ds) => ds,
+
            ResolvedDelegates::Set(ds) => ds,
+
        }
+
    }
+
}
+

+
/// A reference that has been matched against a [`ValidRule`].
+
///
+
/// Can be constructed by using [`Rules::matches`].
+
#[derive(Debug)]
+
pub struct MatchedRule<'a> {
+
    refname: Qualified<'a>,
+
    rule: ValidRule,
+
}
+

+
impl MatchedRule<'_> {
+
    /// Return the reference name that was used for checking if it was a match.
+
    pub fn refname(&self) -> &Qualified {
+
        &self.refname
+
    }
+

+
    /// Return the rule that was matched.
+
    pub fn rule(&self) -> &ValidRule {
+
        &self.rule
+
    }
+

+
    /// Return the allowed DIDs for the matched rule.
+
    pub fn allowed(&self) -> &doc::Delegates {
+
        self.rule().allowed()
+
    }
+

+
    /// Return the [`doc::Threshold`] for the matched rule.
+
    pub fn threshold(&self) -> &doc::Threshold {
+
        self.rule().threshold()
+
    }
+
}
+

+
/// A set of valid [`Rule`]s, where the set of DIDs and threshold are fully
+
/// resolved and valid. Since the rules are stored in a `BTreeMap` keyed by
+
/// pattern, patterns cannot be duplicated.
+
///
+
/// To construct the set of rules, use [`Rules::from_raw`], which validates a
+
/// set of [`RawRule`]s, and their [`Pattern`] references, into a set of
+
/// [`ValidRule`]s.
+
#[derive(Clone, Debug, Default, PartialEq, Eq, Serialize)]
+
pub struct Rules {
+
    #[serde(flatten)]
+
    rules: BTreeMap<Pattern, ValidRule>,
+
}
+

+
impl FromIterator<(Pattern, ValidRule)> for Rules {
+
    fn from_iter<T: IntoIterator<Item = (Pattern, ValidRule)>>(iter: T) -> Self {
+
        Self {
+
            rules: iter.into_iter().collect(),
+
        }
+
    }
+
}
+

+
impl<'a> IntoIterator for &'a Rules {
+
    type Item = (&'a Pattern, &'a ValidRule);
+
    type IntoIter = std::collections::btree_map::Iter<'a, Pattern, ValidRule>;
+

+
    fn into_iter(self) -> Self::IntoIter {
+
        self.rules.iter()
+
    }
+
}
+

+
impl IntoIterator for Rules {
+
    type Item = (Pattern, ValidRule);
+
    type IntoIter = std::collections::btree_map::IntoIter<Pattern, ValidRule>;
+

+
    fn into_iter(self) -> Self::IntoIter {
+
        self.rules.into_iter()
+
    }
+
}
+

+
impl Extend<(Pattern, ValidRule)> for Rules {
+
    fn extend<T: IntoIterator<Item = (Pattern, ValidRule)>>(&mut self, iter: T) {
+
        self.rules.extend(iter)
+
    }
+
}
+

+
impl From<Rules> for RawRules {
+
    fn from(Rules { rules }: Rules) -> Self {
+
        Self {
+
            rules: rules
+
                .into_iter()
+
                .map(|(pattern, rule)| (pattern, rule.into()))
+
                .collect(),
+
        }
+
    }
+
}
+

+
impl Rules {
+
    /// Returns an iterator over the [`Pattern`] and [`ValidRule`] in the set of
+
    /// rules.
+
    pub fn iter(&self) -> impl Iterator<Item = (&Pattern, &ValidRule)> {
+
        self.rules.iter()
+
    }
+

+
    /// Returns `true` if the set of rules is empty.
+
    pub fn is_empty(&self) -> bool {
+
        self.rules.is_empty()
+
    }
+

+
    /// Construct a set of `Rules` given a set of `RawRule`s.
+
    pub fn from_raw<R>(
+
        rules: impl IntoIterator<Item = (Pattern, RawRule)>,
+
        resolve: &mut R,
+
    ) -> Result<Self, ValidationError>
+
    where
+
        R: Fn() -> doc::Delegates,
+
    {
+
        let valid = rules
+
            .into_iter()
+
            .map(|(pattern, rule)| rule.validate(resolve).map(|rule| (pattern, rule)))
+
            .collect::<Result<_, _>>()?;
+
        Ok(Self { rules: valid })
+
    }
+

+
    /// Return the matching rules for the given `refname`.
+
    pub fn matches<'a>(
+
        &self,
+
        refname: &Qualified<'a>,
+
    ) -> impl Iterator<Item = (&Pattern, &ValidRule)> + use<'a, '_> {
+
        let refname_cloned = refname.clone();
+
        self.rules
+
            .iter()
+
            .filter(move |(pattern, _)| pattern.matches(&refname_cloned))
+
    }
+

+
    /// Match the given `refname`, take the most specific rule, and prepare
+
    /// evaluation as [`Canonical`].
+
    ///
+
    /// N.b. it will find the first rule that is most specific for the given
+
    /// `refname`.
+
    pub fn canonical<'a, 'b>(
+
        &'a self,
+
        refname: Qualified<'b>,
+
        repo: &Repository,
+
    ) -> Result<Option<Canonical<'b, 'a>>, git::raw::Error> {
+
        if let Some((_, rule)) = self.matches(&refname).next() {
+
            Ok(Some(Canonical::new(repo, refname, rule)?))
+
        } else {
+
            Ok(None)
+
        }
+
    }
+
}
+

+
/// A `Rule` defines how a reference or set of references can be made canonical,
+
/// i.e. have a top-level `refs/*` entry – see [`Pattern`].
+
///
+
/// The [`Rule::allowed`] type is generic to allow [`Allowed`] to be used for
+
/// serialization and deserialization; [`Rule::validate`] should be used to get
+
/// a valid rule.
+
///
+
/// Similarly, [`Rule::threshold`] is generic to allow [`doc::Threshold`] to be
+
/// used; again, [`Rule::validate`] should be used to get a valid rule.
+
// N.b. it's safe to derive `Serialize` since we only allow constructing a
+
// `Rule` via `Rule::validate`, and we seal `Deserialize` by ensuring that only
+
// `RawRule` can be deserialized.
+
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize)]
+
#[serde(bound(deserialize = "D: Sealed + Deserialize<'de>, T: Sealed + Deserialize<'de>"))]
+
pub struct Rule<D, T> {
+
    /// The set of allowed DIDs that are considered for voting for this rule.
+
    allow: D,
+
    /// The threshold the votes must pass for the reference(s) to be considered
+
    /// canonical.
+
    threshold: T,
+

+
    /// Optional extensions in rules. This is intended to preserve backwards-
+
    /// and forward-compatibility.
+
    #[serde(skip_serializing_if = "json::Map::is_empty")]
+
    #[serde(flatten)]
+
    extensions: json::Map<String, json::Value>,
+
}
+

+
impl<D, T> Rule<D, T> {
+
    /// Construct a new `Rule` with the given `allow` and `threshold`.
+
    pub fn new(allow: D, threshold: T) -> Self {
+
        Self {
+
            allow,
+
            threshold,
+
            extensions: json::Map::new(),
+
        }
+
    }
+

+
    /// Get the set of DIDs this `Rule` was created with.
+
    pub fn allowed(&self) -> &D {
+
        &self.allow
+
    }
+

+
    /// Get the threshold this `Rule` was created with.
+
    pub fn threshold(&self) -> &T {
+
        &self.threshold
+
    }
+

+
    /// Get the extensions that may have been added to this `Rule`.
+
    pub fn extensions(&self) -> &json::Map<String, json::Value> {
+
        &self.extensions
+
    }
+

+
    /// Merge the provided `extensions` into [`Rule::extensions`].
+
    ///
+
    /// Entries whose keys already exist in [`Rule::extensions`] are
+
    /// overwritten by the provided values.
+
    pub fn add_extensions(&mut self, extensions: impl Into<json::Map<String, json::Value>>) {
+
        self.extensions.extend(extensions.into());
+
    }
+
}
+

+
#[derive(Debug, Error)]
+
pub enum PatternError {
+
    #[error("cannot create rule for '{pattern}' since references under '{prefix}' are protected")]
+
    ProtectedRef {
+
        prefix: RefString,
+
        pattern: QualifiedPattern<'static>,
+
    },
+
}
+

+
#[derive(Debug, Error)]
+
pub enum ValidationError {
+
    #[error(transparent)]
+
    Threshold(#[from] doc::ThresholdError),
+
    #[error(transparent)]
+
    Delegates(#[from] doc::DelegatesError),
+
    #[error("cannot create rule for reserved `rad` references '{pattern}'")]
+
    RadRef { pattern: QualifiedPattern<'static> },
+
}
+

+
#[derive(Debug, Error)]
+
pub enum CanonicalError {
+
    #[error(transparent)]
+
    Git(#[from] git::raw::Error),
+
    #[error(transparent)]
+
    References(#[from] git::ext::Error),
+
}
+

+
#[cfg(test)]
+
#[allow(clippy::unwrap_used)]
+
mod tests {
+
    use std::collections::BTreeMap;
+

+
    use nonempty::nonempty;
+

+
    use crate::crypto::{test::signer::MockSigner, Signer};
+
    use crate::git;
+
    use crate::git::refspec::qualified_pattern;
+
    use crate::git::RefString;
+
    use crate::identity::doc::Doc;
+
    use crate::identity::Visibility;
+
    use crate::node::device::Device;
+
    use crate::rad;
+
    use crate::storage::refs::{IDENTITY_BRANCH, IDENTITY_ROOT, SIGREFS_BRANCH};
+
    use crate::storage::{git::transport, ReadStorage};
+
    use crate::test::{arbitrary, fixtures};
+
    use crate::Storage;
+

+
    use super::*;
+

+
    fn roundtrip(rule: &Rule<Allowed, usize>) {
+
        let json = serde_json::to_string(rule).unwrap();
+
        assert_eq!(
+
            *rule,
+
            serde_json::from_str(&json).unwrap(),
+
            "failed to roundtrip: {json}"
+
        )
+
    }
+

+
    fn did(s: &str) -> Did {
+
        s.parse().unwrap()
+
    }
+

+
    fn pattern(qp: QualifiedPattern<'static>) -> Pattern {
+
        Pattern::try_from(qp).unwrap()
+
    }
+

+
    fn resolve_from_doc(doc: &Doc) -> doc::Delegates {
+
        doc.delegates().clone()
+
    }
+

+
    fn tag(name: RefString, head: git2::Oid, repo: &git2::Repository) -> git::Oid {
+
        let commit = fixtures::commit(name.as_str(), &[head], repo);
+
        let target = repo.find_object(*commit, None).unwrap();
+
        let tagger = repo.signature().unwrap();
+
        repo.tag(name.as_str(), &target, &tagger, name.as_str(), false)
+
            .unwrap()
+
            .into()
+
    }
+

+
    #[test]
+
    fn test_roundtrip() {
+
        let rule1 = Rule::new(Allowed::Delegates, 1);
+
        let rule2 = Rule::new(Allowed::Delegates, 1);
+
        let rule3 = Rule::new(Allowed::Delegates, 1);
+
        let mut rule4 = Rule::new(
+
            Allowed::Set(nonempty![
+
                did("did:key:z6MkpQTLwr8QyADGmBGAMsGttvWzP4PojUMs4hREZW5T5E3K"),
+
                did("did:key:z6MknG1nYDftMYUQ7eTBSGgqB2PL1xK5Pif33J3sRym3e8ye"),
+
            ]),
+
            2,
+
        );
+
        rule4.add_extensions(
+
            serde_json::json!({
+
                "foo": "bar",
+
                "quux": 5,
+
            })
+
            .as_object()
+
            .cloned()
+
            .unwrap(),
+
        );
+
        roundtrip(&rule1);
+
        roundtrip(&rule2);
+
        roundtrip(&rule3);
+
        roundtrip(&rule4);
+
    }
+

+
    #[test]
+
    fn test_deserialization() {
+
        let examples = r#"
+
{
+
  "refs/heads/main": {
+
    "threshold": 2,
+
    "allow": [
+
      "did:key:z6MkpQTLwr8QyADGmBGAMsGttvWzP4PojUMs4hREZW5T5E3K",
+
      "did:key:z6MknG1nYDftMYUQ7eTBSGgqB2PL1xK5Pif33J3sRym3e8ye"
+
    ]
+
  },
+
  "refs/tags/releases/*": {
+
    "threshold": 2,
+
    "allow": [
+
      "did:key:z6MknLWe8A7UJxvTfY36JcB8XrP1KTLb5HFTX38hEmdY3b56",
+
      "did:key:z6Mkq2E5Se5H9gk1DsL1EMwR2t4CqSg3GFkNN2UeG4FNqXoP",
+
      "did:key:z6MkqRmXW5fbP9hJ1Y8j2N4CgVdJ2XJ6TsyXYf3FQ2NJgXax"
+
    ]
+
  },
+
  "refs/heads/development": {
+
    "threshold": 1,
+
    "allow": [
+
      "did:key:z6MkhH7ENYE62JAjTiRZPU71MGZ6xCwnbyHHWfrBu3fr6PVG"
+
    ]
+
  },
+
  "refs/heads/release/*": {
+
    "threshold": 1,
+
    "allow": "delegates"
+
  }
+
}
+
 "#;
+
        let expected = [
+
            (
+
                pattern(qualified_pattern!("refs/heads/main")),
+
                Rule::new(
+
                    Allowed::Set(nonempty![
+
                        did("did:key:z6MkpQTLwr8QyADGmBGAMsGttvWzP4PojUMs4hREZW5T5E3K"),
+
                        did("did:key:z6MknG1nYDftMYUQ7eTBSGgqB2PL1xK5Pif33J3sRym3e8ye"),
+
                    ]),
+
                    2,
+
                ),
+
            ),
+
            (
+
                pattern(qualified_pattern!("refs/tags/releases/*")),
+
                Rule::new(
+
                    Allowed::Set(nonempty![
+
                        did("did:key:z6MknLWe8A7UJxvTfY36JcB8XrP1KTLb5HFTX38hEmdY3b56"),
+
                        did("did:key:z6Mkq2E5Se5H9gk1DsL1EMwR2t4CqSg3GFkNN2UeG4FNqXoP"),
+
                        did("did:key:z6MkqRmXW5fbP9hJ1Y8j2N4CgVdJ2XJ6TsyXYf3FQ2NJgXax")
+
                    ]),
+
                    2,
+
                ),
+
            ),
+
            (
+
                pattern(qualified_pattern!("refs/heads/development")),
+
                Rule::new(
+
                    Allowed::Set(nonempty![did(
+
                        "did:key:z6MkhH7ENYE62JAjTiRZPU71MGZ6xCwnbyHHWfrBu3fr6PVG"
+
                    )]),
+
                    1,
+
                ),
+
            ),
+
            (
+
                pattern(qualified_pattern!("refs/heads/release/*")),
+
                Rule::new(Allowed::Delegates, 1),
+
            ),
+
        ]
+
        .into_iter()
+
        .collect::<RawRules>();
+
        let rules = serde_json::from_str::<BTreeMap<Pattern, RawRule>>(examples)
+
            .unwrap()
+
            .into();
+
        assert_eq!(expected, rules)
+
    }
+

+
    #[test]
+
    fn test_order() {
+
        assert!(
+
            pattern(qualified_pattern!("a/b/c/d/*")) < pattern(qualified_pattern!("*/x")),
+
            "example 1"
+
        );
+
        assert!(
+
            pattern(qualified_pattern!("a")) < pattern(qualified_pattern!("*")),
+
            "example 2.a"
+
        );
+
        assert!(
+
            pattern(qualified_pattern!("abc")) < pattern(qualified_pattern!("a*")),
+
            "example 2.a"
+
        );
+
        assert!(
+
            pattern(qualified_pattern!("a/b/*")) < pattern(qualified_pattern!("a/*/c")),
+
            "example 2.a"
+
        );
+
        assert!(
+
            pattern(qualified_pattern!("aa*")) < pattern(qualified_pattern!("a*")),
+
            "example 2.b.A"
+
        );
+
        assert!(
+
            pattern(qualified_pattern!("a*b")) < pattern(qualified_pattern!("a*")),
+
            "example 2.b.B"
+
        );
+

+
        let pattern01 = pattern(qualified_pattern!("refs/tags/*"));
+
        let pattern02 = pattern(qualified_pattern!("refs/tags/v1"));
+
        let pattern04 = pattern(qualified_pattern!("refs/tags/v1.0.0"));
+
        let pattern05 = pattern(qualified_pattern!("refs/tags/release/v1.0.0"));
+
        let pattern03 = pattern(qualified_pattern!("refs/heads/main"));
+
        let pattern06 = pattern(qualified_pattern!("refs/tags/*/v1.0.0"));
+

+
        let pattern07 = pattern(qualified_pattern!("refs/tags/x*"));
+
        let pattern08 = pattern(qualified_pattern!("refs/tags/xx*"));
+

+
        let pattern09 = pattern(qualified_pattern!("refs/foos/*"));
+

+
        let pattern10 = pattern(qualified_pattern!("a"));
+
        let pattern11 = pattern(qualified_pattern!("b"));
+

+
        let pattern12 = pattern(qualified_pattern!("a/*"));
+
        let pattern13 = pattern(qualified_pattern!("b/*"));
+

+
        let pattern14 = pattern(qualified_pattern!("a/*/ab"));
+
        let pattern15 = pattern(qualified_pattern!("a/*/a"));
+

+
        let pattern16 = pattern(qualified_pattern!("a/*/b"));
+
        let pattern17 = pattern(qualified_pattern!("a/*/a"));
+

+
        // Test priority for path specificity
+
        assert!(
+
            pattern06 < pattern02,
+
            "match for 06 is always more specific since it has more components"
+
        );
+
        assert!(pattern02 < pattern01, "match for 02 is also match for 01");
+
        assert!(pattern08 < pattern07, "match for 08 is also match for 07");
+
        // Test equality
+
        assert!(pattern02 == pattern02);
+
        // Test lexicographical fallback when paths are equally specific
+
        assert!(pattern02 < pattern04);
+
        assert!(pattern03 < pattern01);
+
        assert!(pattern09 < pattern01);
+
        assert!(pattern10 < pattern11);
+
        assert!(pattern12 < pattern13);
+
        assert!(pattern15 < pattern14);
+
        assert!(
+
            pattern17 < pattern16,
+
            "matches have same length, but lexicographically, 'a' < 'b'"
+
        );
+

+
        // Test example from docs
+
        let pattern18 = pattern(qualified_pattern!("refs/tags/release/candidates/*"));
+
        let pattern19 = pattern(qualified_pattern!("refs/tags/release/*"));
+
        let pattern20 = pattern(qualified_pattern!("refs/tags/*"));
+

+
        assert!(pattern18 < pattern19);
+
        assert!(pattern19 < pattern20);
+

+
        let pattern21 = pattern(qualified_pattern!("refs/heads/dev"));
+

+
        assert!(pattern21 < pattern03);
+

+
        let mut patterns = [
+
            pattern01.clone(),
+
            pattern02.clone(),
+
            pattern03.clone(),
+
            pattern04.clone(),
+
            pattern05.clone(),
+
            pattern06.clone(),
+
        ];
+
        patterns.sort();
+

+
        assert_eq!(
+
            patterns,
+
            [pattern05, pattern06, pattern03, pattern02, pattern04, pattern01]
+
        );
+
    }
+

+
    #[test]
+
    fn test_deserialize_extensions() {
+
        let example = r#"
+
{
+
  "threshold": 2,
+
  "allow": [
+
    "did:key:z6MkpQTLwr8QyADGmBGAMsGttvWzP4PojUMs4hREZW5T5E3K",
+
    "did:key:z6MknG1nYDftMYUQ7eTBSGgqB2PL1xK5Pif33J3sRym3e8ye"
+
  ],
+
  "foo": "bar",
+
  "quux": 5
+
}
+
"#;
+
        let rule = serde_json::from_str::<Rule<Allowed, usize>>(example).unwrap();
+
        assert!(!rule.extensions().is_empty());
+
        let extensions = rule.extensions();
+
        assert_eq!(
+
            extensions.get("foo"),
+
            Some(serde_json::Value::String("bar".to_string())).as_ref()
+
        );
+
        assert_eq!(
+
            extensions.get("quux"),
+
            Some(serde_json::Value::Number(5.into())).as_ref()
+
        );
+
    }
+

+
    #[test]
+
    fn test_rule_validate_success() {
+
        let doc = arbitrary::gen::<Doc>(1);
+
        let delegates = Allowed::Set(doc.delegates().as_ref().clone());
+
        let threshold = doc.majority();
+

+
        let rule = Rule::new(delegates, threshold);
+
        let result = rule.validate(&mut || resolve_from_doc(&doc));
+
        assert!(result.is_ok(), "failed to validate doc: {result:?}");
+

+
        let rule = Rule::new(Allowed::Delegates, 1);
+
        let result = rule.validate(&mut || resolve_from_doc(&doc));
+
        assert!(result.is_ok(), "failed to validate doc: {result:?}");
+
    }
+

+
    #[test]
+
    fn test_rule_validate_failures() {
+
        let doc = arbitrary::gen::<Doc>(1);
+
        let pattern = pattern(qualified_pattern!("refs/heads/main"));
+

+
        assert!(matches!(
+
            Rule::new(Allowed::Delegates, 256).validate(&mut || resolve_from_doc(&doc)),
+
            Err(ValidationError::Threshold(_))
+
        ));
+

+
        let threshold = doc.delegates().len().saturating_add(1);
+
        assert!(matches!(
+
            Rule::new(Allowed::Delegates, threshold).validate(&mut || resolve_from_doc(&doc)),
+
            Err(ValidationError::Threshold(_))
+
        ));
+

+
        let delegates = NonEmpty::from_vec(arbitrary::vec::<Did>(256)).unwrap();
+
        assert!(matches!(
+
            Rule::new(delegates.into(), 1).validate(&mut || resolve_from_doc(&doc)),
+
            Err(ValidationError::Delegates(_))
+
        ));
+

+
        let delegates = nonempty![
+
            did("did:key:z6MknLWe8A7UJxvTfY36JcB8XrP1KTLb5HFTX38hEmdY3b56"),
+
            did("did:key:z6MknLWe8A7UJxvTfY36JcB8XrP1KTLb5HFTX38hEmdY3b56")
+
        ];
+
        let expected = Rule {
+
            allow: ResolvedDelegates::Set(
+
                doc::Delegates::new(nonempty![did(
+
                    "did:key:z6MknLWe8A7UJxvTfY36JcB8XrP1KTLb5HFTX38hEmdY3b56"
+
                )])
+
                .unwrap(),
+
            ),
+
            threshold: doc::Threshold::MIN,
+
            extensions: json::Map::new(),
+
        };
+
        assert_eq!(
+
            Rule::new(delegates.into(), 1)
+
                .validate(&mut || resolve_from_doc(&doc))
+
                .unwrap(),
+
            expected,
+
        );
+

+
        // Duplicate rules are overwritten
+
        let rules = vec![
+
            (pattern.clone(), Rule::new(Allowed::Delegates, 1)),
+
            (
+
                pattern.clone(),
+
                Rule::new(doc.delegates().as_ref().clone().into(), 1),
+
            ),
+
        ];
+
        let expected = [(
+
            pattern,
+
            Rule::new(
+
                ResolvedDelegates::Set(doc.delegates().clone()),
+
                doc::Threshold::MIN,
+
            ),
+
        )]
+
        .into_iter()
+
        .collect::<Rules>();
+
        assert_eq!(
+
            Rules::from_raw(rules, &mut || resolve_from_doc(&doc)).unwrap(),
+
            expected
+
        );
+
    }
+

+
    #[test]
+
    fn test_canonical() {
+
        let tempdir = tempfile::tempdir().unwrap();
+
        let storage = Storage::open(tempdir.path().join("storage"), fixtures::user()).unwrap();
+

+
        transport::local::register(storage.clone());
+

+
        let delegate = Device::mock_from_seed([0xff; 32]);
+
        let contributor = MockSigner::from_seed([0xfe; 32]);
+
        let (repo, head) = fixtures::repository(tempdir.path().join("working"));
+
        let (rid, doc, _) = rad::init(
+
            &repo,
+
            "heartwood".try_into().unwrap(),
+
            "Radicle Heartwood Protocol & Stack",
+
            git::refname!("master"),
+
            Visibility::default(),
+
            &delegate,
+
            &storage,
+
        )
+
        .unwrap();
+

+
        let mut doc = doc.edit();
+
        // Ensure there is a second delegate for testing overlapping rules
+
        doc.delegate(contributor.public_key().into());
+

+
        // Create tags and keep track of their OIDs
+
        //
+
        // follows the `refs/tags/release/candidates/*` rule
+
        let failing_tag = git::refname!("release/candidates/v1.0");
+
        let tags = [
+
            // follows the `refs/tags/*` rule
+
            git::refname!("v1.0"),
+
            // follows the `refs/tags/release/*` rule
+
            git::refname!("release/v1.0"),
+
            failing_tag.clone(),
+
            // follows the `refs/tags/*` rule
+
            git::refname!("qa/v1.0"),
+
        ]
+
        .into_iter()
+
        .map(|name| {
+
            (
+
                git::lit::refs_tags(name.clone()).into(),
+
                tag(name, head, &repo),
+
            )
+
        })
+
        .collect::<BTreeMap<Qualified, _>>();
+

+
        git::push(
+
            &repo,
+
            &rad::REMOTE_NAME,
+
            [
+
                (
+
                    &git::qualified!("refs/tags/v1.0"),
+
                    &git::qualified!("refs/tags/v1.0"),
+
                ),
+
                (
+
                    &git::qualified!("refs/tags/release/v1.0"),
+
                    &git::qualified!("refs/tags/release/v1.0"),
+
                ),
+
                (
+
                    &git::qualified!("refs/tags/release/candidates/v1.0"),
+
                    &git::qualified!("refs/tags/release/candidates/v1.0"),
+
                ),
+
                (
+
                    &git::qualified!("refs/tags/qa/v1.0"),
+
                    &git::qualified!("refs/tags/qa/v1.0"),
+
                ),
+
            ],
+
        )
+
        .unwrap();
+

+
        let rules = Rules::from_raw(
+
            [
+
                (
+
                    pattern(qualified_pattern!("refs/tags/*")),
+
                    Rule::new(Allowed::Delegates, 1),
+
                ),
+
                (
+
                    pattern(qualified_pattern!("refs/tags/release/*")),
+
                    Rule::new(Allowed::Delegates, 1),
+
                ),
+
                // Ensure that none of the other rules apply by requiring
+
                // both delegates to reach the quorum for the
+
                // `refs/tags/release/candidates/v1.0` reference
+
                (
+
                    pattern(qualified_pattern!("refs/tags/release/candidates/*")),
+
                    Rule::new(Allowed::Delegates, 2),
+
                ),
+
            ],
+
            &mut || resolve_from_doc(&doc.clone().verified().unwrap()),
+
        )
+
        .unwrap();
+

+
        // All tags except the candidates tag should succeed in getting
+
        // their canonical commit.
+
        let stored = storage.repository(rid).unwrap();
+
        let failing = git::Qualified::from(git::lit::refs_tags(failing_tag));
+
        for (refname, oid) in tags.into_iter() {
+
            let canonical = rules
+
                .canonical(refname.clone(), &stored)
+
                .unwrap()
+
                .unwrap_or_else(|| {
+
                    panic!("there should be a matching rule for {refname}, rules: {rules:#?}")
+
                });
+
            if refname == failing {
+
                assert!(canonical.quorum(&repo).is_err());
+
            } else {
+
                assert_eq!(
+
                    canonical
+
                        .quorum(&repo)
+
                        .unwrap_or_else(|e| panic!("quorum error for {refname}: {e}")),
+
                    (refname, oid),
+
                )
+
            }
+
        }
+
    }
+

+
    #[test]
+
    fn test_special_branches() {
+
        assert!(Pattern::try_from((*IDENTITY_BRANCH).clone()).is_err());
+
        assert!(Pattern::try_from((*SIGREFS_BRANCH).clone()).is_err());
+
        assert!(Pattern::try_from((*IDENTITY_ROOT).clone()).is_err());
+
    }
+
}
modified crates/radicle/src/identity.rs
@@ -1,8 +1,10 @@
#![warn(clippy::unwrap_used)]
+
pub mod crefs;
pub mod did;
pub mod doc;
pub mod project;

+
pub use crefs::CanonicalRefs;
pub use crypto::PublicKey;
pub use did::Did;
pub use doc::{Doc, DocAt, DocError, IdError, PayloadError, RawDoc, RepoId, Visibility};
added crates/radicle/src/identity/crefs.rs
@@ -0,0 +1,124 @@
+
use serde::{Deserialize, Serialize};
+

+
use crate::git::canonical::{
+
    rules::{self, RawRules, Rules, ValidationError},
+
    ValidRule,
+
};
+

+
use super::doc::{Delegates, Payload};
+

+
/// Implemented by any data type or store that can return [`CanonicalRefs`] and
+
/// [`RawCanonicalRefs`].
+
pub trait GetCanonicalRefs {
+
    type Error: std::error::Error + Send + Sync + 'static;
+

+
    /// Retrieve the [`CanonicalRefs`], returning `Some` if they are
+
    /// present, and `None` if they are missing.
+
    ///
+
    /// [`Self::Error`] is used to return any domain-specific error by the
+
    /// implementing type.
+
    fn canonical_refs(&self) -> Result<Option<CanonicalRefs>, Self::Error>;
+

+
    /// Retrieve the [`RawCanonicalRefs`], returning `Some` if they are
+
    /// present, and `None` if they are missing.
+
    ///
+
    /// [`Self::Error`] is used to return any domain-specific error by the
+
    /// implementing type.
+
    fn raw_canonical_refs(&self) -> Result<Option<RawCanonicalRefs>, Self::Error>;
+

+
    /// Retrieve the [`CanonicalRefs`], and in the case of `None`, use the
+
    /// `default` function to return a default set of [`CanonicalRefs`].
+
    fn canonical_refs_or_default<D, E>(&self, default: D) -> Result<CanonicalRefs, E>
+
    where
+
        D: Fn() -> Result<CanonicalRefs, E>,
+
        E: From<Self::Error>,
+
    {
+
        match self.canonical_refs()? {
+
            Some(crefs) => Ok(crefs),
+
            None => Ok(default()?),
+
        }
+
    }
+
}
+

+
/// Configuration for canonical references and their rules.
+
///
+
/// `RawCanonicalRefs` are verified into [`CanonicalRefs`].
+
#[derive(Default, Debug, Clone, PartialEq, Eq, Deserialize)]
+
#[serde(rename_all = "camelCase")]
+
pub struct RawCanonicalRefs {
+
    rules: RawRules,
+
}
+

+
impl RawCanonicalRefs {
+
    /// Construct a new [`RawCanonicalRefs`] from a set of [`RawRules`].
+
    pub fn new(rules: RawRules) -> Self {
+
        Self { rules }
+
    }
+

+
    /// Return the [`RawRules`].
+
    pub fn raw_rules(&self) -> &RawRules {
+
        &self.rules
+
    }
+

+
    /// Validate the [`RawCanonicalRefs`] into a set of [`CanonicalRefs`].
+
    pub fn try_into_canonical_refs<R>(
+
        self,
+
        resolve: &mut R,
+
    ) -> Result<CanonicalRefs, ValidationError>
+
    where
+
        R: Fn() -> Delegates,
+
    {
+
        let rules = Rules::from_raw(self.rules, resolve)?;
+
        Ok(CanonicalRefs::new(rules))
+
    }
+
}
+

+
/// Configuration for canonical references and their [`Rules`].
+
///
+
/// [`CanonicalRefs`] can be converted into a [`Payload`] using its [`From`]
+
/// implementation.
+
#[derive(Debug, Clone, PartialEq, Eq, Serialize)]
+
#[serde(rename_all = "camelCase")]
+
pub struct CanonicalRefs {
+
    rules: Rules,
+
}
+

+
impl CanonicalRefs {
+
    /// Construct a new [`CanonicalRefs`] from a set of [`Rules`].
+
    pub fn new(rules: Rules) -> Self {
+
        CanonicalRefs { rules }
+
    }
+

+
    /// Return the [`Rules`].
+
    pub fn rules(&self) -> &Rules {
+
        &self.rules
+
    }
+
}
+

+
impl FromIterator<(rules::Pattern, ValidRule)> for CanonicalRefs {
+
    fn from_iter<T: IntoIterator<Item = (rules::Pattern, ValidRule)>>(iter: T) -> Self {
+
        Self::new(Rules::from_iter(iter))
+
    }
+
}
+

+
impl Extend<(rules::Pattern, ValidRule)> for CanonicalRefs {
+
    fn extend<T: IntoIterator<Item = (rules::Pattern, ValidRule)>>(&mut self, iter: T) {
+
        self.rules.extend(iter)
+
    }
+
}
+

+
#[derive(Debug, thiserror::Error)]
+
#[non_exhaustive]
+
pub enum CanonicalRefsPayloadError {
+
    #[error("could not convert canonical references to JSON: {0}")]
+
    Json(#[source] serde_json::Error),
+
}
+

+
impl TryFrom<CanonicalRefs> for Payload {
+
    type Error = CanonicalRefsPayloadError;
+

+
    fn try_from(crefs: CanonicalRefs) -> Result<Self, Self::Error> {
+
        let value = serde_json::to_value(crefs).map_err(CanonicalRefsPayloadError::Json)?;
+
        Ok(Self::from(value))
+
    }
+
}
modified crates/radicle/src/identity/doc.rs
@@ -1,3 +1,5 @@
+
pub mod update;
+

mod id;

use std::collections::{BTreeMap, BTreeSet};
@@ -19,6 +21,7 @@ use crate::cob::identity;
use crate::crypto;
use crate::crypto::Signature;
use crate::git;
+
use crate::git::canonical::rules;
use crate::identity::{project::Project, Did};
use crate::node::device::Device;
use crate::storage;
@@ -27,6 +30,9 @@ use crate::storage::{ReadRepository, RepositoryError};
pub use crypto::PublicKey;
pub use id::*;

+
use super::crefs::{self, RawCanonicalRefs};
+
use super::CanonicalRefs;
+

/// Path to the identity document in the identity branch.
pub static PATH: LazyLock<&Path> = LazyLock::new(|| Path::new("radicle.json"));
/// Maximum length of a string in the identity document.
@@ -73,6 +79,14 @@ impl DocError {
    }
}

+
#[derive(Debug, Error)]
+
pub enum DefaultBranchRuleError {
+
    #[error("could not create rule due to the reference name being invalid: {0}")]
+
    Pattern(#[from] rules::PatternError),
+
    #[error("could not load `xyz.radicle.project` to get default branch name: {0}")]
+
    Payload(#[from] PayloadError),
+
}
+

/// The version number of the identity document.
///
/// It is used to ensure compatibility when parsing identity documents.
@@ -215,6 +229,14 @@ impl PayloadId {
                .expect("PayloadId::project: type name is valid"),
        )
    }
+

+
    pub fn canonical_refs() -> Self {
+
        Self(
+
            // SAFETY: We know this is valid.
+
            TypeName::from_str("xyz.radicle.crefs")
+
                .expect("PayloadId::canonical_refs: type name is valid"),
+
        )
+
    }
}

#[derive(Debug, Error)]
@@ -477,6 +499,12 @@ impl AsRef<NonEmpty<Did>> for Delegates {
    }
}

+
impl From<Did> for Delegates {
+
    fn from(did: Did) -> Self {
+
        Self(NonEmpty::new(did))
+
    }
+
}
+

impl TryFrom<Vec<Did>> for Delegates {
    type Error = DelegatesError;

@@ -530,6 +558,11 @@ impl Delegates {
        self.0.contains(did)
    }

+
    /// Check if the `did` is the only delegate of the repository.
+
    pub fn is_only(&self, did: &Did) -> bool {
+
        self.0.tail.is_empty() && &self.0.head == did
+
    }
+

    /// Get the number of delegates in the set.
    pub fn len(&self) -> usize {
        self.0.len()
@@ -709,6 +742,19 @@ impl Doc {
        Ok(proj)
    }

+
    pub fn default_branch_rule(
+
        &self,
+
    ) -> Result<(rules::Pattern, rules::ValidRule), DefaultBranchRuleError> {
+
        let proj = self.project()?;
+
        let refname = proj.default_branch();
+
        let pattern = rules::Pattern::try_from(git::refs::branch(refname).to_owned())?;
+
        let rule = rules::Rule::new(
+
            rules::ResolvedDelegates::Delegates(self.delegates.clone()),
+
            self.threshold,
+
        );
+
        Ok((pattern, rule))
+
    }
+

    /// Return the associated [`Visibility`] of this document.
    pub fn visibility(&self) -> &Visibility {
        &self.visibility
@@ -866,6 +912,66 @@ impl Doc {
    }
}

+
#[derive(Debug, Error)]
+
pub enum CanonicalRefsError {
+
    #[error(transparent)]
+
    Json(#[from] serde_json::Error),
+
    #[error(transparent)]
+
    CanonicalRefs(#[from] rules::ValidationError),
+
    #[error(transparent)]
+
    DefaultBranch(#[from] DefaultBranchRuleError),
+
}
+

+
impl crefs::GetCanonicalRefs for Doc {
+
    type Error = CanonicalRefsError;
+

+
    fn canonical_refs(&self) -> Result<Option<CanonicalRefs>, Self::Error> {
+
        self.raw_canonical_refs().and_then(|raw| {
+
            raw.map(|raw| {
+
                raw.try_into_canonical_refs(&mut || self.delegates.clone())
+
                    .map_err(CanonicalRefsError::from)
+
                    .and_then(|mut crefs| {
+
                        self.default_branch_rule()
+
                            .map_err(CanonicalRefsError::from)
+
                            .map(|rule| {
+
                                crefs.extend([rule]);
+
                                crefs
+
                            })
+
                    })
+
            })
+
            .transpose()
+
        })
+
    }
+

+
    fn raw_canonical_refs(&self) -> Result<Option<RawCanonicalRefs>, Self::Error> {
+
        let value = self.payload.get(&PayloadId::canonical_refs());
+
        let crefs = value
+
            .map(|value| {
+
                serde_json::from_value((**value).clone()).map_err(CanonicalRefsError::from)
+
            })
+
            .transpose()?;
+
        Ok(crefs)
+
    }
+
}
+

+
impl crefs::GetCanonicalRefs for RawDoc {
+
    type Error = CanonicalRefsError;
+

+
    fn canonical_refs(&self) -> Result<Option<CanonicalRefs>, Self::Error> {
+
        Ok(None)
+
    }
+

+
    fn raw_canonical_refs(&self) -> Result<Option<RawCanonicalRefs>, Self::Error> {
+
        let value = self.payload.get(&PayloadId::canonical_refs());
+
        let crefs = value
+
            .map(|value| {
+
                serde_json::from_value((**value).clone()).map_err(CanonicalRefsError::from)
+
            })
+
            .transpose()?;
+
        Ok(crefs)
+
    }
+
}
+

#[cfg(test)]
#[allow(clippy::unwrap_used)]
mod test {
added crates/radicle/src/identity/doc/update.rs
@@ -0,0 +1,360 @@
+
pub mod error;
+

+
use std::{collections::BTreeSet, str::FromStr};
+

+
use serde_json as json;
+

+
use crate::{
+
    git,
+
    identity::crefs::GetCanonicalRefs as _,
+
    prelude::Did,
+
    storage::{refs, ReadRepository, RepositoryError},
+
};
+

+
use super::{Doc, PayloadError, PayloadId, RawDoc, Visibility};
+

+
/// [`EditVisibility`] allows the visibility of a [`RawDoc`] to be edited using
+
/// the [`visibility`] function.
+
///
+
/// Note that this differs from [`Visibility`] since the
+
/// [`EditVisibility::Private`] variant does not hold the allowed set of
+
/// [`Did`]s.
+
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq)]
+
pub enum EditVisibility {
+
    #[default]
+
    Public,
+
    Private,
+
}
+

+
impl FromStr for EditVisibility {
+
    type Err = error::ParseEditVisibility;
+

+
    fn from_str(s: &str) -> Result<Self, Self::Err> {
+
        match s {
+
            "public" => Ok(EditVisibility::Public),
+
            "private" => Ok(EditVisibility::Private),
+
            _ => Err(error::ParseEditVisibility(s.to_owned())),
+
        }
+
    }
+
}
+

+
/// Change the visibility of the [`RawDoc`], using the provided
+
/// [`EditVisibility`].
+
pub fn visibility(mut raw: RawDoc, edit: EditVisibility) -> RawDoc {
+
    match (&mut raw.visibility, edit) {
+
        (Visibility::Public, EditVisibility::Public) => raw,
+
        (Visibility::Private { .. }, EditVisibility::Private) => raw,
+
        (Visibility::Public, EditVisibility::Private) => {
+
            raw.visibility = Visibility::private([]);
+
            raw
+
        }
+
        (Visibility::Private { .. }, EditVisibility::Public) => {
+
            raw.visibility = Visibility::Public;
+
            raw
+
        }
+
    }
+
}
+

+
/// Change the `allow` set of a document if the visibility is set to
+
/// [`Visibility::Private`].
+
///
+
/// All `Did`s in the `allow` set are added to the set, while all `Did`s in the
+
/// `disallow` set are removed from the set.
+
///
+
/// # Errors
+
///
+
/// This will fail when `allow` and `disallow` are not disjoint, i.e. they
+
/// share at least one `Did`.
+
///
+
/// This will fail when the [`Visibility`] of the document is
+
/// [`Visibility::Public`].
+
pub fn privacy_allow_list(
+
    mut raw: RawDoc,
+
    allow: BTreeSet<Did>,
+
    disallow: BTreeSet<Did>,
+
) -> Result<RawDoc, error::PrivacyAllowList> {
+
    if allow.is_empty() && disallow.is_empty() {
+
        return Ok(raw);
+
    }
+

+
    if !allow.is_disjoint(&disallow) {
+
        let overlap = allow
+
            .intersection(&disallow)
+
            .map(Did::to_string)
+
            .collect::<Vec<_>>();
+
        return Err(error::PrivacyAllowList::Overlapping(overlap));
+
    }
+

+
    match &mut raw.visibility {
+
        Visibility::Public => Err(error::PrivacyAllowList::PublicVisibility),
+
        Visibility::Private { allow: existing } => {
+
            for did in allow {
+
                existing.insert(did);
+
            }
+
            for did in disallow {
+
                existing.remove(&did);
+
            }
+
            Ok(raw)
+
        }
+
    }
+
}
+

+
/// Change the delegates of the document and perform some verification based on
+
/// the new set of delegates.
+
///
+
/// The set of `additions` is added to the delegates, while the set of
+
/// `removals` is removed from the delegates. Note that `removals` will take
+
/// precedence over the additions, i.e. if an addition and removal overlap, then
+
/// the [`Did`] will not be in the final set.
+
///
+
/// The result is either the updated [`RawDoc`] or a set of
+
/// [`error::DelegateVerification`] errors – which may be reported by the caller
+
/// to provide better error messaging.
+
///
+
/// # Errors
+
///
+
/// This will fail if an operation using the repository fails.
+
pub fn delegates<S>(
+
    mut raw: RawDoc,
+
    additions: Vec<Did>,
+
    removals: Vec<Did>,
+
    repo: &S,
+
) -> Result<Result<RawDoc, Vec<error::DelegateVerification>>, RepositoryError>
+
where
+
    S: ReadRepository,
+
{
+
    if additions.is_empty() && removals.is_empty() {
+
        return Ok(Ok(raw));
+
    }
+

+
    raw.delegates = raw
+
        .delegates
+
        .into_iter()
+
        .chain(additions)
+
        .filter(|d| !removals.contains(d))
+
        .collect::<Vec<_>>();
+
    match verify_delegates(&raw, repo)? {
+
        Some(errors) => Ok(Err(errors)),
+
        None => Ok(Ok(raw)),
+
    }
+
}
+

+
// TODO(finto): I think this API would likely be much nicer if we use [JSON Patch][patch] and [JSON Merge Patch][merge]
+
//
+
// [patch]: https://datatracker.ietf.org/doc/html/rfc6902
+
// [merge]: https://datatracker.ietf.org/doc/html/rfc7396
+
/// Change the payload of the document, using the set of triples:
+
///
+
///   - [`PayloadId`]: the identifier for the document [`Payload`]
+
///   - [`String`]: the key within the [`Payload`] that is being updated
+
///   - [`json::Value`]: the value to update the [`Payload`]
+
///
+
/// # Errors
+
///
+
/// This fails if one of the [`PayloadId`]s does not point to a JSON object as
+
/// its value.
+
///
+
/// [`Payload`]: super::Payload
+
pub fn payload(
+
    mut raw: RawDoc,
+
    payload: Vec<(PayloadId, String, json::Value)>,
+
) -> Result<RawDoc, error::PayloadError> {
+
    for (id, key, val) in payload {
+
        if let Some(ref mut payload) = raw.payload.get_mut(&id) {
+
            if let Some(obj) = payload.as_object_mut() {
+
                if val.is_null() {
+
                    obj.remove(&key);
+
                } else {
+
                    obj.insert(key, val);
+
                }
+
            } else {
+
                return Err(error::PayloadError::ExpectedObject { id });
+
            }
+
        } else {
+
            raw.payload
+
                .insert(id, serde_json::json!({ key: val }).into());
+
        }
+
    }
+
    Ok(raw)
+
}
+

+
/// Verify the document.
+
///
+
/// This performs the verification of the [`RawDoc`] into the [`Doc`],
+
/// while also checking that the [`Project`] and [`CanonicalRefs`] payloads
+
/// deserialize correctly.
+
///
+
/// [`Project`]: crate::identity::Project
+
/// [`CanonicalRefs`]: crate::identity::CanonicalRefs
+
pub fn verify(raw: RawDoc) -> Result<Doc, error::DocVerification> {
+
    let proposal = raw.clone().verified()?;
+
    // Verify that the project payload is valid
+
    // TODO(finto): perhaps this should be handled by JSON Schemas instead
+
    let project = match proposal.project() {
+
        Ok(project) => Some(project),
+
        Err(PayloadError::NotFound(_)) => None,
+
        Err(PayloadError::Json(e)) => {
+
            return Err(error::DocVerification::PayloadError {
+
                id: PayloadId::project(),
+
                err: e.to_string(),
+
            })
+
        }
+
    };
+
    // Ensure that if we have canonical reference rules and a project, that no
+
    // rule exists for the default branch. This rule must be synthesized when
+
    // constructing the canonical reference rules.
+
    match raw
+
        .raw_canonical_refs()
+
        .map(|rcrefs| rcrefs.and_then(|c| project.map(|p| (c, p))))
+
    {
+
        Ok(Some((crefs, project))) => {
+
            let default = git::Qualified::from(git::lit::refs_heads(project.default_branch()));
+
            let matches = crefs
+
                .raw_rules()
+
                .matches(&default)
+
                .map(|(pattern, _)| pattern.to_string())
+
                .collect::<Vec<_>>();
+
            if !matches.is_empty() {
+
                return Err(error::DocVerification::DisallowDefault { matches, default });
+
            }
+
        }
+
        _ => { /* we validate below */ }
+
    }
+
    // Verify that the canonical references payload is valid
+
    if let Err(e) = proposal.canonical_refs() {
+
        return Err(error::DocVerification::PayloadError {
+
            id: PayloadId::canonical_refs(),
+
            err: e.to_string(),
+
        });
+
    }
+
    Ok(proposal)
+
}
+
+fn verify_delegates<S>(
+    proposal: &RawDoc,
+    repo: &S,
+) -> Result<Option<Vec<error::DelegateVerification>>, RepositoryError>
+where
+    S: ReadRepository,
+{
+    let dids = &proposal.delegates;
+    let threshold = proposal.threshold;
+    let (canonical, _) = repo.canonical_head()?;
+    let mut missing = Vec::with_capacity(dids.len());
+
+    for did in dids {
+        match refs::SignedRefsAt::load((*did).into(), repo)? {
+            None => {
+                missing.push(error::DelegateVerification::MissingDelegate { did: *did });
+            }
+            Some(refs::SignedRefsAt { sigrefs, .. }) => {
+                if sigrefs.get(&canonical).is_none() {
+                    missing.push(error::DelegateVerification::MissingDefaultBranch {
+                        branch: canonical.to_ref_string(),
+                        did: *did,
+                    });
+                }
+            }
+        }
+    }
+
+    Ok((dids.len() - missing.len() < threshold).then_some(missing))
+}
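The final line of `verify_delegates` encodes the quorum condition: the missing delegates are only reported as an error when the delegates that do publish the canonical branch can no longer reach the threshold. A minimal sketch of that condition (the function name is illustrative, not the crate's API):

```rust
/// Returns `Some(missing)` only when the remaining delegates cannot reach
/// the threshold; otherwise the missing delegates are tolerated.
fn quorum_failure(total: usize, missing: Vec<String>, threshold: usize) -> Option<Vec<String>> {
    (total - missing.len() < threshold).then_some(missing)
}

fn main() {
    // 3 delegates, 1 missing, threshold 2: quorum is still reachable.
    assert!(quorum_failure(3, vec!["did:key:a".into()], 2).is_none());
    // 3 delegates, 2 missing, threshold 2: quorum broken; the missing are reported.
    assert_eq!(
        quorum_failure(3, vec!["did:key:a".into(), "did:key:b".into()], 2).map(|m| m.len()),
        Some(2)
    );
}
```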
+
+#[allow(clippy::unwrap_used)]
+#[cfg(test)]
+mod test {
+    use serde_json::json;
+
+    use crate::{
+        git,
+        identity::{
+            crefs::GetCanonicalRefs,
+            doc::{update::error, PayloadId},
+        },
+        prelude::RawDoc,
+        test::arbitrary,
+    };
+
+    #[test]
+    fn test_can_update_crefs() {
+        let raw = arbitrary::gen::<RawDoc>(1);
+        let raw = super::payload(
+            raw,
+            vec![(
+                PayloadId::canonical_refs(),
+                "rules".to_string(),
+                json!({
+                    "refs/tags/*": {
+                        "threshold": 1,
+                        "allow": "delegates"
+                    }
+                }),
+            )],
+        )
+        .unwrap();
+        let verified = super::verify(raw);
+        assert!(verified.is_ok(), "Unexpected error {:?}", verified);
+    }
+
+    #[test]
+    fn test_cannot_include_default_branch_rule() {
+        let raw = arbitrary::gen::<RawDoc>(1);
+        let branch = git::Qualified::from(git::lit::refs_heads(
+            raw.project().unwrap().default_branch(),
+        ));
+        let raw = super::payload(
+            raw,
+            vec![(
+                PayloadId::canonical_refs(),
+                "rules".to_string(),
+                json!({
+                    "refs/tags/*": {
+                        "threshold": 1,
+                        "allow": "delegates"
+                    },
+                    branch.as_str(): {
+                        "threshold": 1,
+                        "allow": "delegates",
+                    }
+                }),
+            )],
+        )
+        .unwrap();
+        assert!(
+            matches!(
+                super::verify(raw),
+                Err(error::DocVerification::DisallowDefault { .. })
+            ),
+            "Verification should be rejected for including default branch rule"
+        )
+    }
+
+    #[test]
+    fn test_default_branch_rule_exists_after_verification() {
+        let raw = arbitrary::gen::<RawDoc>(1);
+        let branch = git::Qualified::from(git::lit::refs_heads(
+            raw.project().unwrap().default_branch(),
+        ));
+        let raw = super::payload(
+            raw,
+            vec![(
+                PayloadId::canonical_refs(),
+                "rules".to_string(),
+                json!({
+                    "refs/tags/*": {
+                        "threshold": 1,
+                        "allow": "delegates"
+                    }
+                }),
+            )],
+        )
+        .unwrap();
+        let verified = super::verify(raw).unwrap();
+        let crefs = verified.canonical_refs().unwrap().unwrap();
+        assert!(
+            crefs.rules().matches(&branch).next().is_some(),
+            "Default branch rule is missing!"
+        );
+    }
+}
added crates/radicle/src/identity/doc/update/error.rs
@@ -0,0 +1,42 @@
+use thiserror::Error;
+
+use crate::git;
+use crate::git::RefString;
+use crate::identity::{doc::PayloadId, Did, DocError};
+
+#[derive(Debug, Error)]
+#[error("'{0}' is not a valid visibility type")]
+pub struct ParseEditVisibility(pub(super) String);
+
+#[derive(Debug, Error)]
+pub enum PrivacyAllowList {
+    #[error("overlapping allow and disallow of DIDs {0:?}")]
+    Overlapping(Vec<String>),
+    #[error("the visibility of the document is public")]
+    PublicVisibility,
+}
+
+#[derive(Debug, Error)]
+pub enum PayloadError {
+    #[error("payload found under `{id}` is expected to be a map")]
+    ExpectedObject { id: PayloadId },
+}
+
+#[derive(Debug, Error)]
+pub enum DocVerification {
+    #[error("failed to verify `{id}`, {err}")]
+    PayloadError { id: PayloadId, err: String },
+    #[error(transparent)]
+    Doc(#[from] DocError),
+    #[error("incompatible payloads: The rule(s) xyz.radicle.crefs.rules.{matches:?} matches the value of xyz.radicle.project.defaultBranch ('{default}'). Possible resolutions: Change the name of the default branch or remove the rule(s).")]
+    DisallowDefault {
+        matches: Vec<String>,
+        default: git::Qualified<'static>,
+    },
+}
+
+#[derive(Clone, Debug)]
+pub enum DelegateVerification {
+    MissingDefaultBranch { branch: RefString, did: Did },
+    MissingDelegate { did: Did },
+}
modified crates/radicle/src/storage.rs
@@ -18,7 +18,7 @@ use crate::cob;
 use crate::collections::RandomMap;
 use crate::git::{canonical, ext as git_ext};
 use crate::git::{refspec::Refspec, PatternString, Qualified, RefError, RefStr, RefString};
-use crate::identity::{Did, PayloadError};
+use crate::identity::{doc, Did, PayloadError};
 use crate::identity::{Doc, DocAt, DocError};
 use crate::identity::{Identity, RepoId};
 use crate::node::device::Device;
@@ -120,6 +120,12 @@ pub enum RepositoryError {
     Quorum(#[from] canonical::QuorumError),
     #[error(transparent)]
     Refs(#[from] refs::Error),
+    #[error("missing canonical reference rule for default branch")]
+    MissingBranchRule,
+    #[error("could not get the default branch rule: {0}")]
+    DefaultBranchRule(#[from] doc::DefaultBranchRuleError),
+    #[error("failed to get canonical reference rules: {0}")]
+    CanonicalRefs(#[from] doc::CanonicalRefsError),
 }
 
 impl RepositoryError {
modified crates/radicle/src/storage/git.rs
@@ -11,9 +11,9 @@ use std::{fs, io};
 use crypto::Verified;
 use tempfile::TempDir;
 
-use crate::git::canonical::Canonical;
+use crate::identity::crefs::GetCanonicalRefs as _;
 use crate::identity::doc::DocError;
-use crate::identity::{Doc, DocAt, RepoId};
+use crate::identity::{CanonicalRefs, Doc, DocAt, RepoId};
 use crate::identity::{Identity, Project};
 use crate::node::device::Device;
 use crate::node::SyncedAt;
@@ -749,13 +749,18 @@ impl ReadRepository for Repository {
 
     fn canonical_head(&self) -> Result<(Qualified, Oid), RepositoryError> {
         let doc = self.identity_doc()?;
-        let project = doc.project()?;
-        let branch_ref = git::refs::branch(project.default_branch());
-        let raw = self.raw();
-        let oid =
-            Canonical::default_branch(self, &project, doc.delegates().into(), doc.threshold())?
-                .quorum(raw)?;
-        Ok((branch_ref, oid))
+        let refname = git::refs::branch(doc.project()?.default_branch());
+        let crefs = match doc.canonical_refs()? {
+            Some(crefs) => crefs,
+            // Fallback to constructing the default branch via the project
+            // payload
+            None => CanonicalRefs::from_iter([doc.default_branch_rule()?]),
+        };
+        Ok(crefs
+            .rules()
+            .canonical(refname, self)?
+            .ok_or(RepositoryError::MissingBranchRule)?
+            .quorum(self.raw())?)
     }
 
     fn identity_head(&self) -> Result<Oid, RepositoryError> {
modified rad-id.1.adoc
@@ -18,7 +18,7 @@ rad-id - Manage changes to a Radicle repository's identity.
 *rad id* _edit_ <revision-id> [--title <string>] [--description <string>] [<option>...] +
 *rad id* _show_ <revision-id> [<option>...] +
 *rad id* _accept_ | _reject_ <revision-id> [<option>...] +
-*rad id* _redact_ <revision-id> [<option>...]
+*rad id* _redact_ <revision-id> [<option>...] +
 
 == Description
 
@@ -176,3 +176,49 @@ To remove a delegate and update the threshold, use the *--rescind* option:
 As with adding a delegate, this change will require approval from the remaining
 delegates. Make sure you set an appropriate new threshold when removing
 delegates!
+
+=== Adding Canonical Reference Rules
+
+To update the canonical reference rules of the project, pass the `--payload
+xyz.radicle.crefs` option to the `update` command, with `rules` as the key and
+the rules object as the value. Note that a rule matching the default branch is
+not allowed, since it is synthesized from the identity document. For example:
+
+    $ rad id update --title "Update canonical reference rules" \
+        --payload xyz.radicle.crefs rules '{
+              "refs/heads/develop": { "threshold": 1, "allow": "delegates" },
+              "refs/tags/*": { "threshold": 2, "allow": "delegates" },
+              "refs/tags/qa/*": { "threshold": 1, "allow": "delegates" }
+          }'
+
+Alternatively, you can pass the `--edit` option to the `update` command and edit
+the payload directly. The resulting document may look like this:
+
+    {
+      "payload": {
+        "xyz.radicle.crefs": {
+          "rules": {
+            "refs/heads/develop": {
+              "allow": "delegates",
+              "threshold": 1
+            },
+            "refs/tags/*": {
+              "allow": "delegates",
+              "threshold": 1
+            },
+            "refs/tags/qa/*": {
+              "allow": "delegates",
+              "threshold": 1
+            }
+          }
+        },
+        "xyz.radicle.project": {
+          "defaultBranch": "master",
+          "description": "Radicle Heartwood Protocol & Stack",
+          "name": "heartwood"
+        }
+      },
+      "delegates": [
+        "did:key:z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi"
+      ],
+      "threshold": 1
+    }
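The fallback in `canonical_head` above — when the identity document carries no `xyz.radicle.crefs` payload, a single rule for the default branch is synthesized from the document's `delegates` and `threshold` — can be sketched with simplified stand-in types (these are not Heartwood's actual `CanonicalRefs` API):

```rust
use std::collections::BTreeMap;

#[derive(Debug, Clone, PartialEq)]
struct Rule {
    allow: Vec<String>, // DIDs allowed to vote; stands in for `allow: "delegates"`
    threshold: usize,
}

/// If explicit rules exist, use them; otherwise synthesize the single
/// default-branch rule from the identity document fields.
fn effective_rules(
    crefs: Option<BTreeMap<String, Rule>>,
    default_branch: &str,
    delegates: &[String],
    threshold: usize,
) -> BTreeMap<String, Rule> {
    crefs.unwrap_or_else(|| {
        BTreeMap::from([(
            format!("refs/heads/{default_branch}"),
            Rule {
                allow: delegates.to_vec(),
                threshold,
            },
        )])
    })
}

fn main() {
    let delegates = vec!["did:key:z6MknSLrJoTcukLrE435hVNQT4JUhbvWLX4kUzqkEStBU8Vi".to_string()];
    // No crefs payload: the default-branch rule is synthesized.
    let rules = effective_rules(None, "master", &delegates, 1);
    assert!(rules.contains_key("refs/heads/master"));
    assert_eq!(rules["refs/heads/master"].threshold, 1);
}
```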