| author | main <main@swarm.moe> | 2026-03-20 16:00:30 -0400 |
|---|---|---|
| committer | main <main@swarm.moe> | 2026-03-20 16:00:30 -0400 |
| commit | 9d63844f3a28fde70b19500422f17379e99e588a (patch) | |
| tree | 163cfbd65a8d3528346561410ef39eb1183a16f2 /assets/codex-skills/fidget-spinner/SKILL.md | |
| parent | 22fe3d2ce7478450a1d7443c4ecbd85fd4c46716 (diff) | |
| download | fidget_spinner-9d63844f3a28fde70b19500422f17379e99e588a.zip | |
Refound Spinner as an austere frontier ledger
Diffstat (limited to 'assets/codex-skills/fidget-spinner/SKILL.md')
| -rw-r--r-- | assets/codex-skills/fidget-spinner/SKILL.md | 94 |
1 file changed, 45 insertions, 49 deletions
diff --git a/assets/codex-skills/fidget-spinner/SKILL.md b/assets/codex-skills/fidget-spinner/SKILL.md
index f187be3..bfb0990 100644
--- a/assets/codex-skills/fidget-spinner/SKILL.md
+++ b/assets/codex-skills/fidget-spinner/SKILL.md
@@ -1,12 +1,12 @@
 ---
 name: fidget-spinner
-description: Use Fidget Spinner as the local system of record for source capture, hypothesis tracking, and experiment adjudication. Read health, schema, and frontier state first; keep off-path prose cheap; drive core-path work through hypothesis-owned experiments.
+description: Use Fidget Spinner as the local system of record for frontier grounding, hypothesis tracking, experiment adjudication, and artifact references. Read health first, ground through frontier.open, and walk the graph deliberately one selector at a time.
 ---
 
 # Fidget Spinner
 
 Use this skill when working inside a project initialized with Fidget Spinner or
-when the task is to inspect or mutate the project DAG through the packaged MCP.
+when the task is to inspect or mutate a frontier through the packaged MCP.
 
 Start every session by reading `system.health`.
 
@@ -21,79 +21,75 @@ Then read:
 
 - `project.status`
 - `tag.list`
 - `frontier.list`
-- `frontier.status` for the active frontier
-- `experiment.list` if you may be resuming in-flight core-path work
+- `frontier.open` for the active frontier
 
-Read `project.schema` only when payload authoring, validation rules, or local
-field vocabulary are actually relevant. When in doubt, start with
-`detail=concise` and widen to `detail=full` only if the summary is insufficient.
+`frontier.open` is the only sanctioned overview surface. It is allowed to give
+you the frontier brief, active tags, live metrics, active hypotheses, and open
+experiments in one call.
 
 If you need more context, pull it from:
 
-- `node.list`
-- `node.read`
+- `hypothesis.read`
 - `experiment.read`
+- `artifact.read`
 
 ## Posture
 
-- the DAG is canonical truth
-- frontier state is a derived projection
-- project payload validation is warning-heavy at ingest
-- annotations are sidecar and hidden by default
-- `source` and `note` are off-path memory
-- `hypothesis` and `experiment` are the disciplined core path
+- `frontier` is scope and grounding, not a graph vertex
+- `hypothesis` and `experiment` are the true graph nodes
+- every experiment has one mandatory owning hypothesis
+- experiments and hypotheses may also cite other hypotheses or experiments as influence parents
+- the frontier brief is the one sanctioned freeform overview
+- artifacts are references only; Spinner never reads artifact bodies
+- token austerity matters more than convenience dumps
 
 ## Choose The Cheapest Tool
 
-- `tag.add` when a new note taxonomy token is genuinely needed; every tag must carry a description
-- `tag.list` before inventing note tags by memory
-- `schema.field.upsert` when one project payload field needs to become canonical without hand-editing `schema.json`
-- `schema.field.remove` when one project payload field definition should be purged cleanly
-- `source.record` for imported source material, documentary context, or one substantial source digest; always pass `title`, `summary`, and `body`, and pass `tags` when the source belongs in a campaign/subsystem index
-- `note.quick` for atomic reusable takeaways, always with an explicit `tags` list plus `title`, `summary`, and `body`; use `[]` only when no registered tag applies
+- `tag.add` when a new campaign or subsystem token is genuinely needed; every tag must carry a description
+- `tag.list` before inventing tags by memory
+- `frontier.brief.update` when the situation, roadmap, or unknowns need to change
 - `hypothesis.record` before core-path work; every experiment must hang off exactly one hypothesis
+- `hypothesis.update` when the title, summary, body, tags, or influence parents need tightening
 - `experiment.open` once a hypothesis has a concrete slice and is ready to be tested
-- `experiment.list` or `experiment.read` when resuming a session and you need to recover open experimental state
+- `experiment.list` or `experiment.read` when resuming a session and you need to recover open or recently closed state
+- `experiment.update` while the experiment is still live and its summary, tags, or influence parents need refinement
+- `experiment.close` only for an already-open experiment and only when you have measured result, verdict, and rationale; attach `analysis` only when the result needs interpretation beyond the rationale
+- `artifact.record` when preserving an external file, link, log, table, plot, dump, or bibliography by reference
+- `artifact.read` only to inspect metadata and attachments, never to read the body
-- `metric.define` when a project-level metric key needs a canonical unit, objective, or human description
+- `metric.define` when a project-level metric key needs a canonical unit, objective, visibility tier, or description
+- `metric.keys --scope live` before guessing which numeric signals matter now
+- `metric.best` when you need the best closed experiments by one numeric key; pass exact run-dimension filters when comparing one slice
-- `run.dimension.define` when a new experiment slicer such as `scenario` or `duration_s` becomes query-worthy
+- `run.dimension.define` when a new experiment slicer such as `instance` or `duration_s` becomes query-worthy
 - `run.dimension.list` before guessing which run dimensions actually exist in the store
-- `metric.keys` before guessing which numeric signals are actually rankable; pass exact run-dimension filters when narrowing to one workload slice
-- `metric.best` when you need the best closed experiments by one numeric key; pass `order` for noncanonical payload fields and exact run-dimension filters when comparing one slice
-- `node.annotate` for scratch text that should stay off the main path
-- `experiment.close` only for an already-open experiment and only when you have measured result, note, and verdict; attach `analysis` when the result needs explicit interpretation
-- `node.archive` to hide stale detritus without deleting evidence
-- `node.create` only as a true escape hatch
 
 ## Workflow
 
-1. Preserve source texture with `source.record` only when keeping the source itself matters.
-2. Extract reusable claims into `note.quick`.
-3. State the intended intervention with `hypothesis.record`.
-4. Open a live experiment with `experiment.open`.
-5. Do the work.
-6. Close the experiment with `experiment.close`, including metrics, verdict, and optional analysis.
+1. Ground through `frontier.open`.
+2. State the intended intervention with `hypothesis.record`.
+3. Open a live experiment with `experiment.open`.
+4. Do the work.
+5. Close the experiment with `experiment.close`, including dimensions, metrics, verdict, rationale, and optional analysis.
+6. Attach any large markdown, logs, tables, dumps, or links through `artifact.record` instead of bloating the ledger.
 
-Do not dump a whole markdown tranche into one giant prose node and call that progress.
-If a later agent should enumerate it by tag or node list, it should usually be a `note.quick`.
-If the point is to preserve or digest a source document, it should be `source.record`.
-If the point is to test a claim, it should become a hypothesis plus an experiment.
+Do not dump a whole markdown tranche into Spinner. If it matters, attach it as
+an artifact and summarize the scientific upshot in the frontier brief,
+hypothesis, or experiment outcome.
 
 ## Discipline
 
-1. Pull context from the DAG, not from sprawling prompt prose.
-2. Prefer off-path records unless the work directly advances or judges the frontier.
-3. Do not let critical operational truth live only in annotations.
+1. `frontier.open` is the only overview dump. After that, walk the graph one
+   selector at a time.
+2. Pull context from hypotheses and experiments, not from sprawling prompt prose.
+3. Do not expect artifact content to be available through Spinner. Open the
+   file or link out of band when necessary.
 4. If the MCP behaves oddly or resumes after interruption, inspect
    `system.health` and `system.telemetry` before pushing further.
-5. Keep fetches narrow by default; widen only when stale or archived context is
-   actually needed.
+5. Keep fetches narrow by default; slow is better than burning tokens.
 6. Treat metric keys as project-level registry entries and run dimensions as
    the first-class slice surface for experiment comparison; do not encode
   scenario context into the metric key itself.
-7. A source node is not a dumping ground for every thought spawned by that source.
-   Preserve one source digest if needed, then extract reusable claims into notes.
-8. A hypothesis is not an experiment. Open the experiment explicitly; do not
-   smuggle “planned work” into off-path prose.
+7. A hypothesis is not an experiment. Open the experiment explicitly; do not
+   smuggle planned work into the frontier brief.
+8. Experiments are the scientific record. If a fact matters later, it should
+   usually live in a closed experiment outcome rather than in freeform text.
 9. The ledger is scientific, not git-forensic. Do not treat commit hashes as
   experiment identity.
 10. Porcelain is the terse triage surface. Use `detail=full` only when concise
   output stops being decision-sufficient.
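The revised Workflow section above can be sketched as a call sequence. This is a minimal illustrative sketch only: the tool names (`frontier.open`, `hypothesis.record`, `experiment.open`, `experiment.close`, `artifact.record`) come from the SKILL.md text, but the `MCPClient` stub, its `call` method, and all argument names are hypothetical stand-ins, not the real MCP transport or tool signatures.

```python
# Sketch of the sanctioned core-path loop from SKILL.md.
# MCPClient and call(...) are hypothetical; only the tool names and
# their ordering are taken from the skill text.

class MCPClient:
    """Minimal stub that records tool invocations in order."""

    def __init__(self):
        self.calls = []

    def call(self, tool, **args):
        self.calls.append(tool)
        return {"tool": tool, "args": args}


def run_core_path(client, do_work):
    # 1. Ground through frontier.open (the only sanctioned overview dump).
    client.call("frontier.open")
    # 2. State the intended intervention as a hypothesis.
    hyp = client.call("hypothesis.record", title="...", summary="...")
    # 3. Open a live experiment owned by exactly one hypothesis.
    exp = client.call("experiment.open", hypothesis=hyp)
    # 4. Do the work out of band.
    result = do_work()
    # 5. Close with measured metrics, verdict, and rationale.
    client.call("experiment.close", experiment=exp,
                metrics=result["metrics"], verdict=result["verdict"],
                rationale=result["rationale"])
    # 6. Large outputs become artifact references, not ledger prose.
    for ref in result.get("artifacts", []):
        client.call("artifact.record", ref=ref)
    return client.calls


client = MCPClient()
order = run_core_path(client, lambda: {
    "metrics": {"duration_s": 4.2}, "verdict": "supported",
    "rationale": "measured slice beat baseline", "artifacts": ["run.log"]})
```

The point of the stub is the ordering: grounding precedes hypothesis, the hypothesis precedes the experiment, and artifacts are attached by reference only after the experiment closes.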