CoExplorer Wayfinder

Domain Authoring Workbench Guide

How to use the Workbench to turn expert knowledge into structures that learners can genuinely understand — through conversation, not compilation.

5 Rules
4 Topic Types
4 Relationship Types
1 Key Principle

Why Not Just Write a Textbook?

Most ways of organising knowledge — textbooks, curricula, slide decks, online courses — present ideas in a fixed sequence and expect the learner to follow it. The content is designed to be delivered. The learner's job is to receive it.

Gordon Pask, the cybernetician whose theory underpins this tool, spent thirty years showing why that fails for genuine understanding. His central finding: understanding is not reception — it is construction. You understand something when you can explain it yourself, derive it, build a model of it, and teach it to someone else. Everything short of that is not understanding in the sense that matters.

This has a direct consequence for how knowledge should be organised. A textbook is a one-way path. An entailment mesh — the kind of structure the Workbench helps you build — is a network where every idea connects to every other through derivations that work in both directions. You can start anywhere, navigate in any direction, and reach any idea from any other idea.

Concept Map

Shows what goes with what. You see that "Ocean Currents" is related to "Wind Patterns." The relationships are labelled but describe associations, not derivations. Like a tourist map — it shows you what is near what.

Entailment Mesh

Shows how to get from one idea to another — and back. The connections are derivation paths: if you understand these ideas, here is how you derive that one. And if you understand that one, here is how you work back. Like a road map — it shows how to drive from anywhere to anywhere.

The Workbench's Own Entailment Graph

The concepts in this guide form their own entailment mesh. Click any node to see its definition and connections. Notice how every concept derives from others and leads to others — the same cyclicity the Workbench helps you build.

Workbench Concepts

Head
Topic
Primitive
Derivation
Analogy

Graphs, Nodes, and Edges

The Workbench displays your domain as an interactive graph. A graph has two components: nodes (the things) and edges (the relationships between them). Every circle is a node. Every line connecting two circles is an edge.

Think of a map of cities connected by roads. Each city is a node. Each road is an edge. The map tells you not just what cities exist but how you can get from one to another. That is what a graph does for knowledge.

In the Workbench, nodes represent topics — the ideas, concepts, principles, and phenomena that make up your domain. Edges represent relationships — derivations, analogies, conditionals. The graph grows as you converse with Claude and confirm the elements that emerge.
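The node-and-edge picture can be made concrete with a minimal sketch. This is an illustration of the idea, not the Workbench's actual data model — the topic names and the triple format are assumptions for the example.

```python
# Topics are nodes; typed relationships are edges.
topics = {"Temperature Gradients", "Thermohaline Circulation", "Climate Regulation"}

# Each edge: (source topic, target topic, relationship type)
edges = [
    ("Temperature Gradients", "Thermohaline Circulation", "derivation"),
    ("Thermohaline Circulation", "Climate Regulation", "derivation"),
    ("Thermohaline Circulation", "Temperature Gradients", "derivation"),
]

def neighbours(topic):
    """Topics reachable in one step by following outgoing edges."""
    return [dst for src, dst, _ in edges if src == topic]

print(neighbours("Thermohaline Circulation"))
# → ['Climate Regulation', 'Temperature Gradients']
```

Following edges outward from a topic is exactly the "how do I get from here to there" question the map analogy describes.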

Topic Types

Not all topics play the same role. The Workbench distinguishes four types, each with a specific function.

T: Topic

The basic building block. Any idea, concept, principle, or process that someone could learn about, explain, and demonstrate. A well-defined topic is bounded (clear scope) and reproducible (someone who understands it can reconstruct the explanation independently).

H: Head Topic

A vantage point from which much of the domain can be surveyed. Many other topics can be reached by following derivation paths from a head. More heads mean more entry points and perspectives for different learners. The ratio of heads to total topics is a quality measure (Qual(Ω)).
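The quality measure mentioned above is a simple ratio. A minimal sketch, assuming heads and topics are represented as sets (the function name and the example domain are illustrative):

```python
def qual(heads, topics):
    """Qual(Ω): number of head topics divided by total topics.
    Higher values mean more entry points per topic."""
    return len(heads) / len(topics) if topics else 0.0

# Example: 2 heads in a 5-topic domain
heads = {"Ocean Currents", "Climate Regulation"}
topics = {"Ocean Currents", "Climate Regulation", "Wind Patterns",
          "Temperature Gradients", "Thermohaline Circulation"}
print(qual(heads, topics))  # → 0.4
```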

P: Primitive

A starting point accepted without further derivation within this domain — where the domain connects to what the learner already knows. Primitives are relative: what counts as a primitive depends on who the learner is. What is primitive for a beginner may be fully derived for an expert.

C: Conditional

A genuine intellectual disagreement — two explanations that compete, two theories that cannot both be right. Conditionals are not errors. They are among the most intellectually productive locations in a domain, marking live questions and unresolved tensions.

Relationship Types

The edges connecting topics are as important as the topics themselves. The Workbench uses four kinds.

Derivation →

The most fundamental relationship: "if you understand these topics, you can derive that topic from them." The arrow shows direction. Derivations should work both ways wherever possible — this is what makes the mesh cyclic and learnable. Each derivation carries an explanation of how you get from one idea to the other.

Analogy ↔

Two topics sharing structural similarity while differing in content. Every analogy has a similarity (what they share) and a distinction (how they differ). Analogies bridge different areas of the domain and are the substance of deep, cross-cutting understanding.

Conditional ⟷

Connects two rival hypotheses or competing explanations. Unlike an analogy (where things share structure), a conditional says two things cannot both be true within the same framework — and the question is unresolved. The tension itself enriches the domain.

Descriptor

A property assigned to a topic along a specific dimension — like scale (local/global), timescale (fast/slow), or domain (physical/biological). Each topic should carry at least two. Descriptors enable comparison, differentiation, and the discovery of empty cells — gaps where new topics may be waiting.

Cyclicity: What Makes Knowledge Learnable

Cyclicity is the most important structural property of an entailment mesh. Understanding what it means is essential to using the Workbench well.

What Cyclicity Means

A structure is cyclic when you can start at any topic and reach any other topic by following derivation paths. There are no dead ends. There are no one-way streets you cannot travel back along. If topic C is derived from topics A and B, then A and B must also be reachable from C.

Why It Matters

Pask's central claim: cyclicity equals learnability. A structure that is not cyclic contains topics that can be stated but not genuinely understood through the structure itself. If there is no path from C back to A, then a learner starting at C simply cannot construct an understanding of A from within this domain. They must be told it from outside — and being told is not understanding.

Cyclicity also means the domain supports multiple learning strategies. A step-by-step learner follows one set of derivation paths. A pattern-seeking learner follows another. Both work because the structure has multiple paths, multiple directions, and no dead ends.

Cyclicity Is Not Circular Reasoning

Circular reasoning says "A is true because B, and B because A" — it goes round but proves nothing. Cyclicity says "A can be derived from B, and B from A, through different derivation paths using different supporting topics." Each path involves genuine reasoning. The cycles are routes through a landscape, not logical circles.

Not Cyclic (One-Way Chain)

Temperature Gradients → Thermohaline Circulation → Climate Regulation.
A learner starting at Climate Regulation has no way back to Temperature Gradients.

Cyclic (Mesh)

Temperature Gradients ⟷ Thermohaline Circulation ⟷ Climate Regulation ⟷ Temperature Gradients.
A learner can enter at any point and navigate to any other.
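The contrast between the chain and the mesh can be checked mechanically: a structure is cyclic when every topic can reach every other along derivation edges. A sketch of that check using breadth-first search (the edge lists mirror the example above; nothing here is Workbench API):

```python
from collections import deque

def reachable(start, edges):
    """All topics reachable from `start` by following derivation edges."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for src, dst in edges:
            if src == node and dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen

def is_cyclic(topics, edges):
    """True when every topic can reach every other topic."""
    return all(reachable(t, edges) == set(topics) for t in topics)

chain = [("Temperature Gradients", "Thermohaline Circulation"),
         ("Thermohaline Circulation", "Climate Regulation")]
mesh = chain + [("Climate Regulation", "Temperature Gradients")]
topics = {"Temperature Gradients", "Thermohaline Circulation", "Climate Regulation"}

print(is_cyclic(topics, chain))  # → False: no way back from Climate Regulation
print(is_cyclic(topics, mesh))   # → True: every topic reaches every other
```

In graph terms, cyclicity is the requirement that the derivation graph be strongly connected.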

The Five Rules

Claude, acting as your Informed Assistant, applies five rules continuously. These rules transform your linear exposition into a cyclic, richly connected entailment mesh. Claude does not explain these rules to you during conversation — Claude enacts them.

Rule 1: Derivation

"How do you derive that? What must someone understand first?"

Generates entailment links — the directed connections between topics showing how understanding one idea enables understanding another. Without these, you have a list of topics with no paths between them.

Rule 2: Cyclicity

"Can you go the other way? If someone understood the derived topic, could they work back to the precursors?"

Creates reverse derivation paths that close the cycles. This is the rule that transforms a one-way chain into a genuine mesh. Often the most challenging rule — experts typically think in one direction.

Rule 3: Explanation

"Show me how the derivation works. What model or demonstration would make it visible?"

Captures the doing procedures (know-how) and explaining procedures (know-why) for each topic. A derivation without an explanation of its mechanism is incomplete.

Rule 4: Description

"What kind of thing is this? How does it differ from that other topic?"

Generates semantic descriptors — dimensions along which topics can be characterised and compared, with at least two per topic. Descriptors also reveal empty cells — combinations no topic occupies — which are potential sites for discovering new topics.

Rule 5: Analogy

"You used a similar structure for X and Y — what do they share? How do they differ?"

Generates analogy relations — formal recognitions of structural correspondence. Analogies bridge different areas of the domain and are the substance of deep, transferable understanding.

What a Session Looks Like

1. You Explain

Start with whatever feels most central. There is no prescribed starting point. Go deep rather than broad.

2. Claude Probes

The IA applies the five rules — asking for derivations, reverse paths, explanations, descriptions, analogies.

3. Elements Emerge

Topics, derivations, and analogies are extracted from the conversation and presented for your review.

4. You Confirm

Each extraction is a proposal. Confirm, edit, or reject. Confirmed elements flow into the graph.

5. The Graph Grows

The IA sees the current graph state, identifies gaps, and steers the next conversation toward filling them.

6. Sessions Accumulate

Each session builds on previous ones. The domain deepens over time, approaching full cyclicity.

Quick Reference

| Concept | What It Means | Why It Matters |
| --- | --- | --- |
| Entailment Mesh | Network of topics connected by derivations, analogies, and descriptions | The structure that makes a domain genuinely learnable |
| Cyclicity | Every topic reachable from every other topic | Learners can start anywhere and navigate in any direction |
| Derivation | A path of reasoning from one topic to another | The edges that allow understanding to propagate |
| Analogy | Two topics sharing structure while differing in content | Bridges that connect areas into deep understanding |
| Head Topic | Vantage point from which much of the domain can be surveyed | More heads = more perspectives and entry points |
| Primitive | Starting point accepted without further derivation | Where the domain connects to existing knowledge |
| Conditional | Two competing explanations, unresolved | Honest representation of genuine intellectual disagreement |
| Descriptor | Dimension along which topics are characterised | Enables comparison and empty cell discovery |
| Teachback | Learner explains the topic back, demonstrating understanding | Gold standard: reconstruction, not recognition |
| Qual(Ω) | Number of heads ÷ total topics | Quality measure; higher means more flexible |
| Source/IA Dialogue | Expert explains, Informed Assistant probes | Transforms linear exposition into cyclic mesh |

Frequently Asked Questions

Do I need to know Pask's Conversation Theory to use this tool?

No. This guide covers everything you need. Claude embodies the method — you do not need to understand the theory to benefit from the process. The IA enacts the five rules; your job is to bring your expertise and respond to Claude's probes.

What kind of subject matter expert should I be?

Anyone who knows a domain deeply enough to explain it conversationally. You do not need to be an academic. Field scientists, educators, experienced practitioners, and working professionals all bring valid entailment structures. Different experts produce different meshes — and the differences are often the most interesting parts.

How long does a session take?

A productive session typically runs 30–60 minutes. Go deep in a focused area rather than skimming many topics. A rich 5-topic submesh with full cyclicity and well-described analogies is more valuable than a thin 20-topic list.

What if I cannot find the reverse derivation?

That is a productive moment. Sometimes the reverse path is obvious once you look. Sometimes it reveals something new. And sometimes it genuinely does not exist within the current scope — the topic may need to be reclassified as a primitive or the domain expanded.

What if Claude's extraction is wrong?

Reject it. Every extraction is a proposal. You are the authority on your domain. If an extraction mischaracterises a relationship or proposes an analogy that does not hold, reject it and continue. The conversation is richer than any single extraction.

Can multiple experts work on the same domain?

Yes, and it is encouraged. Where experts agree, you have robust knowledge. Where they disagree, you have potential conditional nodes — genuine intellectual questions. Where one sees connections the other does not, you have analogies waiting to be formalised.

Why does Claude keep asking me to go deeper?

Because a rich, densely connected submesh is more valuable than a broad, thin one. Pask's experimental domains were 20–30 topics but densely interconnected — every topic had multiple paths, reverse derivations, and descriptions. The breadth will come; depth must come first.

What is the difference between a derivation and an analogy?

A derivation is a path of reasoning within a connected area: "from understanding these topics, derive this one." An analogy is a bridge between areas: "these two topics in different areas share the same underlying structure." Derivations are roads within a neighbourhood; analogies are bridges between neighbourhoods.

Why does the IA insist on at least two descriptors per topic?

One descriptor cannot distinguish a topic from its neighbours. If you only describe topics by their domain (physical, chemical, biological), all physical topics look the same. Add scale (local, global) and suddenly physical-local (wave breaking) differs clearly from physical-global (thermohaline circulation). Two descriptors is the minimum for meaningful characterisation.
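The empty-cell idea can be made concrete: lay the descriptor dimensions out as a grid and list the combinations no topic occupies. The dimensions and topic names below are illustrative, not a prescribed scheme.

```python
from itertools import product

# Two descriptor dimensions (illustrative)
dimensions = {"domain": ["physical", "biological"],
              "scale": ["local", "global"]}

# Topics characterised along both dimensions
topics = {
    "Wave Breaking": ("physical", "local"),
    "Thermohaline Circulation": ("physical", "global"),
    "Plankton Blooms": ("biological", "local"),
}

occupied = set(topics.values())
empty_cells = [cell for cell in product(*dimensions.values())
               if cell not in occupied]
print(empty_cells)  # → [('biological', 'global')]
```

Here the biological-global cell is unoccupied — exactly the kind of gap that prompts the question "is there a topic that belongs there?"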

What happens to the graph after I build it?

It becomes the foundation for the Wayfinder learning platform — the terrain through which future learners navigate, supported by a conversational AI partner that adapts to how each learner thinks. The graph is also exportable as JSON for use in Neo4j or other systems.
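The JSON export might look something like the sketch below. The exact schema is an assumption for illustration — the real export format may differ, and the field names here are hypothetical.

```python
import json

# Hypothetical export shape — not the Workbench's documented schema.
graph = {
    "nodes": [
        {"id": "temperature-gradients", "label": "Temperature Gradients",
         "type": "primitive",
         "descriptors": {"domain": "physical", "scale": "global"}},
        {"id": "thermohaline-circulation", "label": "Thermohaline Circulation",
         "type": "topic",
         "descriptors": {"domain": "physical", "scale": "global"}},
    ],
    "edges": [
        {"source": "temperature-gradients",
         "target": "thermohaline-circulation",
         "type": "derivation",
         "explanation": "Density differences driven by temperature "
                        "set the circulation in motion."},
    ],
}

print(json.dumps(graph, indent=2))
```

A node/edge structure like this maps naturally onto a property graph database: nodes become labelled vertices, edges become typed relationships with their explanations as properties.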