How to use the Workbench to turn expert knowledge into structures that learners can genuinely understand — through conversation, not compilation.
Most ways of organising knowledge — textbooks, curricula, slide decks, online courses — present ideas in a fixed sequence and expect the learner to follow it. The content is designed to be delivered. The learner's job is to receive it.
Gordon Pask, the cybernetician whose theory underpins this tool, spent thirty years showing why that fails for genuine understanding. His central finding: understanding is not reception — it is construction. You understand something when you can explain it yourself, derive it, build a model of it, and teach it to someone else. Everything short of that is not understanding in the sense that matters.
This has a direct consequence for how knowledge should be organised. A textbook is a one-way path. An entailment mesh — the kind of structure the Workbench helps you build — is a network where every idea connects to every other through derivations that work in both directions. You can start anywhere, navigate in any direction, and reach any idea from any other idea.
A concept map shows what goes with what. You see that "Ocean Currents" is related to "Wind Patterns." The relationships are labelled, but they describe associations, not derivations. A concept map is like a tourist map: it shows you what is near what.

An entailment mesh shows how to get from one idea to another, and back. The connections are derivation paths: if you understand these ideas, here is how you derive that one; and if you understand that one, here is how you work back. An entailment mesh is like a road map: it shows you how to drive from anywhere to anywhere.
The concepts in this guide form their own entailment mesh. Click any node to see its definition and connections. Notice how every concept derives from others and leads to others — the same cyclicity the Workbench helps you build.
The Workbench displays your domain as an interactive graph. A graph has two components: nodes (the things) and edges (the relationships between them). Every circle is a node. Every line connecting two circles is an edge.
Think of a map of cities connected by roads. Each city is a node. Each road is an edge. The map tells you not just what cities exist but how you can get from one to another. That is what a graph does for knowledge.
In the Workbench, nodes represent topics — the ideas, concepts, principles, and phenomena that make up your domain. Edges represent relationships — derivations, analogies, conditionals. The graph grows as you converse with Claude and confirm the elements that emerge.
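To make this concrete, here is a minimal sketch of how such a graph might be represented in code. The class names, field names, and the edge-kind vocabulary are illustrative assumptions, not the Workbench's actual data model.

```python
from dataclasses import dataclass, field

# Hypothetical data model for illustration -- not the Workbench's actual schema.
@dataclass
class Topic:
    name: str
    kind: str = "topic"              # "topic", "head", "primitive", or "conditional"
    descriptors: dict = field(default_factory=dict)  # dimension -> value

@dataclass
class Edge:
    source: str                      # topic the relationship starts from
    target: str                      # topic it points to
    kind: str                        # "derivation", "analogy", "conditional", "description"
    explanation: str = ""            # how you get from source to target

topics = {
    "Wind Patterns": Topic("Wind Patterns", kind="primitive"),
    "Ocean Currents": Topic("Ocean Currents"),
}
edges = [
    Edge("Wind Patterns", "Ocean Currents", "derivation",
         "Sustained surface winds transfer momentum to the water column."),
]
```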
Not all topics play the same role. The Workbench distinguishes four types, each with a specific function.
A topic is the basic building block: any idea, concept, principle, or process that someone could learn about, explain, and demonstrate. A well-defined topic is bounded (clear scope) and reproducible (someone who understands it can reconstruct the explanation independently).

A head topic is a vantage point from which much of the domain can be surveyed: many other topics can be reached by following derivation paths from a head. More heads mean more entry points and perspectives for different learners. The ratio of heads to total topics is the quality measure Qual(Ω) (see the sketch after these four types).

A primitive is a starting point accepted without further derivation within this domain; it is where the domain connects to what the learner already knows. Primitives are relative: what counts as a primitive depends on who the learner is. What is primitive for a beginner may be fully derived for an expert.

A conditional marks a genuine intellectual disagreement: two explanations that compete, two theories that cannot both be right. Conditionals are not errors. They are among the most intellectually productive locations in a domain, marking live questions and unresolved tensions.
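The Qual(Ω) measure mentioned under head topics is a simple ratio. A sketch, reusing the hypothetical Topic model above:

```python
def qual(topics: dict) -> float:
    """Qual(Omega): number of head topics divided by total topics."""
    heads = sum(1 for t in topics.values() if t.kind == "head")
    return heads / len(topics) if topics else 0.0

# A domain with 4 heads among 20 topics scores 0.2; adding entry
# points without adding topics pushes the score, and flexibility, up.
```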
The edges connecting topics are as important as the topics themselves. The Workbench uses four kinds.
A derivation is the most fundamental relationship: "if you understand these topics, you can derive that topic from them." The arrow shows direction. Derivations should work both ways wherever possible; this is what makes the mesh cyclic and learnable. Each derivation carries an explanation of how you get from one idea to the other.

An analogy links two topics that share structural similarity while differing in content. Every analogy has a similarity (what the topics share) and a distinction (how they differ). Analogies bridge different areas of the domain and are the substance of deep, cross-cutting understanding.

A conditional connects two rival hypotheses or competing explanations. Unlike an analogy, where two things share structure, a conditional says two things cannot both be true within the same framework, and the question is unresolved. The tension itself enriches the domain.

A description assigns a property to a topic along a specific dimension, such as scale (local/global), timescale (fast/slow), or domain (physical/biological). Every topic gets at least two descriptors. Descriptors enable comparison and differentiation, and they reveal empty cells: gaps in the descriptor grid where new topics may be waiting.
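Empty-cell discovery falls out of the descriptor grid almost for free: enumerate every combination of dimension values and subtract the combinations some topic already occupies. A sketch with hypothetical descriptor assignments:

```python
from itertools import product

# Hypothetical descriptor assignments (dimension -> value), for illustration.
described = {
    "Wave Breaking":            {"domain": "physical",   "scale": "local"},
    "Thermohaline Circulation": {"domain": "physical",   "scale": "global"},
    "Plankton Blooms":          {"domain": "biological", "scale": "local"},
}

dimensions = {
    "domain": ["physical", "biological"],
    "scale":  ["local", "global"],
}

occupied = {tuple(d[k] for k in dimensions) for d in described.values()}
empty_cells = [cell for cell in product(*dimensions.values())
               if cell not in occupied]

print(empty_cells)  # [('biological', 'global')] -- a gap where a topic may be waiting
```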
Cyclicity is the most important structural property of an entailment mesh. Understanding what it means is essential to using the Workbench well.
A structure is cyclic when you can start at any topic and reach any other topic by following derivation paths. There are no dead ends. There are no one-way streets you cannot travel back along. If topic C is derived from topics A and B, then A and B must also be reachable from C.
Pask's central claim: cyclicity equals learnability. A structure that is not cyclic contains topics that can be stated but not genuinely understood through the structure itself. If there is no path from C back to A, then a learner starting at C simply cannot construct an understanding of A from within this domain. They must be told it from outside — and being told is not understanding.
Cyclicity also means the domain supports multiple learning strategies. A step-by-step learner follows one set of derivation paths. A pattern-seeking learner follows another. Both work because the structure has multiple paths, multiple directions, and no dead ends.
Circular reasoning says "A is true because B, and B because A" — it goes round but proves nothing. Cyclicity says "A can be derived from B, and B from A, through different derivation paths using different supporting topics." Each path involves genuine reasoning. The cycles are routes through a landscape, not logical circles.
Non-cyclic chain: Temperature Gradients → Thermohaline Circulation → Climate Regulation. A learner starting at Climate Regulation has no way back to Temperature Gradients.

Cyclic mesh: Temperature Gradients ⟷ Thermohaline Circulation ⟷ Climate Regulation ⟷ Temperature Gradients. A learner can enter at any point and navigate to any other.
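Mechanically, full cyclicity is strong connectivity: every topic reachable from every other along derivation edges. A minimal check over the two examples above (the function names are mine, not the Workbench's):

```python
def reachable(start, adjacency):
    """All topics reachable from `start` by following derivation edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(adjacency.get(node, []))
    return seen

def is_cyclic(adjacency):
    """True if every topic can reach every other topic (strong connectivity)."""
    nodes = set(adjacency) | {t for ts in adjacency.values() for t in ts}
    return all(reachable(n, adjacency) == nodes for n in nodes)

chain = {"Temperature Gradients": ["Thermohaline Circulation"],
         "Thermohaline Circulation": ["Climate Regulation"]}
mesh = {"Temperature Gradients": ["Thermohaline Circulation"],
        "Thermohaline Circulation": ["Climate Regulation"],
        "Climate Regulation": ["Temperature Gradients"]}

print(is_cyclic(chain))  # False -- no way back from Climate Regulation
print(is_cyclic(mesh))   # True  -- enter anywhere, reach everywhere
```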
Claude, acting as your Informed Assistant, applies five rules continuously. These rules transform your linear exposition into a cyclic, richly connected entailment mesh. Claude does not explain these rules to you during conversation — Claude enacts them.
Rule 1 generates entailment links: the directed connections between topics showing how understanding one idea enables understanding another. Without these, you have a list of topics with no paths between them.

Rule 2 creates reverse derivation paths that close the cycles. This is the rule that transforms a one-way chain into a genuine mesh, and often the most challenging one, because experts typically think in one direction.

Rule 3 captures the doing procedures (know-how) and explaining procedures (know-why) for each topic. A derivation without an explanation of its mechanism is incomplete.

Rule 4 generates semantic descriptors: dimensions along which topics can be characterised and compared, at least two per topic. Descriptors also reveal empty cells, combinations no topic occupies, which are potential sites for discovering new topics.

Rule 5 generates analogy relations: formal recognitions of structural correspondence. Analogies bridge different areas of the domain and are the substance of deep, transferable understanding.
1. Start with whatever feels most central. There is no prescribed starting point. Go deep rather than broad.
2. The IA applies the five rules, asking for derivations, reverse paths, explanations, descriptions, and analogies.
3. Topics, derivations, and analogies are extracted from the conversation and presented for your review.
4. Each extraction is a proposal. Confirm, edit, or reject. Confirmed elements flow into the graph (see the sketch after this list).
5. The IA sees the current graph state, identifies gaps, and steers the next conversation toward filling them.
6. Each session builds on previous ones. The domain deepens over time, approaching full cyclicity.
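The review step is easy to picture in code. A sketch of the confirm/edit/reject loop, whose structure and names are assumptions rather than the Workbench's actual interface:

```python
from typing import Callable, Iterable, Optional, Tuple

Proposal = dict  # e.g. {"kind": "derivation", "source": ..., "target": ..., "explanation": ...}

def review(proposals: Iterable[Proposal],
           decide: Callable[[Proposal], Tuple[str, Optional[Proposal]]]) -> list:
    """Run the expert's decision over each extracted proposal.

    `decide` returns ("confirm", None), ("reject", None), or ("edit", revised).
    Only confirmed or edited elements flow into the graph.
    """
    accepted = []
    for proposal in proposals:
        verdict, revised = decide(proposal)
        if verdict == "confirm":
            accepted.append(proposal)
        elif verdict == "edit" and revised is not None:
            accepted.append(revised)
    return accepted
```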
| Concept | What It Means | Why It Matters |
|---|---|---|
| Entailment Mesh | Network of topics connected by derivations, analogies, and descriptions | The structure that makes a domain genuinely learnable |
| Cyclicity | Every topic reachable from every other topic | Learners can start anywhere and navigate in any direction |
| Derivation | A path of reasoning from one topic to another | The edges that allow understanding to propagate |
| Analogy | Two topics sharing structure while differing in content | Bridges that connect areas into deep understanding |
| Head Topic | Vantage point from which much of the domain can be surveyed | More heads = more perspectives and entry points |
| Primitive | Starting point accepted without further derivation | Where the domain connects to existing knowledge |
| Conditional | Two competing explanations, unresolved | Honest representation of genuine intellectual disagreement |
| Descriptor | Dimension along which topics are characterised | Enables comparison and empty cell discovery |
| Teachback | Learner explains the topic back, demonstrating understanding | Gold standard — reconstruction, not recognition |
| Qual(Ω) | Number of heads ÷ total topics | Quality measure — higher means more flexible |
| Source/IA Dialogue | Expert explains, Informed Assistant probes | Transforms linear exposition into cyclic mesh |
**Do I need to study Pask's theory before using the Workbench?** No. This guide covers everything you need. Claude embodies the method; you do not need to understand the theory to benefit from the process. The IA enacts the five rules; your job is to bring your expertise and respond to Claude's probes.

**Who can act as the expert?** Anyone who knows a domain deeply enough to explain it conversationally. You do not need to be an academic. Field scientists, educators, experienced practitioners, and working professionals all bring valid entailment structures. Different experts produce different meshes, and the differences are often the most interesting parts.

**How long should a session last?** A productive session typically runs 30–60 minutes. Go deep in a focused area rather than skimming many topics. A rich 5-topic submesh with full cyclicity and well-described analogies is more valuable than a thin 20-topic list.

**What if I cannot find a reverse derivation path?** That is a productive moment. Sometimes the reverse path is obvious once you look. Sometimes it reveals something new. And sometimes it genuinely does not exist within the current scope: the topic may need to be reclassified as a primitive, or the domain expanded.

**What if an extraction gets my domain wrong?** Reject it. Every extraction is a proposal. You are the authority on your domain. If an extraction mischaracterises a relationship or proposes an analogy that does not hold, reject it and continue. The conversation is richer than any single extraction.

**Can several experts work on the same domain?** Yes, and it is encouraged. Where experts agree, you have robust knowledge. Where they disagree, you have potential conditional nodes: genuine intellectual questions. Where one sees connections the other does not, you have analogies waiting to be formalised.

**Why go deep rather than broad?** Because a rich, densely connected submesh is more valuable than a broad, thin one. Pask's experimental domains were 20–30 topics but densely interconnected: every topic had multiple paths, reverse derivations, and descriptions. The breadth will come; depth must come first.

**How does a derivation differ from an analogy?** A derivation is a path of reasoning within a connected area: "from understanding these topics, derive this one." An analogy is a bridge between areas: "these two topics in different areas share the same underlying structure." Derivations are roads within a neighbourhood; analogies are bridges between neighbourhoods.

**Why at least two descriptors per topic?** One descriptor cannot distinguish a topic from its neighbours. If you only describe topics by their domain (physical, chemical, biological), all physical topics look the same. Add scale (local, global) and suddenly physical-local (wave breaking) differs clearly from physical-global (thermohaline circulation). Two descriptors is the minimum for meaningful characterisation.

**What happens to the finished graph?** It becomes the foundation for the Wayfinder learning platform: the terrain through which future learners navigate, supported by a conversational AI partner that adapts to how each learner thinks. The graph is also exportable as JSON for use in Neo4j or other systems.
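The export might look something like the sketch below, which serializes the toy Topic and Edge model from earlier into plain JSON. The payload shape is an assumption; check the real export for the actual schema.

```python
import json
from dataclasses import asdict

def export_mesh(topics: dict, edges: list, path: str = "mesh.json") -> None:
    """Write topics and edges as plain JSON for import into Neo4j or elsewhere."""
    payload = {
        "topics": [asdict(t) for t in topics.values()],
        "edges":  [asdict(e) for e in edges],
    }
    with open(path, "w") as f:
        json.dump(payload, f, indent=2)

export_mesh(topics, edges)  # reusing the toy graph sketched earlier
```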