Tools and how they fit
NEXUS is methodology, not tooling. The tools that
support NEXUS are chosen because they align with
the methodology's principles — they can be replaced
if better alternatives emerge. What cannot be
replaced are the principles they serve.
This wiki describes the current tool set and the
role each tool plays within NEXUS. For each tool
the question is not "what is this tool" but "how
does this tool fit into the methodology and what
does it contribute."
Penpot — Design and Event Modeling
Role: Visual source of truth for both Event
Models and UI screens.
Penpot is the tool where Event Models are drawn
and UI screens are designed. It is open source
and self-hostable — consistent with NEXUS values
of owning your tools and your data.
Within NEXUS, Penpot serves two distinct purposes:
Event Model authoring — the Event Model
timeline is drawn in Penpot. The left-to-right
temporal layout, the swim lanes, the slice
connections — all expressed as a visual artifact
in Penpot. The model lives in Penpot; the link
to it lives in LOGOS.
UI screen design and path documentation —
UI screens are designed in Penpot. Each path
is documented as a Penpot prototype — a linear
sequence of screens with one clickable action
per screen and example data visible in the UI.
Following the prototype is walking the path.
Penpot exports SVG. The SVG output of a screen
design can be consumed directly by the
application — making the Penpot design the
actual UI, not a reference for a developer
to recreate in code. This closes the
design-to-implementation gap structurally.
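A minimal sketch of what consuming the export directly could look like, assuming the application reads Penpot's exported SVG files from disk; the directory layout and the `loadScreen` function are illustrative assumptions, not part of the methodology:

```fsharp
// Sketch only: the Penpot-exported SVG *is* the screen markup,
// not a mock-up for a developer to re-implement.
open System.IO

/// Load an exported screen by name from the export directory.
let loadScreen (exportDir: string) (screen: string) : string =
    Path.Combine(exportDir, screen + ".svg") |> File.ReadAllText
```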
See: NEXUS → Penpot as dual source of truth —
Event Model and UI.
LOGOS — Knowledge and Communication
Role: The knowledge base and communication
hub where everything is connected and nothing
is lost.
LOGOS is where all design decisions, limitations,
requirements, specs, paths, and AI conversations
live. It is the connective tissue of NEXUS —
every artifact produced by the methodology has
a home in LOGOS and links to the other artifacts
it connects to.
LOGOS serves both human and AI audiences from
the same data. A human navigates it through
the forum interface. CORTEX consumes it through
the API. Both see the same knowledge base with
equal fidelity.
The chat import pipeline brings AI conversations
into LOGOS — preserving the reasoning that
happened during design as permanent,
referenceable content linked to the decisions
it produced.
See: LOGOS — Start Here.
CORTEX — The Personal AI System
Role: AI design collaborator shaped by
accumulated knowledge.
CORTEX is the AI system that knows this
methodology as you practice it, this system's
history, and your preferences and constraints.
It is not installed — it is grown, through the
accumulation of knowledge in LOGOS and the
deliberate shaping of its training and context.
CORTEX is the design collaborator role in
AI-assisted implementation. It participates
in Event Modeling, path definition, and spec
refinement — producing suggestions informed
by the full accumulated knowledge base rather
than generic patterns.
See: CORTEX — Start Here.
See: NEXUS → AI-assisted implementation.
F# — The Implementation Language
Role: Primary render target for AI-assisted
code generation.
F# is chosen as the implementation language
because its properties align directly with
NEXUS principles:
- Type-driven design makes invalid states unrepresentable — constraints specified in specs become compiler-enforced types
- Discriminated unions model event types cleanly and exhaustively
- Pure functions align naturally with commands and projections as deterministic functions
- The functional paradigm maps directly to the event sourcing model
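The properties above can be sketched in a few lines of F#. This is a hypothetical illustration: the event type and the projection are invented for this example, not taken from any NEXUS spec:

```fsharp
// A discriminated union models the event types cleanly and exhaustively.
type AccountEvent =
    | Opened of owner: string
    | Deposited of amount: decimal
    | Withdrawn of amount: decimal

// A projection is a pure, deterministic fold over the event stream.
let balance (events: AccountEvent list) : decimal =
    events
    |> List.fold (fun acc ev ->
        match ev with
        | Opened _ -> acc
        | Deposited a -> acc + a
        | Withdrawn a -> acc - a) 0m
```

Adding a new case to `AccountEvent` makes the compiler flag every projection that does not yet handle it, which is the sense in which constraints become compiler-enforced.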
F# is the render target, not a design
constraint. The Event Model and specs are
language-agnostic. F# is the language passed
to the rendering AI. Other languages could
be targeted from the same designs.
See: NEXUS → Language-agnostic design — one
model, many implementations.
Talkyard — Current LOGOS Platform and Reference Implementation
Role: The forum platform currently running
LOGOS and the reference implementation from
which LOGOS requirements are derived.
Talkyard is the platform on which LOGOS currently
runs. It provides the forum structure, wiki topics,
category hierarchy, and API that LOGOS uses today.
Talkyard serves a dual purpose within NEXUS. First,
it is the working platform — the place where LOGOS
operates while the purpose-built platform is being
defined. Second, and equally important, it is the
reference implementation for what LOGOS must be.
Building LOGOS is not a greenfield design exercise.
It is closer to a principled reverse engineering —
studying what Talkyard does, how it behaves, what
it enables, and deriving requirements from that
active use. Not from its code — from its behavior.
What Talkyard does well that works for LOGOS becomes
a requirement. What Talkyard cannot do that LOGOS
needs becomes a requirement from the other direction.
The two sources together form a comprehensive
functional specification:
- LOGOS → Talkyard Limitations — constraints and gaps that LOGOS must not have. Each limitation is a permanent record of something the next platform must solve.
- LOGOS → Requirements — capabilities LOGOS must have, sourced from both limitations and from what Talkyard does correctly that must be preserved and extended.
Talkyard is explicitly an interim platform in the
sense that LOGOS will eventually supersede it. But
it is not a throwaway — it is the foundation from
which LOGOS is understood and specified.
See: LOGOS → Talkyard Limitations.
See: LOGOS → Requirements.
The Rendering AI
Role: Purpose-trained model for rendering
approved NEXUS specs into idiomatic F#.
The rendering AI is distinct from CORTEX.
Where CORTEX is a broad design collaborator
shaped by accumulated context, the rendering
AI is a narrow, specific tool optimized for
one task: taking an approved spec and producing
correct, idiomatic F# that follows NEXUS
patterns and conventions.
This is a trainable, improvable target. The
corpus of spec-to-implementation pairs that
accumulates over time becomes training data.
The renderer gets better as more specs are
rendered.
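One possible shape for an entry in that corpus, sketched as an F# record; the field names are assumptions for illustration, not an actual NEXUS schema:

```fsharp
// Sketch only: one spec-to-implementation training pair.
type RenderingExample =
    { SpecId: string          // link back to the spec in LOGOS
      SpecText: string        // the approved, language-agnostic spec
      Implementation: string  // the reviewed F# rendered from it
      Accepted: bool }        // whether the rendering passed review
```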
See: NEXUS → AI-assisted implementation.
See: NEXUS → The rendering AI — training
and tuning a model for NEXUS F# output.
Tool Selection Principles
Tools are chosen and kept based on alignment
with NEXUS principles:
Open source and self-hostable — you own
your tools and your data. No vendor lock-in.
No dependency on a service that can change
pricing, terms, or availability.
API accessible — every tool exposes its
data through an API with full fidelity. LOGOS
and CORTEX can consume any tool's data
programmatically.
Exportable — full data export is a
requirement of any tool in the NEXUS ecosystem.
The export path must exist before the tool
is adopted.
Replaceable — no tool is irreplaceable.
The methodology is the constant. Tools are
the current best implementation of the
methodology's needs.
See Also
- NEXUS — Start Here
- Event Modeling as the design foundation
- AI-assisted implementation
- The knowledge accumulation principle
- NEXUS → Penpot as dual source of truth — Event Model and UI
- LOGOS — Start Here
- CORTEX — Start Here