Models
Architectural research from Metaphori.
PrimeOrdinal Weave
In Development

An experimental transformer architecture exploring cross-head interaction within the attention computation itself. Standard multi-head attention runs its heads in parallel isolation: each head produces one attention pattern, with no communication between heads until concatenation. Weave introduces learned mixing between heads before attention is computed, generating a quadratic expansion of interaction patterns with minimal parameter overhead.
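A minimal PyTorch sketch of what pre-attention cross-head mixing could look like. The module name, the decision to mix queries and keys (rather than, say, attention logits), and the identity initialization are all assumptions for illustration, not the published Weave design. What it demonstrates: each effective head's attention scores become a learned combination of all h × h cross-head query/key pairings, at a cost of only 2h² extra parameters.

```python
import torch
import torch.nn as nn

class WeaveAttention(nn.Module):
    """Hypothetical sketch of cross-head mixing applied before the
    attention computation. Names and the exact mixing point are
    assumptions, not the published Weave design."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.d = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # Learned head-mixing matrices: 2 * h^2 parameters total,
        # negligible next to the d_model^2 projection weights.
        self.mix_q = nn.Parameter(torch.eye(n_heads))
        self.mix_k = nn.Parameter(torch.eye(n_heads))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # (B, T, d_model) -> (B, h, T, d) per-head views
        q = q.view(B, T, self.h, self.d).transpose(1, 2)
        k = k.view(B, T, self.h, self.d).transpose(1, 2)
        v = v.view(B, T, self.h, self.d).transpose(1, 2)
        # Mix heads *before* scores are computed: effective head g sees
        # a learned combination of every head's queries and keys, so its
        # scores couple all h x h query/key pairings.
        q = torch.einsum('gh,bhtd->bgtd', self.mix_q, q)
        k = torch.einsum('gh,bhtd->bgtd', self.mix_k, k)
        att = (q @ k.transpose(-2, -1)) / self.d ** 0.5  # (B, h, T, T)
        att = att.softmax(dim=-1)  # causal mask omitted for brevity
        y = (att @ v).transpose(1, 2).reshape(B, T, self.h * self.d)
        return self.out(y)
```

Because mix_q and mix_k start as identity matrices, the module begins as ordinary multi-head attention and learns cross-head coupling from there.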
The target capability: multi-step compositional reasoning in a single forward pass — the kind of relational chaining that current architectures can only achieve by scaling parameter count or chain-of-thought depth.
Early experiments combine this approach with structured input notation to test whether architectural cross-head mixing amplifies the multi-specialization response we've observed across 61 production models.
PrimeOrdinal Pirsig
In Development

A two-network architecture exploring pre-decode steering of activation geometry. Standard transformers commit to each token before evaluating the trajectory; quality is a property assessed downstream, not shaped upstream.
Pirsig pairs a generation network (the actor) with a lightweight observation network (the observer) that reads the activation landscape and steers it toward higher-quality geometric configurations before token commitment. The observer doesn't generate; it sculpts.
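A minimal sketch of the actor/observer wiring, under heavy assumptions: a hypothetical Observer module reads the actor's final hidden states and applies a gated corrective delta before the language-model head samples a token. The names, interfaces, and steering rule are illustrative only; Pirsig's actual observer and its training signal are not specified here.

```python
import torch
import torch.nn as nn

class Observer(nn.Module):
    """Hypothetical lightweight observer: reads the actor's activations
    and emits a gated corrective delta. Interface and steering rule are
    assumptions for illustration, not Pirsig's specification."""

    def __init__(self, d_model: int, d_obs: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_obs),
            nn.GELU(),
            nn.Linear(d_obs, d_model),
        )
        self.gate = nn.Parameter(torch.zeros(1))  # zero-init: starts as a no-op

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Sculpt the activation geometry; the actor's own weights stay frozen.
        return hidden + torch.tanh(self.gate) * self.net(hidden)

@torch.no_grad()
def steered_decode_step(actor, observer, lm_head, input_ids):
    """One decode step: steer the activations *before* the token is committed."""
    hidden = actor(input_ids)        # (batch, seq, d_model) final hidden states
    hidden = observer(hidden)        # pre-decode steering of the geometry
    logits = lm_head(hidden[:, -1])  # commit to a token only after steering
    return logits.argmax(dim=-1)     # greedy choice for simplicity
```

The zero-initialized gate means steering begins as an identity map, so the observer could be trained (for example, with an actor-critic-style quality signal, per the influences below) without disturbing the actor's pretrained behavior.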
Influences: actor-critic reinforcement learning, residual stream interpretability, and Pirsig's metaphysics of quality.
The P○ mark reads on multiple levels. In algebraic notation, the small circle often denotes composition (f ∘ g) or identity/closure, both thematically aligned with what these architectures do: Weave is compositional (cross-head composition); Pirsig is about closure of the generation loop before commitment. The logo reads as mathematical while encoding the philosophy.
PrimeOrdinal creates a natural naming taxonomy for future architectures — with P○ as the family mark.