A different approach to interpretability — from the outside in.

Metaphori draws on Cognitive Linguistics, Psycholinguistics, and Cognitive Neuroscience to study how semantic structure shapes attention and meaning construction in transformer models... and in humans. We focus on how language operates both inside and outside transformer models, and across disciplines, cultures, and technologies.

Direct Logit Attribution (DLA) Research Study

Measuring attention head engagement across every major architecture family

We ran a cross-architecture study across 43 models to understand how Metaphori Engine™ Structured Notation (MESN™) affects attention head engagement. Using TransformerLens and nnsight to measure Direct Logit Attribution (DLA), we mapped activation patterns across 8 specialization families, from symbolic and mathematical reasoning to semantic comprehension.

344 of 344 positive family-direction checks
43 models across 12 architecture families
69.7% of completions spontaneously reproduced structured operators
MESN™ exhibits a consistent anti-pattern: higher perplexity at input, lower perplexity at output. The model finds the notation less familiar, yet produces more confident completions. This inverts the standard assumption that familiar input leads to confident output.
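The core idea behind per-head DLA is that the unembedding is linear, so each attention head's output can be projected directly onto a target token's logit direction. A minimal NumPy sketch of that decomposition, using synthetic stand-ins for the cached activations a library like TransformerLens would provide (all array names here are illustrative, not the study's actual variables):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab, n_heads = 16, 50, 4

# Hypothetical per-head outputs at the final token position, plus the
# embedding stream; in a real model these come from a cached forward
# pass (e.g. TransformerLens's run_with_cache).
embed = rng.normal(size=d_model)
head_outputs = rng.normal(size=(n_heads, d_model))
W_U = rng.normal(size=(d_model, vocab))  # unembedding matrix

# The pre-unembedding residual stream is the sum of all components.
residual = embed + head_outputs.sum(axis=0)
logits = residual @ W_U

# Direct Logit Attribution: because unembedding is linear, each
# component's contribution to a target logit is its own projection
# onto that token's unembedding column.
target = 7
dla_per_head = head_outputs @ W_U[:, target]
dla_embed = embed @ W_U[:, target]

# The attributions sum exactly to the final logit. (The final layer
# norm is omitted here for clarity; real pipelines fold it in.)
assert np.isclose(dla_embed + dla_per_head.sum(), logits[target])
```

The `dla_per_head` vector is what lets a study rank heads by how strongly they push toward or away from a given completion.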
Read the full study

Attention head activation across 8 specialization families

Applied research

Our findings inform a family of tools across coding, memory, architecture, and workflow domains — each applying MESN™ principles to a specific problem space.

All products

Collaborate with us

We work with research institutions and enterprises exploring how structured input shapes AI cognition, memory, and reasoning.

Partnerships