A different approach to interpretability — from the outside in.
Metaphori draws on cognitive linguistics, psycholinguistics, and cognitive neuroscience to study how semantic structure shapes attention and meaning construction in transformer models... and in humans. We focus on how language operates both inside and outside transformer models, and across disciplines, cultures, and technologies.
Measuring attention head engagement across every major architecture family
We ran a cross-architecture study of 43 models to understand how Metaphori Engine™ Structured Notation (MESN™) affects attention head engagement. Using TransformerLens and nnsight to measure Direct Logit Attribution (DLA), we mapped activation patterns across 8 specialization families, from symbolic and mathematical reasoning to semantic comprehension.
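The core of Direct Logit Attribution is that each attention head writes additively into the residual stream, so its contribution to a target token's logit is just its output projected through the unembedding matrix. A minimal NumPy sketch with toy dimensions and random data (not the study's actual code, and ignoring LayerNorm, which TransformerLens handles by folding):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_heads, vocab = 16, 4, 50      # toy sizes, not real model dims

# Per-head outputs written into the residual stream at the final position.
head_out = rng.normal(size=(n_heads, d_model))
W_U = rng.normal(size=(d_model, vocab))  # unembedding matrix

target = 7                               # hypothetical target token id

# DLA: each head's additive contribution to the target token's logit
# is its output vector projected onto that token's unembedding column.
dla = head_out @ W_U[:, target]          # shape (n_heads,)

# Because heads combine additively, per-head contributions sum to the
# logit contribution of the combined head output.
combined_logit = head_out.sum(axis=0) @ W_U[:, target]
assert np.isclose(dla.sum(), combined_logit)
```

In TransformerLens the same decomposition comes from cached per-head outputs; the sketch only illustrates why the attribution is exact for the additive path.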
MESN™ exhibits a consistent inversion: higher perplexity on input, lower perplexity on output. The model finds the notation less familiar, yet produces more confident completions. This inverts the standard assumption that familiar input leads to confident output.
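The comparison itself is simple: perplexity is the exponential of the average negative log-probability per token, computed separately over the input and the generated output. A self-contained sketch with hypothetical per-token log-probabilities (the numbers are illustrative, not measured values):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-probability per token."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical per-token log-probs (natural log) under some model:
plain_input  = [-2.1, -1.8, -2.4, -2.0]   # familiar phrasing
mesn_input   = [-3.5, -3.1, -3.8, -3.3]   # less familiar notation
plain_output = [-1.5, -1.6, -1.4, -1.7]
mesn_output  = [-0.9, -1.0, -0.8, -1.1]   # more confident completion

# The inversion: MESN-style input scores as less familiar (higher
# perplexity) while its completion scores as more confident (lower).
assert perplexity(mesn_input) > perplexity(plain_input)
assert perplexity(mesn_output) < perplexity(plain_output)
```

The same two-sided measurement works with any model that exposes token log-probabilities; only the split between input and output spans matters.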
Attention head activation across 8 specialization families
Research areas
Applied research
Our findings inform a family of tools across coding, memory, architecture, and workflow domains — each applying MESN™ principles to a specific problem space.
All products

Collaborate with us
We work with research institutions and enterprises exploring how structured input shapes AI cognition, memory, and reasoning.
Partnerships