Solves a compositional generalization failure of neural networks, lifting accuracy from 0% to 100%, by embedding algebraic semiring constraints into the feature space.
March 23, 2026
Original Paper
Ternary Gamma Semirings: From Neural Implementation to Categorical Foundations
arXiv · 2603.19317
The Takeaway
The paper demonstrates that standard architectures fail at composition because their feature spaces lack the required logical structure. By forcing the feature space to behave like a Ternary Gamma Semiring, the same model internalizes the algebraic axioms and generalizes perfectly to novel combinations.
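For readers unfamiliar with the structure, here is one standard axiomatization of ternary Γ-semirings from the algebra literature, stated as a self-contained LaTeX sketch; the paper's exact axioms may differ.

```latex
% One standard axiomatization of a ternary \Gamma-semiring
% (sketch from the general algebra literature; the paper's
% exact definition may differ).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let $(S,+)$ be a commutative monoid, $\Gamma$ a nonempty set, and
$S \times \Gamma \times S \times \Gamma \times S \to S$,
$(a,\alpha,b,\beta,c) \mapsto a\alpha b\beta c$, a ternary product.
For all $a,b,c,d,e \in S$ and $\alpha,\beta,\gamma,\delta \in \Gamma$:
\begin{align*}
(a\alpha b\beta c)\gamma d\delta e
  &= a\alpha(b\beta c\gamma d)\delta e
   = a\alpha b\beta(c\gamma d\delta e), && \text{(associativity)}\\
(a+b)\alpha c\beta d &= a\alpha c\beta d + b\alpha c\beta d, && \text{(left distributivity)}\\
a\alpha(b+c)\beta d &= a\alpha b\beta d + a\alpha c\beta d, && \text{(middle distributivity)}\\
a\alpha b\beta(c+d) &= a\alpha b\beta c + a\alpha b\beta d. && \text{(right distributivity)}
\end{align*}
\end{document}
```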
From the abstract
This paper establishes a theoretical framework connecting neural network learning with abstract algebraic structures. We first present a minimal counterexample demonstrating that standard neural networks completely fail on compositional generalization tasks (0% accuracy). By introducing a logical constraint -- the Ternary Gamma Semiring -- the same architecture learns a perfectly structured feature space, achieving 100% accuracy on novel combinations. We prove that this learned feature space con…
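The abstract does not say how the constraint enters training, but one natural reading is a soft algebraic regularizer. Below is a minimal sketch of that idea, assuming a learned ternary product over features and a penalty on axiom violations; all names and design choices here are hypothetical, not the paper's code.

```python
import torch
import torch.nn as nn

class TernaryGammaProduct(nn.Module):
    """Hypothetical learned ternary product [a, alpha, b, beta, c] -> S.

    Features a, b, c live in the model's feature space S; alpha, beta
    are embeddings playing the role of Gamma elements. This is a sketch,
    not the paper's implementation.
    """
    def __init__(self, feat_dim: int, gamma_dim: int):
        super().__init__()
        self.mix = nn.Linear(3 * feat_dim + 2 * gamma_dim, feat_dim)

    def forward(self, a, alpha, b, beta, c):
        return self.mix(torch.cat([a, alpha, b, beta, c], dim=-1))

def axiom_penalty(T, a, b, c, d, e, alpha, beta, gamma, delta):
    """Soft penalty that is zero iff, on this batch, T satisfies
    ternary associativity and left distributivity over +."""
    # (a alpha b beta c) gamma d delta e  vs  a alpha (b beta c gamma d) delta e
    assoc = T(T(a, alpha, b, beta, c), gamma, d, delta, e) \
          - T(a, alpha, T(b, beta, c, gamma, d), delta, e)
    # (a + b) alpha c beta d  vs  a alpha c beta d + b alpha c beta d
    dist = T(a + b, alpha, c, beta, d) \
         - (T(a, alpha, c, beta, d) + T(b, alpha, c, beta, d))
    return assoc.pow(2).mean() + dist.pow(2).mean()

# Usage sketch: add the penalty to the task loss so the encoder is
# pushed toward a feature space where the axioms hold, e.g.
# loss = task_loss + lam * axiom_penalty(T, a, b, c, d, e, al, be, ga, de)
```

Under this reading, the encoder is not told the answers to novel combinations; it is only pushed toward a feature space where the semiring axioms hold, which is the structural property the paper credits for the jump from 0% to 100% compositional accuracy.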