Intelligence isn't just weight tuning; conceptual growth follows a 'periodic table' structure that can be proven mathematically.
April 15, 2026
Original Paper
The Periodic Table of Concepts: Hierarchical Constraint Refinement in Neural Systems
SSRN · 6283478
The Takeaway
We often treat neural network learning as a black box of optimization. This paper provides a dynamical systems proof that intelligent systems evolve through discrete 'expansion' and 'decomposition' events, creating stable conceptual ontologies. It formalizes how 'discovery' actually happens within a network’s weights. This gives researchers a mathematical roadmap to build systems that don't just learn patterns, but actually structure their understanding into stable, reusable concepts. It moves us toward a formal theory of conceptual machine intelligence.
From the abstract
Intelligent systems must not only learn within a fixed hypothesis space, but also expand and refine their internal conceptual vocabulary. While the Entropy of Constraints (EoC) framework provides a dynamical description of co-evolving states x and constraints θ, it assumes a fixed constraint space Θ. This assumption excludes the possibility of genuine conceptual discovery. We extend EoC to an expandable constraint space Θ(t) whose dimension and structure evolve via discrete exp
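To make the abstract's central idea concrete, here is a minimal toy sketch of a constraint space whose dimension changes at discrete events. Everything here is a hypothetical illustration, not the paper's formalism: the class name `ConstraintSpace`, the update rules, and the specific expansion/decomposition operations are assumptions chosen to mirror the described behavior of Θ(t).

```python
# Toy sketch (hypothetical): a constraint vector θ that learns continuously
# within a fixed space, but whose dimension changes at discrete
# "expansion" and "decomposition" events, loosely mirroring Θ(t).

import numpy as np

class ConstraintSpace:
    """Holds constraint parameters θ; dimension changes only at discrete events."""

    def __init__(self, dim):
        self.theta = np.zeros(dim)

    @property
    def dim(self):
        return self.theta.size

    def gradient_step(self, grad, lr=0.1):
        # Ordinary within-space learning: θ moves inside the current Θ.
        self.theta -= lr * grad

    def expand(self, n_new=1):
        # Expansion event: append fresh dimensions, enlarging Θ(t).
        self.theta = np.concatenate([self.theta, np.zeros(n_new)])

    def decompose(self, index):
        # Decomposition event: split one constraint into two finer ones,
        # refining a coarse concept while preserving its total weight.
        value = self.theta[index]
        rest = np.delete(self.theta, index)
        self.theta = np.concatenate([rest, [value / 2, value / 2]])

space = ConstraintSpace(dim=3)
space.gradient_step(np.array([1.0, -1.0, 0.5]))
space.expand(n_new=2)      # Θ grows from 3 to 5 dimensions
space.decompose(index=0)   # one constraint splits into two
print(space.dim)           # 6
```

The point of the sketch is the separation of timescales the abstract describes: continuous optimization (`gradient_step`) never changes the space itself, while `expand` and `decompose` are the discrete events that restructure it.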