High-capacity kernel memories recall stored patterns perfectly even when their weights are quantized to very low precision, yet they collapse if even a few connections are pruned away.
April 23, 2026
Original Paper
Quantization robustness from dense representations of sparse functions in high-capacity kernel associative memory
arXiv · 2604.20333
The Takeaway
Neural memory follows a "sparse function, dense representation" principle that creates a counterintuitive relationship with precision. You can cut the bit-depth of the weights down to almost nothing without losing stored information, yet the same memory is extremely fragile to pruning. This geometry suggests that the existence of a connection matters far more than its exact value. When shrinking models, engineers should therefore prioritize preserving dense connectivity over numerical precision. It also means we could build extremely low-power AI hardware, as long as we do not try to cut the number of wires.
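To make the trade-off concrete, here is a minimal sketch in Python/NumPy. It uses the classical pseudo-inverse (projection) Hopfield rule purely as a stand-in for the paper's KLR training, which is not reproduced here, and measures recall after uniform weight quantization versus random pruning. All names and parameter values are illustrative, and this stand-in's numbers need not match the paper's KLR results.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 20                                  # neurons, stored patterns
X = rng.choice([-1.0, 1.0], size=(P, N))        # random bipolar patterns

# Pseudo-inverse (projection) rule: a classical high-capacity Hopfield
# variant, used here purely as a stand-in for the paper's KLR training.
W = X.T @ np.linalg.pinv(X.T)
np.fill_diagonal(W, 0.0)

def recall_rate(W, X):
    """Fraction of stored patterns that survive one synchronous update."""
    S = np.sign(X @ W.T)                        # local fields for all patterns
    S[S == 0] = 1.0
    return float(np.mean(np.all(S == X, axis=1)))

def quantize(W, bits):
    """Round weights onto a uniform symmetric grid at the given bit depth."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(W).max() / levels
    return np.round(W / scale) * scale

def prune(W, frac):
    """Zero out a random fraction of connections, ignoring their magnitude."""
    return W * (rng.random(W.shape) >= frac)

print(f"full precision : recall {recall_rate(W, X):.2f}")
for bits in (8, 4, 3, 2):
    print(f"{bits}-bit weights  : recall {recall_rate(quantize(W, bits), X):.2f}")
for frac in (0.02, 0.05, 0.10, 0.20):
    print(f"pruned {frac:4.0%}   : recall {recall_rate(prune(W, frac), X):.2f}")
```

The point of the harness is the comparison itself: sign-based recall dynamics can tolerate coarse weight values while still depending on which connections exist, so the two compression axes are free to behave very differently.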
From the abstract
High-capacity associative memories based on Kernel Logistic Regression (KLR) are known for their exceptional performance but are hindered by high computational costs. This paper investigates the compressibility of KLR-trained Hopfield networks to understand the geometric principles of its robust encoding. We provide a comprehensive geometric theory based on spontaneous symmetry breaking and Walsh analysis, and validate it with compression experiments (quantization and pruning). […]
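For readers unfamiliar with the setup, below is a minimal sketch of what "logistic-regression-trained" Hopfield weights can look like, assuming the common per-neuron formulation: each unit learns to predict its own state from all the others across the stored patterns. A plain linear model stands in for the kernelized regression the paper actually uses, and everything here is illustrative rather than a reproduction of the method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
N, P = 100, 15                                  # neurons, stored patterns
X = rng.choice([-1.0, 1.0], size=(P, N))

# One logistic regression per neuron: unit i learns to predict its own
# state from the other N-1 units. A linear model replaces the paper's
# kernelized regression for brevity.
W = np.zeros((N, N))
for i in range(N):
    if len(np.unique(X[:, i])) < 2:             # degenerate label column: skip
        continue
    others = np.delete(np.arange(N), i)
    clf = LogisticRegression(C=10.0, fit_intercept=False)
    clf.fit(X[:, others], X[:, i])
    W[i, others] = clf.coef_[0]                 # row i = incoming weights of unit i

# One synchronous update; stored patterns should be (near-)fixed points.
S = np.sign(X @ W.T)
S[S == 0] = 1.0
print("fraction of patterns stable:", np.mean(np.all(S == X, axis=1)))
```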