AI & ML Paradigm Challenge

The "junk" parts of an AI’s brain we’ve been ignoring are actually where all the most important stuff is hidden.

April 3, 2026

Original Paper

MiCA Learns More Knowledge Than LoRA and Full Fine-Tuning

Sten Rüdiger, Sebastian Raschka

arXiv · 2604.01694

The Takeaway

By targeting the parts of the model that researchers previously dismissed as unimportant, this new method teaches a model roughly five times more new knowledge than standard approaches like LoRA and full fine-tuning. It flips the conventional wisdom about fine-tuning on its head and unlocks far more efficient learning.

From the abstract

Minor Component Adaptation (MiCA) is a novel parameter-efficient fine-tuning method for large language models that focuses on adapting underutilized subspaces of model representations. Unlike conventional methods such as Low-Rank Adaptation (LoRA), which target dominant subspaces, MiCA leverages Singular Value Decomposition to identify the minor singular vectors associated with the least significant singular values, and constrains parameter updates during fine-tuning to the subspace they span.
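
To make that mechanism concrete, here is a minimal sketch in PyTorch of what constraining updates to a minor subspace could look like. Everything here (the class name MinorComponentAdapter, the rank r, and the trainable r x r core) is an illustrative assumption based on the abstract, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class MinorComponentAdapter(nn.Module):
    """Illustrative sketch (not the paper's code): freeze a pretrained
    weight matrix and allow updates only inside the subspace spanned by
    its minor singular vectors, i.e. the tail of the SVD spectrum."""

    def __init__(self, weight: torch.Tensor, r: int = 8):
        super().__init__()
        # Frozen pretrained weight, shape (out_features, in_features).
        self.weight = nn.Parameter(weight.clone(), requires_grad=False)

        # SVD: W = U diag(S) V^T, singular values in descending order.
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)

        # Keep the r *minor* components: the directions with the smallest
        # singular values, which LoRA-style methods typically ignore.
        self.register_buffer("U_minor", U[:, -r:].clone())    # (out, r)
        self.register_buffer("Vh_minor", Vh[-r:, :].clone())  # (r, in)

        # Only trainable parameter: an r x r core acting inside the minor
        # subspace. Zero init, so training starts at the pretrained model.
        self.core = nn.Parameter(torch.zeros(r, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight: W + U_minor @ core @ Vh_minor; the update can
        # never leave the minor subspace, whatever the gradients do.
        delta = self.U_minor @ self.core @ self.Vh_minor
        return x @ (self.weight + delta).T

# Quick check on random weights (shapes only; values are arbitrary).
layer = MinorComponentAdapter(torch.randn(64, 128), r=4)
out = layer(torch.randn(2, 128))   # -> shape (2, 64)
```

In this sketch only the r x r core receives gradients, so the trainable-parameter count stays tiny, while the frozen singular-vector factors guarantee the update never leaves the least significant directions of the original weights.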