ActTail achieves 80% activation sparsity in LLMs with significantly lower perplexity degradation than uniform methods by using Heavy-Tailed Self-Regularization theory.
Efficiency Breakthrough arxiv | Mar 16
This paper proposes a method to align and personalize LLMs directly from raw user interactions using self-distillation, bypassing the need for explicit human labels or RLHF.
Paradigm Shift arxiv | Mar 16
The researchers demonstrate that prompt injection is caused by 'role confusion' in the latent space, where models assign authority based on the style of writing rather than the source of the text.
Breaks Assumption arxiv | Mar 16
This theoretical work refutes the 'Garbage In, Garbage Out' mantra for modern ML, proving that high-dimensional model capacity can asymptotically overcome predictor error and structural uncertainty.
Breaks Assumption arxiv | Mar 16
Introduces the Budget-Sensitive Discovery Score (BSDS), a formally verified metric machine-checked in Lean 4 for evaluating AI-guided scientific candidate selection.
Paradigm Shift arxiv | Mar 16
ReBalance is a training-free framework that dynamically modulates 'thinking' length in reasoning models to prune redundancy during overthinking and promote exploration during underthinking.
Efficiency Breakthrough arxiv | Mar 16
This study proves that reasoning traces (Chain-of-Thought) causally shape model behavior and generalization, even when the final answer is held constant.
Breaks Assumption arxiv | Mar 16
SpectralGuard identifies a 'memory collapse' vulnerability in State Space Models (like Mamba) where adversarial inputs can drive the transition operator's spectral radius to zero.
Breaks Assumption arxiv | Mar 16
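A scalar toy recurrence makes the failure mode concrete. Assuming (this is an illustration, not the paper's construction) an input-dependent gated transition h_t = a_t·h_{t-1} + x_t standing in for an SSM's state update, driving a single gate to zero erases all earlier state:

```python
# Toy stand-in for a state-space recurrence with input-dependent gates.
# If an adversarial input pushes a gate a_t to zero (spectral radius -> 0
# in the matrix case), the state's memory of all earlier tokens is wiped.

def run_ssm(gates, inputs):
    h = 0.0
    for a, x in zip(gates, inputs):
        h = a * h + x                 # input-dependent transition
    return h

inputs = [1.0, 0.0, 0.0, 0.0]         # signal arrives at token 0 only
benign = run_ssm([0.9] * 4, inputs)   # memory of token 0 decays slowly
attacked = run_ssm([0.9, 0.0, 0.9, 0.9], inputs)  # one zeroed gate erases it
```

In the benign run the token-0 signal survives as 0.9³ ≈ 0.73; in the attacked run a single zero gate collapses it to exactly 0, regardless of how strong the signal was.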
Surg-R1 is a specialized surgical reasoning model released alongside the largest surgical Chain-of-Thought dataset (320,000 pairs).
Open Release arxiv | Mar 16
This paper establishes a systematic protocol for 'stitching' heterogeneous Vision Foundation Models (e.g., CLIP and DINOv2) to share early layers while retaining specialized capabilities.
Paradigm Shift arxiv | Mar 16
Achieves 100x speedup in robotic action generation by distilling iterative flow/diffusion models into a one-step policy without a pre-trained teacher.
Efficiency Breakthrough arxiv | Mar 16
Introduces Modal Logical Neural Networks (MLNNs) as a differentiable logic layer that bridges deep learning with symbolic Kripke semantics for regulated AI.
Paradigm Shift arxiv | Mar 16
Demonstrates a robot that improves its own locomotion by identifying and physically 'self-destructing' redundant or inhibiting limbs during its lifetime.
Paradigm Shift arxiv | Mar 16
Enables training-free infinite video generation (hour-scale) by using evolving memory tokens to solve identity drift and motion stagnation.
New Capability arxiv | Mar 16
Reveals that standard global correlation metrics for LLM judges fail to predict success in 'best-of-n' selection tasks due to within-prompt signal loss.
Breaks Assumption arxiv | Mar 16
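The gap is easy to reproduce with synthetic data: a judge whose scores merely track prompt difficulty gets high global correlation yet ranks candidates within each prompt at (or below) chance, which is exactly what best-of-n needs. The data below is a made-up illustration, not the paper's.

```python
# Synthetic demo: global Pearson correlation is high because scores track
# prompt difficulty, but within every prompt the judge inverts the true
# candidate ordering, so best-of-2 selection by judge score fails.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# Two candidates per prompt: (true quality, judge score).
prompts = [
    [(0.9, 5.0), (0.8, 5.1)],   # judge inverts the within-prompt order
    [(0.5, 3.0), (0.4, 3.2)],
    [(0.2, 1.0), (0.1, 1.1)],
]
truths = [t for p in prompts for t, _ in p]
scores = [s for p in prompts for _, s in p]
global_r = pearson(truths, scores)            # high: tracks difficulty
bon_hits = sum(max(p, key=lambda c: c[1])[0] == max(p, key=lambda c: c[0])[0]
               for p in prompts)              # best-of-2 picks by judge score
```

Here `global_r` exceeds 0.9 while the judge picks the truly better candidate in zero of three prompts: the between-prompt variance that drives the correlation carries no within-prompt signal.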
Reduces Chain-of-Thought (CoT) compute costs by 14-55% by learning the optimal 'early-exit' points for Large Reasoning Models.
Efficiency Breakthrough arxiv | Mar 16
Discovers that as LLMs scale, their complex non-linear depth dynamics become well-approximated by accurate, low-order linear surrogates.
Scaling Insight arxiv | Mar 16
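The claim can be illustrated by fitting a single linear map h_{l+1} ≈ A·h_l to a stack of per-layer hidden states and checking the residual. The simulated dynamics and least-squares fit below are an assumption-laden sketch, not the paper's surrogate construction.

```python
import numpy as np

# Illustrative check: if depth dynamics are nearly linear, one least-squares
# map A fitted across all layer transitions leaves only a small residual.

rng = np.random.default_rng(0)
d, layers = 8, 24
A_true = np.eye(d) + 0.05 * rng.standard_normal((d, d))  # near-identity drift

# Simulate depth dynamics that are almost linear (as claimed at scale),
# with a small perturbation standing in for residual non-linearity.
h = [rng.standard_normal(d)]
for _ in range(layers):
    h.append(A_true @ h[-1] + 0.01 * rng.standard_normal(d))
H = np.stack(h)                      # (layers + 1, d)

X, Y = H[:-1], H[1:]                 # transition pairs (h_l, h_{l+1})
A_fit, *_ = np.linalg.lstsq(X, Y, rcond=None)  # low-order linear surrogate
resid = np.linalg.norm(Y - X @ A_fit) / np.linalg.norm(Y)
```

When the dynamics are close to linear, `resid` stays near the injected noise floor; for a genuinely non-linear small model, the same fit would leave a large residual.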
Derives an exact, unbiased policy gradient for Reinforcement Learning on Diffusion LLMs, bypassing the need for sequence-level likelihood approximations.
Paradigm Shift arxiv | Mar 16
Shows that tool-augmented agents suffer from 'recommendation drift' where they provide unsafe advice under tool corruption while maintaining high ranking scores.
Breaks Assumption arxiv | Mar 16
Accelerates Diffusion Transformers (DiTs) by 2x using a training-free framework that selectively reduces computation in non-aesthetic image regions.
Efficiency Breakthrough arxiv | Mar 16
Challenges the standard practice of deep PPO training by proving that consensus aggregation of 'wider' parallel runs is 8x more sample efficient than multiple epochs.
Breaks Assumption arxiv | Mar 16
Releases Feynman, an agentic pipeline and 100k-sample dataset for generating high-quality, knowledge-rich diagrams with grounded captions.
Open Release arxiv | Mar 16
Introduces the largest-ever multi-modal CAD dataset with 10 million annotations for 1 million models to enable geometric deep learning on BRep data.
Open Release arxiv | Mar 16
Unlocks Maximum Entropy RL for high-dimensional humanoid control, matching or doubling the performance of dominant deterministic baselines.
New Capability arxiv | Mar 16
Introduces a training-free framework that allows LLM agents to dynamically scale their reasoning depth based on a pre-defined token/tool budget.
Efficiency Breakthrough arxiv | Mar 16
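A minimal sketch of such a budget controller, assuming (hypothetically, not from the paper) three discrete modes and a per-step cost estimate: the agent checks its remaining budget before each step and degrades gracefully from deep tool-augmented reasoning to an immediate answer.

```python
# Hypothetical budget-aware reasoning controller: choose how deeply to
# reason based on how many steps the remaining token/tool budget affords.
# Modes, thresholds, and costs are illustrative assumptions.

def choose_mode(remaining_budget, est_step_cost):
    steps_left = remaining_budget // est_step_cost if est_step_cost else 0
    if steps_left >= 8:
        return "deep"        # full tool-augmented reasoning
    if steps_left >= 2:
        return "shallow"     # short CoT, no tool calls
    return "answer_now"      # emit best current answer

def run_agent(budget, est_step_cost=120):
    trace = []
    while budget > 0:
        mode = choose_mode(budget, est_step_cost)
        trace.append(mode)
        if mode == "answer_now":
            break
        budget -= est_step_cost * (2 if mode == "deep" else 1)
    return trace

trace = run_agent(1000)      # degrades: deep -> shallow -> answer_now
```

With a 1000-token budget the toy agent takes one deep step, five shallow steps, then answers, never overrunning the budget mid-reasoning.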
Achieves a 98x speedup in LLM routing on AMD hardware using Flash Attention and prompt compression, enabling high-context classification without a dedicated GPU.
Efficiency Breakthrough arxiv | Mar 16
Proposes modeling the world in the feature space of frozen geometry foundation models instead of pixels, achieving 5x faster depth forecasting.
Paradigm Shift arxiv | Mar 16
A retrosynthesis model that explicitly learns strategic bond-disconnection reasoning via reinforcement learning with a round-trip accuracy reward.
New Capability arxiv | Mar 16
Longitudinal evidence reveals that successive ChatGPT versions produce increasingly similar, less diverse outputs, suggesting potential model collapse from synthetic data saturation.
Scaling Insight arxiv | Mar 16
A new system enables humanoid robots to play competitive tennis rallies with humans by learning from imperfect, fragmented motion data.
New Capability arxiv | Mar 16
Adversarial test case evolution improves code reinforcement learning by creating harder, more discriminative verification signals that drive better model performance.
Scaling Insight arxiv | Mar 16
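The core loop can be sketched as evolving test inputs toward those that best split candidate programs from a reference solution. The mutation scheme and toy task below are illustrative stand-ins for the paper's method.

```python
import random

# Hypothetical sketch of adversarial test-case evolution for code RL:
# keep the test inputs that best *discriminate* among candidate programs
# (some pass, some fail), so the verifier's reward stays informative.

def discrimination(test, programs, reference):
    """How evenly a test splits the candidate pool against the reference."""
    expected = reference(test)
    fails = sum(p(test) != expected for p in programs)
    return min(fails, len(programs) - fails) / len(programs)

def evolve_tests(seed_tests, programs, reference, rounds=3, pop=8):
    tests = list(seed_tests)
    rng = random.Random(0)
    for _ in range(rounds):
        # Mutate existing tests, then keep the most discriminative ones.
        tests += [t + rng.randint(-3, 3) for t in tests]
        tests.sort(key=lambda t: discrimination(t, programs, reference),
                   reverse=True)
        tests = tests[:pop]
    return tests

# Toy task: compute |x|. The buggy candidate fails only on negatives,
# so only negative inputs are discriminative.
reference = abs
programs = [abs, lambda x: x]
tests = evolve_tests([0, 1, 2], programs, reference)
```

Starting from all-non-negative seeds (which give zero reward signal here), evolution pressures the test pool toward the negative inputs that actually separate the correct program from the buggy one.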
Modality-level disaggregation enables cost-optimal MLLM serving across heterogeneous GPUs over commodity PCIe, bypassing the need for expensive NVLink interconnects.
Efficiency Breakthrough arxiv | Mar 16
Probing of Vision-Language-Action (VLA) models reveals that the action decoder largely ignores the reasoning logic in Chain-of-Thought, relying almost exclusively on object names.
Breaks Assumption arxiv | Mar 16
SciDesignBench provides a massive simulator-grounded environment for scientific inverse design, revealing that current LLMs struggle significantly with iterative refinement.
New Capability arxiv | Mar 16
A hardware-algorithm co-design for Spiking Neural Networks achieves up to 69x energy efficiency gains using an SRAM-based Compute-in-Memory accelerator.
Efficiency Breakthrough arxiv | Mar 16
The TaoBench benchmark proves that state-of-the-art math LLMs fail on equivalent logic problems when presented outside of the standard 'MathLib' framework.
Breaks Assumption arxiv | Mar 16
A self-supervised robotic system detects novel objects by training bespoke detectors on-the-fly from human video demonstrations, bypassing language-based prompts.
New Capability arxiv | Mar 16
AIM enables post-training modulation of large models to change utility levels or focus features without any retraining or additional data.
New Capability arxiv | Mar 16
Achieves 4x visual token compression and 80% lower training cost while unifying multimodal comprehension and generation.
Efficiency Breakthrough arxiv | Mar 16
First training-free method for debiasing reward models using Sparse Autoencoder (SAE) interventions.
New Capability arxiv | Mar 16
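The intervention pattern can be sketched in a few lines: encode a reward model's hidden state with a sparse autoencoder, zero the latent feature identified as encoding a bias (say, a length preference), and decode back — no gradient updates anywhere. The tiny tied encoder/decoder and the feature index below are illustrative assumptions.

```python
# Hypothetical training-free SAE intervention for reward-model debiasing:
# ablate one latent feature in the SAE basis, leave everything else intact.

def sae_encode(h, W_enc):
    # ReLU encoder: z_j = max(0, <h, W_enc[j]>)
    return [max(0.0, sum(hi * wi for hi, wi in zip(h, row))) for row in W_enc]

def sae_decode(z, W_dec):
    d = len(W_dec[0])
    out = [0.0] * d
    for zj, row in zip(z, W_dec):
        for i in range(d):
            out[i] += zj * row[i]
    return out

def debias(h, W_enc, W_dec, bias_feature):
    z = sae_encode(h, W_enc)
    z[bias_feature] = 0.0            # ablate the bias direction; no retraining
    return sae_decode(z, W_dec)

# Toy 2-D example with an identity SAE: feature 1 is the "bias" direction.
W_enc = [[1.0, 0.0], [0.0, 1.0]]
W_dec = [[1.0, 0.0], [0.0, 1.0]]
h = [0.8, 0.5]
h_clean = debias(h, W_enc, W_dec, bias_feature=1)
```

The debiased state keeps the task-relevant component (0.8) and removes only the component along the identified bias feature, which is what makes the approach training-free.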
Breaks the long-standing accuracy-robustness trade-off in VLMs by localizing adversarial robustness to shallow layers.
Breaks Assumption arxiv | Mar 16
A flow-based navigation policy that achieves zero-shot sim-to-real transfer across wheeled, quadrupedal, and humanoid platforms.
New Capability arxiv | Mar 16
A small-scale molecular reasoning model that outperforms ultra-large foundation models via structured chain-of-thought and RL.
Paradigm Shift arxiv | Mar 16
Adaptive VLM Routing reduces inference costs for Computer Use Agents by up to 78% with negligible accuracy loss.
Efficiency Breakthrough arxiv | Mar 16
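The routing logic behind such savings can be sketched as confidence-gated escalation: send every step to a cheap VLM first and pay for the expensive one only when the cheap model is unsure. Costs, the confidence threshold, and the toy models are illustrative assumptions, not the paper's setup.

```python
# Hypothetical adaptive router for a computer-use agent: escalate a step to
# the expensive VLM only when the cheap VLM's self-reported confidence is low.

def route(steps, cheap, expensive, threshold=0.8):
    total_cost, answers = 0.0, []
    for step in steps:
        ans, conf, cost = cheap(step)
        total_cost += cost
        if conf < threshold:          # escalate uncertain steps only
            ans, _, cost = expensive(step)
            total_cost += cost
        answers.append(ans)
    return answers, total_cost

# Toy models: cheap is confident on 'easy' steps, unsure on 'hard' ones.
cheap = lambda s: ("click", 0.95 if s == "easy" else 0.5, 1.0)
expensive = lambda s: ("click", 0.99, 10.0)

answers, cost = route(["easy", "easy", "hard", "easy"], cheap, expensive)
```

On this toy trace the router spends 14 cost units versus 40 for always using the expensive model, a 65% saving, because only the one hard step escalates.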
Distills a 2B Vision-Language Retriever into a 70M text-only encoder for visual document retrieval with 50x lower latency.
Efficiency Breakthrough arxiv | Mar 16
Reveals that 'reasoning' gains in fine-tuned LLMs may be artifacts of task familiarity rather than improved capability.
Breaks Assumption arxiv | Mar 16
MotionAnymesh automatically transforms static 3D meshes into simulation-ready, articulated digital twins for robotics using vision-language models grounded in physical priors.
New Capability arxiv | Mar 16
ThinkStream introduces a 'Watch-Think-Speak' paradigm for video reasoning that allows models to incrementally update understanding and decide when to respond in real-time.
Paradigm Shift arxiv | Mar 16
This paper presents an exact federated unlearning protocol for foundation models that is pointwise identical to centralized retraining but uses fixed-size messages.
Breaks Assumption arxiv | Mar 16