SeriesFusion
Science, curated & edited by AI

AI & Machine Learning

2,371 papers  ·  Page 36 of 48

Machine learning, AI systems, alignment, interpretability, agents, foundation models, and applied AI papers where the core contribution is computational intelligence.

Practical Magic
Imagine a cell tower on wheels that literally follows you around with a camera just to make sure your bars never drop.
Mar 20
Nature Is Weird
After 90 years of scratching their heads, mathematicians finally proved that 'Quantum Logic' isn't just a mess—it actually works.
Mar 20
Paradigm Challenge
Perfectly syncing clocks across the world is actually impossible because of physics, so things like Leap Seconds are basically just a polite lie.
Mar 20
Breaks Assumption
Large Language Models can perfectly reconstruct training data that alignment strictly prevents them from expressing through standard generation.
Mar 20
Efficiency Breakthrough
MineDraft achieves a 75% throughput increase in speculative decoding by overlapping the drafting and verification stages.
Mar 20
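The MineDraft blurb only names the idea of overlapping drafting and verification; the paper's actual scheduler is not described here. A minimal toy sketch of the overlap, assuming stand-in `draft` and `verify` functions in place of real draft and target models, with the next draft launched on a worker thread while the current chunk is verified:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def draft(chunk):
    time.sleep(0.01)  # stand-in for the cheap draft model
    return [f"tok{chunk}-{i}" for i in range(4)]

def verify(tokens):
    time.sleep(0.01)  # stand-in for the expensive target model
    return tokens     # this toy accepts every drafted token

def overlapped(n_chunks):
    accepted = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(draft, 0)
        for chunk in range(n_chunks):
            tokens = pending.result()
            # Kick off the next draft on the worker while the main
            # thread verifies the current chunk: the two stages overlap.
            if chunk + 1 < n_chunks:
                pending = pool.submit(draft, chunk + 1)
            accepted += verify(tokens)
    return accepted

print(len(overlapped(3)))  # 3 chunks of 4 accepted tokens -> 12
```

With equal draft and verify costs, the overlap roughly halves wall-clock time versus running the stages back to back; the paper's 75% figure will depend on details this sketch omits.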
Paradigm Shift
A geometric fix for Rotary Positional Embeddings (RoPE) allows Transformers to generalize to long inputs out-of-the-box by preserving 'sink token' functionality.
Mar 20
New Capability
Engineered modularity via per-layer supervision solves the 'Hydra effect,' allowing for the surgical control of specific model behaviors.
Mar 20
Breaks Assumption
Naive multi-agent routing based on self-reported quality scores creates a 'provenance paradox' and performs worse than random selection.
Mar 20
New Capability
NANOZK enables verifiable LLM inference with 70x smaller proofs and 24ms verification time using a novel layerwise decomposition.
Mar 20
Scaling Insight
Extreme neural network sparsification causes a catastrophic interpretability collapse even when global accuracy remains stable.
Mar 20
Paradigm Shift
A synthesizable RTL implementation of Predictive Coding allows for fully distributed, non-backprop learning directly in hardware.
Mar 20
Paradigm Shift
Dynamic constraints using an 'online refiner' resolve the conflict between stability and performance in Reinforcement Learning Fine-Tuning (RFT).
Mar 20
Efficiency Breakthrough
Q-Drift corrects quantization-induced noise in diffusion models using a plug-and-play sampler adjustment that requires only 5 calibration runs.
Mar 20
Efficiency Breakthrough
Achieves depth-independent training memory bounded to approximately twice the inference footprint.
Mar 20
New Capability
Solves the problem of 'co-firing' conflicts in probabilistic ML routing systems using temperature-scaled softmax partitioning.
Mar 20
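The blurb names temperature-scaled softmax partitioning without detailing it; a toy sketch of the underlying mechanism, under the assumption that 'co-firing' means two near-tied routes both clearing an activation threshold, which a sharpened (low-temperature) softmax suppresses:

```python
import math

def softmax(scores, temperature=1.0):
    # Temperature-scaled softmax: lowering the temperature sharpens the
    # distribution, pulling probability mass onto the single best route.
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fired(scores, temperature, threshold=0.4):
    # A route "fires" when its scaled probability clears the threshold.
    return [i for i, p in enumerate(softmax(scores, temperature)) if p >= threshold]

scores = [2.0, 1.9, 0.1]               # two near-tied routing scores
print(fired(scores, temperature=1.0))  # blunt softmax: both near-ties fire -> [0, 1]
print(fired(scores, temperature=0.2))  # sharp softmax: conflict resolved -> [0]
```

The threshold and temperature values here are illustrative, not from the paper.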
Efficiency Breakthrough
A decoder-free world model that trains 1.59x faster than DreamerV3 while outperforming it on tasks with small, task-relevant objects.
Mar 20
Paradigm Shift
Uses Pearl's do-operator to automatically discover and mask irrelevant state dimensions in Reinforcement Learning.
Mar 20
Efficiency Breakthrough
Fixes the 'squeezing effect' in Direct Preference Optimization (DPO) using an efficient logit-space Sharpness-Aware Minimization.
Mar 20
Breaks Assumption
Demonstrates that safety alignment is a routing mechanism, not a knowledge filter, rendering current refusal-based benchmarks ineffective.
Mar 20
Paradigm Shift
Fine-tunes Vision-Language Models using raw images alone by using a text-to-image model as a cycle-consistency reward.
Mar 20
Efficiency Breakthrough
PreSCAN predicts NeRF reconstruction quality in under 30 seconds, achieving a 1000x speedup over Neural Architecture Search.
Mar 20
Scaling Insight
This paper provides theoretical proof that autocurriculum—where a model selects its own training problems—requires exponentially fewer reasoning demonstrations.
Mar 20
Breaks Assumption
FaithSteer-BENCH reveals that inference-time steering often creates 'illusory' control that collapses under minor prompt perturbations.
Mar 20
New Capability
MemArchitect introduces a governance layer that decouples memory lifecycle management from LLM weights to prevent 'zombie memories.'
Mar 20
Breaks Assumption
A systematic study finds that mechanistic interpretability methods fail to correct model errors even when internal representations are 98% accurate.
Mar 20
Paradigm Shift
PowerFlow uses GFlowNets to replace heuristic rewards in unsupervised fine-tuning, allowing practitioners to explicitly tune models for either logic or creativity.
Mar 20
Breaks Assumption
This study identifies 'Visual Sycophancy' in VLMs, where models detect visual truths internally but hallucinate incorrect answers to satisfy user expectations.
Mar 20
New Capability
LLM agents can now autonomously re-identify anonymous individuals by combining sparse, non-identifying cues with public data.
Mar 20
New Capability
VISTA decouples hypothesis generation from prompt rewriting, sidestepping the local optima and black-box behavior of current automatic prompt optimizers.
Mar 20
Efficiency Breakthrough
TopoChunker maps documents to a Structured Intermediate Representation (SIR) to preserve hierarchical context during RAG chunking.
Mar 20
New Capability
TARo introduces a learnable token-level router that steers frozen LLMs toward structured reasoning at test-time without retraining.
Mar 20
Efficiency Breakthrough
AFBS-BO automates the discovery of layer-specific sparse attention hyperparameters, making long-context acceleration 'plug-and-play.'
Mar 20
Scaling Insight
The 'Progressive Intensity Hypothesis' establishes that weaker perturbations (pruning) should precede stronger ones (quantization) for optimal joint model compression.
Mar 20
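The 'Progressive Intensity Hypothesis' item states only the ordering claim: apply the weaker perturbation (pruning) before the stronger one (quantization). A toy sketch of that pipeline with hypothetical magnitude pruning and uniform quantization, not the paper's actual procedure:

```python
def prune(weights, keep_ratio=0.5):
    # Weaker perturbation first: zero out the smallest-magnitude weights.
    k = max(1, int(len(weights) * keep_ratio))
    cutoff = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= cutoff else 0.0 for w in weights]

def quantize(weights, step=0.25):
    # Stronger perturbation second: snap survivors to a coarse uniform grid.
    # Reversing the order would snap the small weights first, distorting
    # the magnitude ranking the pruner relies on.
    return [round(w / step) * step for w in weights]

w = [0.9, -0.05, 0.4, 0.02, -0.7, 0.1]
print(quantize(prune(w, keep_ratio=0.5), step=0.25))
# -> [1.0, 0.0, 0.5, 0.0, -0.75, 0.0]
```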
Paradigm Shift
AS2 achieves a fully differentiable neuro-symbolic bridge by replacing discrete solvers with a soft, continuous approximation of the Answer Set Programming operator.
Mar 20
Efficiency Breakthrough
Discounted Beta-Bernoulli (DBB) reward estimation solves the variance collapse and sample inefficiency inherent in point-estimation RLVR methods for LLM reasoning.
Mar 20
New Capability
AcceRL introduces a fully asynchronous, decoupled RL framework for Vision-Language-Action (VLA) models that integrates a plug-and-play world model.
Mar 20
Breaks Assumption
Multimodal LLMs suffer from a 'cognitive mismatch' where they succeed at complex reasoning while failing at basic discrete symbol recognition.
Mar 20
Paradigm Shift
Standard decoding strategies (top-k, nucleus) create a 'truncation blind spot' by systematically excluding human-like, low-probability token choices.
Mar 20
Efficiency Breakthrough
EntropyCache achieves up to 26x speedup for Diffusion Language Models by using decoded token entropy as a proxy for KV cache staleness.
Mar 20
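The EntropyCache teaser gives only the core proxy: use the entropy of decoded token distributions to judge KV cache staleness. A minimal sketch of that signal, assuming a simple threshold rule (the paper's actual gating policy is not described in the blurb):

```python
import math

def entropy(probs):
    # Shannon entropy (in nats) of a decoded token distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

def should_refresh(probs, threshold=1.0):
    # Heuristic: a confident (low-entropy) decode suggests the cached
    # keys/values are still representative; high entropy flags staleness.
    return entropy(probs) > threshold

confident = [0.97, 0.01, 0.01, 0.01]   # entropy ~0.17 nats
uncertain = [0.25, 0.25, 0.25, 0.25]   # entropy = ln 4 ~1.39 nats
print(should_refresh(confident), should_refresh(uncertain))  # False True
```

Skipping cache refreshes on confident steps is where the claimed speedup would come from; the 26x figure depends on implementation details beyond this sketch.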
Efficiency Breakthrough
AIMER provides a calibration-free criterion for expert pruning in MoE models that matches state-of-the-art performance in seconds.
Mar 20
Scaling Insight
Mechanistic analysis of 'counting circuits' in VLMs allows for lightweight interventions that improve general visual reasoning performance.
Mar 20
New Capability
Generative 3D world models are used to scale Sim-to-Real reinforcement learning for robot Vision-Language-Action (VLA) models.
Mar 20
Efficiency Breakthrough
DDPO addresses the 'overthinking' and 'overconfidence' issues in Large Reasoning Models (LRMs) by optimizing answer length based on task difficulty.
Mar 20
Scaling Insight
Synthetic data scaling reaches a new level by moving from simple rephrasing to creating 'megadocs' through rationale insertion and stitching.
Mar 20
Paradigm Shift
SINDy-KANs combine Kolmogorov-Arnold Networks with Sparse Identification of Non-linear Dynamics to create parsimonious, interpretable models.
Mar 20
Open Release
SpecForge provides an open-source framework and high-quality draft models (SpecBundle) to make speculative decoding production-ready.
Mar 20
Breaks Assumption
The legally mandated right to be forgotten (unlearning) can be weaponized as an adversarial attack surface to collapse model accuracy.
Mar 20
New Capability
Learning to Self-Evolve (LSE) trains LLMs to explicitly improve their own context at test-time via reinforcement learning.
Mar 20
Open Release
OpenT2M is a massive open-source motion dataset (2,800+ hours) that addresses the data starvation in text-to-motion generation.
Mar 20
Paradigm Shift
REST transforms the zero-shot object-navigation problem from simple waypoint selection to a tree-of-paths reasoning process.
Mar 20