SeriesFusion
Science, curated & edited by AI

AI & Machine Learning

2,371 papers  ·  Page 34 of 48

Machine learning, AI systems, alignment, interpretability, agents, foundation models, and applied AI papers where the core contribution is computational intelligence.

Efficiency Breakthrough
Introduces a streaming detection head that stops Large Reasoning Models (LRMs) from 'overthinking' redundant steps.
Mar 24
Paradigm Shift
Proposes a test-time scaling paradigm for image restoration that allows compute-to-quality trade-offs during inference.
Mar 24
Open Release
Releases the hardware design and training environment for MEVIUS2, an open-source, Spot-scale quadruped robot.
Mar 24
Breaks Assumption
Proves that 'topic-matched' contrast pairs are ineffective for extracting refusal directions in LLM abliteration research.
Mar 24
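For context on the abliteration entry above: the standard technique it critiques extracts a "refusal direction" as the difference of mean activations between contrasting prompt sets, then projects it out. A minimal numpy sketch of that baseline method, with purely synthetic activations (all names and data here are illustrative, not from the paper):

```python
import numpy as np

def refusal_direction(harmful_acts, harmless_acts):
    """Difference-of-means refusal direction, as used in LLM abliteration.

    harmful_acts / harmless_acts: (n, d) residual-stream activations
    collected at one layer for the two contrasting prompt sets.
    """
    direction = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def ablate(acts, direction):
    """Project the refusal direction out of each activation vector."""
    return acts - np.outer(acts @ direction, direction)

# Toy check: after ablation, activations have no component along the direction.
rng = np.random.default_rng(0)
harmful = rng.normal(1.0, 1.0, size=(64, 16))
harmless = rng.normal(0.0, 1.0, size=(64, 16))
d = refusal_direction(harmful, harmless)
cleaned = ablate(harmful, d)
assert np.allclose(cleaned @ d, 0.0, atol=1e-8)
```

The paper's claim concerns which contrast pairs feed this pipeline: topic-matched pairs fail to isolate the refusal component.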
Scaling Insight
Provides a strictly controlled comparison of autoregressive vs. masked diffusion language models on identical compute budgets.
Mar 24
New Capability
Ensures safe Vision-Language Model generation without over-refusal by steering activations within the null-space of benign inputs.
Mar 24
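The null-space steering idea above can be sketched concretely: restrict the safety-steering vector to directions orthogonal to all benign activations, so benign inputs are unaffected and over-refusal is avoided. A minimal numpy illustration under that assumption (the function name and data are hypothetical, not from the paper):

```python
import numpy as np

def nullspace_steering(benign_acts, steer, rank_tol=1e-10):
    """Project a steering vector into the (approximate) null space of a
    matrix of benign activations, so steering cannot move benign inputs."""
    # Right-singular vectors with ~zero singular values span the null space.
    _, s, vt = np.linalg.svd(benign_acts, full_matrices=True)
    rank = int(np.sum(s > rank_tol))
    null_basis = vt[rank:]                       # (d - rank, d)
    return null_basis.T @ (null_basis @ steer)

rng = np.random.default_rng(1)
benign = rng.normal(size=(8, 32))   # 8 benign activations in 32 dims
v = rng.normal(size=32)             # raw steering vector
v_null = nullspace_steering(benign, v)
# Benign activations are orthogonal to the projected steering vector,
# so adding it leaves benign representations unchanged along it.
assert np.allclose(benign @ v_null, 0.0, atol=1e-8)
```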
Paradigm Shift
Identifies that the direction of log-probability change is more critical than magnitude for improving LLM reasoning via RL.
Mar 24
New Capability
Integrates LLMs as closed-loop tuning experts for manufacturing robots, achieving a 0% failure rate on complex 3D printing tasks.
Mar 24
Efficiency Breakthrough
Reduces the token count of Stable Diffusion 3.5 by 4x for high-resolution generation with minimal fine-tuning.
Mar 24
Breaks Assumption
Provides causal evidence that LLMs use internal confidence signals to drive behavioral decisions like abstention, rather than just as a side-effect of output generation.
Mar 24
Paradigm Shift
Identifies 'Visual Anchor Collapse' in DPO-aligned VLMs and introduces an asymmetric constraint to prevent models from ignoring visual evidence in favor of language priors.
Mar 24
Efficiency Breakthrough
A predictive scheduling system for multi-agent workflows that optimizes serving across heterogeneous LLM clusters (mixing large and small models).
Mar 24
Breaks Assumption
Introduces 'Noise Titration' to prove that current time-series foundation models often fail at structural inference, behaving instead as 'context parrots' during non-stationary shifts.
Mar 24
New Capability
Integrates auction bids and monetization logic directly into generative recommender systems (like TIGER) via bid-aware decoding.
Mar 24
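The bid-aware decoding entry above admits a simple sketch: during generative recommendation, each candidate item's decoding logit is boosted by a term derived from its auction bid, trading relevance against monetization. A toy numpy version under that assumption (the blending rule and `alpha` weight are illustrative, not the paper's exact formulation):

```python
import numpy as np

def bid_aware_logits(logits, bids, alpha=0.5):
    """Blend relevance with monetization at decode time: each candidate
    item token gets a boost of alpha * log(bid); zero-bid items are untouched."""
    boost = np.where(bids > 0, alpha * np.log(np.clip(bids, 1e-12, None)), 0.0)
    return logits + boost

# Toy: two items with equal relevance; the one carrying a bid wins decoding.
logits = np.array([2.0, 2.0, 0.1])
bids = np.array([0.0, 3.0, 0.0])    # only item 1 has an auction bid
adjusted = bid_aware_logits(logits, bids)
assert int(np.argmax(adjusted)) == 1
```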
New Capability
MemDLM embeds a simulated denoising process into training to create 'Parametric Memory,' narrowing the train-inference gap for Diffusion Language Models.
Mar 24
Open Release
An open foundation suite for universal dexterous robot control trained on over 50k trajectories across eight different robotic hand architectures.
Mar 24
Paradigm Shift
Bypasses Reinforcement Learning during the exploration phase by using uncertainty-guided tree search to discover informative data.
Mar 24
Efficiency Breakthrough
Enables high-rank (r=384) DoRA training on single GPUs through factored norms and fused Triton kernels.
Mar 24
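For reference on the DoRA entry above: DoRA decomposes the adapted weight W0 + BA into a learned per-column magnitude and a unit-norm direction, and the column normalization is what gets expensive at high rank. A minimal numpy sketch of the baseline decomposition the paper optimizes (this shows the reference math only, not the factored-norm or Triton-kernel machinery):

```python
import numpy as np

def dora_forward(W0, A, B, m, x):
    """DoRA forward pass: the adapted weight V = W0 + B @ A is split into a
    learned per-column magnitude m and a unit-norm direction V / ||V||_c."""
    V = W0 + B @ A                          # (out, in) adapted weight
    col_norm = np.linalg.norm(V, axis=0)    # (in,) column-wise norms
    W = m * V / col_norm                    # rescale each column to magnitude m
    return W @ x

rng = np.random.default_rng(2)
d_out, d_in, r = 6, 4, 2
W0 = rng.normal(size=(d_out, d_in))
B, A = rng.normal(size=(d_out, r)), rng.normal(size=(r, d_in))
m = np.linalg.norm(W0 + B @ A, axis=0)      # init m to current column norms
x = rng.normal(size=d_in)
y = dora_forward(W0, A, B, m, x)
# With m initialized to the column norms, DoRA reproduces (W0 + BA) @ x.
assert np.allclose(y, (W0 + B @ A) @ x)
```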
Efficiency Breakthrough
Introduces a parallel reasoning mechanism for Vision-Language-Action (VLA) models that eliminates the latency bottleneck of autoregressive Chain-of-Thought.
Mar 24
Paradigm Shift
UNITE enables single-stage joint training of the tokenizer and the diffusion model from scratch, removing the need for frozen VAEs.
Mar 24
Efficiency Breakthrough
A training-free feature caching framework that achieves 2.3x speedup for video world models while maintaining 99.4% quality.
Mar 24
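The training-free caching entry above follows a common pattern: skip recomputing a block's features when its input has barely drifted since the last real compute, and return the cached features instead. A minimal sketch of that pattern (the drift criterion and `tol` threshold here are illustrative assumptions, not the paper's specific mechanism):

```python
import numpy as np

class FeatureCache:
    """Training-free cache: recompute a block's features only when its
    input has drifted more than `tol` (relative L2) since the last compute."""
    def __init__(self, fn, tol=0.05):
        self.fn, self.tol = fn, tol
        self._x = None
        self._y = None
        self.hits = self.misses = 0

    def __call__(self, x):
        if self._x is not None:
            drift = np.linalg.norm(x - self._x) / (np.linalg.norm(self._x) + 1e-12)
            if drift < self.tol:
                self.hits += 1
                return self._y          # reuse stale-but-close features
        self.misses += 1
        self._x, self._y = x.copy(), self.fn(x)
        return self._y

# Toy usage: a cheap stand-in for an expensive denoiser block.
cache = FeatureCache(lambda x: x * 2.0, tol=0.1)
x = np.ones(8)
y1 = cache(x)
y2 = cache(x + 0.001)   # tiny drift  -> cache hit, returns cached features
y3 = cache(x + 10.0)    # large drift -> recompute
assert cache.hits == 1 and cache.misses == 2
```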
New Capability
A transformer-based meta-amortized framework that allows simulation-based inference to remain valid across different model structures without retraining.
Mar 24
Paradigm Shift
LassoFlexNet matches or beats leading tree-based models on tabular data while maintaining Lasso-like interpretability through per-feature embeddings and a group Lasso mechanism.
Mar 24
Breaks Assumption
Proves that rotation-invariant algorithms like standard Gradient Descent are fundamentally suboptimal for sparse targets when trained on hard labels.
Mar 24
New Capability
A grid-free probabilistic framework for nonrigid registration of high-dimensional vector-valued functions on irregular manifolds.
Mar 24
Efficiency Breakthrough
A unified discrete diffusion framework that outperforms autoregressive models on large-scale discrete generation tasks for the first time.
Mar 24
Paradigm Challenge
The math we've used for 50 years to figure out how fast the internet should be is actually missing a giant piece of the puzzle.
Mar 23
Nature Is Weird
You can get a whole crowd to agree on something even if everyone only knows what the person right next to them is thinking.
Mar 23
Nature Is Weird
Over 10% of new medical papers are being written by AI now; three years ago, that number was zero.
Mar 23
Practical Magic
We can now spot Alzheimer's early by looking at the brain like a building that’s literally buckling under the weight of toxic sludge.
Mar 23
Nature Is Weird
Massive wealth gaps might just be a math problem: if you always pick the better of two random options, inequality is basically guaranteed.
Mar 23
Paradigm Shift
Introduces a statistical alternative to the standard frequency-based BPE tokenization used in nearly all modern LLMs.
Mar 23
Scaling Insight
Discovers a multiplicative scaling law governing how LLMs revise their beliefs during iterative reasoning (CoT, reflection).
Mar 23
Efficiency Breakthrough
Achieves state-of-the-art LLM distillation using 10-25% of the data required by standard fine-tuning.
Mar 23
Paradigm Shift
Formally proves that a causal Transformer is mathematically equivalent to a stateless Differentiable Neural Computer.
Mar 23
Efficiency Breakthrough
Accelerates MoE inference by speculating future experts to overlap CPU-GPU memory transfers with computation.
Mar 23
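The speculative MoE entry above is about overlap: while layer t computes, guess which experts the next step will route to and start their CPU-to-GPU copies early, so the transfer hides behind compute. A toy simulation of the idea, using a deliberately naive heuristic (reuse the previous step's top-k experts) that is an illustrative assumption, not the paper's predictor:

```python
def speculative_prefetch(routing_trace, k=2):
    """Simulate speculative expert prefetch for an offloaded MoE.

    routing_trace: sequence of top-k expert-id tuples, one per step.
    Speculation heuristic (toy): prefetch the previous step's experts.
    Returns the fraction of needed expert loads already in flight.
    """
    hits = 0.0
    for prev, cur in zip(routing_trace, routing_trace[1:]):
        prefetched = set(prev[:k])
        hits += sum(e in prefetched for e in cur[:k]) / k
    return hits / (len(routing_trace) - 1)

# Sticky routing overlaps well; the middle transition is a partial miss.
trace = [(0, 1), (0, 1), (0, 2), (0, 2)]
rate = speculative_prefetch(trace, k=2)
assert abs(rate - 2.5 / 3) < 1e-9
```

In a real system the misses are what force compute to stall on memory transfers, which is why a better-than-naive expert predictor pays off.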
New Capability
A self-improvement framework (MIPO) that improves LLM personalization and reasoning with zero additional data or human labels.
Mar 23
Efficiency Breakthrough
Achieves 97% of oracle reward performance using only 20% of the training labels for complex LLM reasoning.
Mar 23
Efficiency Breakthrough
The first Joint Embedding Predictive Architecture (JEPA) to train stably end-to-end from raw pixels with massive planning speedups.
Mar 23
Paradigm Shift
Solves the compositional generalization failure of neural networks (0% to 100% accuracy) by embedding algebraic semiring constraints.
Mar 23
Scaling Insight
A massive controlled study reveals that post-training algorithm rankings (DPO, SimPO, etc.) completely invert as models scale.
Mar 23
Efficiency Breakthrough
DAPA speeds up GELU computation by 16x and reduces hardware DSP utilization by 16x for on-device Transformer deployment.
Mar 23
Efficiency Breakthrough
Spectral Tempering achieves near-oracle embedding compression for dense retrieval without requiring any labeled data or grid searching.
Mar 23
Paradigm Shift
Challenges the 80-year-old assumption that neurons must use weighted summation as their primary aggregation mechanism.
Mar 23
Efficiency Breakthrough
Empirically proves that most Transformer layers are redundant, enabling a 54% training cost reduction through non-uniform budget allocation.
Mar 23
Efficiency Breakthrough
Warm-Start Flow Matching provides a guaranteed speedup for image/text generation by using lightweight models as initial priors.
Mar 23
New Capability
VAMPO optimizes visual dynamics in video models using policy gradients to fix precision-critical errors in robotic manipulation.
Mar 23
Breaks Assumption
Debunks recent 'evaluation awareness' findings in LLMs by showing that linear probes are actually just tracking formatting artifacts.
Mar 23
Paradigm Shift
Introduces Hyperagents: self-referential systems where the meta-level modification logic is itself an editable program.
Mar 23
Efficiency Breakthrough
Adaptive Layerwise Perturbation (ALP) solves the training-inference mismatch and importance ratio blowup in LLM reinforcement learning.
Mar 23