SeriesFusion
Science, curated & edited by AI

AI & Machine Learning

2,557 papers  ·  Page 39 of 52

Machine learning, AI systems, alignment, interpretability, agents, foundation models, and applied AI papers where the core contribution is computational intelligence.

Efficiency Breakthrough
Reduces long-context inference latency by 26.4x using a training-free, structure-aware prompt compression framework.
Mar 23
New Capability
Boosts open-model agent performance on web navigation tasks from 6.4% to 43%, surpassing proprietary models like GPT-4o.
Mar 23
Breaks Assumption
Proves that intuitive task similarity is a poor predictor of training data value for MLLMs and offers a highly accurate training-free alternative.
Mar 23
Paradigm Shift
Enables zero-shot humanoid robot interaction by generating robot-centric 'dream' videos instead of relying on human-to-robot motion retargeting.
Mar 23
Efficiency Breakthrough
Introduces the first reinforcement learning framework to compress implicit reasoning steps in looped language models.
Mar 23
Paradigm Shift
Replaces fixed context compression ratios with a performance-floor constraint to ensure reliable LLM deployment.
Mar 23
Efficiency Breakthrough
Achieves O(1) time complexity for dense component attribution in SwiGLU Transformers using a single forward-backward pass.
Mar 23
New Capability
First unified pipeline to reconstruct complete geometry, materials, and lighting from sparse views in under one second.
Mar 23
New Capability
Introduces the first inherently scalable primitive for radiance fields, allowing real-time Level-of-Detail (LOD) rendering by simply truncating Fourier coefficients.
Mar 23
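The truncation idea has a simple 1D analogue: a signal stored as Fourier coefficients can be decoded at any level of detail just by evaluating fewer of them. A minimal NumPy sketch of that idea (a 1D stand-in for the paper's radiance-field primitive, which is not reproduced here):

```python
import numpy as np

def fourier_reconstruct(coeffs, xs):
    """Evaluate a real signal from complex Fourier coefficients at points xs in [0, 1)."""
    ks = np.arange(len(coeffs))
    # Sum of c_k * exp(2*pi*i*k*x); take the real part for a real-valued signal.
    return np.real(coeffs[None, :] * np.exp(2j * np.pi * xs[:, None] * ks[None, :])).sum(axis=1)

# A test signal with energy at several frequencies.
xs = np.linspace(0.0, 1.0, 256, endpoint=False)
signal = np.sin(2 * np.pi * xs) + 0.3 * np.sin(2 * np.pi * 7 * xs)

coeffs = np.fft.fft(signal) / len(signal)    # full coefficient set

full = fourier_reconstruct(coeffs, xs)       # finest level of detail
coarse = fourier_reconstruct(coeffs[:4], xs) # truncated: coarser and cheaper LOD
```

Rendering cost scales with the number of coefficients kept, which is what makes truncation a natural level-of-detail knob.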
Paradigm Shift
FIPO overcomes reasoning length stagnation in LLMs by using Future-KL divergence to create dense rewards, extending Chain-of-Thought lengths to over 10,000 tokens.
Mar 23
Efficiency Breakthrough
A training-free method to fix intra-modal misalignment in CLIP by decomposing projectors into an isotropic aligned subspace.
Mar 23
Efficiency Breakthrough
NASimJax provides a 100x throughput increase for autonomous penetration testing simulators by reimplementing the environment in JAX.
Mar 23
New Capability
SCRL introduces the first negative supervision mechanism for Test-Time Reinforcement Learning, preventing LLMs from reinforcing 'consensus lies'.
Mar 23
Efficiency Breakthrough
SAGE achieves state-of-the-art translation for low-resource languages while reducing training data requirements by 97.1% via RL-guided curation.
Mar 23
Efficiency Breakthrough
Memori reduces agent token costs by 20x by replacing raw conversation history with a persistent layer of semantic triples and summaries.
Mar 23
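A minimal sketch of the triple-store idea behind such memory layers: keep (subject, predicate, object) facts and rebuild a compact context on demand instead of replaying the whole transcript. All names here are hypothetical illustrations, not Memori's actual API:

```python
class TripleMemory:
    """Toy persistent memory: semantic triples instead of raw conversation history."""

    def __init__(self):
        self.triples = []  # list of (subject, predicate, object) strings

    def add(self, subject, predicate, obj):
        self.triples.append((subject, predicate, obj))

    def recall(self, keyword):
        """Return only triples mentioning the keyword, not the full history."""
        kw = keyword.lower()
        return [t for t in self.triples if any(kw in part.lower() for part in t)]

    def as_context(self, keyword):
        # Compact prompt context: a few triples instead of the raw transcript.
        return "; ".join(" ".join(t) for t in self.recall(keyword))

mem = TripleMemory()
mem.add("user", "prefers", "dark mode")
mem.add("user", "lives in", "Berlin")
mem.add("order #123", "status", "shipped")

context = mem.as_context("user")  # far fewer tokens than replaying every turn
```

The token savings come from the retrieval step: only the handful of relevant facts re-enter the prompt, regardless of how long the conversation has run.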
Efficiency Breakthrough
2K Retrofit enables 2K-resolution inference for any 3D geometric foundation model without modifying or retraining the backbone.
Mar 23
New Capability
X-World is a controllable, action-conditioned multi-camera world model that simulates realistic future video observations for end-to-end driving.
Mar 23
Paradigm Shift
Breaking the 'capability ceiling' in LLM post-training by replacing full-history dependencies with explicit Markov states.
Mar 23
Efficiency Breakthrough
A k-means variant that is up to 7x faster than FAISS and scikit-learn on CPUs and 4x faster than cuVS on GPUs.
Mar 23
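The entry does not say which algorithmic changes the variant makes, so for context, here is the plain Lloyd iteration that optimized k-means implementations like these accelerate:

```python
import numpy as np

def lloyd_kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means: the baseline that optimized variants speed up."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest center for every point.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Update step: mean of each cluster (keep the old center if a cluster empties).
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Two well-separated blobs; Lloyd's iteration should recover them cleanly.
data_rng = np.random.default_rng(1)
X = np.vstack([data_rng.normal(0.0, 0.5, size=(50, 2)),
               data_rng.normal(10.0, 0.5, size=(50, 2))])
centers, labels = lloyd_kmeans(X, k=2)
```

The assignment step's full pairwise distance matrix is where most of the cost lives, which is typically what CPU/GPU-optimized variants restructure.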
Efficiency Breakthrough
Reduces the computational cost of Neural Architecture Search for ensembles from O(M) to O(1).
Mar 23
New Capability
Enables LLMs to explore beyond their current distribution during RL by treating failed trajectories as hindsight guidance.
Mar 23
Paradigm Shift
Identifies 'critical times' in diffusion generation where targeted guidance pulses significantly improve image control.
Mar 23
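One way to picture "critical-time" guidance is a scale schedule that pulses inside a chosen window instead of staying constant across all denoising steps. The window placement and values below are illustrative assumptions, not the paper's:

```python
def pulsed_guidance_scale(t, base=1.5, pulse=6.0, windows=((0.55, 0.70),)):
    """Guidance scale as a function of normalized diffusion time t in [0, 1].

    Boost guidance only inside hypothetical 'critical' windows; everywhere
    else, fall back to a mild base scale.
    """
    for lo, hi in windows:
        if lo <= t <= hi:
            return pulse
    return base

# Inside a sampler loop this would be used roughly as:
#   scale = pulsed_guidance_scale(step / num_steps)
#   noise = uncond + scale * (cond - uncond)   # classifier-free guidance combine
```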
Breaks Assumption
Exposes fundamental flaws in using LLM-based agents to evaluate automated interpretability and model circuits.
Mar 23
New Capability
Replaces unstable free-form recursive LLM code with a typed functional runtime grounded in lambda-calculus.
Mar 23
Paradigm Shift
Derives a variational ELBO for the Joint-Embedding Predictive Architecture (JEPA), unifying it with generative modeling.
Mar 23
New Capability
Enables zero-shot, directed protein generation by applying a simple scalar bias to stochastic attention samplers.
Mar 23
Breaks Assumption
Demonstrates that LLM reasoning capabilities drop sharply when tasks are framed within multi-turn dialogues rather than presented as isolated benchmarks.
Mar 23
New Capability
A comprehensive end-to-end workflow for humanoid loco-manipulation that standardizes sim-to-real transfer.
Mar 23
Efficiency Breakthrough
Quantifies LLM uncertainty in a single generation pass without auxiliary models or repeated sampling.
Mar 23
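The general recipe for single-pass uncertainty is to reuse the token distributions the model already computes during generation. A hedged NumPy sketch of per-token predictive entropy (the paper's exact estimator may differ):

```python
import numpy as np

def token_entropies(logits):
    """Per-token predictive entropy (nats) from one generation pass's logits.

    logits: array of shape (seq_len, vocab_size). No auxiliary model or
    repeated sampling needed -- just the distributions already computed.
    """
    z = logits - logits.max(axis=-1, keepdims=True)        # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)  # softmax
    return -(p * np.log(np.clip(p, 1e-12, None))).sum(axis=-1)

confident = np.array([[10.0, 0.0, 0.0, 0.0]])  # peaked distribution: low entropy
unsure = np.array([[0.0, 0.0, 0.0, 0.0]])      # uniform distribution: max entropy
```

A uniform distribution over V tokens yields the maximum entropy log(V), giving a natural per-token uncertainty scale.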
Breaks Assumption
Demonstrates that current 'faithfulness' metrics for Chain-of-Thought reasoning are highly subjective and vary wildly depending on the choice of classifier.
Mar 23
Efficiency Breakthrough
Introduces a long-horizon video agent that uses 93% fewer frames than GPT-5 and standalone LMMs while achieving higher accuracy.
Mar 23
Efficiency Breakthrough
Provides a robust method for distilling discrete diffusion models that maintains quality and diversity even with very few sampling steps.
Mar 23
Breaks Assumption
Reveals that 'learned priors' in inverse problems often behave as simple lookup tables that memorize training data rather than learning distributions.
Mar 23
Paradigm Shift
Integrates Kolmogorov-Arnold Networks (KANs) into causal generative modeling to produce human-readable symbolic structural equations.
Mar 23
New Capability
An autonomous AI agent that executes end-to-end theoretical and computational physics research, including hypothesis testing and discovery.
Mar 23
Cosmic Scale
Low-orbit satellites just got scary good—they can pinpoint your location within an inch in basically a heartbeat.
Mar 20
Practical Magic
Imagine a cell tower on wheels that literally follows you around with a camera just to make sure your bars never drop.
Mar 20
Nature Is Weird
After 90 years of scratching their heads, mathematicians finally proved that 'Quantum Logic' isn't just a mess—it actually works.
Mar 20
Paradigm Challenge
Perfectly syncing clocks across the world is actually impossible because of physics, so things like Leap Seconds are basically just a polite lie.
Mar 20
Breaks Assumption
Large Language Models can perfectly reconstruct training data that they are strictly aligned never to express in standard generation.
Mar 20
Efficiency Breakthrough
MineDraft achieves a 75% throughput increase in speculative decoding by overlapping the drafting and verification stages.
Mar 20
Paradigm Shift
A geometric fix for Rotary Positional Embeddings (RoPE) allows Transformers to generalize to long inputs out-of-the-box by preserving 'sink token' functionality.
Mar 20
New Capability
Engineered modularity via per-layer supervision solves the 'Hydra effect,' allowing for the surgical control of specific model behaviors.
Mar 20
Breaks Assumption
Naive multi-agent routing based on self-reported quality scores results in a 'provenance paradox' that performs worse than random selection.
Mar 20
New Capability
NANOZK enables verifiable LLM inference with 70x smaller proofs and 24ms verification time using a novel layerwise decomposition.
Mar 20
Scaling Insight
Extreme neural network sparsification causes a catastrophic interpretability collapse even when global accuracy remains stable.
Mar 20
Paradigm Shift
A synthesizable RTL implementation of Predictive Coding allows for fully distributed, non-backprop learning directly in hardware.
Mar 20
Paradigm Shift
Dynamic constraints using an 'online refiner' resolve the conflict between stability and performance in Reinforcement Learning Fine-Tuning (RFT).
Mar 20
Efficiency Breakthrough
Q-Drift corrects quantization-induced noise in diffusion models using a plug-and-play sampler adjustment that requires only 5 calibration runs.
Mar 20
Efficiency Breakthrough
Achieves depth-independent training memory bounded by approximately twice the inference footprint.
Mar 20