SeriesFusion
Science, curated & edited by AI

AI & Machine Learning

2,371 papers  ·  Page 28 of 48

Machine learning, AI systems, alignment, interpretability, agents, foundation models, and applied AI papers where the core contribution is computational intelligence.

New Capability
Restores editable, semantically layered structures from flattened vector graphics (SVGs/icons) by using generative completion to recover occluded geometries.
Mar 26
Efficiency Breakthrough
MoE-Sieve reduces Mixture-of-Experts LoRA fine-tuning parameters and training time by ~70% by only adapting the most-frequently activated 'hot' experts.
Mar 26
New Capability
Identifies that 'attention imbalance' across modalities and tokens drives object hallucinations and proposes a decoding-time rectification (AIR) to fix it.
Mar 26
New Capability
SOMA provides a plug-and-play memory and orchestration system that increases Vision-Language-Action (VLA) robot success rates by over 50% without fine-tuning.
Mar 26
Breaks Assumption
LLMpedia exposes a massive gap in LLM factuality by generating 1M articles from parametric memory, revealing that actual knowledge retrieval is 15%+ lower than multiple-choice benchmarks suggest.
Mar 26
Breaks Assumption
Proves that RLHF and DPO alignment cause 'response homogenization,' which effectively breaks standard sampling-based uncertainty estimation methods.
Mar 26
Paradigm Shift
Formalizes 'likelihood hacking,' a failure mode where RL-trained models learn to generate unnormalized probabilistic programs to artificially inflate rewards.
Mar 26
Efficiency Breakthrough
Achieves up to 400x speedup and 64x memory reduction for open-vocabulary 3D scene understanding compared to current Gaussian Splatting methods.
Mar 26
Efficiency Breakthrough
Enables 1000x faster on-chip training for Weightless Neural Networks (WNNs) on FPGAs with drastically lower power consumption.
Mar 26
Scaling Insight
Provides a systematic blueprint for scaling Reinforcement Learning (RL) in LLMs using multi-turn synthetic data generation and difficulty-based curricula.
Mar 26
Paradigm Shift
A model-agnostic framework to boost time-series forecasting by aligning internal representations with those of pretrained foundation models.
Mar 26
New Capability
Breaks the resolution and aspect ratio barriers of image diffusion models, enabling the generation of consistent 32K resolution images.
Mar 26
Paradigm Shift
Unifies input and predicted meshes under a shared topological framework to enable high-fidelity 3D reconstruction with sharp features.
Mar 26
Open Release
Releases a high-quality, 92K-sentence parallel dataset for Hindi-Sanskrit translation focusing on contemporary and spoken language.
Mar 26
Paradigm Shift
Quantifies an emergent 'self' in robots as an invariant subnetwork that persists across continual learning of variable tasks.
Mar 26
New Capability
Applies reinforcement learning with a cycle-consistency reward to drastically improve natural language to Lean4 autoformalization.
Mar 26
Efficiency Breakthrough
A 5M-parameter OCR model that rivals billion-parameter vision-language models, proving data-centric curation can beat raw parameter scale.
Mar 26
New Capability
Reformulates molecular discovery as an autonomous MCTS planning problem over executable chemical operations rather than just similarity-based prediction.
Mar 26
Scaling Insight
Identifies a 'critical threshold' in human-AI symbiosis beyond which human capability collapses abruptly and irreversibly due to over-delegation.
Mar 26
Paradigm Shift
Moves automated research from stateless linear pipelines to a persistent Research World Model that maintains a self-correcting knowledge graph of gaps and methods.
Mar 26
Efficiency Breakthrough
Achieves high-fidelity sub-seasonal weather forecasting with a 276M parameter model that matches 1.6B parameter baselines in accuracy and speed.
Mar 26
Open Release
Releases 55 hours of continuous 30fps expert human computer-use videos to address the 'missing ingredient' for desktop automation agents.
Mar 26
Paradigm Shift
Introduces a 'sorry-driven' formal decomposition that allows LLM agents to solve complex proofs by isolating and independently verifying subgoals.
Mar 26
Breaks Assumption
Reveals that self-distillation degrades out-of-distribution reasoning by suppressing 'epistemic verbalization' (the model's expression of uncertainty).
Mar 26
Paradigm Shift
Enforces hard incompressibility constraints in neural operators using spectral Leray projection, ensuring physically admissible fluid simulations.
Mar 26
New Capability
An autonomous agentic pipeline discovered novel white-box adversarial attacks that outperform existing methods by up to 300%.
Mar 26
Efficiency Breakthrough
Agentic Variation Operators (AVO) replace fixed evolutionary heuristics with coding agents to discover GPU kernels that outperform FlashAttention-4 by 10.5%.
Mar 26
New Capability
UI-Voyager achieves an 81.0% success rate on AndroidWorld, exceeding human-level performance in mobile GUI automation.
Mar 26
Paradigm Shift
LensWalk introduces a 'reason-plan-observe' loop that allows agents to dynamically control the temporal sampling and density of the videos they analyze.
Mar 26
Paradigm Shift
The Free-Market Algorithm (FMA) is a zero-parameter metaheuristic that discovers complex pathways in chemistry and economics through emergent supply-and-demand dynamics.
Mar 26
Open Release
VFIG enables high-fidelity conversion of rasterized technical figures into editable, scalable SVGs using a new 66K-pair dataset.
Mar 26
Paradigm Shift
MARCH eliminates 'LLM-as-a-judge' confirmation bias by using information asymmetry to force verification agents to check claims without seeing the original response.
Mar 26
Efficiency Breakthrough
DreamerAD accelerates imagination-based training for autonomous driving by 80x, compressing 100-step diffusion sampling down to a single step.
Mar 26
Efficiency Breakthrough
The Multilevel Euler-Maruyama (ML-EM) method allows diffusion models to perform sampling at the computational cost of a single model evaluation.
Mar 26
New Capability
Wasserstein Parallel Transport provides a formal framework for counterfactual prediction in evolving probability distributions.
Mar 26
Paradigm Challenge
So there’s this new AI researcher that’s actually starting to fact-check real math papers and point out exactly where the professors messed up.
Mar 25
Paradigm Challenge
Get this: only about 10% of the computer code used in those fancy Nature papers actually works if you try to run it yourself.
Mar 25
Practical Magic
Researchers figured out they could trick a robot into handing someone a knife instead of an apple using nothing but a printed drink coaster.
Mar 25
Nature Is Weird
Your AI assistant’s 'brain' can be secretly messed with by random emails in your inbox, changing how it treats you without you ever knowing.
Mar 25
Practical Magic
Imagine wireless internet that's actually as fast as a physical cable—no lag, no matter how many devices the signal bounces through.
Mar 25
Breaks Assumption
Effective semantic alignment for low-resource languages can be achieved with only 10,000 noisy synthetic pairs, matching the performance of models trained on 1 million samples.
Mar 25
Paradigm Shift
Mechanistic interpretability reveals that LLMs possess 'affect reception' circuits that detect emotional content even when explicit keywords are removed.
Mar 25
Efficiency Breakthrough
Sparse Feature Attention (SFA) reduces attention costs from quadratic in sequence length and linear in dimension to a fraction based on feature sparsity, enabling 2.5x speedups.
Mar 25
Scaling Insight
hidden states in LLMs occupy a Riemannian submanifold where tokens are Voronoi regions, revealing a universal 'hourglass' intrinsic dimension profile across all tested models.
Mar 25
Breaks Assumption
Forcing AI agents to use human-comprehensible language causes a 50% efficiency drop compared to their own 'inscrutable' communication protocols.
Mar 25
Efficiency Breakthrough
Standard quantization destroys the small parameter 'deltas' that encode post-training knowledge; Delta-Aware Quantization (DAQ) fixes this by optimizing for sign preservation.
Mar 25
Efficiency Breakthrough
Hybrid Associative Memory (HAM) layers allow the KV cache to grow dynamically based only on information that an internal RNN cannot predict.
Mar 25
New Capability
Small adapters can provide frozen decoder-only LLMs with persistent latent-space memory that survives across separate sessions.
Mar 25
Scaling Insight
The standard 'Chinchilla Approach 2' for fitting scaling laws is systematically biased, potentially leading to millions of dollars in wasted compute at frontier scales.
Mar 25
Paradigm Shift
Gradient boosting exhibits a 'first-mover bias' where correlated features selected early in the tree sequence gain an artificial, self-reinforcing importance in SHAP rankings.
Mar 25