SeriesFusion
Science, curated & edited by AI

AI & Machine Learning

2,557 papers  ·  Page 43 of 52

Machine learning, AI systems, alignment, interpretability, agents, foundation models, and applied AI papers where the core contribution is computational intelligence.

Breaks Assumption
Concept erasure in text-to-image models is largely a facade that can be bypassed using text-free inversion attacks.
Mar 19
Paradigm Shift
LLMs compute and cache confidence scores automatically during answer generation, well before they are prompted to verbalize them.
Mar 19
Efficiency Breakthrough
ProbeFlow achieves 14.8x faster action decoding in Vision-Language-Action (VLA) models without any retraining.
Mar 19
New Capability
DebugLM allows developers to trace an LLM's specific behaviors back to individual training data sources.
Mar 19
Paradigm Shift
The distance between human languages can now be measured quantitatively using the attention mechanisms of multilingual transformers.
Mar 19
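How such a distance could be computed is worth making concrete. A minimal sketch, assuming a stock multilingual BERT and an L2 gap between per-layer attention averages as the metric; both are illustrative choices, since the entry above does not specify the paper's actual model or metric:

```python
# Illustrative sketch: compare attention statistics of a multilingual
# transformer on a sentence and its translation. Model choice and the
# per-layer-mean "profile" are assumptions, not the paper's method.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased",
                                  output_attentions=True)

def attention_profile(text: str) -> torch.Tensor:
    """Mean attention per layer -> a (num_layers,) profile vector."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"))
    # out.attentions holds one (1, heads, seq, seq) tensor per layer
    return torch.stack([a.mean() for a in out.attentions])

def language_distance(sentence: str, translation: str) -> float:
    """Toy metric: L2 gap between the two attention profiles."""
    return torch.dist(attention_profile(sentence),
                      attention_profile(translation)).item()

print(language_distance("The cat sleeps.", "Die Katze schläft."))
```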
Breaks Assumption
Large Language Models can maintain performance with only 16-64 unique weight values per matrix, as only the relative rank of weights matters.
Mar 19
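The claim implies a rank-preserving codebook: as long as larger weights stay larger, the exact values barely matter. A minimal sketch using quantile bucketing, which is an assumed construction rather than necessarily the paper's:

```python
# Illustrative sketch: keep only n_levels unique values per weight matrix
# while preserving the relative ordering of weights. Quantile bucketing is
# an assumed codebook construction, not necessarily the paper's.
import numpy as np

def rank_quantize(W: np.ndarray, n_levels: int = 16) -> np.ndarray:
    flat = W.ravel()
    order = np.argsort(flat)                    # sort indices = weight ranks
    out = np.empty_like(flat)
    for bucket in np.array_split(order, n_levels):
        out[bucket] = flat[bucket].mean()       # one shared value per bucket
    return out.reshape(W.shape)

W = np.random.randn(1024, 1024).astype(np.float32)
Wq = rank_quantize(W, n_levels=16)
print(np.unique(Wq).size)                        # 16 distinct values
print(np.corrcoef(W.ravel(), Wq.ravel())[0, 1])  # correlation stays high
```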
Efficiency Breakthrough
Parallel multi-token prediction can be achieved in standard LLMs without training auxiliary models or modifying weights.
Mar 19
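One known way to get parallel multi-token prediction from an unmodified causal LM is Jacobi-style fixed-point decoding; whether this paper uses that mechanism is an assumption. A minimal sketch against a Hugging Face-style model interface:

```python
# Illustrative sketch of Jacobi-style parallel decoding with an unmodified
# Hugging Face causal LM: guess k future tokens, refine all of them in one
# forward pass, and repeat until the guesses stop changing.
import torch

@torch.no_grad()
def jacobi_decode(model, prompt_ids: torch.Tensor, k: int = 8,
                  max_iters: int = 10) -> torch.Tensor:
    guess = torch.zeros(1, k, dtype=torch.long)   # arbitrary initial guesses
    for _ in range(max_iters):
        ids = torch.cat([prompt_ids, guess], dim=1)
        logits = model(ids).logits                # one parallel pass
        # positions P-1 .. P+k-2 predict the k guessed tokens
        new = logits[:, prompt_ids.size(1) - 1 : -1].argmax(-1)
        if torch.equal(new, guess):               # fixed point: all k accepted
            return new
        guess = new
    return guess
```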
Efficiency Breakthrough
CARE provides a recipe for converting standard GQA models into high-efficiency Multi-head Latent Attention (MLA) architectures.
Mar 19
Efficiency Breakthrough
VideoAtlas enables navigation and reasoning over long-form video using compute that scales only logarithmically with video length.
Mar 19
New Capability
Enforces formal safety and Signal Temporal Logic (STL) constraints on robotics foundation models without retraining.
Mar 19
Efficiency Breakthrough
MUD provides a faster, lower-overhead alternative to Muon for transformer training, achieving up to 2.6x higher throughput.
Mar 19
Efficiency Breakthrough
LoST introduces a semantic-first 3D tokenizer that reduces the token count for 3D shape generation by up to 99.9%.
Mar 19
Paradigm Shift
AgentFactory shifts agent evolution from unreliable textual 'reflections' to a library of verifiable, executable Python subagents.
Mar 19
New Capability
SkeletonLLM allows frozen Multimodal LLMs to reason about human motion by rendering skeleton sequences into their native visual modality.
Mar 19
Paradigm Shift
DAPS++ reinterprets diffusion inverse problems as a decoupled EM-style initialization, significantly increasing restoration speed and stability.
Mar 19
New Capability
Motion-MLLM integrates IMU egomotion data into Video-LLMs to solve the fundamental scale and spatial reasoning ambiguities of purely visual models.
Mar 19
Scaling Insight
Provides the first theoretical proof that Graph Transformers structurally prevent the 'oversmoothing' failure mode inherent to deep GCNs.
Mar 19
First Ever
Imagine an AI virus that doesn't just sit there: it copies itself and jumps from one AI to the next, all on its own.
Mar 18
Practical Magic
A new VR headset uses mirrors to kill the nausea-inducing display lag.
Mar 18
Nature Is Weird
These tiny sliding antennas physically reposition themselves to give you a solid signal in the spots where your phone usually dies.
Mar 18
Practical Magic
New AI can peer into a computer chip's microscopic internals to find covert "spy tech" hidden by untrusted manufacturers.
Mar 18
Practical Magic
Researchers built a "ghost mode" for robots that computes the exact path to move through an environment without being seen.
Mar 18
Paradigm Challenge
Turns out the long lines at airport security have been quietly protecting the whole U.S. flight network from cascading failures for the last decade.
Mar 18
Efficiency Breakthrough
RSM achieves 20x faster training for recursive reasoning models and enables test-time scaling for up to 20,000 refinement steps.
Mar 18
Scaling Insight
A factorial study on EHR foundation models reveals that joint encoding of code-attribute pairs (local binding) is the primary driver of performance and efficiency.
Mar 18
Paradigm Shift
Alternating Reinforcement Learning with Rubric Rewards (ARL-RR) replaces brittle scalar reward aggregation with a semantic meta-class optimization framework.
Mar 18
Breaks Assumption
Self-reflective program search matches or outperforms recursive language models for long-context tasks, suggesting recursion itself is not the primary driver of performance.
Mar 18
New Capability
Dynamic Representational Circuit Breaking (DRCB) introduces an architectural defense against steganographic collusion in multi-agent RL by monitoring and shuffling latent communication bottlenecks.
Mar 18
Breaks Assumption
Theoretical and empirical evidence suggests that the 'Key' mechanism in attention may be redundant, motivating a 'QV' paradigm that simplifies Transformer architectures.
Mar 18
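The entry does not spell out what key-free attention looks like, so the following is one plausible instantiation, not the paper's formulation: tie the keys to the queries so the score matrix becomes Q Qᵀ.

```python
# Illustrative 'QV' attention: drop the key projection and reuse the queries
# in both roles, so scores = Q @ Q^T. One plausible reading of a key-free
# attention; the paper's actual formulation may differ.
import torch
import torch.nn.functional as F

def qv_attention(x, w_q, w_v):
    q, v = x @ w_q, x @ w_v                            # no key projection
    scores = q @ q.transpose(-2, -1) / q.size(-1) ** 0.5
    return F.softmax(scores, dim=-1) @ v

x = torch.randn(16, 64)
w_q, w_v = torch.randn(64, 64), torch.randn(64, 64)
print(qv_attention(x, w_q, w_v).shape)                 # torch.Size([16, 64])
```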
Paradigm Shift
Atlas introduces 'Compiled Memory,' which rewrites an agent's system prompt with distilled task experience rather than using RAG or fine-tuning.
Mar 18
New Capability
Latent Posterior Factors (LPF) bridge neural representations with structured probabilistic reasoning by converting VAE posteriors into factors for Sum-Product Networks.
Mar 18
Scaling Insight
Spectral Edge Dynamics (SED) provides an early-warning signal for grokking, predicting generalization up to 1,700 steps before it occurs.
Mar 18
Paradigm Shift
Transition Flow Matching learns a global transition flow rather than local velocity fields, enabling single-step generation and transfer to arbitrary future time points.
Mar 18
Breaks Assumption
Robot policy performance can be improved by up to 60% by identifying a single 'golden ticket' constant noise vector instead of sampling from a Gaussian.
Mar 18
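A minimal sketch of how such a "golden ticket" could be found, assuming a diffusion-style policy whose action decoding consumes a noise vector; `policy` and `eval_rollout` are hypothetical stand-ins, and the brute-force search is an illustrative choice:

```python
# Illustrative sketch: search a pool of fixed Gaussian vectors and reuse the
# single best one for every rollout, instead of resampling noise each time.
# `policy` and `eval_rollout` are hypothetical stand-ins; the brute-force
# search is an assumed procedure, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def find_golden_noise(policy, eval_rollout, noise_dim: int,
                      n_candidates: int = 64, n_trials: int = 10):
    candidates = rng.standard_normal((n_candidates, noise_dim))
    scores = [np.mean([eval_rollout(policy, z) for _ in range(n_trials)])
              for z in candidates]                 # avg success per candidate
    return candidates[int(np.argmax(scores))]      # the 'golden ticket'

# At deployment, every action uses the same fixed vector:
#   action = policy.decode(obs, noise=z_star)
```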
Paradigm Shift
Simulation Distillation (SimDist) enables rapid sim-to-real adaptation by transferring reward and value models directly into a latent world model.
Mar 18
Scaling Insight
Demonstrates that massive scaling of diverse simulator resets can replace manual curriculum engineering for complex dexterous manipulation.
Mar 18
Efficiency Breakthrough
Reduces high-quality 3D head avatar creation time from over 24 hours to 0.5 seconds per frame.
Mar 18
Breaks Assumption
Reveals that models with identical predictive performance produce fundamentally different feature attributions based solely on their hypothesis class.
Mar 18
Paradigm Shift
Introduces a privacy-preserving ML framework that achieves strong non-invertibility without the utility loss of Differential Privacy or the cost of Homomorphic Encryption.
Mar 18
Efficiency Breakthrough
Fuses categorical sampling into the LM-head matmul to eliminate logit materialization and speed up LLM decoding by up to 19%.
Mar 18
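The mechanism hinted at above can be illustrated with the Gumbel-max trick: taking the argmax of logits plus i.i.d. Gumbel noise yields an exact sample from softmax(logits), and an argmax can be tracked as a running maximum over vocabulary tiles so the full logit vector is never materialized. A NumPy sketch standing in for the paper's fused GPU kernel; the tile-wise loop is an assumption about how the fusion works:

```python
# Illustrative sketch of logit-free sampling via the Gumbel-max trick:
# argmax(logits + Gumbel noise) is an exact sample from softmax(logits),
# and the argmax can be tracked as a running maximum over vocabulary
# tiles. Tile-wise NumPy stands in for the paper's fused GPU kernel.
import numpy as np

rng = np.random.default_rng(0)

def fused_sample(h: np.ndarray, W: np.ndarray, tile: int = 4096) -> int:
    """Sample a token id from softmax(W @ h) one vocab tile at a time."""
    best_val, best_idx = -np.inf, -1
    for start in range(0, W.shape[0], tile):
        logits = W[start:start + tile] @ h            # only a tile in memory
        y = logits - np.log(-np.log(rng.random(logits.shape)))  # + Gumbel
        i = int(y.argmax())
        if y[i] > best_val:
            best_val, best_idx = y[i], start + i
    return best_idx

h = rng.standard_normal(512, dtype=np.float32)
W = rng.standard_normal((50_000, 512), dtype=np.float32)  # mock LM head
print(fused_sample(h, W))
```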
Paradigm Shift
Analyses over 10,000 experiments, showing that LLM agents are capable of genuine architectural discovery rather than mere hyperparameter tuning.
Mar 18
Breaks Assumption
Provides empirical evidence that structural sparsity in Vision Transformers does not lead to improved semantic interpretability.
Mar 18
New Capability
Demonstrates a complete AI-assisted mathematical research loop where a mathematician wrote zero lines of formal code to verify complex physics equilibria.
Mar 18
New Capability
Integrates LLM agents with the industry-standard Rosetta software to automate physics-based protein design for non-canonical amino acids.
Mar 18
Breaks Assumption
Releases 70B parameter models that operate entirely on bytes, effectively 'liberating' LLMs from static tokenizers.
Mar 18
Scaling Insight
Derives closed-form power-law scaling for hyperparameters like learning rate and batch size using modern optimization theory rather than expensive empirical sweeps.
Mar 18
Paradigm Shift
Introduces per-token adapter routing, allowing a single sequence to dynamically utilize multiple specialized LoRA experts.
Mar 18
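A minimal sketch of per-token adapter routing, assuming top-1 routing over a stack of LoRA experts; the class and the hard-argmax router are illustrative simplifications (a trainable gate would need a differentiable or pre-trained routing rule):

```python
# Illustrative per-token adapter routing: a linear router picks a LoRA
# expert for each token, so one sequence can mix specialists.
import torch
import torch.nn as nn

class PerTokenLoRA(nn.Module):
    def __init__(self, d: int, r: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(d, n_experts)
        self.A = nn.Parameter(torch.randn(n_experts, d, r) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_experts, r, d))  # zero-init delta

    def forward(self, x: torch.Tensor) -> torch.Tensor:      # x: (seq, d)
        expert = self.router(x).argmax(-1)        # top-1 expert per token
        A, B = self.A[expert], self.B[expert]     # (seq, d, r), (seq, r, d)
        delta = torch.einsum("sd,sdr,sre->se", x, A, B)  # per-token (x A) B
        return x + delta

layer = PerTokenLoRA(d=64, r=8, n_experts=4)
print(layer(torch.randn(16, 64)).shape)           # torch.Size([16, 64])
```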
Breaks Assumption
Provides the first formal proof that safety is non-compositional, meaning two individually safe AI agents can become hazardous when combined.
Mar 18
New Capability
Enables the prediction of an adapter's task, performance, and attributes directly from its LoRA weights without any inference or data access.
Mar 18
Paradigm Shift
Finds that filtering knowledge at 'write-time' (ingestion) maintains 100% RAG accuracy under noise levels where standard 'read-time' filtering completely collapses.
Mar 18
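The write-time versus read-time contrast can be made concrete with a toy pipeline; `is_reliable`, `index`, and `llm` are hypothetical stand-ins for whatever verifier, vector store, and model the paper actually uses:

```python
# Illustrative contrast between write-time and read-time filtering in RAG.
# All three components below are hypothetical stand-ins.

def ingest(corpus, index, is_reliable):
    """Write-time filtering: noisy documents never enter the index."""
    for doc in corpus:
        if is_reliable(doc):
            index.add(doc)

def answer(query, index, llm, k=5):
    """Retrieval needs no per-query noise handling: the index is vetted."""
    return llm(query, context=index.search(query, k))

# The read-time alternative indexes everything and tries to reject noise
# per query; the paper reports that collapsing under heavy noise.
```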