SeriesFusion
Science, curated & edited by AI

New Capability

333 papers  ·  Page 5 of 7

Papers where something becomes possible that previously was not. New techniques, new instruments, new model behaviors, new measurements at a frontier.

AI
Enables LLMs to explore beyond their current distribution during RL by treating failed trajectories as hindsight guidance.
Mar 23
AI
Replaces unstable free-form recursive LLM code with a typed functional runtime grounded in lambda-calculus.
Mar 23
AI
Enables zero-shot, directed protein generation by applying a simple scalar bias to stochastic attention samplers.
Mar 23
AI
A comprehensive end-to-end workflow for humanoid loco-manipulation that standardizes sim-to-real transfer.
Mar 23
AI
An autonomous AI agent that executes end-to-end theoretical and computational physics research, including hypothesis testing and discovery.
Mar 23
AI
Engineered modularity via per-layer supervision solves the 'Hydra effect,' allowing for the surgical control of specific model behaviors.
Mar 20
AI
NANOZK enables verifiable LLM inference with 70x smaller proofs and 24ms verification time using a novel layerwise decomposition.
Mar 20
AI
Solves the problem of 'co-firing' conflicts in probabilistic ML routing systems using temperature-scaled softmax partitioning.
Mar 20
AI
MemArchitect introduces a governance layer that decouples memory lifecycle management from LLM weights to prevent 'zombie memories.'
Mar 20
AI
LLM agents can now autonomously re-identify anonymous individuals by combining sparse, non-identifying cues with public data.
Mar 20
AI
VISTA decouples hypothesis generation from prompt rewriting to overcome the local optima and black-box behavior of current automatic prompt optimizers.
Mar 20
AI
TARo introduces a learnable token-level router that steers frozen LLMs toward structured reasoning at test-time without retraining.
Mar 20
AI
AcceRL introduces a fully asynchronous, decoupled RL framework for Vision-Language-Action (VLA) models that integrates a plug-and-play world model.
Mar 20
AI
Generative 3D world models are used to scale Sim-to-Real reinforcement learning for robot Vision-Language-Action (VLA) models.
Mar 20
AI
Learning to Self-Evolve (LSE) trains LLMs to explicitly improve their own context at test-time via reinforcement learning.
Mar 20
AI
AFS-Search introduces a training-free closed-loop framework to solve spatial grounding errors in diffusion models like FLUX.1.
Mar 20
AI
Introduces Action Applicability Policy Optimization to train MLLMs to strategically construct and update visual aids to solve geometry problems.
Mar 20
AI
Introduces explicit spatial tokens (segmentation/depth) into the autoregressive sequence of LVLMs to enable precise 3D/2D grounding.
Mar 20
AI
Automates the entire robot training pipeline by using video generation models as motion priors to synthesize both simulation environments and expert trajectories.
Mar 20
AI
Enables privacy-preserving cross-model inference by using homomorphic encryption and linear alignment to map representations between independently trained LLMs.
Mar 20
AI
A black-box monitoring system that uses behavioral 'fingerprints' to detect silent updates or identity shifts in LLM API endpoints.
Mar 20
AI
Provides the first rigorous error certification for Physics-Informed Neural Networks (PINNs), bridging the gap between empirical residual loss and actual solution guarantees.
Mar 20
AI
Uses Sparse Autoencoders (SAEs) to prove that Vision-Language-Action models learn steerable motion primitives rather than just memorized sequences.
Mar 20
AI
Introduces the first discrete generation model capable of handling high-dimensional (768-1024 dims) representation tokens.
Mar 20
AI
Enables continuous Level of Detail (LoD) for 3D Gaussian Splatting without the typical trade-off in full-capacity rendering quality.
Mar 20
AI
Minimum-Action Learning achieves a 10,000x reduction in noise variance for symbolic physical law identification from observational data.
Mar 19
AI
Learns task-specific dense reward functions directly from images using vision foundation models, without requiring privileged simulator states.
Mar 19
AI
Introduces HopChain, a framework for synthesizing multi-hop vision-language reasoning data that yields generalizable gains across 20+ diverse benchmarks.
Mar 19
AI
Leverages cross-lingual inconsistencies to pinpoint exactly which experts in a Mixture-of-Experts (MoE) model store specific factual knowledge.
Mar 19
AI
Proposes REAL, a Reinforcement Learning framework tailored for regression and ordinal scoring rather than simple binary accuracy.
Mar 19
AI
Introduces a framework for LLM agents to autonomously evolve their policies and skill libraries during system idle time without retraining downtime.
Mar 19
AI
Automates the generation of synthetic machine learning challenges to train agents that genuinely learn research skills by doing.
Mar 19
AI
Enables reliable, training-free emotion steering in speech-generative audio models via direct manipulation of specific emotion-sensitive neurons.
Mar 19
AI
A framework to quantify and fix 'task steerability,' the common failure of robots to respond to new instructions mid-task.
Mar 19
AI
Proposes a world model that jointly generates appearance and binocular geometry using an epipolar-aware attention mechanism.
Mar 19
AI
Introduces a paradigm for vision-language navigation that uses ubiquitously available semantic floor plans as global spatial priors.
Mar 19
AI
Embeds invisible, agent-specific 'watermarks' into token distributions to enable forensic attribution and topology reconstruction in multi-agent systems.
Mar 19
AI
Reduces hallucinations by teaching models 'epistemological humility'—the ability to admit they don't know something—using synthetic non-existent terms.
Mar 19
AI
Introduces a Prompt-Free Universal Region Proposal Network (PF-RPN) that identifies objects in any domain without needing text or image exemplars.
Mar 19
AI
FrescoDiffusion enables coherent, 4K image-to-video generation using a training-free, tiled diffusion method with precomputed latent priors.
Mar 19
AI
Introduces a framework to generate complex, non-linear environments with mathematically guaranteed ground-truth optimal policies for RL benchmarking.
Mar 19
AI
VectorWorld enables stable, real-time 1km+ closed-loop world model rollouts for autonomous driving using diffusion flow on vector graphs.
Mar 19
AI
REAL achieves extreme quadruped parkour agility that is robust even to a 1-meter visual blind zone.
Mar 19
AI
Lifting 2D features into a volumetric representation for robot manipulation policies yields a 14.8% success rate improvement by solving the 2D-3D spatial reasoning mismatch.
Mar 19
AI
DebugLM allows developers to trace an LLM's specific behaviors back to individual training data sources.
Mar 19
AI
Enforces formal safety and Signal Temporal Logic (STL) constraints on robotics foundation models without retraining.
Mar 19
AI
SkeletonLLM allows frozen Multimodal LLMs to reason about human motion by rendering skeleton sequences into their native visual modality.
Mar 19
AI
Motion-MLLM integrates IMU egomotion data into Video-LLMs to solve the fundamental scale and spatial reasoning ambiguities of purely visual models.
Mar 19
AI
Dynamic Representational Circuit Breaking (DRCB) introduces an architectural defense against steganographic collusion in multi-agent RL by monitoring and shuffling latent communication bottlenecks.
Mar 18
AI
Latent Posterior Factors (LPF) bridge neural representations with structured probabilistic reasoning by converting VAE posteriors into factors for Sum-Product Networks.
Mar 18