Papers where something becomes possible that previously was not: new techniques, new instruments, new model behaviors, new measurements at a frontier.
AI
Enables LLMs to explore beyond their current distribution during RL by treating failed trajectories as hindsight guidance.
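The core move can be sketched with HER-style relabeling: a rollout that failed under its original goal is kept as a successful demonstration of the goal it actually reached. A minimal sketch; `Trajectory` and `relabel_with_hindsight` are illustrative names, not the paper's API.

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    goal: str          # goal the rollout was attempting
    outcome: str       # state/answer the rollout actually produced
    reward: float      # task reward under the original goal

def relabel_with_hindsight(traj: Trajectory) -> Trajectory:
    """Treat a failed rollout as an expert trace for the goal it reached."""
    if traj.reward > 0:          # already successful: keep as-is
        return traj
    # The rollout failed its original goal, but it *did* reach traj.outcome,
    # so it is a valid demonstration for that alternative goal.
    return Trajectory(goal=traj.outcome, outcome=traj.outcome, reward=1.0)

failed = Trajectory(goal="prove lemma A", outcome="proved lemma B", reward=0.0)
relabeled = relabel_with_hindsight(failed)
```

The relabeled trace lies outside the policy's current success distribution, which is what gives the exploration signal.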
AI
Replaces unstable free-form recursive LLM code with a typed functional runtime grounded in lambda-calculus.
AI
Enables zero-shot, directed protein generation by applying a simple scalar bias to stochastic attention samplers.
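The scalar-bias mechanism can be illustrated with a plain softmax sampler: add a constant logit bias to entries associated with the desired property before renormalizing. The mask, bias value, and function name below are hypothetical, not the paper's.

```python
import numpy as np

def biased_softmax(logits, property_mask, bias):
    """Shift probability mass toward masked entries via a scalar logit bias."""
    shifted = logits + bias * property_mask
    shifted = shifted - shifted.max()        # numerical stability
    p = np.exp(shifted)
    return p / p.sum()

logits = np.array([1.0, 1.0, 1.0, 1.0])
mask = np.array([0.0, 1.0, 0.0, 0.0])        # entries that increase the property
p_plain = biased_softmax(logits, mask, 0.0)  # uniform baseline
p_biased = biased_softmax(logits, mask, 2.0) # mass shifted toward entry 1
```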
AI
A comprehensive end-to-end workflow for humanoid loco-manipulation that standardizes sim-to-real transfer.
AI
An autonomous AI agent that executes end-to-end theoretical and computational physics research, including hypothesis testing and discovery.
AI
Engineered modularity via per-layer supervision solves the 'Hydra effect,' allowing for the surgical control of specific model behaviors.
AI
NANOZK enables verifiable LLM inference with 70x smaller proofs and 24ms verification time using a novel layerwise decomposition.
AI
Solves the problem of 'co-firing' conflicts in probabilistic ML routing systems using temperature-scaled softmax partitioning.
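One way to read "temperature-scaled softmax partitioning" is that lowering the routing temperature sharpens the distribution over handlers, so one route dominates instead of several co-firing. A toy sketch under that assumption:

```python
import numpy as np

def route_probs(scores, temperature):
    """Softmax routing distribution; lower temperature -> harder partition,
    reducing the chance that multiple handlers 'co-fire' on one input."""
    z = np.asarray(scores, dtype=float) / temperature
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

scores = [2.0, 1.5, 0.1]
soft = route_probs(scores, temperature=1.0)  # overlapping: two routes plausible
hard = route_probs(scores, temperature=0.1)  # near one-hot: a single route wins
```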
AI
MemArchitect introduces a governance layer that decouples memory lifecycle management from LLM weights to prevent 'zombie memories.'
AI
LLM agents can now autonomously re-identify anonymous individuals by combining sparse, non-identifying cues with public data.
AI
VISTA decouples hypothesis generation from prompt rewriting to avoid the local optima and black-box behavior of current automatic prompt optimizers.
AI
TARo introduces a learnable token-level router that steers frozen LLMs toward structured reasoning at test-time without retraining.
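A minimal sketch of the routing shape, assuming a TARo-style setup: a tiny learnable head reads the frozen model's hidden state at each step and selects a reasoning action, while the LLM's own weights never change. Names and actions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = ["decompose", "verify", "answer"]   # hypothetical structured-reasoning ops

class TokenRouter:
    def __init__(self, hidden_dim, n_actions):
        # The router matrix is the only trained parameter set; the LLM is frozen.
        self.W = rng.normal(scale=0.1, size=(hidden_dim, n_actions))

    def route(self, hidden_state):
        logits = hidden_state @ self.W
        return ACTIONS[int(np.argmax(logits))]

router = TokenRouter(hidden_dim=8, n_actions=len(ACTIONS))
h = rng.normal(size=8)        # stand-in for a frozen LLM's per-token hidden state
action = router.route(h)
```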
AI
AcceRL introduces a fully asynchronous, decoupled RL framework for Vision-Language-Action (VLA) models that integrates a plug-and-play world model.
AI
Generative 3D world models are used to scale Sim-to-Real reinforcement learning for robot Vision-Language-Action (VLA) models.
AI
Learning to Self-Evolve (LSE) trains LLMs to explicitly improve their own context at test-time via reinforcement learning.
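The self-improvement loop can be caricatured as reward-gated context editing: propose an edit to the model's own scratchpad, keep it only if a scorer improves. A toy sketch with a stand-in reward, not the paper's training recipe:

```python
def reward(context):
    # Stand-in scorer: structured bullet lines count as useful context.
    return len([line for line in context if line.startswith("- ")])

def self_evolve(context, proposals):
    """Greedily accept context edits that improve the reward (RL in spirit)."""
    best = list(context)
    for prop in proposals:                 # candidate additions to the context
        cand = best + [prop]
        if reward(cand) > reward(best):    # keep only improving edits
            best = cand
    return best

ctx = ["task: summarize"]
evolved = self_evolve(ctx, ["- key fact A", "noise", "- key fact B"])
```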
AI
AFS-Search introduces a training-free closed-loop framework to solve spatial grounding errors in diffusion models like FLUX.1.
AI
Introduces Action Applicability Policy Optimization to train MLLMs to strategically construct and update visual aids to solve geometry problems.
AI
Introduces explicit spatial tokens (segmentation/depth) into the autoregressive sequence of LVLMs to enable precise 3D/2D grounding.
AI
Automates the entire robot training pipeline by using video generation models as motion priors to synthesize both simulation environments and expert trajectories.
AI
Enables privacy-preserving cross-model inference by using homomorphic encryption and linear alignment to map representations between independently trained LLMs.
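The linear-alignment half is easy to sketch (the homomorphic-encryption layer is omitted here): fit a matrix mapping model A's embeddings of shared anchor texts onto model B's, then apply it to unseen representations. The toy below assumes an exactly linear relationship for clarity.

```python
import numpy as np

rng = np.random.default_rng(1)
W_true = rng.normal(size=(4, 4))            # unknown ground-truth alignment
anchors_a = rng.normal(size=(50, 4))        # model A embeddings of anchor texts
anchors_b = anchors_a @ W_true              # model B embeddings (toy: exact linear)

# Least-squares fit of the alignment map from paired anchors.
W, *_ = np.linalg.lstsq(anchors_a, anchors_b, rcond=None)

x_a = rng.normal(size=4)                    # new representation from model A
x_b_pred = x_a @ W                          # mapped into model B's space
```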
AI
A black-box monitoring system that uses behavioral 'fingerprints' to detect silent updates or identity shifts in LLM API endpoints.
AI
Provides the first rigorous error certification for Physics-Informed Neural Networks (PINNs), bridging the gap between empirical residual loss and actual solution guarantees.
AI
Uses Sparse Autoencoders (SAEs) to prove that Vision-Language-Action models learn steerable motion primitives rather than just memorized sequences.
AI
Introduces the first discrete generation model capable of handling high-dimensional (768-1024 dims) representation tokens.
AI
Enables continuous Level of Detail (LoD) for 3D Gaussian Splatting without the typical trade-off in full-capacity rendering quality.
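One plausible reading of continuous LoD: instead of switching between discrete detail tiers, keep a smoothly distance-dependent top fraction of Gaussians by importance. A sketch under that assumption; the scoring and thresholds are made up.

```python
import numpy as np

def lod_mask(importance, distance, full_at=1.0, none_at=100.0):
    """Keep a smoothly varying top-importance fraction of Gaussians."""
    frac = np.clip((none_at - distance) / (none_at - full_at), 0.0, 1.0)
    k = max(1, int(round(frac * importance.size)))
    thresh = np.sort(importance)[-k]         # k-th largest importance
    return importance >= thresh

imp = np.linspace(0, 1, 10)                  # toy per-Gaussian importance scores
near = lod_mask(imp, distance=1.0)           # close camera: all Gaussians kept
far = lod_mask(imp, distance=60.0)           # far camera: only the top fraction
```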
AI
Minimum-Action Learning achieves a 10,000x reduction in noise variance for symbolic physical law identification from observational data.
AI
Learns task-specific dense reward functions directly from images using vision foundation models, without requiring privileged simulator states.
AI
Introduces HopChain, a framework for synthesizing multi-hop vision-language reasoning data that yields generalizable gains across 20+ diverse benchmarks.
AI
Leverages cross-lingual inconsistencies to pinpoint exactly which experts in a Mixture-of-Experts (MoE) model store specific factual knowledge.
AI
Proposes REAL, a Reinforcement Learning framework tailored for regression and ordinal scoring rather than simple binary accuracy.
AI
Introduces a framework for LLM agents to autonomously evolve their policies and skill libraries during system idle time without retraining downtime.
AI
Automates the generation of synthetic machine learning challenges to train agents that genuinely learn research skills by doing.
AI
Enables reliable, training-free emotion steering in speech-generative audio models via direct manipulation of specific emotion-sensitive neurons.
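The neuron-level intervention amounts to scaling a handful of pre-identified activations at inference time, with no training. A minimal sketch; the neuron indices and gain are invented for illustration.

```python
import numpy as np

EMOTION_NEURONS = [2, 5]       # hypothetical neurons correlated with an emotion

def steer(hidden, neuron_ids, gain):
    """Amplify (gain > 1) or suppress (gain < 1) emotion-sensitive neurons."""
    out = hidden.copy()
    out[neuron_ids] *= gain
    return out

hidden = np.ones(8)                          # stand-in hidden-layer activations
steered = steer(hidden, EMOTION_NEURONS, gain=3.0)
```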
AI
A framework to quantify and fix 'task steerability,' the common failure of robots to respond to new instructions while mid-task.
AI
Proposes a world model that jointly generates appearance and binocular geometry using an epipolar-aware attention mechanism.
AI
Introduces a paradigm for vision-language navigation that uses ubiquitously available semantic floor plans as global spatial priors.
AI
Embeds invisible, agent-specific 'watermarks' into token distributions to enable forensic attribution and topology reconstruction in multi-agent systems.
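A Kirchenbauer-style green-list watermark is one concrete way such attribution can work (the paper's scheme may differ): each agent's id seeds a "green" vocabulary subset that generation softly favors, and a detector attributes text by counting green-token hits per agent.

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(100)]      # toy vocabulary

def green_set(agent_id, frac=0.5):
    """Deterministic agent-specific 'green' subset of the vocabulary."""
    seed = int(hashlib.sha256(agent_id.encode()).hexdigest(), 16) % 2**32
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(frac * len(VOCAB))))

def attribute(text_tokens, agent_ids):
    """Forensic attribution: the agent whose green set overlaps the text most."""
    hits = {a: sum(t in green_set(a) for t in text_tokens) for a in agent_ids}
    return max(hits, key=hits.get)

tokens = sorted(green_set("agent-A"))[:30]   # text drawn from agent A's green list
```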
AI
Reduces hallucinations by teaching models 'epistemological humility'—the ability to admit they don't know something—using synthetic non-existent terms.
AI
Introduces a Prompt-Free Universal Region Proposal Network (PF-RPN) that identifies objects in any domain without needing text or image exemplars.
AI
FrescoDiffusion enables coherent, 4K image-to-video generation using a training-free, tiled diffusion method with precomputed latent priors.
AI
Introduces a framework to generate complex, non-linear environments with mathematically guaranteed ground-truth optimal policies for RL benchmarking.
AI
VectorWorld enables stable, real-time 1km+ closed-loop world model rollouts for autonomous driving using diffusion flow on vector graphs.
AI
REAL achieves extreme quadruped parkour agility that is robust even to a 1-meter visual blind zone.
AI
Lifting 2D features into a volumetric representation for robot manipulation policies yields a 14.8% success rate improvement by solving the 2D-3D spatial reasoning mismatch.
AI
DebugLM allows developers to trace an LLM's specific behaviors back to individual training data sources.
AI
Enforces formal safety and Signal Temporal Logic (STL) constraints on robotics foundation models without retraining.
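One standard way to enforce constraints without retraining is a runtime shield: actions proposed by the frozen policy are projected into the safe set before execution. The STL spec here is reduced to a trivial "always stay within bounds" invariant for illustration; this is a sketch, not the paper's method.

```python
def shield(action, lo=-1.0, hi=1.0):
    """Project a proposed action into the safe interval [lo, hi]."""
    return min(max(action, lo), hi)

proposed = 2.7                 # unsafe action from the foundation-model policy
executed = shield(proposed)    # clipped to the boundary of the safe set
```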
AI
SkeletonLLM allows frozen Multimodal LLMs to reason about human motion by rendering skeleton sequences into their native visual modality.
AI
Motion-MLLM integrates IMU egomotion data into Video-LLMs to solve the fundamental scale and spatial reasoning ambiguities of purely visual models.
AI
Dynamic Representational Circuit Breaking (DRCB) introduces an architectural defense against steganographic collusion in multi-agent RL by monitoring and shuffling latent communication bottlenecks.
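The shuffling half of the defense can be sketched as permuting latent message channels between agents, which destroys any covert positional code while task-relevant content is (ideally) learned to be permutation-robust. A toy illustration, not DRCB's full monitor:

```python
import numpy as np

rng = np.random.default_rng(7)

def shuffle_channels(message, rng):
    """Randomly permute the channels of an inter-agent latent message."""
    perm = rng.permutation(message.size)
    return message[perm]

msg = np.arange(8.0)                 # latent message from agent 1 to agent 2
defended = shuffle_channels(msg, rng)
```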
AI
Latent Posterior Factors (LPF) bridge neural representations with structured probabilistic reasoning by converting VAE posteriors into factors for Sum-Product Networks.