AI & ML First Ever

Teaching an AI assistant a cool new trick can backfire badly: the trick itself can end up blabbing your private passwords in the AI's own behind-the-scenes notes.

April 6, 2026

Original Paper

Credential Leakage in LLM Agent Skills: A Large-Scale Empirical Study

Zhihao Chen, Ying Zhang, Yi Liu, Gelei Deng, Yuekang Li, Yanjun Zhang, Jianting Ning, Leo Yu Zhang, Lei Ma, Zhiqiang Li

arXiv · 2604.03070

The Takeaway

The study identifies a major security flaw in the AI skill ecosystem: seemingly helpful third-party tools can expose secrets through ordinary software behavior. Securing AI assistants that rely on such tools will require a fundamental rethink.

From the abstract

Third-party skills extend LLM agents with powerful capabilities but often handle sensitive credentials in privileged environments, making leakage risks poorly understood. We present the first large-scale empirical study of this problem, analyzing 17,022 skills (sampled from 170,226 on SkillsMP) using static analysis, sandbox testing, and manual inspection. We identify 520 vulnerable skills with 1,708 issues and derive a taxonomy of 10 leakage patterns (4 accidental and 6 adversarial). […]
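To give a flavor of what the "static analysis" step of such a pipeline might look like, here is a minimal, hypothetical sketch in Python: it scans a skill's source text for strings that look like hardcoded credentials. The pattern names and regexes are illustrative assumptions, not the paper's actual taxonomy or tooling.

```python
import re

# Illustrative credential signatures; real scanners use far larger rule sets.
CREDENTIAL_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_skill_source(source: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs for suspected hardcoded secrets."""
    findings = []
    for name, pattern in CREDENTIAL_PATTERNS.items():
        for match in pattern.finditer(source):
            findings.append((name, match.group(0)))
    return findings

if __name__ == "__main__":
    # Hypothetical skill snippet with two planted leaks.
    sample = 'API_KEY = "sk_live_abcdefghijklmnop1234"\nbucket_key = "AKIAABCDEFGHIJKLMNOP"'
    for name, text in scan_skill_source(sample):
        print(name, "->", text)
```

A scan like this only catches the accidental, hardcoded-secret end of the taxonomy; the adversarial patterns the paper describes (where a skill deliberately exfiltrates credentials at runtime) are why the study also needed sandbox testing and manual inspection.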