Over 75% of the original words in a sentence can be recovered from its embedding, the abstract vector usually treated as a black box.
April 23, 2026
Original Paper
FLiP: Towards understanding and interpreting multimodal multilingual sentence embeddings
arXiv · 2604.18109
The Takeaway
Sentence embeddings are often treated as opaque mathematical summaries of meaning. This reverse-engineering method shows that the vectors actually retain the specific vocabulary of the input text: the latent space is far more transparent and lexically detailed than commonly assumed. The privacy implications are significant, since sensitive text can be largely reconstructed from its vector form. It also suggests the model is not just distilling meaning but effectively memorizing the input, so embeddings deserve the same security caution as the raw text itself.
From the abstract
This paper presents factorized linear projection (FLiP) models for understanding pretrained sentence embedding spaces. We train FLiP models to recover the lexical content from multilingual (LaBSE), multimodal (SONAR) and API-based (Gemini) sentence embedding spaces in several high- and mid-resource languages. We show that FLiP can recall more than 75% of lexical content from the embeddings, significantly outperforming existing non-factorized baselines. Using this as a diagnostic tool, we uncover
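The excerpt does not spell out FLiP's factorized parameterization, but the baseline idea it outperforms, a plain linear probe mapping an embedding to vocabulary scores and measuring how many of the sentence's words land in the top-ranked predictions, can be sketched on synthetic data. Everything below (the toy embedding construction, dimensions, and recall@k metric) is an illustrative assumption, not the paper's setup:

```python
# Toy sketch of a NON-factorized linear probe for lexical recovery,
# i.e. the kind of baseline FLiP is compared against. All data here is
# synthetic: a "sentence embedding" is simulated as the noisy mean of
# latent word vectors, standing in for LaBSE/SONAR/Gemini embeddings.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim, n_sent = 50, 32, 500

# Latent word vectors and random bag-of-words sentences (~10% density).
word_vecs = rng.normal(size=(vocab_size, dim))
bows = (rng.random((n_sent, vocab_size)) < 0.1).astype(float)

# Simulated sentence embedding: mean of the sentence's word vectors + noise.
counts = np.maximum(bows.sum(axis=1, keepdims=True), 1)
embs = bows @ word_vecs / counts + 0.01 * rng.normal(size=(n_sent, dim))

# Linear probe: least-squares map from embedding space to vocab scores.
W, *_ = np.linalg.lstsq(embs, bows, rcond=None)
scores = embs @ W

# Lexical recall@k: fraction of each sentence's true words that appear
# among the k highest-scoring vocabulary items.
k = 10
topk = np.argsort(-scores, axis=1)[:, :k]
hits = sum(
    len(set(topk[i]) & set(np.flatnonzero(bows[i]))) for i in range(n_sent)
)
recall = hits / bows.sum()
print(f"lexical recall@{k}: {recall:.2f}")
```

Even this crude linear readout recovers most of the toy vocabulary, which is the intuition behind the paper's finding: if a plain linear map already leaks lexical content, a stronger factorized model recovering 75%+ from real embedding spaces is plausible.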