VLA-Mark: A cross modal watermark for large vision-language alignment model
About
Vision-language models demand watermarking solutions that protect intellectual property without compromising multimodal coherence. Existing text watermarking methods disrupt visual-textual alignment through biased token selection and static strategies, leaving semantic-critical concepts vulnerable. We propose VLA-Mark, a vision-aligned framework that embeds detectable watermarks while preserving semantic fidelity through cross-modal coordination. Our approach integrates multiscale visual-textual alignment metrics, combining localized patch affinity, global semantic coherence, and contextual attention patterns, to guide watermark injection without model retraining. An entropy-sensitive mechanism dynamically balances watermark strength and semantic preservation, prioritizing visual grounding during low-uncertainty generation phases. Experiments show 7.4% lower PPL and 26.6% higher BLEU than conventional methods, with near-perfect detection (98.8% AUC). The framework demonstrates 96.1% resilience against attacks such as paraphrasing and synonym substitution while maintaining text-visual consistency, establishing new standards for quality-preserving multimodal watermarking.
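To make the two core ideas concrete, the sketch below illustrates (in PyTorch) how a vision-aligned, entropy-sensitive watermark could be applied at decoding time: candidate tokens are scored by a multiscale alignment proxy, the top-aligned tokens form the "green" list, and the bias added to their logits is scaled by the entropy of the next-token distribution. All function names, weights, and the green-list construction here are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def alignment_scores(token_embeds, patch_embeds, w_local=0.4, w_global=0.4, w_attn=0.2):
    """Toy multiscale visual-text alignment score per vocabulary token:
    best-matching patch affinity (local), similarity to the mean patch
    embedding (global), and an attention-weighted affinity. The weights
    and the combination rule are illustrative guesses."""
    token_embeds = F.normalize(token_embeds, dim=-1)   # (V, d)
    patch_embeds = F.normalize(patch_embeds, dim=-1)   # (P, d)
    sims = token_embeds @ patch_embeds.T               # (V, P)
    local = sims.max(dim=-1).values                    # localized patch affinity
    global_ = token_embeds @ F.normalize(patch_embeds.mean(0), dim=-1)
    attn = (sims * F.softmax(sims, dim=-1)).sum(-1)    # crude attention proxy
    return w_local * local + w_global * global_ + w_attn * attn

def watermark_logits(logits, align, gamma=0.5, delta_max=2.0):
    """Entropy-sensitive watermark injection: the green list favours
    visually grounded tokens, and the logit bias delta shrinks when the
    next-token distribution is low-entropy (i.e. when visual grounding
    should dominate) and grows when the model is uncertain."""
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum()
    norm_entropy = entropy / torch.log(torch.tensor(float(logits.numel())))
    delta = delta_max * norm_entropy                    # adaptive watermark strength
    k = int(gamma * logits.numel())
    green = torch.topk(align, k).indices                # alignment-ranked green list
    biased = logits.clone()
    biased[green] += delta
    return biased

# Usage with stand-in embeddings and logits (vocab=1000, 576 patches, d=64).
torch.manual_seed(0)
tok_emb, patch_emb = torch.randn(1000, 64), torch.randn(576, 64)
align = alignment_scores(tok_emb, patch_emb)
next_token = torch.argmax(watermark_logits(torch.randn(1000), align))
```

At detection time, the same green-list construction would be recomputed and the fraction of generated tokens falling in it tested against the expected rate for unwatermarked text; that detection step is omitted here.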
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Watermarking Robustness | LVLM Evaluation Set (test) | Relative Drop | 0.00e+0 | 36 |
| Image Watermarking | COCO Dataset | ACC | 98.13 | 23 |
| Watermarking | AMBER | AUC | 99.23 | 18 |
| Multimodal Watermarking | AMBER | PPL | 3.03 | 14 |
| Multimodal Watermarking | MS-COCO 14 | PPL | 3.05 | 14 |
| Multimodal Watermarking | MS-COCO 17 | PPL | 3.08 | 14 |
| Latency Analysis | LVLM generation | Average Generation Time (s) | 9.4673 | 14 |