Robust image watermarking based on Tucker decomposition and Adaptive-Lattice Quantization Index Modulation

Bingwen Feng, Wei Lu, Wei Sun, Jiwu Huang, Yun Qing Shi

Research output: Contribution to journal › Article › peer-review

22 Scopus citations

Abstract

In this paper, a robust blind image watermarking scheme with a good rate–distortion–robustness tradeoff is proposed by adopting both Tucker Decomposition (TD) and Adaptive-Lattice Quantization Index Modulation (A-LQIM). Inspired by the good properties provided by TD, such as content-based representation and stable decomposition under distortions, the core tensor of TD is computed from the host image to carry watermarks. The two coarsest coefficients in each frontal slice of the core tensor are treated as a host vector, into which one watermark bit is embedded using Lattice Quantization Index Modulation (LQIM). To further improve the watermarked image quality, an A-LQIM method is proposed to control the embedding strength of each host vector by approximately minimizing the perceptual distortion measured by the Structural SIMilarity (SSIM) index. Optimal parameters for each embedding are derived from the host image. Experimental results demonstrate that the proposed scheme provides high robustness against common attacks while preserving image quality, compared with state-of-the-art schemes.
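The embedding primitive described above can be illustrated with a minimal sketch of scalar Quantization Index Modulation, the building block behind LQIM: each watermark bit selects one of two interleaved quantization lattices, and extraction picks the nearer lattice. This is a simplified, hypothetical illustration; the step size `delta` is fixed here, whereas the paper's A-LQIM adapts it per host vector to minimize SSIM-measured distortion.

```python
def qim_embed(x, bit, delta=4.0):
    """Quantize host value x onto the sub-lattice encoding `bit`.

    bit 0 -> lattice {0, delta, 2*delta, ...}
    bit 1 -> lattice shifted by delta/2 (dithered copy)
    """
    dither = delta / 2.0 if bit else 0.0
    return round((x - dither) / delta) * delta + dither

def qim_extract(y, delta=4.0):
    """Recover the bit by finding which sub-lattice is nearer to y."""
    d0 = abs(y - qim_embed(y, 0, delta))
    d1 = abs(y - qim_embed(y, 1, delta))
    return 0 if d0 <= d1 else 1
```

Because the two lattices are delta/2 apart, the embedded bit survives any additive distortion smaller than delta/4, which is why a larger (adaptively chosen) step size trades distortion for robustness.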

Original language: English (US)
Pages (from-to): 1-14
Number of pages: 14
Journal: Signal Processing: Image Communication
Volume: 41
DOIs
State: Published - 2016

All Science Journal Classification (ASJC) codes

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering

Keywords

  • Adaptive-Lattice Quantization Index Modulation
  • Image watermarking
  • Structural SIMilarity
  • Tensor
  • Tucker Decomposition
