An HVS-based adaptive coder for perceptually lossy image compression

Abstract

In this paper a locally adaptive wavelet image coder is presented. It is based on an embedded human visual system (HVS) model that exploits the space- and frequency-localization properties of wavelet decompositions to tune the quantization step of each discrete wavelet transform (DWT) coefficient according to the local properties of the image. A coarser quantization is performed in areas of the image where the visibility of errors is reduced, thus decreasing the total bit rate without affecting the resulting visual quality. The quantization step for each DWT coefficient is computed by taking into account the multiresolution structure of wavelet decompositions, so that no side information needs to be sent to the decoder and no prediction mechanisms are required. Both perceptually lossless and perceptually lossy compression are supported: the desired visual quality of the compressed image is set by means of a quality factor. Moreover, the technique for tuning the target visual quality allows the user to define arbitrarily shaped regions of interest and to assign a different quality factor to each of them.
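As a concrete illustration of the general idea, the sketch below quantizes each DWT detail coefficient with a step that grows where an already-quantized coarser band shows high local activity, so the decoder can recompute the same steps and no side information is needed. This is only a minimal stand-in, not the paper's HVS model: it assumes NumPy and PyWavelets, and the function names, the activity-based visibility weight, and the base_step/strength parameters (playing the role of the quality factor) are illustrative assumptions rather than the authors' formulation.

```python
import numpy as np
import pywt

def activity_map(ref_band, target_shape):
    """Normalized local-activity map derived from an already-quantized
    coarser band, expanded by pixel repetition to the target band's shape.
    (Illustrative masking estimate, not the paper's HVS model.)"""
    act = np.abs(ref_band) / (np.abs(ref_band).mean() + 1e-8)
    ry = int(np.ceil(target_shape[0] / act.shape[0]))
    rx = int(np.ceil(target_shape[1] / act.shape[1]))
    act = np.repeat(np.repeat(act, ry, axis=0), rx, axis=1)
    return act[:target_shape[0], :target_shape[1]]

def perceptual_quantize(image, wavelet="bior4.4", level=3,
                        base_step=8.0, strength=0.5):
    """Quantize each detail coefficient with a step that grows where the
    quantized coarser scale shows high activity (where errors are assumed
    to be less visible); the decoder can recompute the same steps, so no
    side information is required."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    approx = np.round(coeffs[0] / base_step) * base_step  # fixed step for the LL band
    out = [approx]
    parents = (approx, approx, approx)        # one reference band per orientation
    for details in coeffs[1:]:                # coarsest to finest detail levels
        quantized = []
        for band, parent in zip(details, parents):
            step = base_step * (1.0 + strength * activity_map(parent, band.shape))
            quantized.append(np.round(band / step) * step)
        out.append(tuple(quantized))
        parents = tuple(quantized)            # quantized bands drive the next level
    return pywt.waverec2(out, wavelet)
```

For a grayscale image stored as a 2-D NumPy array, perceptual_quantize(img) returns the perceptually quantized reconstruction; lowering base_step tightens the target quality, playing a role analogous to the quality factor described above.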

Keywords: Wavelet, HVS, Lossy compression, Quantization, Perceptual coding

Article history: Received 21 September 2001, Revised 27 June 2002, Accepted 27 June 2002, Available online 12 December 2002.

DOI: https://doi.org/10.1016/S0031-3203(02)00164-4