Towards a low bandwidth talking face using appearance models

Abstract

This paper is motivated by the need to develop low bandwidth virtual humans capable of delivering audio-visual speech and sign language at a quality comparable to high bandwidth video. Combining an appearance model with parameter compression significantly reduces the number of bits required to animate the face of a virtual human. A perceptual method is used to evaluate the quality of the synthesised sequences, and the results suggest that 3.6 kbit s⁻¹ can yield acceptable quality.
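The kind of pipeline the abstract describes can be sketched as follows: appearance vectors (e.g. concatenated shape and texture samples per frame) are projected onto a small number of principal components, and the resulting parameters are quantised before transmission. This is a minimal illustrative sketch, not the paper's implementation; all dimensions, the synthetic data, and the 8-bit uniform quantiser are assumptions.

```python
import numpy as np

# Illustrative PCA-based parameter compression for an appearance model.
# Synthetic data stands in for real shape+texture vectors; dimensions are arbitrary.

rng = np.random.default_rng(0)
n_frames, dim, k = 200, 512, 16

# Synthetic appearance vectors with low intrinsic dimensionality (rank ~ k).
basis = rng.standard_normal((k, dim))
coeffs = rng.standard_normal((n_frames, k))
X = coeffs @ basis + 0.01 * rng.standard_normal((n_frames, dim))

# PCA via SVD on mean-centred data; keep the top-k principal components.
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
P = Vt[:k]

# Per-frame model parameters: the only data that needs transmitting.
params = (X - mean) @ P.T

# Uniform 8-bit quantisation of each parameter dimension.
lo, hi = params.min(axis=0), params.max(axis=0)
q = np.round((params - lo) / (hi - lo) * 255).astype(np.uint8)
deq = q / 255.0 * (hi - lo) + lo

# Decoder side: reconstruct appearance vectors from quantised parameters.
X_rec = deq @ P + mean

err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
bits_per_frame = k * 8  # vs dim * 32 for raw float vectors
print(f"relative error: {err:.4f}, bits/frame: {bits_per_frame}")
```

At 25 frames per second, 128 bits per frame would correspond to 3.2 kbit s⁻¹ of parameter data, the same order of magnitude as the rate reported in the abstract; the actual bit allocation in the paper may differ.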

Keywords: Talking faces, Shape and appearance models, Principal component analysis

Article history: Accepted 13 August 2003; available online 22 October 2003.

DOI: https://doi.org/10.1016/j.imavis.2003.08.015