Adaptive Signal-Dependent Audio Watermarking Based on Human Auditory System and Neural Networks

Authors: Hung-Hsu Tsai, Ji-Shiung Cheng

Abstract

Based on the characteristics of the human auditory system (HAS) and techniques from neural networks, this work proposes an Adaptive Signal-Dependent Audio Watermarking (ASDAW) technique for protecting audio copyrights. First, a signal-dependent watermark is generated by exploiting the temporal and frequency masking characteristics of the HAS. The signal-dependent watermark is then embedded in the original audio in the temporal domain; thanks to the HAS masking characteristics, the embedded watermark remains imperceptible (inaudible). Moreover, an artificial neural network (ANN) is trained so that the ASDAW technique can memorize the relationship between an original audio signal and its watermarked counterpart. Using the trained ANN (TANN), the ASDAW technique can extract the signal-dependent watermark without access to the original audio. The extracted watermarks are then used during audio authentication to verify whether duplications of an audio signal are legitimate, so audio copyright forgery can be greatly suppressed. Furthermore, experimental results show that the ASDAW technique possesses memorization, adaptability, and robustness, making it resilient against common audio manipulations and pirate attacks aimed at counterfeiting audio copyrights.
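The abstract describes a two-part pipeline: masking-driven, signal-dependent embedding in the temporal domain, followed by neural-network-based extraction that does not require the original audio. The toy sketch below only illustrates that general idea; the frame size, the RMS-energy proxy for masking, the per-frame features, and the small MLP extractor are all illustrative assumptions, not the authors' actual formulation.

```python
# Hypothetical sketch: signal-dependent temporal-domain embedding with an
# energy-based masking proxy, plus a small neural network for blind extraction.
# All names and parameters here are assumptions for illustration only.
import numpy as np
from sklearn.neural_network import MLPClassifier

FRAME = 64  # samples per frame; one watermark bit per frame (assumption)

def embed(audio, bits, alpha=0.1):
    """Add a signal-dependent offset per frame: stronger in high-energy frames,
    where masking in the HAS is assumed to keep the change inaudible."""
    out = audio.copy()
    for i, b in enumerate(bits):
        seg = slice(i * FRAME, (i + 1) * FRAME)
        energy = np.sqrt(np.mean(audio[seg] ** 2)) + 1e-8  # crude masking proxy
        out[seg] += alpha * energy * (1.0 if b else -1.0)
    return out

def frame_features(audio, n_bits):
    """Per-frame features (mean and RMS) fed to the extractor network."""
    feats = []
    for i in range(n_bits):
        seg = audio[i * FRAME:(i + 1) * FRAME]
        feats.append([seg.mean(), np.sqrt(np.mean(seg ** 2))])
    return np.array(feats)

rng = np.random.default_rng(0)
host = rng.standard_normal(FRAME * 256)          # stand-in for an audio signal
bits = rng.integers(0, 2, size=256)              # watermark bits

marked = embed(host, bits)

# Train a small MLP to map watermarked-frame features to the hidden bits, so
# that extraction later needs only the watermarked audio (no original signal).
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(frame_features(marked, len(bits)), bits)

recovered = net.predict(frame_features(marked, len(bits)))
print("bit agreement:", (recovered == bits).mean())
```

In the paper's terms, the trained network plays the role of the TANN: it memorizes the relationship between the original and watermarked audio at embedding time, so detection can proceed blindly on the watermarked (and possibly manipulated) signal alone.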

Keywords: audio watermarking, data hiding, neural networks, audio authentication, human auditory system


DOI: https://doi.org/10.1007/s10489-005-4607-y