Optimisation of HMM topology and its model parameters by genetic algorithms

Authors:

Highlights:

Abstract

The hidden Markov model (HMM) is currently the most popular approach to speech recognition. However, the problems of finding a good HMM and its optimised model parameters remain of great interest to researchers in this area. In our previous work, we successfully applied the genetic algorithm (GA) to the HMM training process to obtain optimised model parameters (Chau et al., Proc. ICASSP, 1997, p. 1727). In this paper, we extend that work and propose a new training method, based on the GA and the Baum–Welch algorithm, that obtains an HMM with an optimised number of states as well as optimised model parameters. This method not only overcomes the slow convergence of the simple GA-HMM approach, but also finds a better number of states for the HMM topology, together with its model parameters. In experiments with 100 words extracted from the TIMIT corpus, our method found the optimal topology in all cases, and the HMMs trained by our GA-HMM training had better recognition capability than the HMMs trained by the Baum–Welch algorithm. When 290 words randomly selected from the TIMIT corpus were used to test the recognition performance of both approaches, the GA-HMM approach achieved a recognition rate of 95.86%, while the Baum–Welch method achieved 93.1%. This implies that the HMMs trained by our GA-HMM method are better optimised than those trained by the Baum–Welch method.
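To illustrate the hybrid idea the abstract describes, the sketch below shows one plausible (not the authors') way to combine a GA topology search with Baum–Welch refinement. It assumes the third-party hmmlearn library, a Gaussian-emission HMM, and synthetic stand-in feature data; each chromosome encodes only the number of states, and fitness is the log-likelihood after a few Baum–Welch (EM) iterations. The paper's actual GA also encodes the model parameters themselves.

```python
# A minimal sketch of GA-driven HMM topology search, assuming hmmlearn
# and synthetic data; this is NOT the paper's implementation.
import random
import numpy as np
from hmmlearn import hmm

random.seed(0)
np.random.seed(0)

# Synthetic stand-in for per-word acoustic feature vectors.
X = np.random.randn(200, 12)

def fitness(n_states, n_bw_iter=5):
    """Log-likelihood of X under an HMM with n_states states,
    refined by a few Baum-Welch (EM) iterations."""
    model = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag",
                            n_iter=n_bw_iter,
                            random_state=0)
    model.fit(X)           # Baum-Welch re-estimation
    return model.score(X)  # log P(X | model)

def mutate(n_states, lo=2, hi=12):
    """Perturb the state count by +/-1, clipped to [lo, hi]."""
    return min(hi, max(lo, n_states + random.choice((-1, 1))))

# Simple generational GA over the topology (state count) only.
population = [random.randint(2, 12) for _ in range(8)]
for generation in range(10):
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[: len(ranked) // 2]  # truncation selection
    population = elite + [mutate(random.choice(elite))
                          for _ in range(len(ranked) - len(elite))]

best = max(population, key=fitness)
print(f"best number of states: {best}")
```

Evaluating fitness after a short Baum–Welch run, rather than on raw chromosomes, is what the abstract credits for avoiding the slow convergence of a plain GA-HMM search; the population sizes, mutation scheme, and iteration counts above are illustrative assumptions only.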

Keywords: Hidden Markov model, Genetic algorithm, Baum–Welch algorithm, Speech recognition, Pattern recognition

Article history: Received 26 January 1998; Revised 16 November 1999; Accepted 16 November 1999; Available online 7 June 2001.

DOI: https://doi.org/10.1016/S0031-3203(99)00226-5