Abstract:
To generate more realistic mouth animation in visual speech synthesis, this paper proposes a method based on a two-level learning model. By combining Hidden Markov Models (HMMs) with Genetic Algorithms (GAs), the model learns the latent mapping between acoustic features and visual features. It reduces redundant information when extracting acoustic features from a large acoustic sample space and predicts more realistic mouth animation. In addition, the paper proposes a new representation of mouth features based on FAP points. This representation eliminates the effect of illumination, reduces the dimensionality of the mouth feature vector, and thereby speeds up both training and synthesis.
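The paper does not include code, so the following is only a rough, hypothetical sketch of the GA half of such a two-level model: a genetic algorithm that selects a compact binary mask over acoustic feature indices, echoing the abstract's goal of reducing redundant information before HMM training. The fitness function, feature indices, and all parameter values below are invented for illustration and are not taken from the paper.

```python
import random

# Assumed toy setup: a handful of feature indices are "informative";
# the GA should find a small mask that covers them (not the paper's
# actual fitness criterion, just a stand-in for redundancy reduction).
INFORMATIVE = {0, 3, 5, 8, 11}
N_FEATURES = 12

def fitness(mask):
    """Reward covering informative features, penalize mask size."""
    covered = sum(1 for i in INFORMATIVE if mask[i])
    return covered - 0.1 * sum(mask)

def evolve(pop_size=30, generations=60, mutation_rate=0.2, seed=0):
    rng = random.Random(seed)
    # Random initial population of binary feature masks.
    pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]      # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)   # single-point crossover
            cut = rng.randrange(1, N_FEATURES)
            child = a[:cut] + b[cut:]
            if rng.random() < mutation_rate:  # occasional bit-flip mutation
                j = rng.randrange(N_FEATURES)
                child[j] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

In a full system along the paper's lines, the fitness function would instead score a candidate feature set by how well an HMM trained on it predicts the visual (FAP-based) features; the GA loop itself would be unchanged.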
Source:
Journal of Beijing University of Technology
ISSN: 0254-0037
Year: 2009
Issue: 5
Volume: 35
Page: 702-707