Research Article: Lip-Synching Using Speaker-Specific Articulation, Shape and Appearance Models

Hindawi Publishing Corporation
EURASIP Journal on Audio, Speech, and Music Processing
Volume 2009, Article ID 769494, 11 pages
doi:10.1155/2009/769494

Research Article
Lip-Synching Using Speaker-Specific Articulation, Shape and Appearance Models

Gerard Bailly,1 Oxana Govokhina,1,2 Frederic Elisei,1 and Gaspard Breton2

1 Department of Speech and Cognition, GIPSA-Lab, CNRS, Grenoble University, 961 rue de la Houille Blanche, Domaine universitaire, BP 46, 38402 Saint Martin d'Heres cedex, France
2 TECH/IRIS/IAM Team, Orange Labs, 4 rue du Clos Courtel, BP 59, 35512 Cesson-Sevigne, France

Correspondence should be addressed to Gerard Bailly

Received 25 February 2009; Revised 26 June 2009; Accepted 23 September 2009

Recommended by Sascha Fagel

We describe here the control, shape, and appearance models that are built using an original photogrammetric method to capture the characteristics of speaker-specific facial articulation, anatomy, and texture. Two original contributions are put forward: a trainable trajectory formation model that predicts the articulatory trajectories of a talking face from phonetic input, and a texture model that computes a texture for each 3D facial shape according to articulation. Using motion capture data from different speakers and module-specific evaluation procedures, we show that this cloning system restores detailed idiosyncrasies and the global coherence of visible articulation. Results of a subjective evaluation of the global system with competing trajectory formation models are also presented and discussed.

Copyright © 2009 Gerard Bailly et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

Embodied conversational agents (ECAs), virtual characters as well as anthropoid robots, should be able to talk with their human interlocutors. They should …
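The abstract describes a trajectory formation model that maps phonetic input to articulatory trajectories. The paper's actual model is trainable and speaker-specific; the sketch below is not that model, only a minimal illustration of the general idea using piecewise-linear interpolation between per-phoneme articulatory targets. All phoneme labels, target values, and durations here are hypothetical placeholders.

```python
import numpy as np

# Hypothetical illustration (not the authors' model): map a phoneme string to a
# sampled articulatory trajectory by interpolating per-phoneme targets.

# Illustrative articulatory targets (e.g., lip aperture, normalized 0..1).
TARGETS = {"p": 0.0, "a": 1.0, "u": 0.4, "s": 0.6}

def trajectory(phones, durations, fps=25):
    """Return (times, values) for a sampled articulatory trajectory.

    phones    -- list of phoneme labels
    durations -- per-phoneme durations in seconds
    fps       -- output sampling rate (frames per second)
    """
    # Place each phoneme's target at the midpoint of its time interval.
    anchor_times, anchor_values = [], []
    t = 0.0
    for ph, dur in zip(phones, durations):
        anchor_times.append(t + dur / 2.0)
        anchor_values.append(TARGETS[ph])
        t += dur
    # Sample the piecewise-linear interpolation of the targets.
    frames = np.arange(0.0, t, 1.0 / fps)
    return frames, np.interp(frames, anchor_times, anchor_values)

if __name__ == "__main__":
    times, values = trajectory(["p", "a", "u"], [0.08, 0.20, 0.15])
    for ti, yi in zip(times, values):
        print(f"{ti:.3f}s  aperture={yi:.2f}")
```

A trainable trajectory formation model of the kind the paper evaluates would learn coarticulation and articulatory dynamics from motion capture data rather than rely on fixed targets and linear interpolation as above.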
