Predicting translation behaviors by using Hidden Markov Model
Lu Sheng | Renmin University of China
Michael Carl | Kent State University
Yao Xinyue | Renmin University of China
Su Wenchao | Guangdong University of Foreign Studies
The translation process can be studied as sequences of activity units. Machine learning offers researchers new possibilities for studying the translation process. This research project developed an activity unit predictor, a program based on a Hidden Markov Model. The program takes duration, translation phase, target language, and fixation as input and produces an activity unit type as output. The highest prediction accuracy reached is 61%. As one of the first endeavors of its kind, the program demonstrates the strong potential of applying machine learning in translation process research.
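To illustrate the kind of model the abstract describes, the following is a minimal sketch of Hidden Markov Model decoding in Python, not the authors' implementation. The activity unit types, observation codes, and uniform probability values are hypothetical placeholders; in the study, the model parameters would be estimated from translation process data (keystroke and fixation records), and decoding recovers the most likely sequence of activity unit types from the observed features.

```python
# A minimal sketch (hypothetical parameters, not the study's figures):
# Viterbi decoding of activity-unit types from discretized observations.
import numpy as np

states = ["reading", "typing", "reading+typing", "pause"]  # assumed AU types
n_states = len(states)
n_obs = 6                                                  # assumed number of discretized feature codes

# Placeholder HMM parameters; in practice these would be trained on process data.
start_p = np.full(n_states, 1.0 / n_states)                # initial state probabilities
trans_p = np.full((n_states, n_states), 1.0 / n_states)    # state transition probabilities
emit_p = np.full((n_states, n_obs), 1.0 / n_obs)           # emission probabilities

def viterbi(obs, start_p, trans_p, emit_p):
    """Return the most likely hidden-state sequence for an observation sequence."""
    T, n = len(obs), len(start_p)
    logd = np.full((T, n), -np.inf)      # log-probability of the best path ending in each state
    back = np.zeros((T, n), dtype=int)   # backpointers for path recovery
    logd[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for j in range(n):
            scores = logd[t - 1] + np.log(trans_p[:, j])
            back[t, j] = np.argmax(scores)
            logd[t, j] = scores[back[t, j]] + np.log(emit_p[j, obs[t]])
    path = [int(np.argmax(logd[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Example: decode a short sequence of observation codes into activity unit labels.
decoded = viterbi([0, 3, 3, 5, 1], start_p, trans_p, emit_p)
print([states[i] for i in decoded])
```

With trained transition and emission probabilities, the same decoding step would map a sequence of observed process features onto the predicted sequence of activity unit types.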
Article outline
- 1. Introduction
- 2. Translation process modeling
- 3. Activity unit and activity unit predictor
- 4. The present study
- 4.1 Data analysis
- 4.2 Modeling
- 4.2.1 Model configuration
- 4.2.2 Decoding
- 4.2.3 Generalization
- 4.3 Experiment
- 5. Results
- 6. Discussion and conclusion
- Note
- References
Published online: 13 May 2020
https://doi.org/10.1075/tcb.00035.lu