The augmented interpreter
An exploratory study of the usability of augmented reality technology in interpreting
Computer-assisted interpreting (CAI) tools use speech recognition and machine translation to display numbers and
names on a screen or automatically suggest renditions for technical terms. One way to improve the usability of CAI tools may be to
use augmented reality (AR) technology, which allows information to be displayed wherever convenient. Instead of having to look
down at a tablet or a laptop, the interpreter can see the term or number projected directly into their field of vision, allowing
them to maintain their focus on the speaker and the audio input. In this study, we investigated the affordances of AR in
simultaneous interpreting. Nine professional conference interpreters each interpreted two technical talks: one with numerals,
proper nouns, and suggestions for technical terms automatically shown on an AR display, and the other with an MS Word glossary on a
laptop. The results point to a potential use case for AR technology in interpreting but highlight practical limitations,
such as discomfort when wearing the AR equipment, the lack of ergonomic and intuitive interaction with virtual objects, and
distraction and interference with the interpreting process caused by the additional visual input.
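To make the pipeline described above concrete, the sketch below illustrates, in hypothetical Python, how a CAI tool might pull numerals and glossary matches out of a single speech-recognition segment before pushing them to the term box on the AR display. The glossary entries, function name, and matching logic are illustrative assumptions and do not reflect the application used in the study.

```python
import re

# Hypothetical glossary mapping source-language terms to suggested renditions;
# in practice this would come from the interpreter's prepared terminology.
GLOSSARY = {
    "fuel cell": "Brennstoffzelle",
    "torque": "Drehmoment",
}

NUMERAL = re.compile(r"\b\d[\d.,]*\b")

def term_box_items(asr_segment: str) -> list[str]:
    """Collect numerals and glossary hits from one ASR segment.

    A minimal sketch of the kind of processing a CAI tool might do
    before displaying items to the interpreter; the real system's
    logic (including its handling of proper nouns) is not described
    in the abstract.
    """
    items = NUMERAL.findall(asr_segment)
    lowered = asr_segment.lower()
    for term, rendition in GLOSSARY.items():
        if term in lowered:
            items.append(f"{term} -> {rendition}")
    return items

print(term_box_items("The fuel cell delivers 310 kW at peak torque."))
# ['310', 'fuel cell -> Brennstoffzelle', 'torque -> Drehmoment']
```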
Article outline
- Introduction
- Computer-assisted interpreting
- CAI with AR technology
- Method
- Participants
- Source texts and terminology
- Equipment and application
- Procedure
- Data analysis
- Participants’ notes
- Rendition of critical items
- Interviews
- Results
- Familiarity with the speech topics and terminology management software
- Preparation
- Rendering of critical items
- Preferred position of the term box and technical issues
- Experiences using the AR technology
- Discussion
- Conclusion
- Acknowledgements
- Notes
- References