The Unicode standard is an enormous step toward realizing the goal of a single computer encoding scheme for virtually all of the world’s scripts. Although not all computers will necessarily have the fonts to print all characters, all computers will at least be able to recognize which characters are required for the proper display of text in almost any language. However, the Unicode standard presupposes that each language has a script consisting of a finite number of agreed-upon characters, and some languages still lack such agreement. As planning for Unicode has gone forward, more and more code points have been assigned, leaving ever fewer conveniently accessible code points for future expansion. This article first describes the Unicode project, then the special challenge of encoding Chinese characters. Finally, it uses the example of Hokkien, a “dialect” of Chinese spoken by most people in Taiwan, to explore the problem of unorthodox, unstable, or unofficial scripts. Political forces and technical considerations make it difficult to include such scripts in Unicode. As Unicode becomes the de facto standard for writing human languages, script innovations will presumably become less and less likely to receive wide use.