Nikolov, Nikola; Hu, Yuhuang; Tan, Mi Xue; Hahnloser, Richard H.R.
2019-10-31; 2018-10-01
DOI: 10.18653/v1/W18-6302
https://infoscience.epfl.ch/handle/20.500.14299/162570

Title: Character-level Chinese-English Translation through ASCII Encoding

Abstract: Character-level Neural Machine Translation (NMT) models have recently achieved impressive results on many language pairs. They mainly do well for Indo-European language pairs, where the languages share the same writing system. However, for translating between Chinese and English, the gap between the two different writing systems poses a major challenge because of a lack of systematic correspondence between the individual linguistic units. In this paper, we enable character-level NMT for Chinese by breaking down Chinese characters into linguistic units similar to those of Indo-European languages. We use the Wubi encoding scheme, which preserves the original shape and semantic information of the characters, while also being reversible. We show promising results from training Wubi-based models at the character and subword level, with recurrent as well as convolutional models.

Type: text::conference output::conference proceedings::conference paper
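The core idea in the abstract — replacing each Chinese character with a short, reversible ASCII code in the spirit of the Wubi input method — can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: the table entries below are a tiny hypothetical sample (real Wubi tables map thousands of characters to up to four ASCII letters each), and the underscore delimiter is an assumption made here to keep the mapping unambiguous and reversible.

```python
# Hypothetical mini Wubi table; real tables cover thousands of characters.
WUBI = {"中": "khk", "国": "lgyi", "文": "yygy"}
REV = {code: ch for ch, code in WUBI.items()}
SEP = "_"  # assumed delimiter; we assume raw text never contains it


def encode(text: str) -> str:
    """Replace each Chinese character with its delimited ASCII code;
    pass all other characters (Latin letters, digits, spaces) through."""
    return "".join(SEP + WUBI[c] + SEP if c in WUBI else c for c in text)


def decode(text: str) -> str:
    """Invert encode(): splitting on the delimiter leaves codes at the
    odd-numbered positions and untouched text at the even-numbered ones."""
    parts = text.split(SEP)
    return "".join(REV.get(p, p) if i % 2 else p for i, p in enumerate(parts))
```

Because every code is bracketed by delimiters, mixed Chinese-English strings round-trip cleanly, e.g. `decode(encode("NMT 中国")) == "NMT 中国"`, which is the reversibility property the abstract highlights.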