@incollection{xia2015enriching,
  year = {2015},
  isbn = {978-3-319-18110-3},
  booktitle = {Computational Linguistics and Intelligent Text Processing},
  volume = {9041},
  series = {Lecture Notes in Computer Science},
  editor = {Gelbukh, Alexander},
  doi = {10.1007/978-3-319-18111-0_3},
  title = {Enriching, Editing, and Representing Interlinear Glossed Text},
  url = {http://dx.doi.org/10.1007/978-3-319-18111-0_3},
  publisher = {Springer International Publishing},
  author = {Xia, Fei and Goodman, Michael Wayne and Georgi, Ryan and Slayden, Glenn and Lewis, William D.},
  pages = {32--46},
  language = {English},
  abstract = {The majority of the world's languages have little to no NLP resources or tools. This is due to a lack of training data (``resources'') over which tools, such as taggers or parsers, can be trained. In recent years, there have been increasing efforts to apply NLP methods to a much broader swathe of the world's languages. In many cases this involves bootstrapping the learning process with enriched or partially enriched resources. One promising line of research involves the use of Interlinear Glossed Text (IGT), a very common form of annotated data used in the field of linguistics. Although IGT is generally very richly annotated, and can be enriched even further (e.g., through structural projection), much of the content is not easily consumable by machines, since it remains ``trapped'' in linguistic scholarly documents and in human-readable form. In this paper, we introduce several tools that make IGT more accessible and consumable by NLP researchers.}
}