Dataset for our work: *On the Development of a Large Scale Corpus for Native Language Identification*.
Note: The italki website has moved away from the Notebooks feature used in this project, so this code probably won't work anymore (at least until it is updated).
For copyright reasons we don't publish the raw data. Instead, tools are provided to recreate the NLI corpus from the italki website.
To recreate the exact same dataset as collected in 2017, pass the ID list file:
python3 scrape.py recreate 2017_ids.txt
Collect your own new data using:
python3 scrape.py scrape arabic chinese french german hindi italian japanese korean russian spanish turkish
By default, this will make a new folder `italki_data` with `.txt` files named with their document id, as well as a label CSV file:
document_id, author_id, L1, english_proficiency
142576, 32162, Turkish, 2
248781, 12987, French, 4
...
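For reference, here is a minimal sketch of reading the scraped output back in Python. The label file name `labels.csv` is an assumption for illustration; check the actual file produced by `scrape.py`.

```python
# Sketch only: pairs each scraped document with its label row.
# Assumes the default italki_data folder and a label file named labels.csv
# with the header shown above -- the real filename may differ.
import csv
from pathlib import Path

data_dir = Path("italki_data")
with open(data_dir / "labels.csv", newline="") as f:
    for row in csv.DictReader(f, skipinitialspace=True):
        # Each document's text lives in <document_id>.txt
        text = (data_dir / f"{row['document_id']}.txt").read_text()
        print(row["L1"], row["english_proficiency"], text[:60])
```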
In the `benchmark` folder there are 2 scripts:

- `italki/italki.py` - Loads the data using the Huggingface Datasets library. You can reuse this for your own models.
- `train.py` - Trains a simple BERT model using the dataset.
Feel free to use and adapt these for your own research. To load the Huggingface Datasets version in your own script, you can write:
>>> import datasets
>>> ds = datasets.load_dataset("./benchmark/italki", data="../italki_data")
>>> print(ds["train"][0])
{"document": "Today I went to...", "native_language": "French", "proficiency": 5, ...}
If you use this dataset in your work, please cite:
@inproceedings{hudson2018development,
title={On the Development of a Large Scale Corpus for Native Language Identification},
author={Hudson, Thomas G and Jaf, Sardar},
booktitle={Proceedings of the 17th International Workshop on Treebanks and Linguistic Theories (TLT 2018), December 13--14, 2018, Oslo University, Norway},
number={155},
pages={115--129},
year={2018},
organization={Link{\"o}ping University Electronic Press}
}
The following table is necessary for this dataset to be indexed by search engines such as Google Dataset Search.
property | value |
---|---|
name | Italki Native Language Identification Dataset |
alternateName | Italki |
url | https://github.com/ghomasHudson/italkiCorpus |
description | Native Language Identification (NLI) is the task of identifying an author’s native language from their writings in a second language. This dataset (italki) consists of large quantities of text from the language learning website italki. The italki website creates a community for language learners to access teaching resources, practice speaking, discuss topics and ask questions in their target language (the English language). We gather free-form ‘Notebook’ documents, which are mainly autobiographical diary entries with connected profiles describing the native language of the author. |
citation | https://ep.liu.se/ecp/article.asp?issue=155&article=012 |