703 records found

Search results

  • The CLASSLA-Stanza model for morphosyntactic annotation of non-standard Serbian 2.1

    This model for morphosyntactic annotation of non-standard Serbian was built with the CLASSLA-Stanza tool (https://github.com/clarinsi/classla) by training on the SETimes.SR training corpus (http://hdl.handle.net/11356/1200), the ReLDI-NormTagNER-sr corpus (http://hdl.handle.net/11356/1794) and the hr500k training corpus (http://hdl.handle.net/11356/1792), using the CLARIN.SI-embed.sr word embeddings (http://hdl.handle.net/11356/1789). To handle missing diacritics, the training corpora were additionally augmented by repeating parts of them with the diacritics removed. The model simultaneously produces UPOS, FEATS and XPOS (MULTEXT-East) labels. The estimated F1 of the XPOS annotations is ~92.64. The difference from the previous version of the model is that this version uses the new version of the Serbian word embeddings and is trained on a combination of three training corpora (SETimes.SR, ReLDI-NormTagNER-sr, hr500k).
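    The diacritics-removal augmentation described above can be sketched in a few lines of Python. The function names and the exact procedure used by the CLASSLA maintainers are assumptions; this only illustrates the idea of duplicating sentences with their combining diacritics stripped.

```python
import unicodedata

def strip_diacritics(text: str) -> str:
    """Remove combining diacritical marks (e.g. č -> c, š -> s, ž -> z).
    Note: đ has no combining decomposition and is left unchanged."""
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return unicodedata.normalize("NFC", stripped)

def augment_with_stripped(sentences: list[str]) -> list[str]:
    """Append diacritics-free copies of the sentences that contain diacritics."""
    extra = [strip_diacritics(s) for s in sentences if strip_diacritics(s) != s]
    return sentences + extra
```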
  • The CLASSLA-StanfordNLP model for morphosyntactic annotation of non-standard Slovenian 1.0

    This model for morphosyntactic annotation of non-standard Slovenian was built with the CLASSLA-StanfordNLP tool (https://github.com/clarinsi/classla-stanfordnlp) by training on the ssj500k training corpus (http://hdl.handle.net/11356/1210) and the Janes-Tag corpus (http://hdl.handle.net/11356/1238), using the CLARIN.SI-embed.sl word embeddings (http://hdl.handle.net/11356/1204). To handle missing diacritics, the training corpora were additionally augmented by repeating parts of them with the diacritics removed. The model simultaneously produces UPOS, FEATS and XPOS (MULTEXT-East) labels. The estimated F1 of the XPOS annotations is ~96.14.
  • Byte-Level Neural Error Correction Model for Icelandic - Yfirlestur (24.03)

    This Byte-Level Neural Error Correction Model for Icelandic is a fine-tuned byT5-base Transformer model for error correction in natural language. It acts as a machine translation model in that it “translates” from deficient Icelandic to correct Icelandic. The model is an improved version of a previous model, accessible here: http://hdl.handle.net/20.500.12537/321. The improved model is trained on contextual and domain-tagged data, with additional span-masking pre-training and a wider variety of text genres. The model is trained on span-masked data, parallel synthetic error data and real error data. The span-masked pre-training data consisted of a wide variety of texts, including forums and texts from the Icelandic Gigaword Corpus (IGC, http://hdl.handle.net/20.500.12537/254). Synthetic error data was taken from different texts, e.g. from the IGC (data which was excluded from the span-masked data), MÍM (http://hdl.handle.net/20.500.12537/113), student essays and educational material. This data was scrambled to simulate real grammatical and typographical errors, and some span-masking was included. Fine-tuning data consisted of data from the Icelandic Error Corpus (IceEC, http://hdl.handle.net/20.500.12537/73) and the three specialised error corpora (L2: http://hdl.handle.net/20.500.12537/131, dyslexia: http://hdl.handle.net/20.500.12537/132, child language: http://hdl.handle.net/20.500.12537/133). The model can correct a variety of textual errors, even in texts containing many errors, such as those written by people with dyslexia. Measured on the Grammatical Error Correction Test Set (http://hdl.handle.net/20.500.12537/320), the model scores 0.898229 on the GLEU metric (a modification of BLEU for grammatical error correction) and 0.07% in TER (translation error rate). Measured on the test set of the Icelandic Error Corpus, the model scores 0.906834 on the GLEU metric and 0.04% in TER.
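    The TER figures above come from the authors' evaluation; at its core, TER is word-level edit distance divided by reference length. A minimal sketch, omitting the block-shift operations and tokenisation details of the full metric:

```python
def ter(hypothesis: str, reference: str) -> float:
    """Word-level edit distance (insert/delete/substitute) over reference length.
    Full TER also counts block shifts; this sketch omits them."""
    hyp, ref = hypothesis.split(), reference.split()
    # Standard Levenshtein dynamic programme over tokens.
    dp = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        dp[i][0] = i
    for j in range(len(ref) + 1):
        dp[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[len(hyp)][len(ref)] / len(ref)
```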
  • Translation Models (en-ru) (v1.0)

    En-Ru translation models, exported via TensorFlow Serving, available in the Lindat translation service (https://lindat.mff.cuni.cz/services/translation/). The models are compatible with Tensor2tensor version 1.6.6. For details about the model training (data, model hyper-parameters), please contact the archive maintainer. Evaluation on newstest2020 (BLEU): en->ru 18.0, ru->en 30.4 (evaluated using multeval: https://github.com/jhclark/multeval).
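    The BLEU scores above were computed with multeval. Purely for illustration, here is a self-contained sketch of BLEU's core (clipped n-gram precision with a brevity penalty, single reference, no smoothing); it is not the multeval implementation and will not reproduce its scores exactly.

```python
import math
from collections import Counter

def ngrams(tokens: list[str], n: int) -> Counter:
    """Count the n-grams of a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis: str, reference: str, max_n: int = 4) -> float:
    hyp, ref = hypothesis.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        # Clip each hypothesis n-gram count by its count in the reference.
        clipped = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        if clipped == 0:
            return 0.0  # unsmoothed BLEU is 0 if any n-gram precision is 0
        log_prec += math.log(clipped / total) / max_n
    # Brevity penalty for hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(log_prec)
```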
  • The CLASSLA-StanfordNLP model for lemmatisation of standard Serbian 1.1

    The model for lemmatisation of standard Serbian was built with the CLASSLA-StanfordNLP tool (https://github.com/clarinsi/classla-stanfordnlp) by training on the SETimes.SR training corpus (http://hdl.handle.net/11356/1200) and using the srLex inflectional lexicon (http://hdl.handle.net/11356/1233). The estimated F1 of the lemma annotations is ~97.9. The difference from the previous version of the model is that it was trained with the lemmatiser padding bug fixed; cf. https://github.com/stanfordnlp/stanfordnlp/issues/143.
  • The CLASSLA-Stanza model for lemmatisation of standard Serbian 2.1

    The model for lemmatisation of standard Serbian was built with the CLASSLA-Stanza tool (https://github.com/clarinsi/classla) by training on the SETimes.SR training corpus (http://hdl.handle.net/11356/1200) combined with the Serbian non-standard training corpus ReLDI-NormTagNER-sr (http://hdl.handle.net/11356/1794), and using the srLex inflectional lexicon (http://hdl.handle.net/11356/1233). The estimated F1 of the lemma annotations is ~98.02. The difference from the previous version is that this version was trained on a combination of the standard (SETimes.SR) and non-standard (ReLDI-NormTagNER-sr) Serbian training corpora.
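    The lemma F1 figures above come from the CLASSLA evaluation tooling. As a rough illustration of what such a score measures, F1 can be computed over (token position, lemma) pairs of system versus gold output; this helper is hypothetical, not the project's actual scorer.

```python
def lemma_f1(gold: list[tuple[int, str]], predicted: list[tuple[int, str]]) -> float:
    """F1 over (token position, lemma) pairs. With identical tokenisation this
    reduces to per-token accuracy, but the set formulation also tolerates
    tokenisation mismatches between system and gold."""
    gold_set, pred_set = set(gold), set(predicted)
    if not gold_set or not pred_set:
        return 0.0
    true_pos = len(gold_set & pred_set)
    if true_pos == 0:
        return 0.0
    precision = true_pos / len(pred_set)
    recall = true_pos / len(gold_set)
    return 2 * precision * recall / (precision + recall)
```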
  • The CLASSLA-Stanza model for morphosyntactic annotation of spoken Slovenian 2.2

    This model for morphosyntactic annotation of spoken Slovenian was built with the CLASSLA-Stanza tool (https://github.com/clarinsi/classla) by training on the SST treebank of spoken Slovenian (https://github.com/UniversalDependencies/UD_Slovenian-SST) combined with the SUK training corpus (http://hdl.handle.net/11356/1959), and using the CLARIN.SI-embed.sl word embeddings (http://hdl.handle.net/11356/1791) that were expanded with the MaCoCu-sl Slovene web corpus (http://hdl.handle.net/11356/1517). The model simultaneously produces UPOS, FEATS and XPOS (MULTEXT-East) labels. The estimated F1 of the XPOS annotations is ~96.76.
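    The UPOS, XPOS and FEATS labels produced by these models are conventionally serialised in the 10-column CoNLL-U format, with those labels in columns 4 to 6. A minimal sketch for pulling them out of one annotated token line; the example line in the test is invented for illustration, not actual model output.

```python
def parse_conllu_token(line: str) -> dict:
    """Extract the morphosyntactic labels from a 10-column CoNLL-U token line."""
    cols = line.rstrip("\n").split("\t")
    if len(cols) != 10:
        raise ValueError("expected 10 tab-separated CoNLL-U columns")
    return {
        "form": cols[1],
        "lemma": cols[2],
        "upos": cols[3],   # universal POS tag
        "xpos": cols[4],   # language-specific (here MULTEXT-East) tag
        "feats": cols[5],  # morphological features, '|'-separated
    }
```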