703 record(s) found

Search results

  • GreynirCorrect4LT (1.0)

    This is a slightly adapted version of Miðeind's spell and grammar checker GreynirCorrect <CLARIN link: http://hdl.handle.net/20.500.12537/174>. This version is implemented for use in a text-to-speech text pre-processing pipeline, but it also includes guidelines for quickly adapting GreynirCorrect to other use cases in language technology applications, where the needs may differ from those of grammar correction for general users.
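A minimal sketch of how a correction step like this could slot into a TTS text pre-processing pipeline. The `correct_text` function and its toy typo table are hypothetical stand-ins, not GreynirCorrect's actual API:

```python
# Sketch of a TTS pre-processing pipeline with a pluggable correction step.
# correct_text is a hypothetical stand-in for a real spell/grammar checker.

def correct_text(sentence: str) -> str:
    """Stand-in checker: fixes words from a toy (hypothetical) typo table."""
    fixes = {"talgervil": "talgervill"}
    return " ".join(fixes.get(w, w) for w in sentence.split())

def preprocess_for_tts(text: str) -> list[str]:
    """Split into sentences, run the correction step, return clean sentences."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [correct_text(s) for s in sentences]

print(preprocess_for_tts("talgervil er til. annað dæmi."))
```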
  • Parsito

    Parsito is a fast open-source dependency parser written in C++. Parsito is based on greedy transition-based parsing; it has very high accuracy and achieves a throughput of 30K words per second. Parsito can be trained on any input data without feature engineering, because it utilizes an artificial neural network classifier. Trained models for all treebanks from the Universal Dependencies project are available (37 treebanks as of Dec 2015). Parsito is free software under the Mozilla Public License 2.0 (http://www.mozilla.org/MPL/2.0/), and the linguistic models are free for non-commercial use and distributed under the CC BY-NC-SA license (http://creativecommons.org/licenses/by-nc-sa/4.0/), although for some models the original data used to create the model may impose additional licensing conditions. The Parsito website (http://ufal.mff.cuni.cz/parsito) contains download links for both the released packages and trained models, hosts the documentation, and offers an online demo. The Parsito development repository (http://github.com/ufal/parsito) is hosted on GitHub.
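To illustrate the transition-based parsing that Parsito builds on, here is a sketch of the arc-standard transition system. A real parser like Parsito picks each action with a trained neural classifier; in this toy version the action sequence is supplied by hand:

```python
# Illustrative arc-standard transition system, the mechanism underlying
# greedy transition-based parsers. Actions here are hand-picked, not learned.

def arc_standard(words, actions):
    """Apply SHIFT / LEFT / RIGHT actions; return each word's head (0 = root)."""
    heads = {}
    stack, buffer = [0], list(range(1, len(words) + 1))  # 0 is the artificial root
    for act in actions:
        if act == "SHIFT":
            stack.append(buffer.pop(0))
        elif act == "LEFT":            # second-from-top gets top as its head
            dep = stack.pop(-2)
            heads[dep] = stack[-1]
        elif act == "RIGHT":           # top gets second-from-top as its head
            dep = stack.pop()
            heads[dep] = stack[-1]
    return [heads[i] for i in range(1, len(words) + 1)]

# "the dog barks": det(dog, the), nsubj(barks, dog), root(barks)
print(arc_standard(["the", "dog", "barks"],
                   ["SHIFT", "SHIFT", "LEFT", "SHIFT", "LEFT", "RIGHT"]))  # → [2, 3, 0]
```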
  • Tiro TTS web service (22.10)

    Tiro TTS is a text-to-speech (TTS) API web service that works with various TTS backends. By default, it expects a FastSpeech2+Melgan+IceG2P backend; see the https://github.com/cadia-lvl/fastspeech2 repository for more information on the backend. The service can accept either unnormalized text or an SSML document and respond with audio (MP3, Ogg Vorbis or raw 16-bit PCM) or speech marks, indicating the byte and time offset of each synthesized word in the request. The full API documentation in OpenAPI 2 format is available online at tts.tiro.is. The code for the service, along with further information, is available at https://github.com/tiro-is/tiro-tts/releases/tag/M9. You should also check whether a newer version is out (see README.md).
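A sketch of consuming word-level speech marks such as those described above. The newline-delimited JSON shape and the field names (`time`, `start`, `end`, `value`) follow the common Polly-style convention and are an assumption, not a confirmed detail of the Tiro TTS response format:

```python
import json

# Sketch: parse Polly-style speech marks (assumed format) into
# (word, time-offset-ms, byte-range) tuples for each synthesized word.

marks_response = """\
{"time": 0, "start": 0, "end": 5, "value": "Halló"}
{"time": 420, "start": 6, "end": 11, "value": "heimur"}"""

def parse_speech_marks(lines: str):
    out = []
    for line in lines.splitlines():
        m = json.loads(line)
        out.append((m["value"], m["time"], (m["start"], m["end"])))
    return out

print(parse_speech_marks(marks_response))
```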
  • MOSI: TTS evaluation tool (22.01)

    MOSI is a text-to-speech (TTS) evaluation platform focused on listening tests. Organizers can upload audio clips to be evaluated using Mean Opinion Score (MOS), AB or ABX setups. The platform allows the organizers to arrange and plan the evaluations, customize the setup, send out invite links to participants, and view and download the results. A detailed setup description can be found in README.md and a user guide can be found in HOW_TO_USE.md.
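A MOS listening test ends with a simple aggregation: each clip's MOS is the mean of its 1–5 ratings, usually reported with a confidence interval. A minimal sketch with toy data (not MOSI's exact output format):

```python
from statistics import mean, stdev
from math import sqrt

# Sketch of MOS aggregation: mean of 1-5 ratings with a
# normal-approximation 95% confidence interval half-width.

def mos(ratings: list[int]) -> tuple[float, float]:
    m = mean(ratings)
    ci = 1.96 * stdev(ratings) / sqrt(len(ratings))  # 95% CI half-width
    return round(m, 2), round(ci, 2)

print(mos([4, 5, 4, 3, 5, 4, 4, 5]))  # → (4.25, 0.49)
```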
  • Q-CAT Corpus Annotation Tool 1.1

    The Q-CAT (Querying-Supported Corpus Annotation Tool) is a computational tool for the manual annotation of language corpora which also enables advanced queries on top of these annotations. The tool has been used in various annotation campaigns related to the ssj500k reference training corpus of Slovenian (http://hdl.handle.net/11356/1210), such as named entities, dependency syntax, semantic roles and multi-word expressions, but it can also be used for adding new annotation layers of various types to this or other language corpora. Q-CAT is a .NET application which runs on the Windows operating system. Version 1.1 enables the automatic attribution of token IDs and personalized font adjustments.
  • Lithuanian keyboard for macOS users

    This keyboard driver allows easy access to the Lithuanian letters via the conventional keyboard layout, a.k.a. „Lithuanian letters instead of numbers“. An essential new feature of this layout is the extensive use of the "dead key" technique to type the following single letters: • Lithuanian accented (ą̃, ū́, m̃, ė́, etc.); • Latvian; • Estonian; • Polish; • French; • German; • Scandinavian; • Ancient Greek; • Russian.
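The "dead key" technique works as a tiny state machine: the dead key emits nothing by itself and combines with the next keystroke into a single accented letter. A sketch with an illustrative (hypothetical) subset of combinations:

```python
# Sketch of a dead-key state machine. The combo table is a tiny
# illustrative subset, not the layout's actual mapping.

DEAD_COMBOS = {("´", "u"): "ú", ("´", "e"): "é", ("~", "u"): "ũ"}

def type_keys(keys):
    """Fold a keystroke sequence into text, resolving dead-key pairs."""
    out, pending = [], None
    for k in keys:
        if pending is not None:
            out.append(DEAD_COMBOS.get((pending, k), k))
            pending = None
        elif any(d == k for d, _ in DEAD_COMBOS):
            pending = k            # dead key pressed: wait for the next keystroke
        else:
            out.append(k)
    return "".join(out)

print(type_keys(["´", "u", "k", "i", "s"]))  # → "úkis"
```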
  • Trankit model for linguistic processing of spoken Slovenian

    This is a retrained Slovenian spoken language model for the Trankit v1.1.1 library (https://pypi.org/project/trankit/). It is able to predict sentence segmentation, tokenization, lemmatization, language-specific morphological annotation (MULTEXT-East morphosyntactic tags), as well as universal part-of-speech tagging, feature prediction, and dependency parsing in accordance with the Universal Dependencies annotation scheme (https://universaldependencies.org/). The model was trained using a combination of two datasets published by Universal Dependencies in release 2.12, the spoken SST treebank (https://github.com/UniversalDependencies/UD_Slovenian-SST/tree/r2.12) and the written SSJ treebank (https://github.com/UniversalDependencies/UD_Slovenian-SSJ/tree/r2.12). Its evaluation on the spoken SST test set yields an F1 score of 97.78 for lemmas, 97.19 for UPOS, 95.05 for XPOS and 81.26 for LAS, a significantly better performance in comparison to the counterpart model trained on written SSJ data only (http://hdl.handle.net/11356/1870). To utilize this model, please follow the instructions provided in our GitHub repository (https://github.com/clarinsi/trankit-train) or refer to the Trankit documentation (https://trankit.readthedocs.io/en/latest/training.html#loading). This ZIP file contains models for both xlm-roberta-large (which delivers better performance but requires more hardware resources) and xlm-roberta-base.
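The F1 scores quoted above come from comparing predicted and gold annotations token by token. A minimal sketch of that evaluation, treating each annotation as a (token-span, label) pair (toy data, not the official CoNLL evaluation script):

```python
# Sketch of token-level F1: predictions and gold annotations are compared
# as sets of (character-span, label) pairs; precision, recall and F1 follow.

def f1_score(gold: set, pred: set) -> float:
    tp = len(gold & pred)
    precision = tp / len(pred)
    recall = tp / len(gold)
    return round(2 * precision * recall / (precision + recall), 2)

# Toy lemma annotations: two of the three (span, lemma) pairs agree.
gold = {((0, 4), "biti"), ((5, 7), "on"), ((8, 13), "govoriti")}
pred = {((0, 4), "biti"), ((5, 7), "on"), ((8, 13), "govor")}
print(f1_score(gold, pred))  # → 0.67
```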
  • Dependency tree extraction tool STARK 2.0

    STARK is a Python-based command-line tool for the extraction of dependency trees from parsed corpora, aimed at corpus-driven linguistic investigations of syntactic and lexical phenomena of various kinds. It takes a treebank in the CoNLL-U format as input and returns a list of all relevant dependency trees with frequency information and other useful statistics, such as the strength of association between the nodes of a tree or its significance in comparison to another treebank. For installation, execution and the description of the various user-defined parameter settings, see the official project page at https://github.com/clarinsi/STARK. In comparison with v1, this version introduces several new features and improvements, such as the option to set parameters on the command line, compare treebanks, or visualise results online.
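To show the kind of input STARK consumes, here is a sketch that reads a minimal CoNLL-U block and extracts (head-lemma, deprel, dependent-lemma) triples, the raw material from which dependency (sub)trees and their frequencies are counted. This is an illustration of the format, not STARK's own code:

```python
# Sketch: extract head-dependent triples from a minimal CoNLL-U block.
# Columns: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC.

conllu = """\
1\tStari\tstar\tADJ\t_\t_\t2\tamod\t_\t_
2\ttrg\ttrg\tNOUN\t_\t_\t0\troot\t_\t_"""

def dependency_pairs(block: str):
    rows = [line.split("\t") for line in block.splitlines()]
    lemma = {row[0]: row[2] for row in rows}
    lemma["0"] = "ROOT"                       # HEAD=0 marks the sentence root
    return [(lemma[row[6]], row[7], row[2]) for row in rows]

print(dependency_pairs(conllu))  # → [('trg', 'amod', 'star'), ('ROOT', 'root', 'trg')]
```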
  • THEaiTRobot 2.0

    The THEaiTRobot 2.0 tool allows the user to interactively generate scripts for individual theatre play scenes. The previous version of the tool (http://hdl.handle.net/11234/1-3507) was based on the GPT-2 XL generative language model, using the model without any fine-tuning, as we found that with a prompt formatted as part of a theatre play script, the model usually generates a continuation that retains the format. The current version also uses vanilla GPT-2 by default, but can instead use a GPT-2 medium model fine-tuned on theatre play scripts (as well as film and TV series scripts). Apart from the basic "flat" generation using a theatrical starting prompt and the script model, the tool also features a second, hierarchical variant: in the first step, a play synopsis is generated from its title using a synopsis model (GPT-2 medium fine-tuned on synopses of theatre plays, as well as film, TV series and book synopses); the synopsis is then used as input for the second stage, which uses the script model. The choice of models is made by setting the MODEL variable in start_server.sh and start_syn_server.sh. THEaiTRobot 2.0 was used to generate the second THEaiTRE play, "Permeation/Prostoupení".
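The hierarchical variant described above can be sketched as a two-stage pipeline. Both "models" below are trivial stand-ins for the fine-tuned GPT-2 synopsis and script models, just to make the data flow concrete:

```python
# Sketch of hierarchical generation: title -> synopsis -> script.
# synopsis_model and script_model are hypothetical stand-ins for GPT-2 models.

def synopsis_model(title: str) -> str:
    return f"A play titled '{title}' about two robots learning to act."

def script_model(prompt: str) -> str:
    return prompt + "\n\nROBOT 1: Shall we begin?"

def generate_hierarchically(title: str) -> str:
    synopsis = synopsis_model(title)   # stage 1: synopsis model expands the title
    return script_model(synopsis)      # stage 2: script model continues from the synopsis

print(generate_hierarchically("Permeation"))
```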
  • The Orange workflow for observing collocation clusters ColEmbed 1.0

    ColEmbed is a workflow (.OWS file) for Orange Data Mining (an open-source machine learning and data visualization software: https://orangedatamining.com/) that allows the user to observe clusters of collocation candidates extracted from corpora. The workflow consists of a series of data filters, embedding processors, and visualizers. As input, the workflow takes a tab-separated file (.TSV/.TAB) with data on collocations extracted from a corpus, along with their relative frequencies by year of publication and other optional values (such as information on temporal trends). The workflow allows the user to select the features used to cluster collocation candidates, along with the embeddings generated from the selected lemmas (either one lemma or both lemmas can be selected, depending on the clustering criteria; for instance, to cluster adjective+noun candidates based on the similarities of their noun components, only the second lemma is selected for embedding generation). The obtained embedding clusters can be visualized and further processed (e.g. by finding the closest neighbors of a reference collocation). The workflow is described in more detail in the accompanying README file. The entry also contains three .TAB files that can be used to test the workflow. The files contain collocation candidates (along with their relative frequencies per year of publication and four measures describing their temporal trends; see http://hdl.handle.net/11356/1424 for more details) extracted from the Gigafida 2.0 Corpus of Written Slovene (https://viri.cjvt.si/gigafida/) with three different syntactic structures (as defined in http://hdl.handle.net/11356/1415): 1) p0-s0 (adjective + noun, e.g. rezervni sklad), 2) s0-s2 (noun + noun in the genitive case, e.g. ukinitev lastnine), and 3) gg-s4 (verb + noun in the accusative case, e.g. pripraviti besedilo). It should be noted that only collocation candidates with an absolute frequency of 15 and above were extracted. Please note that the ColEmbed workflow requires the installation of the Text Mining add-on for Orange. For installation instructions, as well as a more detailed description of the different phases of the workflow and the measures used to observe the collocation trends, please consult the README file.
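The nearest-neighbour step that such embedding-based clustering builds on can be sketched with cosine similarity over toy vectors (three dimensions stand in for real lemma embeddings; the values are invented for illustration):

```python
from math import sqrt

# Sketch: find the collocation candidate whose (toy) lemma embedding is
# most cosine-similar to a reference candidate's embedding.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

embeddings = {
    "rezervni sklad":    (0.9, 0.1, 0.0),
    "rezervni del":      (0.8, 0.2, 0.1),
    "ukinitev lastnine": (0.1, 0.9, 0.3),
}

def closest(target, table):
    others = [(k, cosine(v, table[target])) for k, v in table.items() if k != target]
    return max(others, key=lambda kv: kv[1])[0]

print(closest("rezervni sklad", embeddings))  # → "rezervni del"
```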