703 record(s) found

Search results

  • Service for querying dependency treebanks Drevesnik 1.1

    Drevesnik (https://orodja.cjvt.si/drevesnik/) is an online service for querying Slovenian corpora parsed with the Universal Dependencies annotation scheme. It features an easy-to-use query language on the one hand and user-friendly graph visualizations on the other. It is based on the open-source dep_search tool (https://github.com/TurkuNLP/dep_search), which was localized and modified so as to also support querying by JOS morphosyntactic tags, random distribution of results, and filtering by sentence length. The source code and the documentation for the search backend and the web user interface are publicly available on the CLARIN.SI GitHub repository https://github.com/clarinsi/drevesnik. This submission corresponds to release 1.1: https://github.com/clarinsi/drevesnik/releases/tag/1.1, which brings improved architecture, documentation and branding in comparison to release 1.0.
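A service like this is typically queried over HTTP. The sketch below only composes a search URL in the dep_search query style (here, a noun governing an adjectival modifier); the endpoint path and parameter names (`search`, `db`, `hits`) are assumptions for illustration, not Drevesnik's documented API.

```python
from urllib.parse import urlencode

# Hypothetical sketch: the real service lives at https://orodja.cjvt.si/drevesnik/;
# the parameter names below are assumptions, not a documented API.
BASE_URL = "https://orodja.cjvt.si/drevesnik/"

def build_query_url(query, treebank, max_hits=10):
    """Compose a search URL for a dep_search-style query."""
    params = {"search": query, "db": treebank, "hits": max_hits}
    return BASE_URL + "?" + urlencode(params)

# A UD-style query: a NOUN head with an amod dependent that is an ADJ.
url = build_query_url("NOUN >amod ADJ", "sl_ssj-ud")
```

The URL is built but not sent, so the snippet runs without network access.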
  • Alexia: Lexicon Acquisition Tool for Icelandic (Orðtökutól) 2.0

    The purpose of the lexicon acquisition tool is to facilitate the development and expansion of online dictionaries and glossaries, particularly the Database of Modern Icelandic Inflection (DMII/BÍN) and ISLEX. The tool is designed around the Icelandic Gigaword Corpus (IGC) and the information contained within its TEI-formatted documents; that is, it performs best when using the part-of-speech tags, lemmas and word forms defined in the IGC. The tool can, however, take any corpus as input that uses either the same TEI format as the IGC or a plain-text format, depending on the user's preference. The output files, examples of which are included, are the following:
    - Frequency per word form, with no extra information added. Useful for generally picking candidates for the online dictionaries and glossaries.
    - Frequency per lemma, with no extra information added. Useful for generally picking candidates for the online dictionaries and glossaries.
    - Frequency per word form, including all possible lemmas for the given word form. Shows whether the word form can belong to more than one word class, and whether the automatic lemmatization is working correctly.
    - Frequency per lemma, including all possible word forms for the given lemma. Useful for examining whether a certain word form appears much more or less frequently than the others, and thus whether it is only used as part of a certain expression.
    - Frequency per lemma, including the types of text in which the lemma appears. The frequency for each individual text type can also be examined in descending order, which facilitates the creation of specialized glossaries (e.g. a glossary of sport-related words).
    Also included is a list of approximately 60 thousand stop words, manually selected from the IGC. These include foreign words, typos, misspelled words, lemmatization errors and acronyms.
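The frequency lists described above can be illustrated with a toy sketch. The tiny (word form, lemma) corpus below stands in for the IGC's TEI token/lemma annotations; none of the names come from Alexia itself.

```python
from collections import Counter, defaultdict

# Toy stand-in for a lemmatized corpus: pairs of (word form, lemma).
corpus = [
    ("hesturinn", "hestur"), ("hesta", "hestur"),
    ("bók", "bók"), ("bókin", "bók"), ("bók", "bók"),
]

# Frequency per word form and per lemma, with no extra information.
form_freq = Counter(form for form, _ in corpus)
lemma_freq = Counter(lemma for _, lemma in corpus)

# Word form -> all lemmas it was tagged with; more than one entry here
# signals word-class ambiguity or a lemmatization error.
form_to_lemmas = defaultdict(set)
for form, lemma in corpus:
    form_to_lemmas[form].add(lemma)
```

The per-lemma list of word forms (the inverse mapping) is built the same way with the roles of form and lemma swapped.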
  • Kaldi L2 Speakers Recipe 22.10

    This release includes a recipe intended to show how to integrate the corpus "Samromur L2 22.09" [1] and the "Icelandic Language Models with Pronunciations 22.01" [2] to create automatic speech recognition systems using the Kaldi toolkit.
    [1] http://hdl.handle.net/20.500.12537/263
    [2] http://hdl.handle.net/20.500.12537/172
  • Slovene Text Normalizator RSDO-DS2-NORM 1.0

    This text normaliser converts Slovene text from its written form into its spoken form, traditionally an essential preprocessing step before text-to-speech (TTS). As input it accepts text as a string, and it returns a dictionary with the fields "input_text", "normalized_text", "status" and "logs". Example:
    normalize_text("Sodobna definicija Celzijeve temperaturne lestvice, ki velja od leta 1954, je, da je temperatura trojne točke vode enaka 0,01 °C.")
    {'input_text': 'Sodobna definicija Celzijeve temperaturne lestvice, ki velja od leta 1954, je, da je temperatura trojne točke vode enaka 0,01 °C.',
     'normalized_text': 'Sodobna definicija Celzijeve temperaturne lestvice, ki velja od leta tisoč devetsto štiriinpetdeset, je, da je temperatura trojne točke vode enaka nič celih nič ena stopinje Celzija.',
     'status': 1,
     'logs': [('1954', 'tisoč devetsto štiriinpetdeset'), ('0,01', 'nič celih nič ena'), ('°C', 'stopinje Celzija')]}
    For further details see README.md.
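The documented return shape can be sketched with a toy implementation. The substitution table below is copied from the example output in this entry; the real tool's rule set is of course far richer, and this function is only an illustration of the interface, not the tool's code.

```python
# Toy sketch of the documented interface. RULES is taken verbatim from the
# example logs above; everything else is illustrative, not the real tool.
RULES = [
    ("1954", "tisoč devetsto štiriinpetdeset"),
    ("0,01", "nič celih nič ena"),
    ("°C", "stopinje Celzija"),
]

def normalize_text(text):
    logs = []
    normalized = text
    for written, spoken in RULES:
        if written in normalized:
            normalized = normalized.replace(written, spoken)
            logs.append((written, spoken))
    return {"input_text": text, "normalized_text": normalized,
            "status": 1, "logs": logs}
```

Calling it on a string containing "1954" returns the four documented fields, with each applied substitution recorded in "logs".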
  • Slovene Punctuation and Capitalisation model RSDO-DS2-P&C 3.6

    This Punctuation and Capitalisation model was trained following the NVIDIA NeMo Punctuation and Capitalisation recipe (for details see the official NVIDIA NeMo P&C documentation, https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/punctuation_and_capitalization.html, and the NVIDIA NeMo GitHub repository, https://github.com/NVIDIA/NeMo). It provides functionality for restoring punctuation (,.!?) and capital letters in lowercased, non-punctuated Slovene text. The training corpus was built from publicly available datasets, as well as a small portion of proprietary data. In total, the training corpus consisted of 38,829,529 sentences and the validation corpus of 2,092,497 sentences.
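The task itself can be framed as token classification: for every token, the model predicts a trailing punctuation mark (or none) and whether to capitalise it, and the restored text is reassembled from those labels. The sketch below hand-writes the predictions to show the reassembly step; it is not NeMo code.

```python
# Sketch of P&C as token classification. The (punct, capitalise) labels
# below are hand-written stand-ins for real model predictions.
tokens = ["zdravo", "kako", "si", "danes"]
predictions = [(",", True), ("", False), ("", False), ("?", False)]

def restore(tokens, predictions):
    """Reassemble text from per-token punctuation/capitalisation labels."""
    out = []
    for tok, (punct, cap) in zip(tokens, predictions):
        out.append((tok.capitalize() if cap else tok) + punct)
    return " ".join(out)
```

Here `restore(tokens, predictions)` turns the lowercased, unpunctuated input into "Zdravo, kako si danes?".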
  • IceNeuralParsingPipeline 20.04

    The Icelandic Neural Parsing Pipeline (IceNeuralParsingPipeline) includes all steps necessary for parsing plain Icelandic text: preprocessing, parsing and post-processing. The preprocessing step consists of tokenization as well as both punctuation and matrix-clause splitting. The parsing step uses an Icelandic model of the Berkeley Neural Parser, trained on IcePaHC, which reports an F1 score of 84.74. The output's annotation scheme is the same as IcePaHC's, except that neither empty phrases (e.g. traces and zero subjects) nor lemmas are shown. The post-processing step includes minor steps for cleaning and formatting the parsed text.
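The three-stage shape described above can be sketched as a simple function composition. The stage bodies below are stubs standing in for the pipeline's actual components (tokenizer, Berkeley Neural Parser model, output cleaner); only the chaining is the point.

```python
# Minimal sketch of the pipeline's three stages; bodies are stubs.
def preprocess(text):
    # Stands in for tokenization plus punctuation/matrix-clause splitting.
    return [text.split()]

def parse(clauses):
    # Stands in for the Berkeley Neural Parser model trained on IcePaHC.
    return [("IP-MAT", clause) for clause in clauses]

def postprocess(trees):
    # Stands in for minor cleaning and formatting of the parsed output.
    return trees

def run_pipeline(text):
    return postprocess(parse(preprocess(text)))
```

Each stage consumes the previous stage's output, so the stubs can be swapped for the real components without changing `run_pipeline`.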
  • GreynirT2T Serving - En--Is NMT Inference and Pre-trained Models (1.0)

    Code and models required to run the GreynirT2T Transformer NMT system for translation between English and Icelandic. Includes a Docker Compose file that starts a REST web server making the translation models available to clients.
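A client would talk to such a REST server with a JSON request. The sketch below only builds the request body; the field names (`contents`, `sourceLanguageCode`, `targetLanguageCode`) are assumptions for illustration, not the documented GreynirT2T Serving API.

```python
import json

# Hypothetical client sketch: payload field names are assumptions,
# not the documented GreynirT2T Serving API.
def build_translation_request(text, src="en", tgt="is"):
    payload = {"contents": [text],
               "sourceLanguageCode": src,
               "targetLanguageCode": tgt}
    return json.dumps(payload)

body = build_translation_request("Hello, world!")
```

The body is built but never sent, so the snippet runs without the server.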
  • Service for querying dependency treebanks Drevesnik 1.0

    Drevesnik (https://orodja.cjvt.si/drevesnik/) is an online service for querying syntactically parsed Slovenian corpora using the Universal Dependencies annotation scheme, with an easy-to-use query language on the one hand and user-friendly graph visualizations on the other. It is based on the open-source dep_search tool (https://github.com/TurkuNLP/dep_search), which was localized and modified so as to also support querying by JOS morphosyntactic tags, random distribution of results, and filtering by sentence length. The source code and the documentation for the search backend and the web user interface are publicly available on the CLARIN.SI GitHub repository https://github.com/clarinsi/drevesnik. This submission corresponds to release 1.0: https://github.com/clarinsi/drevesnik/releases/tag/1.0.
  • Generator of Czech lyrics according to structure

    Fine-tuned versions of the Czech TinyLlama model (https://huggingface.co/BUT-FIT/CSTinyLlama-1.2B) and the Czech GPT-2 small model (https://huggingface.co/lchaloupsky/czech-gpt2-oscar) that generate lyrics for song sections based on the provided syllable counts, keywords and rhyme scheme. The TinyLlama-based model yields better results; however, the GPT-2-based model can run locally. Both models are discussed in a Bachelor thesis: Generation of Czech Lyrics to Cover Songs.
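Conditioning a language model on syllable counts, keywords and a rhyme scheme is typically done by encoding them into the prompt. The format below is a hypothetical illustration; the actual control format the fine-tuned models expect is defined in the thesis, not reproduced here.

```python
# Hypothetical prompt sketch: the control format is illustrative only,
# not the format the fine-tuned models were trained on.
def build_prompt(syllables, keywords, rhyme_scheme):
    return (f"# syllables: {' '.join(str(s) for s in syllables)}\n"
            f"# keywords: {', '.join(keywords)}\n"
            f"# rhyme: {rhyme_scheme}\n")

prompt = build_prompt([8, 8, 6, 6], ["láska", "noc"], "AABB")
```

The resulting string would be fed to the model as the generation prefix, with the lyrics sampled as the continuation.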