Active filters:

  • Resource type: Web service
  • Project: CLARIAH-CORE
11 record(s) found

Search results

  • Blacklab AutoSearch Corpus Search

    This demonstrator allows users to define one or more corpora and upload data for them, after which the corpora are automatically made searchable in a private workspace. Users can upload text data annotated with lemma and part-of-speech tags in TEI or FoLiA format, either as a single XML file or as an archive (zip or tar.gz) containing several XML files. Corpus size is limited to begin with (25 MB limit per uploaded file; 500,000-token limit for an entire corpus), but these limits may be increased at a later point in time. The search application is powered by the INL BlackLab corpus search engine. The search interface is the same as that used by, for example, the Corpus Hedendaags Nederlands (Corpus of Contemporary Dutch).
  • WebCelex

    WebCelex is a web-based interface to the CELEX lexical databases of English, Dutch and German. CELEX was developed as a joint enterprise of the University of Nijmegen, the Institute for Dutch Lexicology in Leiden, the Max Planck Institute for Psycholinguistics in Nijmegen, and the Institute for Perception Research in Eindhoven. For each language, the database contains detailed information on: orthography (variations in spelling, hyphenation), phonology (phonetic transcriptions, variations in pronunciation, syllable structure, primary stress), morphology (derivational and compositional structure, inflectional paradigms), syntax (word class, word class-specific subcategorizations, argument structures) and word frequency (summed word and lemma counts, based on recent and representative text corpora).
  • PICCL: Philosophical Integrator of Computational and Corpus Libraries

    PICCL is a set of workflows for corpus building through OCR, post-correction, modernization of historic language and Natural Language Processing. It combines Tesseract Optical Character Recognition, TICCL functionality and Frog functionality in a single pipeline. Tesseract is open-source software for optical character recognition. TICCL (Text-Induced Corpus Clean-up) is designed to detect and correct typographical errors (misprints) and OCR errors in texts: when books or other texts are scanned from paper and the resulting scans, i.e. images, are turned into digital text files, errors occur. For instance, the letter combination 'in' can be read as 'm', so that the word 'regeering' is incorrectly reproduced as 'regeermg'. TICCL searches a corpus for all existing variants of (potentially) all words occurring in the corpus; this corpus can be one text, or several, in one or more directories, located on one or more machines. TICCL creates word frequency lists, listing for each word type how often the word occurs in the corpus; these frequencies of the normalized word forms are the sum of the frequencies of the actual word forms found in the corpus. TICCL can then be used to detect such errors and to suggest a correct form. Frog enriches textual documents with various linguistic annotations.
    Martin Reynaert, Maarten van Gompel, Ko van der Sloot and Antal van den Bosch. 2015. PICCL: Philosophical Integrator of Computational and Corpus Libraries. Proceedings of CLARIN Annual Conference 2015, pp. 75-79. Wrocław, Poland. http://www.nederlab.nl/cms/wp-content/uploads/2015/10/Reynaert_PICCL-Philosophical-Integrator-of-Computational-and-Corpus-Libraries.pdf
    PICCL
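The frequency-list step that TICCL performs can be illustrated with a minimal sketch. This is not TICCL's actual implementation, only an illustration of the idea: count how often each word type occurs, so that rare forms (likely OCR errors such as 'regeermg') can later be compared against frequent look-alikes.

```python
from collections import Counter

def word_frequencies(text: str) -> Counter:
    """Count how often each word type occurs in a text: the kind of
    frequency list TICCL builds over a corpus (illustration only)."""
    return Counter(text.lower().split())

# 'regeermg' is a plausible OCR error for 'regeering'; a variant
# detector compares such low-frequency forms against frequent,
# near-identical forms and suggests a correction.
freqs = word_frequencies("de regeering en de regeermg van de stad")
```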
  • Frog: An advanced Natural Language Processing Suite for Dutch (Web Service and Application)

    Frog is an integration of memory-based natural language processing (NLP) modules developed for Dutch. It performs automatic linguistic enrichment such as part of speech tagging, lemmatisation, named entity recognition, shallow parsing, dependency parsing and morphological analysis. All NLP modules are based on TiMBL.
    Iris Hendrickx, Antal van den Bosch, Maarten van Gompel, Ko van der Sloot and Walter Daelemans. 2016. Frog: A Natural Language Processing Suite for Dutch. CLST Technical Report 16-02, pp. 99-114. Nijmegen, the Netherlands. https://github.com/LanguageMachines/frog/blob/master/docs/frogmanual.pdf
    Van den Bosch, A., Busser, G.J., Daelemans, W., and Canisius, S. (2007). An efficient memory-based morphosyntactic tagger and parser for Dutch, In F. van Eynde, P. Dirix, I. Schuurman, and V. Vandeghinste (Eds.), Selected Papers of the 17th Computational Linguistics in the Netherlands Meeting, Leuven, Belgium, pp. 99-114. http://ilk.uvt.nl/downloads/pub/papers/tadpole-final.pdf
    Frog (plain text input)
    Frog (folia+xml input)
  • Ucto Tokeniser

    Ucto tokenizes text files: it separates words from punctuation and splits sentences, one of the first tasks in almost any Natural Language Processing application. Ucto also offers several other basic preprocessing steps, such as changing case, that help make your text suitable for further processing such as indexing, part-of-speech tagging, or machine translation. The tokeniser engine is language-independent; by supplying language-specific tokenisation rules in an external configuration file, a tokeniser can be created for a specific language. Ucto comes with tokenization rules for English, Dutch, French, Italian, and Swedish, and is easily extensible to other languages. It recognizes dates, times, units, currencies, and abbreviations, as well as paired quote spans, sentences, and paragraphs. It produces UTF-8, NFC-normalized output, optionally accepts other encodings as input, and can optionally convert text to all lowercase or uppercase. Ucto supports FoLiA XML.
    Ucto
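What a tokenisation rule does can be sketched with a single regular expression. Ucto's real rules are far richer and live in external, language-specific configuration files; this is only a toy illustration of separating words from attached punctuation.

```python
import re

# One illustrative rule: a token is either a run of word characters
# or a single non-space, non-word character (e.g. punctuation).
TOKEN_RE = re.compile(r"\w+|[^\w\s]")

def tokenize(sentence: str) -> list:
    """Split a sentence into word and punctuation tokens."""
    return TOKEN_RE.findall(sentence)

tokens = tokenize("Hello, world!")  # → ['Hello', ',', 'world', '!']
```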
  • CLARIN Vocabulary Service

    The CLARIN Vocabulary Service is a running instance of the OpenSKOS exchange and publication platform for SKOS vocabularies. OpenSKOS offers several ways to publish SKOS vocabularies (upload a SKOS file, harvest from another OpenSKOS instance with OAI-PMH, or construct one using the RESTful API) and to use vocabularies (search and autocomplete using the API, harvest using OAI-PMH, inspect in the interactive Editor, or consult as Linked Data). This CLARIN OpenSKOS instance is hosted by the Meertens Institute. It currently publishes SKOS versions of three vocabularies: ISO 639-3 language codes, as published by SIL; closed and simple Data Categories from the ISOcat metadata profile; and a manually constructed and curated list of organizations, based on the CLARIN VLO.
    Brugman, H. 2017. CLAVAS: A CLARIN Vocabulary and Alignment Service. In: Odijk J. & van Hessen A, CLARIN in the Low Countries, ch 5, pp 61-69. London: Ubiquity Press. DOI: https://doi.org/10.5334/bbi.5
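A client of an OpenSKOS-style REST API typically builds a search URL from a query term and a response format. The host and endpoint below are hypothetical placeholders; consult the API documentation of the actual OpenSKOS instance for the real paths and parameters.

```python
from urllib.parse import urlencode

# Hypothetical base URL, for illustration only.
BASE = "https://openskos.example.org/api/find-concepts"

def concept_search_url(query: str, fmt: str = "json") -> str:
    """Build a concept-search request URL for an OpenSKOS-style API."""
    return f"{BASE}?{urlencode({'q': query, 'format': fmt})}"

url = concept_search_url("Dutch")
```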
  • Automatic Transcription of Oral History Interviews

    This web service and web application uses automatic speech recognition to transcribe recordings of spoken Dutch. You can upload and process only one file per project. For bulk processing and other questions, please contact Henk van den Heuvel at h.vandenheuvel@let.ru.nl.
  • OpenConvert

    The OpenConvert tools convert a number of input formats (ALTO, plain text, Word, HTML, ePub) to TEI or FoLiA. The tools are available as a Java command-line tool, a web service and a web application; they were created by IVDNT in the OpenConvert project. Furthermore, as a proof of concept, the website currently provides two annotation tools: a simple tokenizer for TEI files and a part-of-speech tagger for modern Dutch.
    The tool service can be called as a REST webservice which returns responses in XML, allowing it to be part of a webservice tool chain.
    Input formats: TEI, plain text, HTML; ALTO XML; ePub; a directory containing files of a valid input type; or a zip file (with extension .zip) containing files of a valid input type.
    Licence: free for academic use; not applicable for commercial parties.
    Access: CLARIN-based login required. The CLARIN federation accepts logins from many European institutions; please see http://www.clarin.eu/content/service-provider-federation for more details.
    Parameters: the input file name (file upload), the format of the input file, the format of the output file, and the tagger or tokeniser to apply.
    Accepted input MIME types: application/tei+xml, text/html, text/alto+xml, application/msword, application/epub+zip, text/plain.
    Produced output MIME types: application/tei+xml, text/folia+xml.
    Annotation tools: a basic tagger-lemmatizer for modern Dutch and a TEI tokenizer.
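Since the tool service can be called as a REST web service, a client would pass the input and output formats as request parameters. The service URL and parameter names below are hypothetical, modelled loosely on the parameters listed above; check the OpenConvert documentation for the real endpoint and field names. The sketch only prepares the request, it does not send it.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical service URL, for illustration only.
SERVICE = "https://openconvert.example.org/convert"

def build_convert_request(input_format: str, output_format: str) -> Request:
    """Prepare (but do not send) a GET request asking a conversion
    service to convert from one MIME type to another."""
    query = urlencode({"from": input_format, "to": output_format})
    return Request(f"{SERVICE}?{query}", method="GET")

req = build_convert_request("text/plain", "application/tei+xml")
```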
  • CMDI to RDF conversion

    There is a growing amount of online information available in RDF format as Linked Open Data (LOD), and a strong community very actively promotes its use. Publishing information as LOD is also considered an important signal that the publisher actively seeks to share information with a world full of new potential users. Added advantages of LOD, when used well, are its explicit semantics and high interoperability. But problematic modelling by non-expert users offsets these advantages, which is one reason why modelling systems such as CMDI are used. The CMDI2RDF project aims to bring the LOD advantages to the CMDI world, make the huge store of CMDI information available to new groups of users, and at the same time offer CLARIN a powerful tool to experiment with new metadata discovery possibilities. The CMD2RDF service was created to allow connection with the growing LOD world and to facilitate experiments within CLARIN that merge CMDI with other, RDF-based, information sources. One of the promises of LOD is the ease of linking data sets together and answering queries over this ‘cloud’ of LOD datasets. In the enrichment and use-case part of the project we therefore looked at other datasets to link to the CLARIN joint metadata domain, and used the WALS N3 RDF dump for one of the use cases. Although it is in the end relatively easy to go from a specific typological feature to the CMD records via a shared URI, the exercise still showcased a weakness of the Linked Data approach: one has to carefully inspect the property paths involved. In this case the path was broken, as there was no clear way to go from the WALS feature data to the WALS language info except by extracting the WALS language code from the feature URI pattern and inserting it into the language URI pattern.
This showcases that although the big LOD cloud shows potential for knowledge discovery by crossing dataset boundaries, design decisions in the individual datasets can still hamper algorithms, and manual inspection is needed. The CMD2RDF service was developed at the TLA/MPI for Psycholinguistics and DANS and later moved to the Meertens Institute, where the expertise remains.
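The manual path repair described above, extracting a language code from a feature URI and inserting it into a language URI pattern, can be sketched as follows. The URI patterns here are hypothetical placeholders; the actual WALS URI layout must be checked against the WALS RDF dump itself.

```python
# Hypothetical language-URI template, for illustration only.
LANG_PATTERN = "http://wals.example.org/languoid/{code}"

def language_uri_from_feature(feature_uri: str) -> str:
    """Recover a language URI by extracting the language code embedded
    at the end of a feature (datapoint) URI: the kind of manual path
    repair needed when no explicit link between datasets exists."""
    code = feature_uri.rstrip("/").rsplit("/", 1)[-1]
    return LANG_PATTERN.format(code=code)

uri = language_uri_from_feature("http://wals.example.org/datapoint/81A/nld")
```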