NLP Resources

From the LDC Language Resource Wiki


Revision as of 19:57, 10 May 2011

THIS PAGE IS UNDER CONSTRUCTION


[Ftyers 19:13, 22 April 2010 (UTC)]

This page is for language-independent resources for computational natural language processing.
General meta-resources that are not specific to NLP are listed on the Language-independent General Meta-resources page.


Apertium

A free/open-source rule-based machine translation platform offering free linguistic data (morphological analysers, bilingual dictionaries, etc.) in XML formats for a range of languages.
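
Apertium's actual pipeline is more elaborate (deformatting, chunked structural transfer, reformatting), but its lexical core can be sketched as: morphological analysis, bilingual dictionary lookup, and morphological generation. The toy dictionaries below are hypothetical stand-ins for Apertium's XML (.dix) files, shown only to illustrate the three-stage flow.

```python
# Illustrative sketch of a shallow-transfer MT core, in the spirit of
# Apertium: analyse each surface word, transfer the lemma through a
# bilingual dictionary, then generate the target surface form.
# All three dictionaries are toy, hypothetical data (es -> en).

ANALYSER = {"gatos": ("gato", "n.pl"), "negros": ("negro", "adj.pl")}
BIDIX = {"gato": "cat", "negro": "black"}
GENERATOR = {("cat", "n.pl"): "cats", ("black", "adj.pl"): "black"}

def translate(sentence):
    out = []
    for word in sentence.split():
        lemma, tags = ANALYSER[word]                  # morphological analysis
        target_lemma = BIDIX[lemma]                   # bilingual transfer
        out.append(GENERATOR[(target_lemma, tags)])   # generation
    return " ".join(out)

print(translate("gatos negros"))  # -> cats black
```

A real system also needs structural transfer (here Spanish noun-adjective order is left unchanged), which Apertium handles with separate rule files.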

Links

An Crúbadán

Corpus building for minority languages: Home page for An Crúbadán, web crawling software by Kevin P. Scannell designed for corpus building for minority languages. [Mamandel 00:25, 14 May 2010 (UTC)]

From the project description: "Statistical techniques are a key part of most modern natural language processing systems. Unfortunately, such techniques require large bodies of text, and in the past corpus development has proved quite expensive. As a result, substantial corpora exist primarily for languages like English, French, and German, where there is a market-driven need for NLP tools.
My software is designed to exploit the vast quantities of text freely available on the web as a way of bringing the benefits of statistical NLP to languages with small numbers of speakers and/or limited computational resources. Initially it was deployed for the six Celtic languages, but more recently I've added support for a number of other languages from all parts of the world. You can find an up-to-date list of languages and the corpus statistics for each on the Status Page. There is also information on tools developed using these corpora on the Applications Page."
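
The corpus statistics the project reports per language boil down to token counts, type counts, and word-frequency lists over the crawled text. A minimal sketch of such statistics (the crawling itself is omitted; the sample sentences are illustrative):

```python
from collections import Counter
import re

def corpus_stats(documents):
    """Token count, type count, and a frequency list for a small corpus."""
    tokens = []
    for doc in documents:
        # Simple Unicode-aware word tokenization; real corpus builders
        # use more careful, language-aware tokenizers.
        tokens.extend(re.findall(r"\w+", doc.lower()))
    freq = Counter(tokens)
    return {"tokens": len(tokens), "types": len(freq), "freq": freq}

docs = ["Tá an lá go maith", "Tá an aimsir go deas"]  # sample Irish text
stats = corpus_stats(docs)
print(stats["tokens"], stats["types"])  # -> 10 7
```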

Foma

  • Foma: a finite-state compiler and library. Hulden, Mans. 2009. Proceedings of the EACL 2009 Demonstrations Session, pages 29–32, Athens, Greece, 3 April 2009. [PDF: http://www.aclweb.org/anthology-new/E/E09/E09-2008.pdf]
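
What a finite-state compiler like foma produces is a transducer: states, arcs labelled with input:output symbol pairs, and a set of final states. The hand-built toy transducer below (not foma's own format or API) illustrates the idea by mapping surface forms to morphological analyses:

```python
# A tiny hand-built finite-state transducer: it maps "cats" to
# "cat+N+Pl" and "cat" to "cat+N+Sg".
# transitions: (state, input_symbol) -> (output_string, next_state)
TRANS = {
    (0, "c"): ("c", 1),
    (1, "a"): ("a", 2),
    (2, "t"): ("t", 3),
    (3, "s"): ("+N+Pl", 4),
    (3, ""): ("+N+Sg", 5),   # epsilon arc: consumes no input
}
FINAL = {4, 5}

def analyse(word):
    def step(state, i, out):
        if i == len(word) and state in FINAL:
            yield out
        if i < len(word) and (state, word[i]) in TRANS:
            o, nxt = TRANS[(state, word[i])]
            yield from step(nxt, i + 1, out + o)
        if (state, "") in TRANS:                 # follow epsilon arcs
            o, nxt = TRANS[(state, "")]
            yield from step(nxt, i, out + o)
    return list(step(0, 0, ""))

print(analyse("cats"))  # -> ['cat+N+Pl']
print(analyse("cat"))   # -> ['cat+N+Sg']
```

In practice one writes regular expressions or lexicons and lets the compiler build (and minimize) such machines; hand-coding transitions as above does not scale.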

HFST

The Helsinki finite-state toolkit is a free/open-source rewrite of the Xerox finite-state tools. It provides implementations of both the lexc and twolc formalisms.
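
The lexc formalism organizes a lexicon into named sublexicons whose entries pair an analysis:surface fragment with a continuation class; a word ends when it reaches the end-of-word class. The Python sketch below mimics that idea with plain data structures (it is an illustration of the concept, not HFST's file format or API):

```python
# lexc-style lexicon as plain data: each sublexicon maps entries
# ((analysis_fragment, surface_fragment), continuation_class);
# "#" marks end of word.
LEXICON = {
    "Root": [(("cat", "cat"), "Noun"), (("dog", "dog"), "Noun")],
    "Noun": [(("+Sg", ""), "#"), (("+Pl", "s"), "#")],
}

def enumerate_words(lex, cls="Root", analysis="", surface=""):
    """Enumerate all (analysis, surface) pairs the lexicon accepts."""
    if cls == "#":
        return [(analysis, surface)]
    pairs = []
    for (ana, sur), cont in lex[cls]:
        pairs.extend(enumerate_words(lex, cont, analysis + ana, surface + sur))
    return pairs

for ana, sur in enumerate_words(LEXICON):
    print(f"{ana}\t{sur}")
# -> cat+Sg cat / cat+Pl cats / dog+Sg dog / dog+Pl dogs
```

twolc then adds two-level phonological rules (e.g. e-insertion, consonant gradation) that rewrite the surface side; the toolkit composes lexicon and rules into a single transducer.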

Links

Machine Translation Archive

Machine Translation Archive. Electronic repository and bibliography of articles, books and papers on topics in machine translation, computer translation systems, and computer-based translation tools; more than 6,400 items. It aims to be comprehensive for English-language publications since 1990, and earlier papers and books are being added to provide partial coverage back to the 1950s. [Mamandel 20:53, 22 April 2010 (UTC)]

Methodology

Probabilistic tagging of minority language data: a case study using Qtag

  • Christopher Cox. 2010. [Mamandel 20:24, 23 August 2010 (UTC)]
  • In Corpus-linguistic applications, ed. Stefan Th. Gries, Stefanie Wulff, and Mark Davies. Rodopi. Electronic: ISBN 9789042028012; hardback: ISBN 9789042028005.
  • Reviewed in LINGUIST List 21.3318 by Andrew Caines (2010-08-17):
    Cox's theme is corpus planning. He considers the tagging process, and evaluates the time-accuracy trade-off in using (a) normalized/unnormalized orthography; (b) various chunk sizes for rounds of iterative, interactive tagging; (c) tagset size. He does so in the context of corpus building for minority languages, which are on the whole associated with more modest resources than major-language projects.
    Cox considers what is required to tag a minority-language corpus. He finds that orthographically normalized data is 20% more accurate but more expensive to prepare, that smaller chunks are preferable for iterative interactive tagging, and that a less elaborate tagset is more accurate and efficient. Cox notes that these observations must be set against the purpose of the corpus and the requirements of the researchers who will be using it. This is a well-written paper with well-defined research questions and conclusions which are explicitly linked back to them -- an attribute which cannot be taken for granted in academic literature.
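
The probabilistic taggers discussed here learn tag distributions from hand-tagged text. Qtag itself uses n-gram context; the sketch below strips that down to a unigram tagger (per-word tag frequencies only, with a default tag for unseen words) just to show the train/tag cycle that iterative, interactive tagging repeats. Tag names and the fallback tag are illustrative choices.

```python
from collections import Counter, defaultdict

def train(tagged_sentences):
    """Learn each word's most frequent tag from tagged training data."""
    counts = defaultdict(Counter)
    for sent in tagged_sentences:
        for word, tag in sent:
            counts[word.lower()][tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def tag(model, words, unknown="NOUN"):
    # Back off to a default tag for words not seen in training.
    return [(w, model.get(w.lower(), unknown)) for w in words]

model = train([[("the", "DET"), ("dog", "NOUN"), ("barks", "VERB")],
               [("the", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")]])
print(tag(model, ["The", "dog", "sleeps"]))
# -> [('The', 'DET'), ('dog', 'NOUN'), ('sleeps', 'VERB')]
```

Cox's trade-offs show up directly in such a model: orthographic normalization (the `.lower()` step and beyond) merges counts across variants, and a smaller tagset leaves fewer ways for the most-frequent-tag choice to be wrong.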

OBELEX

Online Bibliography of Electronic Lexicography (OBELEX). Covers relevant articles, monographs, anthologies, and reviews since 2000, along with some older relevant works; the focus is on online lexicography. Dictionaries themselves are not included, but are being collected in a supplementary database now under construction. Searchable by full text, keyword, person, analysed languages, or publication year. [Mamandel 22:26, 28 April 2010 (UTC)]

TMX

An XML-based format for translation memories.
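
In TMX, each `<tu>` (translation unit) holds one `<tuv>` per language, identified by `xml:lang`, with the segment text in `<seg>`. A minimal TMX 1.4 document and a sketch of reading it with Python's standard-library XML parser:

```python
import xml.etree.ElementTree as ET

# A minimal TMX 1.4 document with one translation unit (en/fr).
TMX = """<?xml version="1.0"?>
<tmx version="1.4">
  <header srclang="en" datatype="plaintext" segtype="sentence"
          creationtool="example" creationtoolversion="1"
          adminlang="en" o-tmf="none"/>
  <body>
    <tu>
      <tuv xml:lang="en"><seg>Hello world</seg></tuv>
      <tuv xml:lang="fr"><seg>Bonjour le monde</seg></tuv>
    </tu>
  </body>
</tmx>"""

# ElementTree exposes xml:lang under the XML namespace URI.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def read_tmx(text):
    """Return each translation unit as a {lang: segment} dict."""
    root = ET.fromstring(text)
    units = []
    for tu in root.find("body").iter("tu"):
        units.append({tuv.get(XML_LANG): tuv.findtext("seg")
                      for tuv in tu.iter("tuv")})
    return units

print(read_tmx(TMX))
# -> [{'en': 'Hello world', 'fr': 'Bonjour le monde'}]
```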

Links

Universal Networking Language

Universal Networking Language (UNL): an artificial language for representing, describing, summarizing, refining, storing and disseminating information in a natural-language-independent format. It is a kind of mark-up language which represents not the formatting but the core information of a text. As HTML annotations can be realized differently in the context of different applications, machines, displays, etc., so UNL expressions can have different realizations in different human languages. [Mamandel 20:26, 6 May 2010 (UTC)]
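
The core idea, that one language-neutral representation is "realized" differently in each human language, can be illustrated with a toy graph of labelled relations. Real UNL syntax is much richer (universal words, attributes, scopes); the relation names, lexicons, and word-order rules below are simplified, hypothetical stand-ins.

```python
# A simplified relation graph for "John eats apples":
# agt = agent, obj = object; "eat" is the head concept.
GRAPH = [("agt", "eat", "John"), ("obj", "eat", "apple")]

# Hypothetical per-language realization rules: a lexicon plus a
# surface word order over the graph's slots.
LEXICON = {
    "en": {"eat": "eats", "John": "John", "apple": "apples"},
    "ja": {"eat": "taberu", "John": "Jon", "apple": "ringo"},
}
ORDER = {"en": ["agt", "head", "obj"],   # SVO
         "ja": ["agt", "obj", "head"]}   # SOV

def realize(graph, lang):
    """Render the language-neutral graph as a sentence in `lang`."""
    head = graph[0][1]
    args = {rel: dep for rel, _, dep in graph}
    lex = LEXICON[lang]
    words = [lex[head] if slot == "head" else lex[args[slot]]
             for slot in ORDER[lang]]
    return " ".join(words)

print(realize(GRAPH, "en"))  # -> John eats apples
print(realize(GRAPH, "ja"))  # -> Jon ringo taberu
```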

VISL Constraint Grammar

A free/open-source software reimplementation and extension of Fred Karlsson's Constraint Grammar formalism.
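
In Constraint Grammar, every token (a "cohort") starts with all of its possible morphological readings, and contextual rules REMOVE or SELECT readings until, ideally, one remains. The sketch below implements a single REMOVE rule over toy readings; it illustrates the mechanism only, not the CG-3 rule language itself.

```python
# Ambiguous readings per token for "the round table".
cohorts = [
    ("the", {"DET"}),
    ("round", {"NOUN", "VERB", "ADJ"}),
    ("table", {"NOUN", "VERB"}),
]

def remove(cohorts, target, context):
    """REMOVE the `target` reading wherever `context(prev_readings)`
    holds, unless it is the cohort's last remaining reading."""
    out = []
    for i, (word, readings) in enumerate(cohorts):
        prev = cohorts[i - 1][1] if i > 0 else set()
        if target in readings and len(readings) > 1 and context(prev):
            readings = readings - {target}
        out.append((word, readings))
    return out

# Toy rule (roughly "REMOVE verb IF the word before is a determiner
# or adjective"): finite verbs do not follow DET/ADJ directly.
cohorts = remove(cohorts, "VERB", lambda prev: bool(prev & {"DET", "ADJ"}))
for word, readings in cohorts:
    print(word, sorted(readings))
# the ['DET'] / round ['ADJ', 'NOUN'] / table ['NOUN']
```

Note that "round" is still ambiguous after one rule; a real grammar applies hundreds or thousands of such rules in sequence.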

Links
