NLP Resources

From the LDC Language Resource Wiki


THIS PAGE IS UNDER CONSTRUCTION (Ftyers 19:13, 22 April 2010 (UTC))

This page is for language-independent NLP resources.

Apertium

A free/open-source rule-based machine translation platform offering free linguistic data (morphological analysers, bilingual dictionaries, etc.) in XML formats for a range of languages.
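
A minimal sketch of reading such data from Python, assuming a file in the general shape of Apertium's bilingual-dictionary (.dix) XML; the tiny document and its entries below are invented for illustration and are not taken from any real Apertium language pair:

<pre>
# Read (left, right) lemma pairs from an Apertium-style bilingual dictionary.
# The sample XML follows the general <e>/<p>/<l>/<r> shape of .dix files,
# but the entries themselves are invented for illustration.
import xml.etree.ElementTree as ET

DIX_SAMPLE = """
<dictionary>
  <section id="main" type="standard">
    <e><p><l>house<s n="n"/></l><r>casa<s n="n"/></r></p></e>
    <e><p><l>dog<s n="n"/></l><r>perro<s n="n"/></r></p></e>
  </section>
</dictionary>
"""

def entries(dix_xml):
    """Yield (left, right) lemma pairs from <e> entries."""
    root = ET.fromstring(dix_xml)
    for e in root.iter("e"):
        left, right = e.find("./p/l"), e.find("./p/r")
        if left is not None and right is not None:
            # .text holds the lemma; the <s n="..."/> children carry the tags.
            yield left.text, right.text

for l, r in entries(DIX_SAMPLE):
    print(l, "->", r)
</pre>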

Links

An Crúbadán

Home page for An Crúbadán, web-crawling software by Kevin P. Scannell designed for building corpora for minority languages. [Mamandel 00:25, 14 May 2010 (UTC)]

“Statistical techniques are a key part of most modern natural language processing systems. Unfortunately, such techniques require the existence of large bodies of text, and in the past corpus development has proved to be quite expensive. As a result, substantial corpora exist primarily for languages like English, French, German, etc. where there is a market-driven need for NLP tools.”
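
As a very rough illustration of the word-counting side of corpus building (not of An Crúbadán's own crawling or language-identification machinery), a sketch that tallies token frequencies over a directory of crawled text files; the corpus/*.txt layout and the naive tokenisation are assumptions:

<pre>
# Count token frequencies over a directory of crawled text files.
# "corpus/*.txt" is a placeholder layout; real pipelines also need
# boilerplate removal, language identification and proper tokenisation.
import collections
import glob
import re

def word_frequencies(pattern="corpus/*.txt"):
    counts = collections.Counter()
    for path in glob.glob(pattern):
        with open(path, encoding="utf-8") as fh:
            counts.update(re.findall(r"\w+", fh.read().lower()))
    return counts

if __name__ == "__main__":
    for word, n in word_frequencies().most_common(20):
        print(n, word)
</pre>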

Foma

A free/open-source finite-state compiler and library, largely compatible with the Xerox lexc and xfst tools, used for building morphological analysers and other finite-state applications.

HFST

The Helsinki finite-state toolkit is a free/open-source rewrite of the Xerox finite-state tools. It provides implementations of both the lexc and twolc formalisms.
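
A minimal sketch of driving HFST from a script, assuming a transducer has already been compiled from lexc/twolc sources; hfst-lookup is one of HFST's command-line tools, while the file name analyser.hfst is just a placeholder:

<pre>
# Run word forms through a compiled HFST transducer via the hfst-lookup
# command-line tool; "analyser.hfst" is a placeholder for a transducer
# compiled from your own lexc/twolc sources.
import subprocess

def analyse(words, transducer="analyser.hfst"):
    result = subprocess.run(
        ["hfst-lookup", transducer],
        input="\n".join(words) + "\n",
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout   # tab-separated form / analysis / weight lines

if __name__ == "__main__":
    print(analyse(["cats", "walked"]))
</pre>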

Links

Machine Translation Archive

An electronic repository and bibliography of articles, books, and papers on machine translation, computer translation systems, and computer-based translation tools; more than 6,400 items. Aims to be comprehensive for English-language publications since 1990; earlier papers and books are being added to provide partial coverage from the 1950s. [Mamandel 20:53, 22 April 2010 (UTC)]

Methodology

Probabilistic tagging of minority language data: a case study using Qtag

  • Christopher Cox. 2010. [Mamandel 20:24, 23 August 2010 (UTC)]
  • In Corpus-linguistic applications, ed. Stefan Th. Gries, Stefanie Wulff, and Mark Davies. Rodopi. Electronic: ISBN 9789042028012; hardback: ISBN 9789042028005.
  • Reviewed in LINGUIST List 21.3318 by Andrew Caines (2010-08-17):
    “Cox's theme is corpus planning. He considers the tagging process, and evaluates the time-accuracy trade-off in using (a) normalized/unnormalized orthography; (b) various chunk sizes for rounds of iterative, interactive tagging; (c) tagset size. He does so in the context of corpus building for minority languages which are on the whole associated with more modest resources than major language projects.
    Cox considers what is required to tag a minority-language corpus. He finds that orthographically normalized data is 20% more accurate but more expensive to prepare, that smaller chunks are preferable for iterative interactive tagging, and that a less elaborate tagset is more accurate and efficient. Cox notes that these observations must be set against the purpose of the corpus and the requirements of the researchers who will be using it. This is a well-written paper with well-defined research questions and conclusions which are explicitly linked back to them -- an attribute which cannot be taken for granted in academic literature.”
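
As a toy illustration of the kind of measurement behind such a time-accuracy trade-off (not the evaluation procedure used in Cox's study or by Qtag itself), one might score a tagger's output against a hand-corrected gold standard chunk by chunk; the tags and chunk size below are invented:

<pre>
# Score tagger output against a hand-corrected gold standard, chunk by chunk.
def chunk_accuracy(gold, predicted, chunk_size=100):
    """Per-chunk accuracy for two parallel lists of (token, tag) pairs."""
    assert len(gold) == len(predicted)
    scores = []
    for start in range(0, len(gold), chunk_size):
        g = gold[start:start + chunk_size]
        p = predicted[start:start + chunk_size]
        correct = sum(1 for (_, gt), (_, pt) in zip(g, p) if gt == pt)
        scores.append(correct / len(g))
    return scores

# Toy example with an invented three-tag tagset.
gold = [("the", "DET"), ("dogs", "N"), ("bark", "V"), ("loudly", "ADV")]
pred = [("the", "DET"), ("dogs", "N"), ("bark", "N"), ("loudly", "ADV")]
print(chunk_accuracy(gold, pred, chunk_size=2))   # [1.0, 0.5]
</pre>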

OBELEX

Online Bibliography of Electronic Lexicography (OBELEX). Covers relevant articles, monographs, anthologies, and reviews since 2000, plus some older relevant works; the focus is on online lexicography. Dictionaries themselves are not included, but are being collected in a supplementary database now under construction. Searchable by full text, keyword, person, analysed languages, or publication year. (Mamandel 22:26, 28 April 2010 (UTC))

TMX

TMX (Translation Memory eXchange) is an XML-based format for exchanging translation-memory data between tools.
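
A minimal sketch of reading translation units from a TMX document with Python's standard library; the tiny file here is hand-written for illustration and covers only the basic <tu>/<tuv>/<seg> structure:

<pre>
# Extract language/segment pairs from the translation units of a TMX file.
# The sample document is hand-written for illustration.
import xml.etree.ElementTree as ET

TMX_SAMPLE = """<?xml version="1.0"?>
<tmx version="1.4">
  <header srclang="en" datatype="plaintext" segtype="sentence"
          creationtool="example" creationtoolversion="0.1"
          adminlang="en" o-tmf="none"/>
  <body>
    <tu>
      <tuv xml:lang="en"><seg>Good morning.</seg></tuv>
      <tuv xml:lang="es"><seg>Buenos días.</seg></tuv>
    </tu>
  </body>
</tmx>
"""

# ElementTree exposes xml:lang under the XML namespace.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def translation_units(tmx_xml):
    """Yield one {language: segment} dictionary per <tu>."""
    root = ET.fromstring(tmx_xml)
    for tu in root.iter("tu"):
        unit = {}
        for tuv in tu.iter("tuv"):
            seg = tuv.find("seg")
            if seg is not None:
                unit[tuv.get(XML_LANG)] = seg.text
        yield unit

for unit in translation_units(TMX_SAMPLE):
    print(unit)   # {'en': 'Good morning.', 'es': 'Buenos días.'}
</pre>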

Links

Universal Networking Language

[from the home page]: […] [Mamandel 20:26, 6 May 2010 (UTC)]

Links

University of Western Australia Web Text Mining and NLP Tools

[From LINGUIST List 21.2867]: We have made available a list of web services for accessing text mining and NLP tools implemented at our research group such as boilerplate removal (known as HERCULES), semantic similarity/relatedness measures (i.e. Normalised Web Distance, n-Degree of Wikipedia), noun phrase chunking, triple extraction, noisy text cleaning (known as ISSAC), simple term extraction, and access to our multi-domain, 300 million token text corpora (which are continuously growing).
--Dr Wilson Wong, School of Computer Science & Software Engineering, The University of Western Australia [Mamandel 17:36, 12 July 2010 (UTC)]
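
Of the measures named above, the Normalised Web Distance has a simple closed form (the usual Cilibrasi-Vitányi formulation computed from search-hit counts); a sketch with invented counts, independent of the UWA web services themselves:

<pre>
# Normalised Web Distance from hit counts, in the Cilibrasi-Vitanyi form:
# NWD(x, y) = (max(log f(x), log f(y)) - log f(x, y))
#             / (log N - min(log f(x), log f(y)))
# The counts and index size below are invented for illustration.
import math

def normalised_web_distance(fx, fy, fxy, n):
    """fx, fy: hits for each term; fxy: hits for both together; n: index size."""
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

# Two related terms that co-occur often, relative to their own frequencies,
# give a small distance (about 0.12 here).
print(normalised_web_distance(fx=120_000, fy=80_000, fxy=30_000, n=10_000_000_000))
</pre>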

Links

VISL Constraint Grammar

A free/open-source software reimplementation and extension of Fred Karlsson's Constraint Grammar formalism.
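
A toy sketch of what a single Constraint Grammar disambiguation step does: each token carries a cohort of candidate readings, and contextual rules remove readings. The rule and tags below are invented, and real CG rules are written in the CG rule language rather than in Python:

<pre>
# Apply one invented CG-style rule: remove a verb reading from a token
# that immediately follows a determiner.
def remove_verb_after_det(cohorts):
    """cohorts: list of (word, set_of_readings); returns a disambiguated copy."""
    out = []
    for i, (word, readings) in enumerate(cohorts):
        if i > 0 and "DET" in cohorts[i - 1][1] and len(readings) > 1:
            # Never strip the last remaining reading, as in CG proper.
            readings = {r for r in readings if r != "V"} or readings
        out.append((word, readings))
    return out

sentence = [("the", {"DET"}), ("dog", {"N", "V"}), ("barks", {"N", "V"})]
print(remove_verb_after_det(sentence))
# "dog" loses its verb reading; "barks" (not after a DET) stays ambiguous.
</pre>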

Links
