{{Under construction}}
{{si|[[User:Mamandel|Mamandel]] 20:19, 10 May 2011 (UTC)}}

This page is for language-independent resources for computational natural language processing. <br>
Language-independent [[General Meta-resources]] that are not specific to NLP have their own page.

==NLP Literature==

==Software==

* [http://borel.slu.edu/crubadan/index.html An Crúbadán]: Corpus building for minority languages. Web crawling software {{Hq|designed to exploit the vast quantities of text freely available on the web as a way of bringing the benefits of statistical NLP to languages with small numbers of speakers and/or limited computational resources.}} Kevin P. Scannell. {{si|[[User:Mamandel|Mamandel]] 00:25, 14 May 2010 (UTC)}}
* [http://www.apertium.org Apertium]: A free/open-source rule-based machine translation platform offering free linguistic data (morphological analysers, bilingual dictionaries, etc.) in XML formats for a range of languages.
* Foma: a finite-state compiler and library. Hulden, Mans. 2009. ''Proceedings of the EACL 2009 Demonstrations Session'', pages 29–32, Athens, Greece, 3 April 2009. [http://www.aclweb.org/anthology-new/E/E09/E09-2008.pdf PDF]
* [http://www.ling.helsinki.fi/kieliteknologia/tutkimus/hfst/ Helsinki Finite-State Transducer Technology (HFST)]: A free/open-source rewrite of the Xerox finite-state tools, providing an implementation of both the <code>lexc</code> and <code>twolc</code> formalisms.
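The finite-state tools above (foma, HFST) compile <code>lexc</code>-style lexicons into automata that map surface word forms to morphological analyses. As a toy illustration of the underlying idea only — this is not the foma or HFST API, and all names and word entries below are invented for the sketch — a lexicon can be stored as a trie-shaped automaton whose final states carry analyses:

```python
class MorphAnalyzer:
    """Toy lexicon automaton: a trie over surface forms whose final
    states carry morphological analyses (a much-simplified, lexc-
    flavoured sketch; real tools compile far richer rule systems)."""

    def __init__(self):
        self.transitions = {}   # (state, char) -> next state
        self.analyses = {}      # final state -> list of analysis strings
        self.next_state = 1     # state 0 is the start state

    def add(self, surface, analysis):
        """Add one surface:analysis pair, sharing prefixes with
        previously added forms."""
        state = 0
        for ch in surface:
            key = (state, ch)
            if key not in self.transitions:
                self.transitions[key] = self.next_state
                self.next_state += 1
            state = self.transitions[key]
        self.analyses.setdefault(state, []).append(analysis)

    def lookup(self, surface):
        """Walk the automaton; return all analyses, or [] for
        forms not in the lexicon."""
        state = 0
        for ch in surface:
            state = self.transitions.get((state, ch))
            if state is None:
                return []
        return self.analyses.get(state, [])

analyzer = MorphAnalyzer()
analyzer.add("cat",  "cat+N+Sg")
analyzer.add("cats", "cat+N+Pl")
analyzer.add("sang", "sing+V+Past")

print(analyzer.lookup("cats"))   # ['cat+N+Pl']
print(analyzer.lookup("dog"))    # []
```

The tag strings (<code>+N+Sg</code> etc.) mimic the look of Xerox-style analyses; a real transducer would also handle productive morphology rather than listing every form.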
| |
==Machine Translation Archive==
[http://www.mt-archive.info/ Machine Translation Archive]. Electronic repository and bibliography of articles, books, and papers on topics in machine translation, computer translation systems, and computer-based translation tools; over 6,400 items. It aims to be comprehensive for English-language publications since 1990, and earlier papers and books are being added to provide partial coverage from the 1950s. {{si|[[User:Mamandel|Mamandel]] 20:53, 22 April 2010 (UTC)}}

==Methodology==
'''Probabilistic tagging of minority language data: a case study using Qtag'''
* Christopher Cox. 2010. {{si|[[User:Mamandel|Mamandel]] 20:24, 23 August 2010 (UTC)}}
* In ''[http://www.rodopi.nl/senj.asp?BookId=LC+71 Corpus-linguistic applications]'', ed. Stefan Th. Gries, Stefanie Wulff, and Mark Davies. [http://www.rodopi.nl/ Rodopi]. Electronic: ISBN 9789042028012; hardback: ISBN 9789042028005.
* Reviewed in [http://linguistlist.org/issues/21/21-3318.html LINGUIST List 21.3318] by Andrew Caines (2010-08-17):
*:{{Hq|Cox's theme is corpus planning. He considers the tagging process, and evaluates the time-accuracy trade-off in using (a) normalized/unnormalized orthography; (b) various chunk sizes for rounds of iterative, interactive tagging; (c) tagset size. He does so in the context of corpus building for minority languages, which are on the whole associated with more modest resources than major language projects.}}
*:{{Hq|Cox considers what is required to tag a minority-language corpus. He finds that orthographically normalized data is 20% more accurate but more expensive to prepare, that smaller chunks are preferable for iterative interactive tagging, and that a less elaborate tagset is more accurate and efficient. Cox notes that these observations must be set against the purpose of the corpus and the requirements of the researchers who will be using it. This is a well-written paper with well-defined research questions and conclusions which are explicitly linked back to them -- an attribute which cannot be taken for granted in academic literature.}}
| |
==OBELEX==
[http://hypermedia.ids-mannheim.de/pls/lexpublic/bib_en.ansicht Online Bibliography of Electronic Lexicography] (OBELEX). Covers all relevant articles, monographs, anthologies, and reviews since 2000, plus some older relevant works; the focus is on online lexicography. Dictionaries themselves are not included, but are covered by a supplementary database now under construction. Searchable by full text, keyword, person, analysed languages, or publication year. {{si|[[User:Mamandel|Mamandel]] 22:26, 28 April 2010 (UTC)}}
*[http://hypermedia.ids-mannheim.de/pls/lexpublic/bib.ansicht Home page in German]
*[http://linguistlist.org/issues/21/21-1915.html Announcement] on LINGUIST List {{attrib|19-Apr-2010}}

==Universal Networking Language==
[http://www.unlweb.net/unlweb/ Universal Networking Language (UNL)]: {{hq|an artificial language for representing, describing, summarizing, refining, storing and disseminating information in a natural-language-independent format. It is a kind of mark-up language which represents not the formatting but the core information of a text. As HTML annotations can be realized differently in the context of different applications, machines, displays, etc., so UNL expressions can have different realizations in different human languages.}}
{{si|[[User:Mamandel|Mamandel]] 20:26, 6 May 2010 (UTC)}}
| |
==VISL Constraint Grammar==

A free/open-source software reimplementation and extension of Fred Karlsson's Constraint Grammar formalism.

===Links===

* [http://beta.visl.sdu.dk/constraint_grammar.html VISL Constraint Grammar: Home]
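In the Constraint Grammar formalism, each word starts with all of its possible morphological readings, and hand-written rules remove (or select) readings based on the surrounding context. The sketch below is a toy interpretation of that core idea only — it does not use VISL CG-3's actual rule syntax or engine, and the rule, tags, and example sentence are invented for illustration:

```python
# Toy Constraint Grammar-style disambiguation: rules prune readings
# in context, and a word's last remaining reading is never removed.
# (Much simplified; the real CG-3 rule language is far richer.)

def remove_rule(target, left_tag):
    """REMOVE the `target` reading when the word immediately to the
    left has `left_tag` among its readings."""
    def rule(sentence, i):
        if i > 0 and left_tag in sentence[i - 1]["readings"]:
            return target
        return None
    return rule

def disambiguate(sentence, rules):
    """Apply the rules repeatedly until no more readings can be
    removed (a fixed point is reached)."""
    changed = True
    while changed:
        changed = False
        for i, word in enumerate(sentence):
            for rule in rules:
                target = rule(sentence, i)
                if (target in word["readings"]
                        and len(word["readings"]) > 1):
                    word["readings"].remove(target)
                    changed = True
    return sentence

# "the move": "move" is noun/verb ambiguous; a determiner to the
# left rules out the verb reading.
sentence = [
    {"form": "the",  "readings": ["Det"]},
    {"form": "move", "readings": ["N", "V"]},
]
disambiguate(sentence, [remove_rule("V", "Det")])
print(sentence[1]["readings"])   # ['N']
```

Running the rules to a fixed point matters because one removal can enable another: once a neighbour is disambiguated, rules conditioned on its remaining reading may fire on a later pass.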
| |
[[Category:Non-language-specific]]