Sandbox
From the LDC Language Resource Wiki
Revision as of 19:49, 10 May 2011
The Sandbox is a place to play. Use this page for practicing wiki editing, making links, anything! Don't expect anything you put here to last.
- Learn how to manipulate the Wiki.
- What Can I Do?
- I can make things bold ('''bold''').
- I can italicize (''italicize'').
- I can timestamp and sign: Mamandel 14:52, 22 April 2010 (UTC) (four tildes: ~~~~)
- or just timestamp: 14:52, 22 April 2010 (UTC) (five tildes: ~~~~~)
- or just sign: Mamandel (three tildes: ~~~)
- I can make an external link ([http://ldc.upenn.edu external link] -- space between URL and text).
- I can make an internal link ([[Bengali/Bengali|internal link]] -- pipe character '|' between page title and text).
I can make text preformatted and in a box (note, no auto-wrapping). (White space at beginning of line).
Some magic words and what they produce:
- {{SERVER}}: http://lrwiki.ldc.upenn.edu
- {{PAGENAME}}: Sandbox
For much, much more info see Mediawiki's editing help.
FEEL FREE TO DELETE ANYTHING BELOW THE DOUBLE LINE,
BUT DON'T TOUCH THE DOUBLE LINE OR ANYTHING ABOVE IT. THANKS.
-- The Mgt.
WELCOME TO THE SANDBOX
UNDER CONSTRUCTION
[Ftyers 19:13, 22 April 2010 (UTC)]
This page is for language-independent resources for computational natural language processing.
Language-independent General Meta-resources that are not specific to NLP have their own page.
Apertium
A free/open-source rule-based machine translation platform offering free linguistic data (morphological analysers, bilingual dictionaries, etc.) in XML formats for a range of languages.
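As a rough illustration (a hand-made sketch, not taken from any actual Apertium language pair), a bilingual dictionary entry in Apertium's XML (.dix) format pairs a source-language lemma with a target-language lemma, each tagged with shared grammatical symbols:

```xml
<dictionary>
  <sdefs>
    <!-- symbol definitions: "n" stands for noun -->
    <sdef n="n"/>
  </sdefs>
  <section id="main" type="standard">
    <!-- left (l) side: source language; right (r) side: target language -->
    <e><p><l>cat<s n="n"/></l><r>gato<s n="n"/></r></p></e>
  </section>
</dictionary>
```

The translation engine compiles such entries into finite-state transducers, so lookup works in both directions.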
Links

- Apertium: Home (http://www.apertium.org)
An Crúbadán
Corpus building for minority languages: Home page for An Crúbadán, web crawling software by Kevin P. Scannell designed for corpus building for minority languages. [Mamandel 00:25, 14 May 2010 (UTC)]
- “Statistical techniques are a key part of most modern natural language processing systems. Unfortunately, such techniques require the existence of large bodies of text, and in the past corpus development has proved to be quite expensive. As a result, substantial corpora exist primarily for languages like English, French, German, etc. where there is a market-driven need for NLP tools.
- “My software is designed to exploit the vast quantities of text freely available on the web as a way of bringing the benefits of statistical NLP to languages with small numbers of speakers and/or limited computational resources. Initially it was deployed for the six Celtic languages, but more recently I've added support for a number of other languages from all parts of the world. You can find an up-to-date list of languages and the corpus statistics for each on the Status Page. There is also information on tools developed using these corpora on the Applications Page.”
Foma
Foma: a finite-state compiler and library
HFST
The Helsinki Finite-State Toolkit (HFST) is a free/open-source rewrite of the Xerox finite-state tools. It provides implementations of both the lexc and twolc formalisms.
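For illustration (a toy lexicon sketched here, not taken from HFST's documentation), a lexc source file defines lexicons whose entries pair analysis strings with surface strings:

```lexc
Multichar_Symbols +N +Sg +Pl

LEXICON Root
cat N ;
dog N ;

LEXICON N
+N+Sg:0 # ;
+N+Pl:s # ;
```

Compiled into a transducer, this maps the analysis "cat+N+Pl" to the surface form "cats" and vice versa; twolc rules can then be composed on top to handle phonological alternations.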
Links

- HFST: Home (http://www.ling.helsinki.fi/kieliteknologia/tutkimus/hfst/)
Machine Translation Archive
Machine Translation Archive. An electronic repository and bibliography of articles, books and papers on machine translation, computer translation systems, and computer-based translation tools. Over 6,400 items. It aims to be comprehensive for English-language publications since 1990; earlier papers and books are being added to provide partial coverage from the 1950s. [Mamandel 20:53, 22 April 2010 (UTC)]
Methodology
Probabilistic tagging of minority language data: a case study using Qtag
- Christopher Cox. 2010. [Mamandel 20:24, 23 August 2010 (UTC)]
- In Corpus-linguistic applications, ed. Stefan Th. Gries, Stefanie Wulff, and Mark Davies. Rodopi. Electronic: ISBN 9789042028012; hardback: ISBN 9789042028005.
- Reviewed in LINGUIST List 21.3318 by Andrew Caines (2010-08-17):
- “Cox's theme is corpus planning. He considers the tagging process, and evaluates the time-accuracy trade-off in using (a) normalized/unnormalized orthography; (b) various chunk sizes for rounds of iterative, interactive tagging; (c) tagset size. He does so in the context of corpus building for minority languages which are on the whole associated with more modest resources than major language projects.
- “Cox considers what is required to tag a minority-language corpus. He finds that orthographically normalized data is 20% more accurate but more expensive to prepare, that smaller chunks are preferable for iterative interactive tagging, and that a less elaborate tagset is more accurate and efficient. Cox notes that these observations must be set against the purpose of the corpus and the requirements of the researchers who will be using it. This is a well-written paper with well-defined research questions and conclusions which are explicitly linked back to them -- an attribute which cannot be taken for granted in academic literature.”
OBELEX
Online Bibliography of Electronic Lexicography (OBELEX). Covers all relevant articles, monographs, anthologies and reviews since 2000, plus some older relevant works; the focus is on online lexicography. Dictionaries themselves are not included, but will be covered in a supplementary database now under construction. Searchable by full text, keyword, person, analysed languages, or publication year. [Mamandel 22:26, 28 April 2010 (UTC)]
- Home page in German.
- Announcement on LINGUIST List [19 Apr 2010]
TMX
An XML-based format for translation memories.
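As an illustration (a minimal sketch assuming the TMX 1.4 element names `tu`, `tuv`, and `seg`; the header attributes shown are placeholders, not from any real tool), a TMX file groups translation units, each holding one language variant per segment. Python's standard library is enough to read one:

```python
import xml.etree.ElementTree as ET

# A minimal TMX 1.4 document: one translation unit (tu) with one
# variant (tuv) per language, each wrapping a segment (seg).
TMX_SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<tmx version="1.4">
  <header creationtool="example" creationtoolversion="1.0"
          segtype="sentence" o-tmf="none" adminlang="en"
          srclang="en" datatype="plaintext"/>
  <body>
    <tu>
      <tuv xml:lang="en"><seg>house</seg></tuv>
      <tuv xml:lang="es"><seg>casa</seg></tuv>
    </tu>
  </body>
</tmx>"""

def read_tmx(text):
    """Return a list of {language: segment} dicts, one per translation unit."""
    root = ET.fromstring(text)
    units = []
    for tu in root.iter("tu"):
        variants = {}
        for tuv in tu.iter("tuv"):
            # xml:lang is expanded by ElementTree under the XML namespace.
            lang = tuv.get("{http://www.w3.org/XML/1998/namespace}lang")
            variants[lang] = tuv.findtext("seg")
        units.append(variants)
    return units

units = read_tmx(TMX_SAMPLE)
print(units)  # → [{'en': 'house', 'es': 'casa'}]
```

Because the format is plain XML, the same approach extends to merging memories or filtering units by language pair.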
Links

- TMX: Home (http://www.lisa.org/Translation-Memory-e.34.0.html)
Universal Networking Language
Universal Networking Language (UNL): “an artificial language for representing, describing, summarizing, refining, storing and disseminating information in a natural-language-independent format. It is a kind of mark-up language which represents not the formatting but the core information of a text. As HTML annotations can be realized differently in the context of different applications, machines, displays, etc., so UNL expressions can have different realizations in different human languages.” [Mamandel 20:26, 6 May 2010 (UTC)]
VISL Constraint Grammar
A free/open-source software reimplementation and extension of Fred Karlsson's Constraint Grammar formalism.

Links

- VISL Constraint Grammar: Home (http://beta.visl.sdu.dk/constraint_grammar.html)
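For illustration (toy rules sketched here, not taken from VISL's documentation), Constraint Grammar rules select or discard morphological readings of a word based on the readings of its neighbours:

```cg
# If the word immediately to the left is unambiguously a determiner,
# keep the noun reading and drop any verb reading.
SELECT (N) IF (-1C (Det)) ;
REMOVE (V) IF (-1 (Det)) ;
```

A tagger applies a cascade of such rules to ambiguous morphological analyses until, ideally, one reading per word remains.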