This title is printed to order and may have been self-published; if so, we cannot guarantee the quality of the content. Most books will have gone through an editing process, but some may not, so please bear this in mind before ordering. If in doubt, check the author's or publisher's details, as we are unable to accept returns unless the book is faulty. Please contact us if you have any questions.
The 1990s have been an interesting time for researchers working with large collections of text. It was not all that long ago that researchers referred to the Brown Corpus as a “large” corpus. The Brown Corpus, a “mere” million words collected at Brown University in the 1960s, is about the same size as a dozen novels, the complete works of William Shakespeare, the Bible, a collegiate dictionary or a week of a newswire service. Today, one can easily surf the web and download millions of words in no time at all. What can we do with all this data? It is better to do something simple than nothing at all.

Researchers working with large corpora are using essentially brute-force methods to make progress on some of the hardest problems in natural language processing, including part-of-speech tagging, word sense disambiguation, parsing, machine translation, information retrieval and discourse analysis. They are overcoming the so-called knowledge-acquisition bottleneck by processing vast quantities of data, more text than anyone could possibly read in a lifetime, and estimating all sorts of “central and typical” facts that any speaker of the language would be expected to know, for example word frequencies, word associations and typical predicate-argument relations. Much of this work has been reported at a series of annual meetings known as the Workshop on Very Large Corpora (WVLC) and at related meetings sponsored by ACL/SIGDAT (the Association for Computational Linguistics’ special interest group on data). Subsequent meetings have been held in Asia (1994, 1997), America (1995, 1996, 1997) and Europe (1995, 1996). The papers in this book represent much of the best of the first three years of this workshop/conference, as selected by a competitive review process.

From the foreword: “The bipolar transistor has a remarkable characteristic that makes it unique as a circuit design element; it displays an exponential relationship between collector current and base-to-emitter voltage that is highly accurate over an extremely wide range of currents. This conformance to a mathematical law opens up numerous possibilities for analog signal processing. Log filters represent one of the most interesting applications of this exponential relationship.” Robert Adams, Analog Devices.
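As a rough illustration of the kind of brute-force counting the description alludes to (a sketch of our own, not taken from the book), the following Python fragment tallies word frequencies and adjacent word pairs in a plain-text corpus:

# Illustrative sketch only: simple brute-force counting of word
# frequencies and adjacent word pairs ("associations") in raw text.
from collections import Counter
import re

def count_words_and_pairs(text):
    """Return unigram and adjacent-bigram counts for a lowercased token stream."""
    tokens = re.findall(r"[a-z']+", text.lower())   # crude tokenizer
    unigrams = Counter(tokens)                      # word frequencies
    bigrams = Counter(zip(tokens, tokens[1:]))      # adjacent word pairs
    return unigrams, bigrams

if __name__ == "__main__":
    sample = "the cat sat on the mat and the cat slept"
    uni, bi = count_words_and_pairs(sample)
    print(uni.most_common(3))
    print(bi.most_common(2))

Scaled up to millions of downloaded words, counts like these are the raw material for the frequency and association estimates discussed in the book.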