
Book, English, 132 pages, paperback, format (W × H): 191 mm × 235 mm

Series: Synthesis Lectures on Human Language Technologies

Cross-Lingual Word Embeddings


Year of publication: 2019
ISBN: 978-1-68173-063-9
Publisher: Morgan & Claypool Publishers



The majority of natural language processing (NLP) is English language processing, and while there is good language technology support for (standard varieties of) English, support for Albanian, Burmese, or Cebuano—and most other languages—remains limited. Being able to bridge this digital divide is important for scientific and democratic reasons, and it also represents enormous growth potential. A key challenge is learning to align the basic meaning-bearing units of different languages. In this book, the authors survey and discuss recent and historical work on supervised and unsupervised learning of such alignments. Specifically, the book focuses on so-called cross-lingual word embeddings. The survey is intended to be systematic, using consistent notation and putting the available methods in comparable form, making it easy to compare wildly different approaches. In so doing, the authors establish previously unreported relations between these methods and are able to present a fast-growing literature in a very compact way. Furthermore, the authors discuss how best to evaluate cross-lingual word embedding methods and survey the resources available for students and researchers interested in this topic.
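To give a flavour of the word-level alignment methods the book surveys, the following is a minimal sketch (not taken from the book itself) of one standard supervised approach: learning an orthogonal mapping from a source-language embedding space into a target-language space using a small seed dictionary. The toy vocabularies, random vectors, and dictionary below are purely illustrative assumptions.

```python
# Minimal sketch of supervised word-level cross-lingual embedding alignment
# via orthogonal Procrustes. All embeddings and the seed dictionary are toy
# placeholders; in practice these come from pretrained monolingual embeddings
# and a bilingual lexicon.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
src_emb = {w: rng.standard_normal(dim) for w in ["hund", "katze", "haus"]}
tgt_emb = {w: rng.standard_normal(dim) for w in ["dog", "cat", "house"]}
seed_dict = [("hund", "dog"), ("katze", "cat"), ("haus", "house")]

# Stack the dictionary pairs into matrices X (source) and Y (target).
X = np.stack([src_emb[s] for s, _ in seed_dict])
Y = np.stack([tgt_emb[t] for _, t in seed_dict])

# Orthogonal Procrustes: W = U V^T from the SVD of X^T Y minimises
# ||X W - Y||_F subject to W being orthogonal.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# Map a source word into the target space and retrieve its nearest
# target-language neighbour by cosine similarity.
query = src_emb["hund"] @ W
scores = {t: float(query @ v / (np.linalg.norm(query) * np.linalg.norm(v)))
          for t, v in tgt_emb.items()}
print(max(scores, key=scores.get))
```

With real pretrained embeddings and a seed lexicon of a few thousand pairs, the same mapping step yields usable bilingual lexicons; the book's later chapters cover sentence- and document-level supervision as well as fully unsupervised alternatives.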
Order Cross-Lingual Word Embeddings now!

Authors/Editors


Further Information & Material


- Preface
- Introduction
- Monolingual Word Embedding Models
- Cross-Lingual Word Embedding Models: Typology
- A Brief History of Cross-Lingual Word Representations
- Word-Level Alignment Models
- Sentence-Level Alignment Methods
- Document-Level Alignment Models
- From Bilingual to Multilingual Training
- Unsupervised Learning of Cross-Lingual Word Embeddings
- Applications and Evaluation
- Useful Data and Software
- General Challenges and Future Directions
- Bibliography
- Authors' Biographies


Anders Søgaard is a Professor of Computer Science at the University of Copenhagen. He is funded by a Google Focused Research Award, and before that, he held an ERC Starting Grant. He has won best paper awards at NAACL, EACL, CoNLL, and more. He is interested in the learnability of language.

Ivan Vulic has been a Senior Research Associate in the Language Technology Lab at the University of Cambridge since 2015. Ivan holds a Ph.D. in Computer Science from KU Leuven, awarded summa cum laude in 2014 for his thesis "Unsupervised Algorithms for Cross-lingual Text Analysis, Translation Mining, and Information Retrieval." He is interested in representation learning, human language understanding, distributional, lexical, and multi-modal semantics in monolingual and multilingual contexts, and transfer learning for enabling cross-lingual NLP applications. He has co-authored more than 60 peer-reviewed research papers published in top-tier journals and conference proceedings in NLP and IR. He co-lectured a tutorial on monolingual and multilingual topic models and applications at ECIR 2013 and WSDM 2014, a tutorial on word vector space specialisation at EACL 2017 and ESSLLI 2018, a tutorial on cross-lingual word representations at EMNLP 2017, and a tutorial on deep learning for conversational AI at NAACL 2018.

Sebastian Ruder is a Research Scientist at DeepMind. He obtained his Ph.D. in Natural Language Processing at the National University of Ireland, Galway in 2019. He is interested in transfer learning and cross-lingual learning and has published widely read reviews as well as more than ten peer-reviewed research papers in top-tier conference proceedings in NLP.

Manaal Faruqui is a Senior Research Scientist at Google, working on industrial scale NLP and ML problems. He obtained his Ph.D. in the Language Technologies Institute at Carnegie Mellon University while working on representation learning, multilingual learning, and distributional and lexical semantics. He received a best paper award at NAACL 2015 for his work on incorporating semantic knowledge in word vector representations. He serves on the editorial board of the Computational Linguistics journal and has been an area chair for several ACL conferences.

