Book, English, 309 pages, paperback, format (W × H): 190 mm × 235 mm
Series: Synthesis Lectures on Human Language Technologies
ISBN: 978-1-62705-298-6
Publisher: MORGAN & CLAYPOOL
The second part of the book (Parts III and IV) introduces more specialized neural network architectures, including 1D convolutional neural networks, recurrent neural networks, conditioned-generation models, and attention-based models. These architectures and techniques are the driving force behind state-of-the-art algorithms for machine translation, syntactic parsing, and many other applications. Finally, we also discuss tree-shaped networks, structured prediction, and the prospects of multi-task learning.
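To give a flavor of the "ngram detector" idea behind the 1D convolutional networks mentioned above, here is a minimal sketch of such a detector over word embeddings, written in PyTorch. This is an illustration only, not code from the book; the vocabulary size, embedding dimension, filter count, and ngram width are all assumed values.

```python
import torch
import torch.nn as nn

class NgramDetector(nn.Module):
    """Minimal sketch: 1D convolution over word embeddings as an ngram detector.
    All sizes below are illustrative assumptions, not taken from the book."""

    def __init__(self, vocab_size=10_000, emb_dim=100, n_filters=50, ngram=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Each convolutional filter learns to respond to one ngram-like pattern
        # of width `ngram` in the embedded sequence.
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=ngram)

    def forward(self, token_ids):                  # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(x))               # (batch, n_filters, seq_len - ngram + 1)
        # Max-pool over positions: "did this pattern occur anywhere in the sentence?"
        return h.max(dim=2).values                 # (batch, n_filters)

tokens = torch.randint(0, 10_000, (4, 20))        # a toy batch of 4 sentences
features = NgramDetector()(tokens)
print(features.shape)                              # torch.Size([4, 50])
```

Max-pooling over positions yields a fixed-size feature vector regardless of sentence length, which is what makes such detectors easy to plug into a feed-forward classifier.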
Authors/Editors
Further Information & Material
- Preface
- Acknowledgments
- Introduction
- Learning Basics and Linear Models
- From Linear Models to Multi-layer Perceptrons
- Feed-forward Neural Networks
- Neural Network Training
- Features for Textual Data
- Case Studies of NLP Features
- From Textual Features to Inputs
- Language Modeling
- Pre-trained Word Representations
- Using Word Embeddings
- Case Study: A Feed-forward Architecture for Sentence Meaning Inference
- Ngram Detectors: Convolutional Neural Networks
- Recurrent Neural Networks: Modeling Sequences and Stacks
- Concrete Recurrent Neural Network Architectures
- Modeling with Recurrent Networks
- Conditioned Generation
- Modeling Trees with Recursive Neural Networks
- Structured Output Prediction
- Cascaded, Multi-task and Semi-supervised Learning
- Conclusion
- Bibliography
- Author's Biography