Ghatak | Deep Learning with R | Book | 978-981-13-7089-2 | www.sack.de

Book, English, 390 pages, format (W × H): 156 mm × 234 mm, weight: 427 g

Ghatak

Deep Learning with R


1st edition 2019
ISBN: 978-981-13-7089-2
Publisher: SPRINGER NATURE



Deep Learning with R introduces deep learning and neural networks using the R programming language. Building on the underlying theoretical and mathematical constructs, the book enables the reader to create applications in computer vision, natural language processing and transfer learning. It starts with an introduction to machine learning and moves on to describe the basic architecture, the different activation functions, forward propagation, the cross-entropy loss and backward propagation of a simple neural network. It then develops code segments for constructing deep neural networks, and discusses in detail the initialization of network parameters, optimization techniques, and common issues surrounding neural networks, such as dealing with NaNs and the vanishing/exploding gradient problem. Advanced variants of multilayered perceptrons, namely convolutional neural networks and sequence models, are explained and then applied to different use cases. The book makes extensive use of the Keras and TensorFlow frameworks.
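As a flavour of the material described above, here is a minimal sketch in base R of the kind of model the early chapters build: forward propagation through a one-hidden-layer network with sigmoid activations, followed by the binary cross-entropy loss. The sketch is illustrative rather than taken from the book, and the layer sizes, random seed and data are arbitrary assumptions.

# Illustrative sketch (not from the book): forward propagation through
# a one-hidden-layer network with sigmoid activations, plus the binary
# cross-entropy loss. All sizes and data are arbitrary assumptions.
sigmoid <- function(z) 1 / (1 + exp(-z))

set.seed(1)
X <- matrix(rnorm(4 * 5), nrow = 4)        # 4 input features, 5 examples
y <- matrix(rbinom(5, 1, 0.5), nrow = 1)   # binary labels

W1 <- matrix(rnorm(3 * 4, sd = 0.01), nrow = 3)  # hidden layer: 3 units
b1 <- rep(0, 3)
W2 <- matrix(rnorm(1 * 3, sd = 0.01), nrow = 1)  # output layer: 1 unit
b2 <- 0

# Forward propagation (the bias vector recycles down each column)
Z1 <- W1 %*% X + b1
A1 <- sigmoid(Z1)
Z2 <- W2 %*% A1 + b2
A2 <- sigmoid(Z2)                          # predicted probabilities

# Binary cross-entropy loss averaged over the mini-batch
loss <- -mean(y * log(A2) + (1 - y) * log(1 - A2))
print(loss)

Chapter 5 then moves to the keras R interface (see Section 5.2 of the contents below), where a comparable model can be declared in a few lines with keras_model_sequential() and layer_dense() and compiled with the binary_crossentropy loss.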
Order Deep Learning with R by Ghatak now!

Target audience


Research


Authors/Editors


Further information & material


Preface
1 Introduction to R
2 Linear Algebra
  2.1 Linear Algebra with R
    2.1.1 Introduction
    2.1.2 Matrix Notation
3 Introduction to Machine Learning and Deep Learning
  3.1 Training, Validation and Test Data
  3.2 Bias and Variance
  3.3 Underfitting and Overfitting
    3.3.1 Bayes Error
  3.4 Maximum Likelihood Estimation
  3.5 Quantifying Loss
    3.5.1 The Cross-Entropy Loss
    3.5.2 Negative Log-Likelihood
    3.5.3 Entropy
    3.5.4 Cross-Entropy
    3.5.5 Kullback-Leibler Divergence
    3.5.6 Summarizing the Measurement of Loss
4 Introduction to Neural Networks
  4.1 Types of Neural Network Architectures
    4.1.1 Feedforward Neural Networks (FFNNs)
    4.1.2 Convolutional Neural Networks (Convnets)
    4.1.3 Recurrent Neural Networks (RNNs)
  4.2 Forward Propagation
    4.2.1 Notations
    4.2.2 Input Matrix
    4.2.3 Bias Matrix
    4.2.4 Weight Matrix for Layer-1
    4.2.5 Activation Function at Layer-1
    4.2.6 Weights Matrix of Layer-2
    4.2.7 Activation Function at Layer-2
  4.3 Activation Functions
    4.3.1 Sigmoid
    4.3.2 Hyperbolic Tangent (tanh)
    4.3.3 Rectified Linear Unit (ReLU)
    4.3.4 Leaky ReLU
    4.3.5 Softmax
  4.4 Derivatives of Activation Functions
    4.4.1 Derivative of the Sigmoid
    4.4.2 Derivative of the tanh
    4.4.3 Derivative of the ReLU
    4.4.4 Derivative of the Leaky ReLU
    4.4.5 Derivative of the Softmax
  4.5 Loss Functions
  4.6 Derivative of the Cost Function
    4.6.1 Derivative of Cross-Entropy Loss with Sigmoid
    4.6.2 Derivative of Cross-Entropy Loss with Softmax
  4.7 Back Propagation
    4.7.1 Backpropagate to the Output Layer
    4.7.2 Backpropagate to the Second Hidden Layer
    4.7.3 Backpropagate to the First Hidden Layer
    4.7.4 Vectorization of Backprop Equations
  4.8 Writing a Simple Neural Network Application
    4.8.1 Image Classification Using a Sigmoid Activation Neural Network
    4.8.2 Importance of Normalization
5 Deep Neural Networks
  5.1 Writing a Deep Neural Network (DNN) Algorithm
  5.2 Implementing a DNN Using Keras
6 Regularization and Hyperparameter Tuning
  6.1 Initialization
    6.1.1 Zero Initialization
    6.1.2 Random Initialization
    6.1.3 Xavier Initialization
    6.1.4 He Initialization
  6.2 Gradient Descent
    6.2.1 Gradient Descent or Batch Gradient Descent
    6.2.2 Stochastic Gradient Descent
    6.2.3 Mini-Batch Gradient Descent
  6.3 Dealing with NaNs
    6.3.1 Hyperparameters and Weight Initialization
    6.3.2 Normalization
    6.3.3 Using Different Activation Functions
    6.3.4 Use of NanGuardMode, DebugMode, or MonitorMode
    6.3.5 Numerical Stability
    6.3.6 Algorithm Related
    6.3.7 NaN Introduced by AllocEmpty
  6.4 Optimization Algorithms
    6.4.1 Simple Update
    6.4.2 Momentum-Based Optimization Update
    6.4.3 Nesterov Momentum Optimization Update
    6.4.4 Adagrad (Adaptive Gradient Algorithm) Optimization Update
    6.4.5 RMSProp (Root Mean Square Propagation) with Momentum Optimization Update
    6.4.6 Adam (Adaptive Moment Estimation) Optimization with Momentum Update
    6.4.7 Vanishing Gradient and Numerical Stability
  6.5 Gradient Checking
  6.6 Second-Order Methods
  6.7 Per-Parameter Adaptive Learning Rate Methods
  6.8 Annealing the Learning Rate
  6.9 Regularization
    6.9.1 Dropout Regularization
    6.9.2 ℓ2 Regularization
    6.9.3 Combining Dropout and ℓ2 Regularization?
  6.10 Hyperparameter Optimization
  6.11 Evaluation
  6.12 Using Keras
    6.12.1 Adjust Epochs
    6.12.2 Add Batch Normalization
    6.12.3 Add Dropout
    6.12.4 Add Weight Regularization
    6.12.5 Adjust Learning Rate
    6.12.6 Prediction
7 Convolutional Neural Networks
8 Sequence Models
Bibliography


Abhijit Ghatak is a data scientist and holds an M.E. in Engineering and an M.S. in Data Science from Stevens Institute of Technology, USA. He began his career as a submarine engineer officer in the Indian Navy and worked on various data-intensive projects involving submarine operations and construction. He has since worked in academia and in technology companies, and as a research scientist in the areas of the Internet of Things (IoT) and pattern recognition for the European Union (EU). He has published several papers in the areas of engineering and machine learning and currently consults on machine learning and deep learning. His research interests include IoT, stream analytics and the design of deep learning systems.


