Deep learning with Python : a hands-on introduction / Nikhil Ketkar.

Discover the practical aspects of implementing deep-learning solutions using the rich Python ecosystem. This book bridges the gap between the academic state of the art and the industry state of the practice by introducing you to deep learning frameworks such as Keras, Theano, and Caffe.


Bibliographic Details
Main Author: Ketkar, Nikhil (Author)
Format: eBook
Language: English
Published: [United States] : Apress, 2017.
Table of Contents:
  • At a Glance; Contents; About the Author; About the Technical Reviewer; Acknowledgments; Chapter 1: Introduction to Deep Learning; Historical Context; Advances in Related Fields; Prerequisites; Overview of Subsequent Chapters; Installing the Required Libraries; Chapter 2: Machine Learning Fundamentals; Intuition; Binary Classification; Regression; Generalization; Regularization; Summary; Chapter 3: Feed Forward Neural Networks; Unit; Overall Structure of a Neural Network; Expressing the Neural Network in Vector Form; Evaluating the Output of the Neural Network.
  • Training the Neural Network; Deriving Cost Functions Using Maximum Likelihood; Binary Cross Entropy; Cross Entropy; Squared Error; Summary of Loss Functions; Types of Units/Activation Functions/Layers; Linear Unit; Sigmoid Unit; Softmax Layer; Rectified Linear Unit (ReLU); Hyperbolic Tangent; Neural Network Hands-on with Autograd; Summary; Chapter 4: Introduction to Theano; What Is Theano?; Theano Hands-On; Summary; Chapter 5: Convolutional Neural Networks; Convolution Operation; Pooling Operation; Convolution-Detector-Pooling Building Block; Convolution Variants; Intuition Behind CNNs; Summary.
  • Chapter 6: Recurrent Neural Networks; RNN Basics; Training RNNs; Bidirectional RNNs; Gradient Explosion and Vanishing; Gradient Clipping; Long Short-Term Memory; Summary; Chapter 7: Introduction to Keras; Summary; Chapter 8: Stochastic Gradient Descent; Optimization Problems; Method of Steepest Descent; Batch, Stochastic (Single and Mini-batch) Descent; Batch; Stochastic Single Example; Stochastic Mini-batch; Batch vs. Stochastic; Challenges with SGD; Local Minima; Saddle Points; Selecting the Learning Rate; Slow Progress in Narrow Valleys; Algorithmic Variations on SGD; Momentum.
  • Nesterov Accelerated Gradient (NAG); Annealing and Learning Rate Schedules; Adagrad; RMSProp; Adadelta; Adam; Resilient Backpropagation; Equilibrated SGD; Tricks and Tips for Using SGD; Preprocessing Input Data; Choice of Activation Function; Preprocessing Target Value; Initializing Parameters; Shuffling Data; Batch Normalization; Early Stopping; Gradient Noise; Parallel and Distributed SGD; Hogwild; Downpour; Hands-on SGD with Downhill; Summary; Chapter 9: Automatic Differentiation; Numerical Differentiation; Symbolic Differentiation; Automatic Differentiation Fundamentals.
  • Forward/Tangent Linear Mode; Reverse/Cotangent/Adjoint Linear Mode; Implementation of Automatic Differentiation; Source Code Transformation; Operator Overloading; Hands-on Automatic Differentiation with Autograd; Summary; Chapter 10: Introduction to GPUs; Summary; Index.