
Fundamentals of Deep Learning

ISBN: 978-1-4919-2561-4
Language: English
Price (gross): 168.63 PLN
Price (net): 160.60 PLN
Format: Print
Author: Nikhil Buduma
Pages: 304
Publisher: O'Reilly Media
Binding: Paperback
Publication date: 2017-06-20
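
(The two prices appear to be in PLN and are consistent with each other under the 5% VAT rate on printed books in Poland: 160.60 × 1.05 = 168.63.)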

With the reinvigoration of neural networks in the 2000s, deep learning has become an extremely active area of research, one that’s paving the way for modern machine learning. In this practical book, author Nikhil Buduma provides examples and clear explanations to guide you through major concepts of this complicated field.


Companies such as Google, Microsoft, and Facebook are actively growing in-house deep-learning teams. For the rest of us, however, deep learning is still a pretty complex and difficult subject to grasp. If you’re familiar with Python, and have a background in calculus, along with a basic understanding of machine learning, this book will get you started.

  • Examine the foundations of machine learning and neural networks
  • Learn how to train feed-forward neural networks
  • Use TensorFlow to implement your first neural network (see the sketch after this list)
  • Manage problems that arise as you begin to make networks deeper
  • Build neural networks that analyze complex images
  • Perform effective dimensionality reduction using autoencoders
  • Dive deep into sequence analysis to examine language
  • Learn the fundamentals of reinforcement learning
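
For a flavor of the sessions-and-placeholders TensorFlow 1.x style the book works in (see "Placeholder Tensors" and "Sessions in TensorFlow" in the table of contents below), here is a minimal sketch of a single sigmoid neuron. This is not code from the book, and the tensorflow.compat.v1 import is an assumption so the snippet runs on current TensorFlow releases:

    # A minimal sketch (not from the book): one sigmoid neuron in the
    # TensorFlow 1.x style of placeholders, variables, and sessions.
    # The compat.v1 import is an assumption for running on TF 2.x installs.
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    x = tf.placeholder(tf.float32, shape=[None, 3])   # batch of 3-feature inputs
    W = tf.Variable(tf.random_normal([3, 1]))         # weight vector
    b = tf.Variable(tf.zeros([1]))                    # bias
    y = tf.sigmoid(tf.matmul(x, W) + b)               # sigmoid neuron output

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))

Running the graph on one input batch prints a single activation between 0 and 1; this is the kind of building block the early chapters assemble into full feed-forward networks.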


Table of Contents

  1. Chapter 1 The Neural Network

    1. Building Intelligent Machines

    2. The Limits of Traditional Computer Programs

    3. The Mechanics of Machine Learning

    4. The Neuron

    5. Expressing Linear Perceptrons as Neurons

    6. Feed-Forward Neural Networks

    7. Linear Neurons and Their Limitations

    8. Sigmoid, Tanh, and ReLU Neurons

    9. Softmax Output Layers

    10. Looking Forward

  2. Chapter 2 Training Feed-Forward Neural Networks

    1. The Fast-Food Problem

    2. Gradient Descent

    3. The Delta Rule and Learning Rates

    4. Gradient Descent with Sigmoidal Neurons

    5. The Backpropagation Algorithm

    6. Stochastic and Minibatch Gradient Descent

    7. Test Sets, Validation Sets, and Overfitting

    8. Preventing Overfitting in Deep Neural Networks

    9. Summary

  3. Chapter 3 Implementing Neural Networks in TensorFlow

    1. What Is TensorFlow?

    2. How Does TensorFlow Compare to Alternatives?

    3. Installing TensorFlow

    4. Creating and Manipulating TensorFlow Variables

    5. TensorFlow Operations

    6. Placeholder Tensors

    7. Sessions in TensorFlow

    8. Navigating Variable Scopes and Sharing Variables

    9. Managing Models over the CPU and GPU

    10. Specifying the Logistic Regression Model in TensorFlow

    11. Logging and Training the Logistic Regression Model

    12. Leveraging TensorBoard to Visualize Computation Graphs and Learning

    13. Building a Multilayer Model for MNIST in TensorFlow

    14. Summary

  4. Chapter 4 Beyond Gradient Descent

    1. The Challenges with Gradient Descent

    2. Local Minima in the Error Surfaces of Deep Networks

    3. Model Identifiability

    4. How Pesky Are Spurious Local Minima in Deep Networks?

    5. Flat Regions in the Error Surface

    6. When the Gradient Points in the Wrong Direction

    7. Momentum-Based Optimization

    8. A Brief View of Second-Order Methods

    9. Learning Rate Adaptation

    10. The Philosophy Behind Optimizer Selection

    11. Summary

  5. Chapter 5 Convolutional Neural Networks

    1. Neurons in Human Vision

    2. The Shortcomings of Feature Selection

    3. Vanilla Deep Neural Networks Don’t Scale

    4. Filters and Feature Maps

    5. Full Description of the Convolutional Layer

    6. Max Pooling

    7. Full Architectural Description of Convolution Networks

    8. Closing the Loop on MNIST with Convolutional Networks

    9. Image Preprocessing Pipelines Enable More Robust Models

    10. Accelerating Training with Batch Normalization

    11. Building a Convolutional Network for CIFAR-10

    12. Visualizing Learning in Convolutional Networks

    13. Leveraging Convolutional Filters to Replicate Artistic Styles

    14. Learning Convolutional Filters for Other Problem Domains

    15. Summary

  6. Chapter 6 Embedding and Representation Learning

    1. Learning Lower-Dimensional Representations

    2. Principal Component Analysis

    3. Motivating the Autoencoder Architecture

    4. Implementing an Autoencoder in TensorFlow

    5. Denoising to Force Robust Representations

    6. Sparsity in Autoencoders

    7. When Context Is More Informative than the Input Vector

    8. The Word2Vec Framework

    9. Implementing the Skip-Gram Architecture

    10. Summary

  7. Chapter 7 Models for Sequence Analysis

    1. Analyzing Variable-Length Inputs

    2. Tackling seq2seq with Neural N-Grams

    3. Implementing a Part-of-Speech Tagger

    4. Dependency Parsing and SyntaxNet

    5. Beam Search and Global Normalization

    6. A Case for Stateful Deep Learning Models

    7. Recurrent Neural Networks

    8. The Challenges with Vanishing Gradients

    9. Long Short-Term Memory (LSTM) Units

    10. TensorFlow Primitives for RNN Models

    11. Implementing a Sentiment Analysis Model

    12. Solving seq2seq Tasks with Recurrent Neural Networks

    13. Augmenting Recurrent Networks with Attention

    14. Dissecting a Neural Translation Network

    15. Summary

  8. Chapter 8 Memory Augmented Neural Networks

    1. Neural Turing Machines

    2. Attention-Based Memory Access

    3. NTM Memory Addressing Mechanisms

    4. Differentiable Neural Computers

    5. Interference-Free Writing in DNCs

    6. DNC Memory Reuse

    7. Temporal Linking of DNC Writes

    8. Understanding the DNC Read Head

    9. The DNC Controller Network

    10. Visualizing the DNC in Action

    11. Implementing the DNC in TensorFlow

    12. Teaching a DNC to Read and Comprehend

    13. Summary

  9. Chapter 9 Deep Reinforcement Learning

    1. Deep Reinforcement Learning Masters Atari Games

    2. What Is Reinforcement Learning?

    3. Markov Decision Processes (MDP)

    4. Explore Versus Exploit

    5. Policy Versus Value Learning

    6. Pole-Cart with Policy Gradients

    7. Q-Learning and Deep Q-Networks

    8. Improving and Moving Beyond DQN

    9. Summary

Similar Products
Wzorce projektowe. Leksykon kieszonkowy
Tajniki C# i .NET Framework. Wydajne aplikacje dzięki zaawansowanym funkcjom języka C# i architektury .NET
Programowanie w języku Go. Koncepcje i przykłady. Wydanie II
Python. Ćwiczenia praktyczne
Gantry. Tworzenie szablonów dla Joomla!
Sass. Nowoczesne arkusze stylów
Platforma Node.js. Przewodnik webdevelopera. Wydanie III
Xamarin. Tworzenie aplikacji cross-platform. Receptury