Detail View

Fundamentals of deep learning : designing next-generation machine intelligence algorithms / First edition (Loan 14 times)

Material type
Monograph
Personal Author
Buduma, Nikhil ; Locascio, Nicholas.
Title Statement
Fundamentals of deep learning : designing next-generation machine intelligence algorithms / Nikhil Buduma ; with contributions by Nicholas Locascio.
Edition Statement
First edition.
Publication, Distribution, etc
Sebastopol, CA : O'Reilly Media, c2017.
Physical Medium
xii, 283 p. : ill. (some col.) ; 24 cm.
ISBN
9781491925614 (paperback) ; 1491925612 (paperback)
Content Notes
The neural network -- Training feed-forward neural networks -- Implementing neural networks in TensorFlow -- Beyond gradient descent -- Convolutional neural networks -- Embedding and representation learning -- Models for sequence analysis -- Memory augmented neural networks -- Deep reinforcement learning.
Bibliography, Etc. Note
Includes bibliographical references and index.
Subject Added Entry-Topical Term
Artificial intelligence. Machine learning. Neural networks (Computer science).
000 00000cam u2200205 a 4500
001 000045936620
005 20180403113351
008 180327s2017 caua b 001 0 eng d
010 ▼a 2017448783
015 ▼a GBB704534 ▼2 bnb
020 ▼a 9781491925614 (paperback)
020 ▼a 1491925612 (paperback)
035 ▼a (KERIS)REF000018572834
040 ▼a IG$ ▼b eng ▼c IG$ ▼e rda ▼d OCLCO ▼d TEF ▼d BDX ▼d BTCTA ▼d YDXCP ▼d OCLCQ ▼d OCLCF ▼d FM0 ▼d CHVBK ▼d OCLCO ▼d OQX ▼d SHS ▼d U3G ▼d OCLCA ▼d DLC ▼d 211009
050 0 0 ▼a TA347.A78 ▼b B83 2017
082 0 4 ▼a 006.3/1 ▼2 23
084 ▼a 006.31 ▼2 DDCK
090 ▼a 006.31 ▼b B927f
100 1 ▼a Buduma, Nikhil. ▼0 AUTH(211009)90257
245 1 0 ▼a Fundamentals of deep learning : ▼b designing next-generation machine intelligence algorithms / ▼c Nikhil Buduma ; with contributions by Nicholas Locascio.
246 3 0 ▼a Designing next-generation machine intelligence algorithms
250 ▼a First edition.
260 ▼a Sebastopol, CA : ▼b O'Reilly Media, ▼c c2017.
300 ▼a xii, 283 p. : ▼b ill. (some col.) ; ▼c 24 cm.
504 ▼a Includes bibliographical references and index.
505 0 ▼a The neural network -- Training feed-forward neural networks -- Implementing neural networks in TensorFlow -- Beyond gradient descent -- Convolutional neural networks -- Embedding and representation learning -- Models for sequence analysis -- Memory augmented neural networks -- Deep reinforcement learning.
650 0 ▼a Artificial intelligence.
650 0 ▼a Machine learning.
650 0 ▼a Neural networks (Computer science).
700 1 ▼a Locascio, Nicholas.
945 ▼a KLPA

No.	Location	Call Number	Accession No.	Availability	Due Date	Make a Reservation	Service
1	Main Library/Western Books/	006.31 B927f	111788556	Available			B M
2	Science & Engineering Library/Sci-Info(Stacks2)/	006.31 B927f	121244069	In loan	2021-10-02	Available for Reserve R	M

Contents information

Author Introduction

Nikhil Buduma (Author)

Co-founder and chief scientist of Remedy, a company building a new data-driven system for primary healthcare. At the age of 16, he managed a drug discovery laboratory and developed novel low-cost screening methods for resource-constrained communities. By 19, he had won two gold medals at the International Biology Olympiad. At MIT, he went on to focus on developing large-scale data systems to improve healthcare delivery, mental health, and medical research. Also at MIT, he co-founded Lean On Me, a national nonprofit that provides anonymous text hotlines on college campuses, using data to make peer support more effective and to promote mental health and wellness. He founded the venture fund Q Venture Partners, which invests in hard-technology and data companies, and spends his spare time managing the data analytics team of the Milwaukee Brewers baseball club.

Information Provided By: Aladin

Table of Contents

Section	Section Description	Page Number
Preface	p. ix
1	The Neural Network	p. 1
    Building Intelligent Machines	p. 1
    The Limits of Traditional Computer Programs	p. 2
    The Mechanics of Machine Learning	p. 3
    The Neuron	p. 7
    Expressing Linear Perceptrons as Neurons	p. 8
    Feed-Forward Neural Networks	p. 9
    Linear Neurons and Their Limitations	p. 12
    Sigmoid, Tanh, and ReLU Neurons	p. 13
    Softmax Output Layers	p. 15
    Looking Forward	p. 15
2	Training Feed-Forward Neural Networks	p. 17
    The Fast-Food Problem	p. 17
    Gradient Descent	p. 19
    The Delta Rule and Learning Rates	p. 21
    Gradient Descent with Sigmoidal Neurons	p. 22
    The Backpropagation Algorithm	p. 23
    Stochastic and Minibatch Gradient Descent	p. 25
    Test Sets, Validation Sets, and Overfitting	p. 27
    Preventing Overfitting in Deep Neural Networks	p. 34
    Summary	p. 37
3	Implementing Neural Networks in TensorFlow	p. 39
    What Is TensorFlow?	p. 39
    How Does TensorFlow Compare to Alternatives?	p. 40
    Installing TensorFlow	p. 41
    Creating and Manipulating TensorFlow Variables	p. 43
    TensorFlow Operations	p. 45
    Placeholder Tensors	p. 45
    Sessions in TensorFlow	p. 45
    Navigating Variable Scopes and Sharing Variables	p. 48
    Managing Models over the CPU and GPU	p. 51
    Specifying the Logistic Regression Model in TensorFlow	p. 52
    Logging and Training the Logistic Regression Model	p. 55
    Leveraging TensorBoard to Visualize Computation Graphs and Learning	p. 58
    Building a Multilayer Model for MNIST in TensorFlow	p. 59
    Summary	p. 62
4	Beyond Gradient Descent	p. 63
    The Challenges with Gradient Descent	p. 63
    Local Minima in the Error Surfaces of Deep Networks	p. 64
    Model Identifiability	p. 65
    How Pesky Are Spurious Local Minima in Deep Networks?	p. 66
    Flat Regions in the Error Surface	p. 69
    When the Gradient Points in the Wrong Direction	p. 71
    Momentum-Based Optimization	p. 74
    A Brief View of Second-Order Methods	p. 77
    Learning Rate Adaptation	p. 78
    AdaGrad-Accumulating Historical Gradients	p. 79
    RMSProp-Exponentially Weighted Moving Average of Gradients	p. 80
    Adam-Combining Momentum and RMSProp	p. 81
    The Philosophy Behind Optimizer Selection	p. 83
    Summary	p. 83
5	Convolutional Neural Networks	p. 85
    Neurons in Human Vision	p. 85
    The Shortcomings of Feature Selection	p. 86
    Vanilla Deep Neural Networks Don't Scale	p. 89
    Filters and Feature Maps	p. 90
    Full Description of the Convolutional Layer	p. 95
    Max Pooling	p. 98
    Full Architectural Description of Convolution Networks	p. 99
    Closing the Loop on MNIST with Convolutional Networks	p. 101
    Image Preprocessing Pipelines Enable More Robust Models	p. 103
    Accelerating Training with Batch Normalization	p. 104
    Building a Convolutional Network for CIFAR-10	p. 107
    Visualizing Learning in Convolutional Networks	p. 109
    Leveraging Convolutional Filters to Replicate Artistic Styles	p. 113
    Learning Convolutional Filters for Other Problem Domains	p. 114
    Summary	p. 115
6	Embedding and Representation Learning	p. 117
    Learning Lower-Dimensional Representations	p. 117
    Principal Component Analysis	p. 118
    Motivating the Autoencoder Architecture	p. 120
    Implementing an Autoencoder in TensorFlow	p. 121
    Denoising to Force Robust Representations	p. 134
    Sparsity in Autoencoders	p. 137
    When Context Is More Informative than the Input Vector	p. 140
    The Word2Vec Framework	p. 143
    Implementing the Skip-Gram Architecture	p. 146
    Summary	p. 152
7	Models for Sequence Analysis	p. 153
    Analyzing Variable-Length Inputs	p. 153
    Tackling seq2seq with Neural N-Grams	p. 155
    Implementing a Part-of-Speech Tagger	p. 156
    Dependency Parsing and SyntaxNet	p. 164
    Beam Search and Global Normalization	p. 168
    A Case for Stateful Deep Learning Models	p. 172
    Recurrent Neural Networks	p. 173
    The Challenges with Vanishing Gradients	p. 176
    Long Short-Term Memory (LSTM) Units	p. 178
    TensorFlow Primitives for RNN Models	p. 183
    Implementing a Sentiment Analysis Model	p. 185
    Solving seq2seq Tasks with Recurrent Neural Networks	p. 189
    Augmenting Recurrent Networks with Attention	p. 191
    Dissecting a Neural Translation Network	p. 194
    Summary	p. 217
8	Memory Augmented Neural Networks	p. 219
    Neural Turing Machines	p. 219
    Attention-Based Memory Access	p. 221
    NTM Memory Addressing Mechanisms	p. 223
    Differentiable Neural Computers	p. 226
    Interference-Free Writing in DNCs	p. 229
    DNC Memory Reuse	p. 230
    Temporal Linking of DNC Writes	p. 231
    Understanding the DNC Read Head	p. 232
    The DNC Controller Network	p. 232
    Visualizing the DNC in Action	p. 234
    Implementing the DNC in TensorFlow	p. 237
    Teaching a DNC to Read and Comprehend	p. 242
    Summary	p. 244
9	Deep Reinforcement Learning	p. 245
    Deep Reinforcement Learning Masters Atari Games	p. 245
    What Is Reinforcement Learning?	p. 247
    Markov Decision Processes (MDP)	p. 248
        Policy	p. 249
        Future Return	p. 250
        Discounted Future Return	p. 251
    Explore Versus Exploit	p. 251
    Policy Versus Value Learning	p. 253
        Policy Learning via Policy Gradients	p. 254
    Pole-Cart with Policy Gradients	p. 254
        OpenAI Gym	p. 254
        Creating an Agent	p. 255
        Building the Model and Optimizer	p. 257
        Sampling Actions	p. 257
        Keeping Track of History	p. 257
        Policy Gradient Main Function	p. 258
        PGAgent Performance on Pole-Cart	p. 260
    Q-Learning and Deep Q-Networks	p. 261
        The Bellman Equation	p. 261
        Issues with Value Iteration	p. 262
        Approximating the Q-Function	p. 262
        Deep Q-Network (DQN)	p. 263
        Training DQN	p. 263
        Learning Stability	p. 263
        Target Q-Network	p. 264
        Experience Replay	p. 264
        From Q-Function to Policy	p. 264
        DQN and the Markov Assumption	p. 265
        DQN's Solution to the Markov Assumption	p. 265
        Playing Breakout with DQN	p. 265
        Building Our Architecture	p. 268
        Stacking Frames	p. 268
        Setting Up Training Operations	p. 268
        Updating Our Target Q-Network	p. 269
        Implementing Experience Replay	p. 269
        DQN Main Loop	p. 270
        DQNAgent Results on Breakout	p. 272
    Improving and Moving Beyond DQN	p. 273
        Deep Recurrent Q-Networks (DRQN)	p. 273
        Asynchronous Advantage Actor-Critic Agent (A3C)	p. 274
        Unsupervised REinforcement and Auxiliary Learning (UNREAL)	p. 275
    Summary	p. 276
Index	p. 277

New Arrivals Books in Related Fields

Cartwright, Hugh M. (2021)
Korea Software Technology Association, Big Data Strategy Institute (2021)