
Detailed Information


Tensorflow for deep learning : from linear regression to reinforcement learning (checked out 17 times)

Material Type
Book
Personal Author
Ramsundar, Bharath. Zadeh, Reza Bosagh.
Title / Author Statement
Tensorflow for deep learning : from linear regression to reinforcement learning / Bharath Ramsundar and Reza Bosagh Zadeh.
Publication
Beijing : O'Reilly, c2018.
Physical Description
xii, 240 p. : ill. ; 24 cm.
ISBN
9781491980453
General Note
Includes index.
Subject Headings
Machine learning. Reinforcement learning. Artificial intelligence.
000 00000nam u2200205 a 4500
001 000045940116
005 20180424143044
008 180424s2018 cc a 001 0 eng d
020 ▼a 9781491980453
040 ▼a 211009 ▼c 211009 ▼d 211009
082 0 4 ▼a 006.31 ▼2 23
084 ▼a 006.31 ▼2 DDCK
090 ▼a 006.31 ▼b R183t
100 1 ▼a Ramsundar, Bharath.
245 1 0 ▼a Tensorflow for deep learning : ▼b from linear regression to reinforcement learning / ▼c Bharath Ramsundar and Reza Bosagh Zadeh.
260 ▼a Beijing : ▼b O'Reilly, ▼c c2018.
300 ▼a xii, 240 p. : ▼b ill. ; ▼c 24 cm.
500 ▼a Includes index.
630 0 0 ▼a TensorFlow (Electronic resource).
650 0 ▼a Machine learning.
650 0 ▼a Reinforcement learning.
650 0 ▼a Artificial intelligence.
700 1 ▼a Zadeh, Reza Bosagh.
945 ▼a KLPA

Holdings Information

No.  Location                               Call Number   Accession No.  Status     Due Date
1    Science Library/Sci-Info (2F stacks)   006.31 R183t  121244311      Available  -
2    Science Library/Sci-Info (2F stacks)   006.31 R183t  121247242      Available  -

Contents Information

About the Authors

Bharath Ramsundar (Author)

Co-founder and CTO of Datamined, a blockchain company building biological big data. He is also the lead developer of the DeepChem library, which applies deep learning to drug discovery, and a co-developer of MoleculeNet. He earned bachelor's degrees in EECS and mathematics at UC Berkeley and recently received his PhD in computer science from Stanford University, where he was advised by Professor Vijay Pande and supported by the Hertz Fellowship, which funds outstanding graduate students in the sciences.

Reza Zadeh (Author)

Founding CEO of Matroid and adjunct professor at Stanford University. His research covers machine learning, distributed computing, and discrete applied mathematics. He has received the KDD Best Paper Award and the Gene Golub Outstanding Paper Award, and has served as a technical advisor to Microsoft and Databricks. At Twitter he built the "who to follow" recommendation algorithm with machine learning, the company's first application of machine learning. He was an initial author of the linear algebra package in Apache Spark, work that is now used in industrial and academic cluster computing environments. At Stanford he has created and teaches courses on distributed algorithms and optimization, and on discrete mathematics and algorithms.

Source: Aladin

Table of Contents

CONTENTS
Preface = ix
1. Introduction to Deep Learning = 1
 Machine Learning Eats Computer Science = 1
 Deep Learning Primitives = 3
  Fully Connected Layer = 3
  Convolutional Layer = 4
  Recurrent Neural Network Layers = 4
  Long Short-Term Memory Cells = 5
 Deep Learning Architectures = 6
  LeNet = 6
  AlexNet = 6
  ResNet = 7
  Neural Captioning Model = 8
  Google Neural Machine Translation = 9
  One-Shot Models = 10
  AlphaGo = 12
  Generative Adversarial Networks = 13
  Neural Turing Machines = 14
 Deep Learning Frameworks = 15
  Limitations of TensorFlow = 16
 Review = 17
2. Introduction to TensorFlow Primitives = 19
 Introducing Tensors = 19
  Scalars, Vectors, and Matrices = 20
  Matrix Mathematics = 24
  Tensors = 25
   Tensors in Physics = 27
  Mathematical Asides = 28
 Basic Computations in TensorFlow = 29
  Installing TensorFlow and Getting Started = 29
  Initializing Constant Tensors = 30
  Sampling Random Tensors = 31
  Tensor Addition and Scaling = 32
  Matrix Operations = 33
  Tensor Types = 35
  Tensor Shape Manipulations = 35
  Introduction to Broadcasting = 37
 Imperative and Declarative Programming = 37
  TensorFlow Graphs = 39
  TensorFlow Sessions = 39
  TensorFlow Variables = 40
 Review = 42
3. Linear and Logistic Regression with TensorFlow = 43
 Mathematical Review = 43
  Functions and Differentiability = 44
  Loss Functions = 45
  Gradient Descent = 50
  Automatic Differentiation Systems = 53
 Learning with TensorFlow = 55
  Creating Toy Datasets = 55
  New TensorFlow Concepts = 60
 Training Linear and Logistic Models in TensorFlow = 64
  Linear Regression in TensorFlow = 64
  Logistic Regression in TensorFlow = 73
 Review = 79
4. Fully Connected Deep Networks = 81
 What Is a Fully Connected Deep Network? = 81
 "Neurons" in Fully Connected Networks = 83
  Learning Fully Connected Networks with Backpropagation = 85
  Universal Convergence Theorem = 87
  Why Deep Networks? = 88
 Training Fully Connected Neural Networks = 89
  Learnable Representations = 89
  Activations = 89
  Fully Connected Networks Memorize = 90
  Regularization = 90
  Training Fully Connected Networks = 94
 Implementation in TensorFlow = 94
  Installing DeepChem = 94
  Tox21 Dataset = 95
  Accepting Minibatches of Placeholders = 96
  Implementing a Hidden Layer = 96
  Adding Dropout to a Hidden Layer = 97
  Implementing Minibatching = 98
  Evaluating Model Accuracy = 98
  Using TensorBoard to Track Model Convergence = 99
 Review = 101
5. Hyperparameter Optimization = 103
 Model Evaluation and Hyperparameter Optimization = 104
 Metrics, Metrics, Metrics = 105
  Binary Classification Metrics = 106
  Multiclass Classification Metrics = 108
  Regression Metrics = 110
 Hyperparameter Optimization Algorithms = 110
  Setting Up a Baseline = 111
  Graduate Student Descent = 113
  Grid Search = 114
  Random Hyperparameter Search = 115
  Challenge for the Reader = 116
 Review = 117
6. Convolutional Neural Networks = 119
 Introduction to Convolutional Architectures = 120
  Local Receptive Fields = 120
  Convolutional Kernels = 122
  Pooling Layers = 125
  Constructing Convolutional Networks = 125
  Dilated Convolutions = 126
 Applications of Convolutional Networks = 127
  Object Detection and Localization = 127
  Image Segmentation = 128
  Graph Convolutions = 129
  Generating Images with Variational Autoencoders = 131
 Training a Convolutional Network in TensorFlow = 134
  The MNIST Dataset = 134
  Loading MNIST = 135
  TensorFlow Convolutional Primitives = 138
  The Convolutional Architecture = 140
  Evaluating Trained Models = 144
  Challenge for the Reader = 146
 Review = 146
7. Recurrent Neural Networks = 149
 Overview of Recurrent Architectures = 150
 Recurrent Cells = 152
  Long Short-Term Memory (LSTM) = 152
  Gated Recurrent Units (GRU) = 154
 Applications of Recurrent Models = 154
  Sampling from Recurrent Networks = 154
  Seq2seq Models = 155
 Neural Turing Machines = 157
 Working with Recurrent Neural Networks in Practice = 159
 Processing the Penn Treebank Corpus = 159
  Code for Preprocessing = 160
  Loading Data into TensorFlow = 162
  The Basic Recurrent Architecture = 164
  Challenge for the Reader = 166
 Review = 166
8. Reinforcement Learning = 169
 Markov Decision Processes = 173
 Reinforcement Learning Algorithms = 175
  Q-Learning = 176
  Policy Learning = 177
  Asynchronous Training = 179
 Limits of Reinforcement Learning = 179
 Playing Tic-Tac-Toe = 181
  Object Orientation = 181
  Abstract Environment = 182
  Tic-Tac-Toe Environment = 182
  The Layer Abstraction = 185
  Defining a Graph of Layers = 188
 The A3C Algorithm = 192
  The A3C Loss Function = 196
  Defining Workers = 198
  Training the Policy = 201
  Challenge for the Reader = 203
 Review = 203
9. Training Large Deep Networks = 205
 Custom Hardware for Deep Networks = 205
 CPU Training = 206
  GPU Training = 207
  Tensor Processing Units = 209
  Field Programmable Gate Arrays = 211
  Neuromorphic Chips = 211
 Distributed Deep Network Training = 212
  Data Parallelism = 213
  Model Parallelism = 214
 Data Parallel Training with Multiple GPUs on CIFAR10 = 215
  Downloading and Loading the Data = 216
  Deep Dive on the Architecture = 218
  Training on Multiple GPUs = 220
  Challenge for the Reader = 223
 Review = 223
10. The Future of Deep Learning = 225
 Deep Learning Outside the Tech Industry = 226
  Deep Learning in the Pharmaceutical Industry = 226
  Deep Learning in Law = 227
  Deep Learning for Robotics = 227
  Deep Learning in Agriculture = 228
 Using Deep Learning Ethically = 228
 Is Artificial General Intelligence Imminent? = 230
 Where to Go from Here? = 231
Index = 233

New Arrivals in Related Fields

Stevens, Eli (2020)