
Detailed Information

A first course in machine learning, 2nd ed.

Material Type
Monograph
Personal Authors
Rogers, Simon, 1979-. Girolami, Mark, 1963-.
Title / Statement of Responsibility
A first course in machine learning / Simon Rogers, Mark Girolami.
Edition
2nd ed.
Publication
Boca Raton : CRC Press, c2017.
Physical Description
xxix, 397 p. : ill. ; 25 cm.
Series
Chapman & Hall/CRC machine learning & pattern recognition series
ISBN
9781498738484 (hbk.) 1498738486 (hbk.) 9781498738569 (ebk.)
Bibliography Note
Includes bibliographical references and index.
Subjects
Machine learning. Machine Learning.
000 00000cam u2200205 a 4500
001 000045903462
005 20170421101859
008 170420s2017 flua b 001 0 eng d
020 ▼a 9781498738484 (hbk.)
020 ▼a 1498738486 (hbk.)
020 ▼a 9781498738569 (ebk.)
035 ▼a (KERIS)REF000018209965
040 ▼a OHS ▼e rda ▼c OHS ▼d OHS ▼d CHVBK ▼d OCLCF ▼d UUM ▼d 211009
050 4 ▼a Q325.5 ▼b .R64 2017
082 0 0 ▼a 006.31 ▼2 23
084 ▼a 006.31 ▼2 DDCK
090 ▼a 006.31 ▼b R729f2
100 1 ▼a Rogers, Simon, ▼d 1979-.
245 1 2 ▼a A first course in machine learning / ▼c Simon Rogers, Mark Girolami.
250 ▼a 2nd ed.
260 ▼a Boca Raton : ▼b CRC Press, ▼c c2017.
300 ▼a xxix, 397 p. : ▼b ill. ; ▼c 25 cm.
490 1 ▼a Chapman & Hall/CRC machine learning & pattern recognition series
504 ▼a Includes bibliographical references and index.
650 0 ▼a Machine learning.
650 1 2 ▼a Machine Learning.
700 1 ▼a Girolami, Mark, ▼d 1963-.
830 0 ▼a Chapman & Hall/CRC machine learning & pattern recognition series.
945 ▼a KLPA

Holdings

No. | Location | Call Number | Registration No. | Status | Due Date | Reservation | Service
1 | Main Library / Stacks, 6th Floor | 006.31 R729f2 | 111771120 | Available for loan | - | - | -

Contents Information

About the Authors

Mark Girolami (author)

Simon Rogers (author)

He created and edited the Guardian Data Store (guardian.co.uk/data), the world's best-known data journalism site and online data repository, which publishes numerous datasets for users to view and analyse. He previously worked as senior data editor at Twitter in San Francisco.

Information provided by: Aladin

Table of Contents

Linear Modelling: A Least Squares Approach
LINEAR MODELLING
Defining the model
Modelling assumptions
Defining a good model
The least squares solution: a worked example
Worked example
Least squares fit to the Olympic data
Summary
MAKING PREDICTIONS
A second Olympic dataset
Summary
VECTOR/MATRIX NOTATION
Example
Numerical example
Making predictions
Summary
NON-LINEAR RESPONSE FROM A LINEAR MODEL
GENERALISATION AND OVER-FITTING
Validation data
Cross-validation
Computational scaling of K-fold cross-validation
REGULARISED LEAST SQUARES
EXERCISES
FURTHER READING

Linear Modelling: A Maximum Likelihood Approach
ERRORS AS NOISE
Thinking generatively
RANDOM VARIABLES AND PROBABILITY
Random variables
Probability and distributions
Adding probabilities
Conditional probabilities
Joint probabilities
Marginalisation
Aside: Bayes' rule
Expectations
POPULAR DISCRETE DISTRIBUTIONS
Bernoulli distribution
Binomial distribution
Multinomial distribution
CONTINUOUS RANDOM VARIABLES: DENSITY FUNCTIONS
POPULAR CONTINUOUS DENSITY FUNCTIONS
The uniform density function
The beta density function
The Gaussian density function
Multivariate Gaussian
SUMMARY
THINKING GENERATIVELY...CONTINUED
LIKELIHOOD
Dataset likelihood
Maximum likelihood
Characteristics of the maximum likelihood solution
Maximum likelihood favours complex models
THE BIAS-VARIANCE TRADE-OFF
Summary
EFFECT OF NOISE ON PARAMETER ESTIMATES
Uncertainty in estimates
Comparison with empirical values
Variability in model parameters: Olympic data
VARIABILITY IN PREDICTIONS
Predictive variability: an example
Expected values of the estimators
CHAPTER SUMMARY
EXERCISES
FURTHER READING

The Bayesian Approach to Machine Learning
A COIN GAME
Counting heads
The Bayesian way
THE EXACT POSTERIOR
THE THREE SCENARIOS
No prior knowledge
The fair coin scenario
A biased coin
The three scenarios: a summary
Adding more data
MARGINAL LIKELIHOODS
Model comparison with the marginal likelihood
HYPERPARAMETERS
GRAPHICAL MODELS
SUMMARY
A BAYESIAN TREATMENT OF THE OLYMPIC 100m DATA
The model
The likelihood
The prior
The posterior
A first-order polynomial
Making predictions
MARGINAL LIKELIHOOD FOR POLYNOMIAL MODEL
ORDER SELECTION
CHAPTER SUMMARY
EXERCISES
FURTHER READING

Bayesian Inference
NON-CONJUGATE MODELS
BINARY RESPONSES
A model for binary responses
A POINT ESTIMATE: THE MAP SOLUTION
THE LAPLACE APPROXIMATION
Laplace approximation example: approximating a gamma density
Laplace approximation for the binary response model
SAMPLING TECHNIQUES
Playing darts
The Metropolis-Hastings algorithm
The art of sampling
CHAPTER SUMMARY
EXERCISES
FURTHER READING

Classification
THE GENERAL PROBLEM
PROBABILISTIC CLASSIFIERS
The Bayes classifier
Likelihood: class-conditional distributions
Prior class distribution
Example: Gaussian class-conditionals
Making predictions
The naive-Bayes assumption
Example: classifying text
Smoothing
Logistic regression
Motivation
Non-linear decision functions
Non-parametric models: the Gaussian process
NON-PROBABILISTIC CLASSIFIERS
K-nearest neighbours
Choosing K
Support vector machines and other kernel methods
The margin
Maximising the margin
Making predictions
Support vectors
Soft margins
Kernels
Summary
ASSESSING CLASSIFICATION PERFORMANCE
Accuracy: 0/1 loss
Sensitivity and specificity
The area under the ROC curve
Confusion matrices
DISCRIMINATIVE AND GENERATIVE CLASSIFIERS
CHAPTER SUMMARY
EXERCISES
FURTHER READING

Clustering
THE GENERAL PROBLEM
K-MEANS CLUSTERING
Choosing the number of clusters
Where K-means fails
Kernelised K-means
Summary
MIXTURE MODELS
A generative process
Mixture model likelihood
The EM algorithm
Updating π_k
Updating μ_k
Updating Σ_k
Updating qnk
Some intuition
Example
EM finds local optima
Choosing the number of components
Other forms of mixture component
MAP estimates with EM
Bayesian mixture models
CHAPTER SUMMARY
EXERCISES
FURTHER READING

Principal Components Analysis and Latent Variable Models
THE GENERAL PROBLEM
Variance as a proxy for interest
PRINCIPAL COMPONENTS ANALYSIS
Choosing D
Limitations of PCA
LATENT VARIABLE MODELS
Mixture models as latent variable models
Summary
VARIATIONAL BAYES
Choosing Q(θ)
Optimising the bound
A PROBABILISTIC MODEL FOR PCA
Qτ(τ)
Qxn(xn)
Qwm(wm)
The required expectations
The algorithm
An example
MISSING VALUES
Missing values as latent variables
Predicting missing values
NON-REAL-VALUED DATA
Probit PPCA
Visualising parliamentary data
Aside: relationship to classification
CHAPTER SUMMARY
EXERCISES
FURTHER READING

Advanced Topics

Gaussian Processes
PROLOGUE: NON-PARAMETRIC MODELS
GAUSSIAN PROCESS REGRESSION
The Gaussian process prior
Noise-free regression
Noisy regression
Summary
Noisy regression: an alternative route
Alternative covariance functions
Linear
Polynomial
Neural network
ARD
Composite covariance functions
Summary
GAUSSIAN PROCESS CLASSIFICATION
A classification likelihood
A classification roadmap
The point estimate approximation
Propagating uncertainty through the sigmoid
The Laplace approximation
Summary
HYPERPARAMETER OPTIMISATION
EXTENSIONS
Non-zero mean
Multiclass classification
Other likelihood functions and models
Other inference schemes
CHAPTER SUMMARY
EXERCISES
FURTHER READING

Markov Chain Monte Carlo Sampling
GIBBS SAMPLING
EXAMPLE: GIBBS SAMPLING FOR GP CLASSIFICATION
Conditional densities for GP classification via Gibbs sampling
Summary
WHY DOES MCMC WORK?
SOME SAMPLING PROBLEMS AND SOLUTIONS
Burn-in and convergence
Autocorrelation
Summary
ADVANCED SAMPLING TECHNIQUES
Adaptive proposals and Hamiltonian Monte Carlo
Approximate Bayesian computation
Population MCMC and temperature schedules
Sequential Monte Carlo
CHAPTER SUMMARY
EXERCISES
FURTHER READING

Advanced Mixture Modelling
A GIBBS SAMPLER FOR MIXTURE MODELS
COLLAPSED GIBBS SAMPLING
AN INFINITE MIXTURE MODEL
The Chinese restaurant process
Inference in the infinite mixture model
Summary
DIRICHLET PROCESSES
Hierarchical Dirichlet processes
Summary
BEYOND STANDARD MIXTURES: TOPIC MODELS
CHAPTER SUMMARY
EXERCISES
FURTHER READING
Glossary
Index

Information provided by: Aladin

New Arrivals in Related Fields

Taulli, Tom (2020)