Detail View

A first course in machine learning, 2nd ed. (Loaned 6 times)

Material Type
Monograph
Personal Author
Rogers, Simon, 1979-. Girolami, Mark, 1963-.
Title Statement
A first course in machine learning / Simon Rogers, Mark Girolami.
Edition Statement
2nd ed.
Publication, Distribution, etc.
Boca Raton : CRC Press, c2017.
Physical Medium
xxix, 397 p. : ill. ; 25 cm.
Series Statement
Chapman & Hall/CRC machine learning & pattern recognition series
ISBN
9781498738484 (hbk.) 1498738486 (hbk.) 9781498738569 (ebk.)
Bibliography, Etc. Note
Includes bibliographical references and index.
Subject Added Entry-Topical Term
Machine learning. Machine Learning.
000 00000cam u2200205 a 4500
001 000045903462
005 20170421101859
008 170420s2017 flua b 001 0 eng d
020 ▼a 9781498738484 (hbk.)
020 ▼a 1498738486 (hbk.)
020 ▼a 9781498738569 (ebk.)
035 ▼a (KERIS)REF000018209965
040 ▼a OHS ▼e rda ▼c OHS ▼d OHS ▼d CHVBK ▼d OCLCF ▼d UUM ▼d 211009
050 4 ▼a Q325.5 ▼b .R64 2017
082 0 0 ▼a 006.31 ▼2 23
084 ▼a 006.31 ▼2 DDCK
090 ▼a 006.31 ▼b R729f2
100 1 ▼a Rogers, Simon, ▼d 1979-.
245 1 2 ▼a A first course in machine learning / ▼c Simon Rogers, Mark Girolami.
250 ▼a 2nd ed.
260 ▼a Boca Raton : ▼b CRC Press, ▼c c2017.
300 ▼a xxix, 397 p. : ▼b ill. ; ▼c 25 cm.
490 1 ▼a Chapman & Hall/CRC machine learning & pattern recognition series
504 ▼a Includes bibliographical references and index.
650 0 ▼a Machine learning.
650 1 2 ▼a Machine Learning.
700 1 ▼a Girolami, Mark, ▼d 1963-.
830 0 ▼a Chapman & Hall/CRC machine learning & pattern recognition series.
945 ▼a KLPA

Holdings Information

No. 1
Location: Main Library / Western Books
Call Number: 006.31 R729f2
Accession No.: 111771120
Availability: In loan (due 2021-06-01)
Reservation: Available for Reserve

Contents Information

Table of Contents

Linear Modelling: A Least Squares Approach
LINEAR MODELLING
Defining the model
Modelling assumptions
Defining a good model
The least squares solution: a worked example
Worked example
Least squares fit to the Olympic data
Summary
MAKING PREDICTIONS
A second Olympic dataset
Summary
VECTOR/MATRIX NOTATION
Example
Numerical example
Making predictions
Summary
NON-LINEAR RESPONSE FROM A LINEAR MODEL
GENERALISATION AND OVER-FITTING
Validation data
Cross-validation
Computational scaling of K-fold cross-validation
REGULARISED LEAST SQUARES
EXERCISES
FURTHER READING

Linear Modelling: A Maximum Likelihood Approach
ERRORS AS NOISE
Thinking generatively
RANDOM VARIABLES AND PROBABILITY
Random variables
Probability and distributions
Adding probabilities
Conditional probabilities
Joint probabilities
Marginalisation
Aside: Bayes' rule
Expectations
POPULAR DISCRETE DISTRIBUTIONS
Bernoulli distribution
Binomial distribution
Multinomial distribution
CONTINUOUS RANDOM VARIABLES: DENSITY FUNCTIONS
POPULAR CONTINUOUS DENSITY FUNCTIONS
The uniform density function
The beta density function
The Gaussian density function
Multivariate Gaussian
SUMMARY
THINKING GENERATIVELY...CONTINUED
LIKELIHOOD
Dataset likelihood
Maximum likelihood
Characteristics of the maximum likelihood solution
Maximum likelihood favours complex models
THE BIAS-VARIANCE TRADE-OFF
Summary
EFFECT OF NOISE ON PARAMETER ESTIMATES
Uncertainty in estimates
Comparison with empirical values
Variability in model parameters: Olympic data
VARIABILITY IN PREDICTIONS
Predictive variability: an example
Expected values of the estimators
CHAPTER SUMMARY
EXERCISES
FURTHER READING

The Bayesian Approach to Machine Learning
A COIN GAME
Counting heads
The Bayesian way
THE EXACT POSTERIOR
THE THREE SCENARIOS
No prior knowledge
The fair coin scenario
A biased coin
The three scenarios: a summary
Adding more data
MARGINAL LIKELIHOODS
Model comparison with the marginal likelihood
HYPERPARAMETERS
GRAPHICAL MODELS
SUMMARY
A BAYESIAN TREATMENT OF THE OLYMPIC 100m DATA
The model
The likelihood
The prior
The posterior
A first-order polynomial
Making predictions
MARGINAL LIKELIHOOD FOR POLYNOMIAL MODEL
ORDER SELECTION
CHAPTER SUMMARY
EXERCISES
FURTHER READING
Bayesian Inference
NON-CONJUGATE MODELS
BINARY RESPONSES
A model for binary responses
A POINT ESTIMATE: THE MAP SOLUTION
THE LAPLACE APPROXIMATION
Laplace approximation example: approximating a gamma density
Laplace approximation for the binary response model
SAMPLING TECHNIQUES
Playing darts
The Metropolis-Hastings algorithm
The art of sampling
CHAPTER SUMMARY
EXERCISES
FURTHER READING

Classification
THE GENERAL PROBLEM
PROBABILISTIC CLASSIFIERS
The Bayes classifier
Likelihood: class-conditional distributions
Prior class distribution
Example: Gaussian class-conditionals
Making predictions
The naive-Bayes assumption
Example: classifying text
Smoothing
Logistic regression
Motivation
Non-linear decision functions
Non-parametric models: the Gaussian process
NON-PROBABILISTIC CLASSIFIERS
K-nearest neighbours
Choosing K
Support vector machines and other kernel methods
The margin
Maximising the margin
Making predictions
Support vectors
Soft margins
Kernels
Summary
ASSESSING CLASSIFICATION PERFORMANCE
Accuracy: 0/1 loss
Sensitivity and specificity
The area under the ROC curve
Confusion matrices
DISCRIMINATIVE AND GENERATIVE CLASSIFIERS
CHAPTER SUMMARY
EXERCISES
FURTHER READING

Clustering
THE GENERAL PROBLEM
K-MEANS CLUSTERING
Choosing the number of clusters
Where K-means fails
Kernelised K-means
Summary
MIXTURE MODELS
A generative process
Mixture model likelihood
The EM algorithm
Updating πk
Updating μk
Updating Σk
Updating qnk
Some intuition
Example
EM finds local optima
Choosing the number of components
Other forms of mixture component
MAP estimates with EM
Bayesian mixture models
CHAPTER SUMMARY
EXERCISES
FURTHER READING

Principal Components Analysis and Latent Variable Models
THE GENERAL PROBLEM
Variance as a proxy for interest
PRINCIPAL COMPONENTS ANALYSIS
Choosing D
Limitations of PCA
LATENT VARIABLE MODELS
Mixture models as latent variable models
Summary
VARIATIONAL BAYES
Choosing Q(θ)
Optimising the bound
A PROBABILISTIC MODEL FOR PCA
Qτ(τ)
Qxn(xn)
Qwm(wm)
The required expectations
The algorithm
An example
MISSING VALUES
Missing values as latent variables
Predicting missing values
NON-REAL-VALUED DATA
Probit PPCA
Visualising parliamentary data
Aside: relationship to classification
CHAPTER SUMMARY
EXERCISES
FURTHER READING

Advanced Topics

Gaussian Processes
PROLOGUE: NON-PARAMETRIC MODELS
GAUSSIAN PROCESS REGRESSION
The Gaussian process prior
Noise-free regression
Noisy regression
Summary
Noisy regression: an alternative route
Alternative covariance functions
Linear
Polynomial
Neural network
ARD
Composite covariance functions
Summary
GAUSSIAN PROCESS CLASSIFICATION
A classification likelihood
A classification roadmap
The point estimate approximation
Propagating uncertainty through the sigmoid
The Laplace approximation
Summary
HYPERPARAMETER OPTIMISATION
EXTENSIONS
Non-zero mean
Multiclass classification
Other likelihood functions and models
Other inference schemes
CHAPTER SUMMARY
EXERCISES
FURTHER READING

Markov Chain Monte Carlo Sampling
GIBBS SAMPLING
EXAMPLE: GIBBS SAMPLING FOR GP CLASSIFICATION
Conditional densities for GP classification via Gibbs sampling
Summary
WHY DOES MCMC WORK?
SOME SAMPLING PROBLEMS AND SOLUTIONS
Burn-in and convergence
Autocorrelation
Summary
ADVANCED SAMPLING TECHNIQUES
Adaptive proposals and Hamiltonian Monte Carlo
Approximate Bayesian computation
Population MCMC and temperature schedules
Sequential Monte Carlo
CHAPTER SUMMARY
EXERCISES
FURTHER READING

Advanced Mixture Modelling
A GIBBS SAMPLER FOR MIXTURE MODELS
COLLAPSED GIBBS SAMPLING
AN INFINITE MIXTURE MODEL
The Chinese restaurant process
Inference in the infinite mixture model
Summary
DIRICHLET PROCESSES
Hierarchical Dirichlet processes
Summary
BEYOND STANDARD MIXTURES: TOPIC MODELS
CHAPTER SUMMARY
EXERCISES
FURTHER READING
Glossary
Index

Information Provided By: Aladin

New Arrivals in Related Fields

데이터분석과인공지능활용편찬위원회 (2021)
Harrison, Matt (2021)
Stevens, Eli (2020)