
Detail View

Three-dimensional computer vision


Material type
Monograph
Personal Author
Shirai, Yoshiaki.
Title Statement
Three-dimensional computer vision / Yoshiaki Shirai.
Publication, Distribution, etc
Berlin ; New York : Springer-Verlag, c1987.
Physical Medium
xii, 297 p. : ill. ; 25 cm.
Series Statement
Symbolic computation. Computer graphics--systems and applications
ISBN
0387151192 (U.S.)
Bibliography, Etc. Note
Includes bibliographical references (p. [293]-297).
Subject Added Entry-Topical Term
Computer vision.
Three-dimensional display systems.
000 00929namuu2200277 a 4500
001 000045510445
005 20090323142504
008 090320s1987 gw a rb 000 0 eng d
010 ▼a 86025996
020 ▼a 0387151192 (U.S.)
035 ▼a (KERIS)REF000014236722
040 ▼a 211009 ▼c 211009 ▼d 211009
050 0 0 ▼a TA1632 ▼b .S55 1987
082 0 0 ▼a 006.4/2 ▼2 19
082 0 4 ▼a 006.37 ▼2 22
090 ▼a 006.37 ▼b S558t
100 1 ▼a Shirai, Yoshiaki.
245 1 0 ▼a Three-dimensional computer vision / ▼c Yoshiaki Shirai.
260 ▼a Berlin ; ▼a New York : ▼b Springer-Verlag , ▼c c1987.
300 ▼a xii, 297 p. : ▼b ill. ; ▼c 25 cm.
490 0 ▼a Symbolic computation. ▼p Computer graphics--systems and applications
504 ▼a Includes bibliographical references (p. [293]-297).
650 0 ▼a Computer vision.
650 0 ▼a Three-dimensional display systems.
945 ▼a KINS

Holdings Information

No. 1
Location: Science & Engineering Library/Sci-Info(Stacks2)
Call Number: 006.37 S558t
Accession No.: 121176282
Availability: Available

Contents information

Table of Contents

1. Introduction = 1
  1.1 Three-Dimensional Computer Vision = 1
  1.2 Related Fields = 3
    1.2.1 Image Processing = 3
    1.2.2 Pattern Classification and Pattern Recognition = 4
    1.2.3 Computer Graphics = 5
  1.3 Mainstream of 3D Computer Vision Research = 5
    1.3.1 Pioneering Work = 5
    1.3.2 First Generation Robot Vision = 6
    1.3.3 Interpretation of Line Drawings = 7
    1.3.4 Feature Extraction = 7
    1.3.5 Range Data Processing = 8
    1.3.6 Realizability of Line Drawings = 8
    1.3.7 Use of Knowledge About Scenes = 9
    1.3.8 Use of Physics of Imaging = 9
    1.3.9 Marr's Theory of Human Vision and Computer Vision = 10
2. Image Input = 11
  2.1 Imaging Geometry = 11
  2.2 Image Input Devices = 13
    2.2.1 Image Dissector = 14
    2.2.2 Vidicon = 14
    2.2.3 Solid Devices = 15
  2.3 Color = 17
    2.3.1 Color Representation = 17
  2.4 Color Input = 20
    2.4.1 TV Signals = 20
  2.5 Range = 22
    2.5.1 Optical Time of Flight = 22
    2.5.2 Ultrasonic Ranging = 23
    2.5.3 Spot Projection = 24
    2.5.4 Light-Stripe Method = 24
  2.6 Moiré Topography = 26
  2.7 Preprocessing = 28
    2.7.1 Noise Reduction = 28
    2.7.2 Geometrical Correction = 29
    2.7.3 Gray-Level Correction = 30
    2.7.4 Correction of Defocusing = 30
3. Image Feature Extraction = 32
  3.1 Edge Point Detection = 32
    3.1.1 Edge Types for a Polyhedral Image = 32
    3.1.2 One-Dimensional Edge Operators = 33
    3.1.3 Two-Dimensional Edge Operators = 36
    3.1.4 Pattern Matching Operations = 37
    3.1.5 Color Edge Operators = 39
    3.1.6 Determination of Edge Points = 40
    3.1.7 Zero-Crossing Method = 41
    3.1.8 Edge of a Curved Surface = 44
  3.2 Local Edge Linking = 45
    3.2.1 Roberts' Edge-Linking Method = 45
    3.2.2 Edge Linking by Relaxation = 46
  3.3 Edge Point Clustering in Parameter Space = 49
    3.3.1 Hough Transformation = 48
    3.3.2 Extension of Hough Transformation = 50
  3.4 Edge-Following Methods = 51
    3.4.1 Detection of Starting Point = 52
    3.4.2 Prediction of Next Edge Point = 52
    3.4.3 Detection of Edge Point on Basis of Prediction = 53
    3.4.4 Determination of Next Step = 54
    3.4.5 Obtaining Connected Edge Points = 56
  3.5 Region Methods = 57
    3.5.1 Region Merging = 58
    3.5.2 Region Splitting = 62
      3.5.2.1 Region Splitting by Mode Methods = 63
      3.5.2.2 Region Splitting Based on Discriminant Criterion = 65
4. Image Feature Description = 69
  4.1 Representation of Lines = 69
    4.1.1 Spline Functions = 69
    4.1.2 Smoothing Splines = 70
    4.1.3 Parametric Splines = 71
    4.1.4 B-Splines = 72
  4.2 Segmentation of a Sequence of Points = 74
    4.2.1 Approximation by Straight Lines = 74
    4.2.2 Approximation by Curves = 75
  4.3 Fitting Line Equations = 79
    4.3.1 Using Errors Along a Single Axis = 79
    4.3.2 Using Errors of Line Equations With Two Variables = 79
    4.3.3 Using Distance From Each Point to Fitted Line = 80
  4.4 Conversion Between Lines and Regions = 83
    4.4.1 Boundary Detection = 83
    4.4.2 Boundary Following = 84
    4.4.3 Labeling Connected Regions = 86
5. Interpretation of Line Drawings = 90
  5.1 Roberts' Matching Method = 90
  5.2 Decomposition of Line Drawings Into Objects = 93
  5.3 Labeling Line Drawings = 95
    5.3.1 Vertex Type = 95
    5.3.2 Interpretation Labeling = 98
    5.3.3 Sequential Labeling Procedure = 98
    5.3.4 Labeling by Relaxation Method = 100
    5.3.5 Line Drawings with Shadows and Cracks = 102
    5.3.6 Interpretation of Curved Objects = 105
    5.3.7 Interpretation of Origami World = 106
  5.4 Further Problems in Line Drawing Interpretation = 108
6. Realizability of Line Drawings = 110
  6.1 Line Drawings Without Interpretations = 110
  6.2 Use of Gradient Space = 111
    6.2.1 Gradient Space = 111
    6.2.2 Construction of Gradient Image = 113
  6.3 Use of Linear Equation Systems = 115
    6.3.1 Solving Linear Equation Systems = 115
    6.3.2 Position-Free Line Drawings = 117
    6.3.3 Realizability of Position-Constrained Line Drawings = 119
7. Stereo Vision = 122
  7.1 Stereo Image Geometry = 122
  7.2 Area-Based Stereo = 125
    7.2.1 Feature Point Extraction = 125
    7.2.2 Similarity Measures = 127
    7.2.3 Finding Correspondence = 129
    7.2.4 Multistage Matching = 133
    7.2.5 Matching by Dynamic Programming = 134
  7.3 Feature-Based Stereo = 136
    7.3.1 Feature-Based Stereo for Simple Scenes = 136
    7.3.2 Marr-Poggio-Grimson Algorithm = 138
8. Shape from Monocular Images = 141
  8.1 Shape from Shading = 141
    8.1.1 Reflectance Map = 141
    8.1.2 Photometric Stereo = 145
    8.1.3 Use of Surface Smoothness Constraint = 147
    8.1.4 Use of Shading and Line Drawing = 151
  8.2 Use of Polarized Light = 153
  8.3 Shape from Geometrical Constraint on Scene = 156
    8.3.1 Surface Orientation from Parallel Lines = 157
    8.3.2 Shape from Texture = 159
      8.3.2.1 Shape from Shape of Texture Elements = 159
      8.3.2.2 Shape from Parallel Lines in Texture = 160
      8.3.2.3 Shape from Parallel Lines Extracted from Texture = 162
9. Range Data Processing = 164
  9.1 Range Data = 165
  9.2 Edge Point Detection Along a Stripe Image = 165
    9.2.1 One-Dimensional Jump Edge = 165
    9.2.2 One-Dimensional Discontinuous Edge = 166
    9.2.3 One-Dimensional Corner Edge = 167
  9.3 Two-Dimensional Edge Operators for Range Images = 167
    9.3.1 Two-Dimensional Jump Edge = 167
    9.3.2 Two-Dimensional Discontinuous Edge = 168
    9.3.3 Two-Dimensional Corner Edge = 169
  9.4 Scene Segmentation Based on Stripe Image Analysis = 171
    9.4.1 Segmentation of Stripe Image = 172
    9.4.2 Construction of Planes = 174
  9.5 Linking Three-Dimensional Edges = 175
  9.6 Three-Dimensional Region Growing = 178
    9.6.1 Outline of Region-Growing Method = 178
    9.6.2 Construction of Surface Elements = 179
    9.6.3 Merging Surface Elements = 180
      9.6.3.1 Kernel Finding = 180
      9.6.3.2 Region Merging = 181
    9.6.4 Classification of Elementary Regions = 183
    9.6.5 Merging Curved Elementary Regions = 184
      9.6.5.1 Kernel Finding = 185
      9.6.5.2 Region Merging = 185
    9.6.6 Making Descriptions = 186
      9.6.6.1 Fitting Quadratic Surfaces to Curved Regions = 186
      9.6.6.2 Edges of Regions = 187
      9.6.6.3 Properties of Regions and Relations Between Them = 188
10. Three-Dimensional Description and Representation = 189
  10.1 Three-Dimensional Curves = 189
    10.1.1 Three-Dimensional Curve Segments = 189
    10.1.2 Three-Dimensional B-Splines = 191
  10.2 Surfaces = 191
    10.2.1 Coons Surface Patches = 191
    10.2.2 B-Spline Surfaces = 193
  10.3 Interpolation of Serial Sections with Surface Patches = 194
    10.3.1 Description of Problem = 195
    10.3.2 Determination of Initial Pair = 196
    10.3.3 Selection of Next Vertex = 197
  10.4 Generalized Cylinders = 199
    10.4.1 Properties of Generalized Cylinders = 199
    10.4.2 Describing Range Data by Generalized Cylinders = 200
  10.5 Geometric Models = 203
  10.6 Extended Gaussian Image = 207
11. Knowledge Representation and Use = 209
  11.1 Types of Knowledge = 209
    11.1.1 Knowledge About Scenes = 209
    11.1.2 Control = 210
    11.1.3 Bottom-Up Control = 210
    11.1.4 Top-Down Control = 211
    11.1.5 Feedback Control = 212
    11.1.6 Heterarchical Control = 212
  11.2 Knowledge Representation = 213
    11.2.1 Procedural and Declarative Representations = 213
    11.2.2 Iconic Models = 214
    11.2.3 Graph Models = 215
    11.2.4 Demons = 216
    11.2.5 Production Systems = 217
    11.2.6 Blackboards = 218
    11.2.7 Frames = 219
12. Image Analysis Using Knowledge About Scenes = 221
  12.1 Analysis of Intensity Images Using Knowledge About Polyhedral Objects = 221
    12.1.1 General Strategy = 221
    12.1.2 Contour Finding = 223
    12.1.3 Hypothesizing Lines = 224
    12.1.4 Example of Line-Finding Procedure = 228
    12.1.5 Verifying Hypothetical Line Segments = 229
    12.1.6 Circular Search = 230
    12.1.7 Extending Lines by Edge Following = 232
    12.1.8 Experimental Results = 234
  12.2 Analysis of Range Images with the Aid of a Junction Dictionary = 237
    12.2.1 Possible Junctions = 237
    12.2.2 Junction Dictionary = 240
    12.2.3 System Organization = 241
      12.2.3.1 Contour Finder = 241
      12.2.3.2 Line-Segment Finder = 241
      12.2.3.3 Edge Follower = 243
      12.2.3.4 Straight-Line Fitter = 243
      12.2.3.5 Body Partitioner = 243
      12.2.3.6 Vertex-Position Adjuster = 244
    12.2.4 Outline of Behavior of System = 244
    12.2.5 Experimental Results = 245
    12.2.6 Extension to Scenes with Curved Objects = 247
13. Image Understanding Using Two-Dimensional Models = 250
  13.1 Recognition of Isolated Curved Objects Using a Graph Model = 250
    13.1.1 Scene Description = 250
    13.1.2 Evaluation of Matching = 250
    13.1.3 Matching Strategy = 251
  13.2 Interpretation of Imperfect Regions Using a Scene Model = 252
    13.2.1 Scene Description = 252
    13.2.2 Relational Model of Scene = 253
    13.2.3 Interpretation by Relaxation Method = 254
    13.2.4 Region Merging by Interpretation = 255
  13.3 Recognition of Multiple Objects Using 2D Object Models = 256
    13.3.1 Control = 257
    13.3.2 Edge Finder and Description Maker = 257
    13.3.3 Recognizer = 258
    13.3.4 Total System = 260
14. Image Understanding Using Three-Dimensional Models = 263
  14.1 Matching for Verification Vision = 263
    14.1.1 Matching of Feature Points = 264
    14.1.2 Matching of Features Without Finding Correspondence = 266
    14.1.3 Matching Gray Images Synthesized from Surface Models = 267
  14.2 Object Recognition by Predicting Image Features from Models = 269
    14.2.1 Modeling = 269
    14.2.2 Prediction = 270
    14.2.3 Making Descriptions = 271
    14.2.4 Interpretation = 271
  14.3 Matching Geometric Models to the Description of a Single Object = 273
    14.3.1 Recognition of Glossy Objects from Surface Normals = 274
    14.3.2 Matching in the Extended Gaussian Image = 276
    14.3.3 Recognition of Objects Using EGIs as Higher Level Models = 277
  14.4 Recognition of Multiple Objects After Segmentation = 279
  14.5 Recognition Without Segmentation = 282
    14.5.1 Outline of Recognition Process = 283
    14.5.2 Description of Scenes = 284
    14.5.3 Kernel Selection = 285
      14.5.3.1 Selecting the Principal Part of a Kernel = 285
      14.5.3.2 Selecting the Subordinate Part of a Kernel = 285
    14.5.4 Model Selection = 285
      14.5.4.1 Kernel Consisting of Only the Principal Part = 286
      14.5.4.2 Kernel Consisting of the Principal Part and the Subordinate Part = 286
    14.5.5 Matching Between Regions = 287
    14.5.6 Scene Interpretation = 289
References = 292

New Arrivals Books in Related Fields

Deisenroth, Marc Peter (2020)