Image processing : the fundamentals /

Bibliographic Details
Author / Creator: Petrou, Maria.
Imprint: Chichester, [England] ; New York : Wiley, c1999.
Description: xx, 333 p. : ill. ; 26 cm.
Language: English
Subject: Image processing -- Digital techniques.
Format: Print Book
URL for this record: http://pi.lib.uchicago.edu/1001/cat/bib/4040791
Other authors / contributors: Bosdogianni, Panagiota.
ISBN: 0471998834 (alk. paper)
Notes: Includes bibliographical references (p. [325]-327) and index.
Table of Contents:
  • Preface
  • List of Figures
  • 1. Introduction
  • Why do we process images?
  • What is an image?
  • What is the brightness of an image at a pixel position?
  • Why are images often quoted as being 512 × 512, 256 × 256, 128 × 128 etc?
  • How many bits do we need to store an image?
  • What is meant by image resolution?
  • How do we do Image Processing?
  • What is a linear operator?
  • How are operators defined?
  • How does an operator transform an image?
  • What is the meaning of the point spread function?
  • How can we express in practice the effect of a linear operator on an image?
  • What is the implication of the separability assumption on the structure of matrix H?
  • How can a separable transform be written in matrix form?
  • What is the meaning of the separability assumption?
  • What is the "take home" message of this chapter?
  • What is the purpose of Image Processing?
  • What is this book about?
  • 2. Image Transformations
  • What is this chapter about?
  • How can we define an elementary image?
  • What is the outer product of two vectors?
  • How can we expand an image in terms of vector outer products?
  • What is a unitary transform?
  • What is a unitary matrix?
  • What is the inverse of a unitary transform?
  • How can we construct a unitary matrix?
  • How should we choose matrices U and V so that g can be represented by fewer bits than f?
  • How can we diagonalize a matrix?
  • How can we compute matrices U, V and Λ^(1/2) needed for the image diagonalization?
  • What is the singular value decomposition of an image?
  • How can we approximate an image using SVD?
  • What is the error of the approximation of an image by SVD?
  • How can we minimize the error of the reconstruction?
  • What are the elementary images in terms of which SVD expands an image?
  • Are there any sets of elementary images in terms of which ANY image can be expanded?
  • What is a complete and orthonormal set of functions?
  • Are there any complete sets of orthonormal discrete valued functions?
  • How are the Haar functions defined?
  • How are the Walsh functions defined?
  • How can we create the image transformation matrices from the Haar and Walsh functions?
  • What do the elementary images of the Haar transform look like?
  • Can we define an orthogonal matrix with entries only +1 or -1?
  • What do the basis images of the Hadamard/Walsh transform look like?
  • What are the advantages and disadvantages of the Walsh and the Haar transforms?
  • What is the Haar wavelet?
  • What is the discrete version of the Fourier transform?
  • How can we write the discrete Fourier transform in matrix form?
  • Is matrix U used for DFT unitary?
  • Which are the elementary images in terms of which DFT expands an image?
  • Why is the discrete Fourier transform more commonly used than the other transforms?
  • What does the convolution theorem state?
  • How can we display the discrete Fourier transform of an image?
  • What happens to the discrete Fourier transform of an image if the image is rotated?
  • What happens to the discrete Fourier transform of an image if the image is shifted?
  • What is the relationship between the average value of a function and its DFT?
  • What happens to the DFT of an image if the image is scaled?
  • What is the discrete cosine transform?
  • What is the "take home" message of this chapter?
  • 3. Statistical Description of Images
  • What is this chapter about?
  • Why do we need the statistical description of images?
  • Is there an image transformation that allows its representation in terms of uncorrelated data that can be used to approximate the image in the least mean square error sense?
  • What is a random field?
  • What is a random variable?
  • How do we describe random variables?
  • What is the probability of an event?
  • What is the distribution function of a random variable?
  • What is the probability of a random variable taking a specific value?
  • What is the probability density function of a random variable?
  • How do we describe many random variables?
  • What relationships may n random variables have with each other?
  • How do we then define a random field?
  • How can we relate two random variables that appear in the same random field?
  • How can we relate two random variables that belong to two different random fields?
  • Since we always have just one version of an image how do we calculate the expectation values that appear in all previous definitions?
  • When is a random field homogeneous?
  • How can we calculate the spatial statistics of a random field?
  • When is a random field ergodic?
  • When is a random field ergodic with respect to the mean?
  • When is a random field ergodic with respect to the autocorrelation function?
  • What is the implication of ergodicity?
  • How can we exploit ergodicity to reduce the number of bits needed for representing an image?
  • What is the form of the autocorrelation function of a random field with uncorrelated random variables?
  • How can we transform the image so that its autocorrelation matrix is diagonal?
  • Is the assumption of ergodicity realistic?
  • How can we approximate an image using its K-L transform?
  • What is the error with which we approximate an image when we truncate its K-L expansion?
  • What are the basis images in terms of which the Karhunen-Loève transform expands an image?
  • What is the "take home" message of this chapter?
  • 4. Image Enhancement
  • What is image enhancement?
  • How can we enhance an image?
  • Which methods of image enhancement reason about the grey level statistics of an image?
  • What is the histogram of an image?
  • When is it necessary to modify the histogram of an image?
  • How can we modify the histogram of an image?
  • What is histogram equalization?
  • Why do histogram equalization programs usually not produce images with flat histograms?
  • Is it possible to enhance an image to have an absolutely flat histogram?
  • What if we do not wish to have an image with a flat histogram?
  • Why should one wish to perform something other than histogram equalization?
  • What if the image has inhomogeneous contrast?
  • Is there an alternative to histogram manipulation?
  • How can we improve the contrast of a multispectral image?
  • What is principal component analysis?
  • What is the relationship of the Karhunen-Loève transformation discussed here and the one discussed in Chapter 3?
  • How can we perform principal component analysis?
  • What are the advantages of using principal components to express an image?
  • What are the disadvantages of principal component analysis?
  • Some of the images with enhanced contrast appear very noisy. Can we do anything about that?
  • What are the types of noise present in an image?
  • What is a rank order filter?
  • What is median filtering?
  • What if the noise in an image is not impulse?
  • Why does lowpass filtering reduce noise?
  • What if we are interested in the high frequencies of an image?
  • What is the ideal highpass filter?
  • How can we improve an image which suffers from variable illumination?
  • Can any of the objectives of image enhancement be achieved by the linear methods we learned in Chapter 2?
  • What is the "take home" message of this chapter?
  • 5. Two-Dimensional Filters
  • What is this chapter about?
  • How do we define a 2D filter?
  • How are the system function and the unit sample response of the filter related?
  • Why are we interested in the filter function in the real domain?
  • Are there any conditions which h(k,l) must fulfil so that it can be used as a convolution filter?
  • What is the relationship between the 1D and the 2D ideal lowpass filters?
  • How can we implement a filter of infinite extent?
  • How is the z-transform of a digital 1D filter defined?
  • Why do we use z-transforms?
  • How is the z-transform defined in 2D?
  • Is there any fundamental difference between 1D and 2D recursive filters?
  • How do we know that a filter does not amplify noise?
  • Is there an alternative to using infinite impulse response filters?
  • Why do we need approximation theory?
  • How do we know how good an approximate filter is?
  • What is the best approximation to an ideal given system function?
  • Why do we judge an approximation according to the Chebyshev norm instead of the square error?
  • How can we obtain an approximation to a system function?
  • What is windowing?
  • What is wrong with windowing?
  • How can we improve the result of the windowing process?
  • Can we make use of the windowing functions that have been developed for 1D signals, to define a windowing function for images?
  • What is the formal definition of the approximation problem we are trying to solve?
  • What is linear programming?
  • How can we formulate the filter design problem as a linear programming problem?
  • Is there any way by which we can reduce the computational intensity of the linear programming solution?
  • What is the philosophy of the iterative approach?
  • Are there any algorithms that work by decreasing the upper limit of the fitting error?
  • How does the maximizing algorithm work?
  • What is a limiting set of equations?
  • What does the La Vallée Poussin theorem say?
  • What is the proof of the La Vallée Poussin theorem?
  • What are the steps of the iterative algorithm?
  • Can we approximate a filter by working fully in the frequency domain?
  • How can we express the system function of a filter at some frequencies as a function of its values at other frequencies?
  • What exactly are we trying to do when we design the filter in the frequency domain only?
  • How can we solve for the unknown values H(k,l)?
  • Does the frequency sampling method yield optimal solutions according to the Chebyshev criterion?
  • What is the "take home" message of this chapter?
  • 6. Image Restoration
  • What is image restoration?
  • What is the difference between image enhancement and image restoration?
  • Why may an image require restoration?
  • How may geometric distortion arise?
  • How can a geometrically distorted image be restored?
  • How do we perform the spatial transformation?
  • Why is grey level interpolation needed?
  • How does the degraded image depend on the undegraded image and the point spread function of a linear degradation process?
  • How does the degraded image depend on the undegraded image and the point spread function of a linear shift invariant degradation process?
  • What form does equation (6.5) take for the case of discrete images?
  • What is the problem of image restoration?
  • How can the problem of image restoration be solved?
  • How can we obtain information on the transfer function H(u, v) of the degradation process?
  • If we know the transfer function of the degradation process, isn't the solution to the problem of image restoration trivial?
  • What happens at points (u, v) where H(u, v) = 0?
  • Will the zeroes of H(u, v) and G(u, v) always coincide?
  • How can we take noise into consideration when writing the linear degradation equation?
  • How can we avoid the amplification of noise?
  • How can we express the problem of image restoration in a formal way?
  • What is the solution of equation (6.37)?
  • Can we find a linear solution to equation (6.37)?
  • What is the linear least mean square error solution of the image restoration problem?
  • Since the original image f(r) is unknown, how can we use equation (6.41) which relies on its cross-spectral density with the degraded image, to derive the filter we need?
  • How can we possibly use equation (6.47) if we know nothing about the statistical properties of the unknown image f(r)?
  • What is the relationship of the Wiener filter (6.47) and the inverse filter of equation (6.25)?
  • Assuming that we know the statistical properties of the unknown image f(r), how can we determine the statistical properties of the noise expressed by Svv(r)?
  • If the degradation process is assumed linear, why don't we solve a system of linear equations to reverse its effect instead of invoking the convolution theorem?
  • Equation (6.76) seems pretty straightforward, why bother with any other approach?
  • Is there any way by which matrix H can be inverted?
  • When is a matrix block circulant?
  • When is a matrix circulant?
  • Why can block circulant matrices be inverted easily?
  • Which are the eigenvalues and the eigenvectors of a circulant matrix?
  • How does the knowledge of the eigenvalues and the eigenvectors of a matrix help in inverting the matrix?
  • How do we know that matrix H that expresses the linear degradation process is block circulant?
  • How can we diagonalize a block circulant matrix?
  • OK, now we know how to overcome the problem of inverting H; however, how can we overcome the extreme sensitivity of equation (6.76) to noise?
  • How can we incorporate the constraint in the inversion of the matrix?
  • What is the relationship between the Wiener filter and the constrained matrix inversion filter?
  • What is the "take home" message of this chapter?
  • 7. Image Segmentation and Edge Detection
  • What is this chapter about?
  • What exactly is the purpose of image segmentation and edge detection?
  • How can we divide an image into uniform regions?
  • What do we mean by "labelling" an image?
  • What can we do if the valley in the histogram is not very sharply defined?
  • How can we minimize the number of misclassified pixels?
  • How can we choose the minimum error threshold?
  • What is the minimum error threshold when object and background pixels are normally distributed?
  • What is the meaning of the two solutions of (7.6)?
  • What are the drawbacks of the minimum error threshold method?
  • Is there any method that does not depend on the availability of models for the distributions of the object and the background pixels?
  • Are there any drawbacks to Otsu's method?
  • How can we threshold images obtained under variable illumination?
  • If we threshold the image according to the histogram of ln f(x, y), are we thresholding it according to the reflectance properties of the imaged surfaces?
  • Since straightforward thresholding methods break down under variable illumination, how can we cope with it?
  • Are there any shortcomings of the thresholding methods?
  • How can we cope with images that contain regions that are not uniform but are perceived as uniform?
  • Are there any segmentation methods that take into consideration the spatial proximity of pixels?
  • How can one choose the seed pixels?
  • How does the split and merge method work?
  • Is it possible to segment an image by considering the dissimilarities between regions, as opposed to considering the similarities between pixels?
  • How do we measure the dissimilarity between neighbouring pixels?
  • What is the smallest possible window we can choose?
  • What happens when the image has noise?
  • How can we choose the weights of a 3 × 3 mask for edge detection?
  • What is the best value of parameter K?
  • In the general case, how do we decide whether a pixel is an edge pixel or not?
  • Are Sobel masks appropriate for all images?
  • How can we choose the weights of the mask if we need a larger mask owing to the presence of significant noise in the image?
  • Can we use the optimal filters for edges to detect lines in an image in an optimal way?
  • What is the fundamental difference between step edges and lines?
  • What is the "take home" message of this chapter?
  • Bibliography
  • Index