Comparisons and Counterexamples in Image Restoration

Alfred S. Carasso, ACMD

As the world moves toward a global information network, the range of applications of Image Processing is expanding rapidly. Factors driving this expansion include the increasing use of imaging systems in scientific, industrial, medical, and commercial environments, and the advent of fields such as Teleradiology, in which an image obtained from a patient in one part of the world is interpreted by a medical specialist in another.

Most imaging systems are imperfect and tend to produce noisy, blurred images. Image Restoration seeks to recover the true image by operating mathematically on the given degraded image. This is a large field of current research, drawing investigators from many disciplines, including mathematics, statistics, physics, astronomy, and computer science, as well as several branches of engineering and medicine. As a result, a diverse variety of approaches has evolved. Many of these approaches are computationally intensive, requiring elaborate hardware and software as well as large amounts of computing time. Image Restoration is fundamentally an ill-posed problem, and sophisticated mathematical and statistical analyses are necessary to solve it.

Iterative techniques are often recommended. One advantage lies in the possibility of incorporating nonlinear constraints, such as positivity, into the solution algorithm. In a series of controlled deblurring experiments, with the exact image known a priori, several such procedures were investigated. The widely used Landweber (or Van Cittert) iteration, supplemented by a positivity constraint, needed some 5000 iterations to converge to a sharp image. This required about 24 hours of computing time on an SGI Indigo. In contrast, the noniterative SECB method recently developed by the author produced a sharper image in less than 20 seconds.
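The positivity-constrained Landweber iteration referred to above can be sketched as follows. This is a minimal illustration, not the code used in the experiments; the blur is assumed known through its frequency response H, and all names are illustrative.

```python
import numpy as np

def landweber_positive(g, H, n_iter=5000, tau=1.0):
    """Landweber iteration with a positivity constraint (illustrative sketch).

    g : observed blurred image (2-D array)
    H : frequency response of the blur, same shape as the FFT of g
    For convergence the step size tau must satisfy 0 < tau < 2 / max|H|^2.
    """
    G = np.fft.fft2(g)
    f = g.copy()                      # initial guess: the blurred image itself
    for _ in range(n_iter):
        F = np.fft.fft2(f)
        # gradient step on ||H f - g||^2, applied in the Fourier domain
        F = F + tau * np.conj(H) * (G - H * F)
        f = np.real(np.fft.ifft2(F))
        f = np.maximum(f, 0.0)        # project onto the positive images
    return f
```

Each pass costs two FFTs, which is why thousands of iterations translate into hours of computing on the hardware of the day.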

Higher-order iterative methods developed by Morris-Richards-Hayes in the late 1980s offer very substantial convergence acceleration. Second-, third-, and fourth-order methods were studied extensively. Typically, such methods converged in two or three dozen iterations. However, the resulting image was invariably unsharp. Apparently, these procedures rapidly restore the low-frequency components of the image but have little effect on the all-important moderate- to high-frequency information. Most importantly, when positivity constraints were added, these higher-order methods converged to the wrong solution. Several examples of misleading artifacts were found.

Wiener filtering is a classical signal processing technique that guarantees best-possible results in a certain probabilistic sense. This method requires a great deal of a priori information, namely, the power spectra of both the noise and the unknown true image. For a 512 x 512 image, this represents over 500,000 numbers that must be known a priori. In practice, such information is not available, and educated guesses for these power spectra must be used. (In that case, good results are no longer guaranteed.) In a controlled experiment with synthetically added noise, the required true power spectra were calculated and fed into the Wiener algorithm, resulting in an excellent image. However, that image differed insignificantly from the SECB restoration, which requires only 3 a priori numbers. Moreover, slight perturbations of the true noise spectrum, simulating the educated guesses that might be used in practice, produced substantially poorer Wiener restorations.
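The Wiener restoration step itself is a single frequency-domain filter once the two power spectra are in hand; the difficulty discussed above is supplying them. A minimal sketch, assuming a known blur frequency response H and given spectra S_f and S_n (names illustrative):

```python
import numpy as np

def wiener_restore(g, H, S_f, S_n):
    """Classical Wiener deconvolution (illustrative sketch).

    g   : observed degraded image (2-D array)
    H   : frequency response of the blur, same shape as the FFT of g
    S_f : power spectrum of the (unknown) true image -- needed a priori
    S_n : power spectrum of the noise                -- needed a priori
    """
    G = np.fft.fft2(g)
    # Wiener filter: conj(H) S_f / (|H|^2 S_f + S_n) at each frequency
    W = np.conj(H) * S_f / (np.abs(H) ** 2 * S_f + S_n)
    return np.real(np.fft.ifft2(W * G))
```

The filter W must be specified at every frequency, which is why two full power spectra, several hundred thousand numbers for a 512 x 512 image, are needed, and why errors in the guessed spectra propagate directly into the restoration.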

Little systematic evaluation of the large variety of approaches to Image Restoration has been made in the literature. There is an urgent need for a sober assessment of the improvement in image quality claimed for many of these techniques, versus the attendant computational cost. The SECB procedure provides a valuable frame of reference in such studies. Future comparisons should focus on probabilistic approaches such as the maximum likelihood and maximum entropy methods. The ultimate aim is to solidify ACMD's knowledge base in this important area of information technology.


