MEDICAL IMAGE PROCESSING

Instructor: Prof. D. Cavouras, Electronic Engineer, B.Sc., M.Sc., Ph.D. ([email protected])

Table of contents

I. INTRODUCTION
  1.1. The Computerized Imaging System
  1.2. Image Display Process
  1.3. The Quality of the Medical Image
  1.4. The Discrete Fourier Transform for image processing
    1.4.1. Properties of the DFT
II. IMAGE ENHANCEMENT
  2.1. Grey Scale Manipulation Techniques
    2.1.1. Windowing Techniques
    2.1.2. Histogram Modification Techniques
  2.2. Time Domain Image Matrix Manipulation Techniques
    2.2.1. Image Smoothing
    2.2.2. Image Sharpening
  2.3. Frequency Domain Image Enhancement
    2.3.1. Ideal Filters
    2.3.2. Butterworth Filters
    2.3.3. Exponential Filters
  2.4. Time against frequency domain image enhancement
III. IMAGE RESTORATION
  3.2. Wiener Filter
  3.3. Power Spectrum Equalization or Homomorphic Filter
  3.4. Generalized Wiener Filters
IV. TOMOGRAPHIC IMAGE RECONSTRUCTION ALGORITHMS
  4.1. The Fourier Reconstruction Algorithm
  4.2. Convolution/Filter Back-Projection Algorithm
REFERENCES
COMPUTER PROGRAMS
  Program 1 (and graphics version)
  Program 2 (and graphics version)
  Program 3 (and graphics version)
  Program 4 (and graphics version)
  Program 5 (and graphics version)
  Program 6 (and graphics version)
  Program 7 (graphics version)


I. INTRODUCTION.

1.1. The Computerized Imaging System.
Figure (1) shows a block diagram of the basic components of a typical imaging system whose operation is controlled by a dedicated computer. The operation of such a system may be divided into four principal categories: acquisition, digitization, processing, and display. For the first operation, the data acquisition device may be the scintillation camera, the gantry of a CT or MR unit, the probe of the ultrasound unit, or the image intensifier of the DSA unit, to mention the most important computerized imaging systems. These are the transducers that transform the various types of radiation into analogue electric currents prior to entering the host computer.

[Figure 1: Block diagram of a computerized imaging system: acquisition interface, RAM, mass storage, dialogue screen, VDU, and printer-plotter/laser copier.]
The second operation, the digitization process, transforms the analogue signals into digital form suitable for input to the computer system. This function is performed by the analogue-to-digital converters (ADCs) and is part of the acquisition interface, which connects the acquisition device to the host computer. The design of both the acquisition device and the interface varies amongst the various imaging modalities, but they share a common purpose: transforming the radiation coming from (or through) the patient's body into a two-dimensional digital signal, the so-called digital image, and storing it in the computer memory. The third operation, the data processing function, plays a major part in the digital image formation process; it is accomplished by the computer itself and by specialized hardware built to speed up processing, such as array processors. The main parts of the computer system are shown in Figure (2).


[Figure 2: Main parts of the computer system: CPU, mass storage, VDU, bus, detectors/ADCs, and image processor.]
The result of the acquisition-digitization-processing functions is the digital image (used here in its discrete form x(n1,n2), by analogy to the discrete signal x(n)), which in fact refers to a (discrete) matrix whose row n2 and column n1 indices identify a point in the image. The corresponding matrix element value x(n1,n2) is the result of the interaction of the radiation with tissue, and it is displayed on a monitor as a brightness level or grey level. That point is usually called a pixel (picture element).

1.2. Image Display Process.
As shown in Figure (3), the digital image is stored in the RAM memory of the CPU as a two-dimensional array x(n1,n2), or image matrix, whose dimensions may vary from 32x32 up to 1024x1024 depending on the modality used. The picture displayed on the monitor (see Figure (3)) consists of bright points, whose brightness is proportional to the value held in a corresponding position (byte or word) in the image matrix of the CPU, which, in turn, is proportional to the radiation intensity that strikes the imaging system's detector.

[Figure 3: From detected radiation to displayed brightness through the acquisition interface; the detected flux, the stored matrix value, and the displayed intensity are proportional: Φi,j ∝ Xi,j ∝ Ii,j.]


The visual display unit (VDU). As shown in Figure (4), it consists of the controller and the display screen. The VDU controller consists of a RAM memory and a microprocessor unit. The RAM is used to store the picture to be displayed, so that the imaging system's CPU is freed for other tasks, such as receiving new data.

[Figure 4: The VDU: the controller (RAM, microprocessor, DAC) driving the VDU display screen.]
The microprocessor is dedicated to reading each matrix row and, from the value stored in each row position (e.g. a byte), it calculates a DC voltage, which is then applied, through the DAC, across the electron gun of the VDU.

The VDU display screen. It consists of the electron gun, which every 16 msec or less sweeps the whole of the VDU screen in parallel consecutive lines. At each instantaneous position it shoots an electron beam against the phosphor layer of the VDU screen; the electrons attain kinetic energies proportional to the numbers stored in the corresponding bytes of the image matrix. The effect is the emission, for about 16 msec, of visible light of brightness proportional to the kinetic energy of the electron beam. That bright spot is a picture or image element, and it is the building block of the picture (e.g. 128x128 pixels constituting a picture). In this way, it is possible to display a picture stored in the RAM memory of the controller and to keep it there by (refreshing) sweeping the screen every 16 msec or less.

The pixel brightness levels depend on the particular VDU in use, but for most medical imaging systems the brightness levels were, until recently, restricted to 16 levels of brightness, or grey levels, which form the grey scale usually displayed alongside most digital medical images. There are two major reasons for the limited number of available grey levels on the screens of most medical systems. The first is cost; the second is that it has been shown that the trained optical system of the radiologist can distinguish only about 16 (according to others, 80) grey levels in a picture. However, for most medical images (e.g. CT images), the range of values (densities) in the image matrix is far larger (about 3000 for CT images) than the available 16 screen grey levels. Any attempt to depict all image values on the screen at the same time will result in the loss of image contrast, giving a diagnostically poor picture. For this reason, image enhancement techniques called 'grey-scale modification techniques', described in chapter (2), are employed in order to display part of the range of image values (quantization levels) at a time.


1.3. The Quality of the Medical Image.
The quality of the medical image is as good as the clarity of the specific information the observing physician seeks in the image. However, there is a need to define objective image quality features in order to assess the quality of the image produced by an imaging system or by a processing technique. The quality of the medical image depends on, and is assessed by, three parameters: sharpness, contrast, and noise.

Image sharpness concerns mainly the ability to distinguish image detail. Image detail deterioration, or blurring, is mainly due to the imaging system's impulse response, the so-called Point Spread Function (PSF). Image sharpness is assessed by the image spatial resolution. The latter is defined as the ability of the imaging system to distinguish (i.e. clearly display) two small high-contrast dots close to each other. Quantitatively, the spatial resolution is determined by the smallest distance between two distinguishable high-contrast dots, by the magnitude of the FWHM (Full Width at Half Maximum), by the LSF (the number of distinguishable lines per cm), or by the MTF (Modulation Transfer Function). Figure (5) illustrates the meaning of the PSF and shows the distance FWHM, which is the smallest distance by which two point sources must differ in order to be distinguishable. Another way of quantifying image sharpness is the MTF, which in effect is the Fourier Transform of the PSF, shown in Figure 6. It defines the cut-off frequency fco at 5% of its maximum magnitude.

[Figure 5: Two point sources S1, S2 and their PSFs; the FWHM marks the minimum separation at which the two sources remain distinguishable.]


[Figure 6: The MTF obtained as the DFT of the PSF.]

Image contrast concerns the ability to distinguish image detail of low contrast from its surrounding background. In other words, it is the ability of the imaging system to sense small variations in radiation intensity at its detector end and to display that intensity variation. A source of image contrast degradation is the presence of noise. For the assessment of image contrast the term contrast resolution is used; it is defined as the smallest distinguishable intensity difference between a small image area (of specific shape and size) and its background. Image contrast can be quantified by equation (1):

\%contrast = \frac{I_{area} - I_{background}}{I_{background}} \times 100   (1)

Image noise is of a statistical nature and signal dependent but, without great loss of accuracy, it can be considered additive and white. The noise contribution to each pixel of the medical image is not known, but an assessment of the overall noise contribution to the image can be obtained from:

i/ The standard deviation of the pixel intensity in an image area where the signal is relatively constant. Equation (2) gives the formula used for calculating the standard deviation:

SD = \left[ \sum_{i=1}^{N} \frac{(x_i - \mu)^2}{N-1} \right]^{1/2}   (2)

where N is the number of pixels involved, μ is the mean intensity value, and x_i is the pixel value.

ii/ The noise power spectrum, which can be approximately assessed from the image high frequencies, where the signal-to-noise ratio is small. If the image noise is considered white, then the noise power spectrum is constant across the spatial frequency spectrum of the image.

Finally, the overall image quality may be approximately evaluated by equation (3):

Image\_Quality = K \frac{(Sharpness)^2 (Contrast)^2}{Noise\_Power\_Spectrum}   (3)

Formula (3) can be used in the evaluation of various medical imaging systems using a suitable phantom.
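As a rough illustration of equations (1) and (2), the following MATLAB sketch computes the %contrast and the noise standard deviation from two regions of interest. It is a minimal sketch: the file name and the ROI coordinates are hypothetical placeholders, not taken from the text.

%Example: contrast (eq. 1) and noise SD (eq. 2) from two ROIs
img = double(imread('phantom.png'));   % hypothetical test image
roi_area = img(100:110, 100:110);      % small area of interest (assumed position)
roi_bg   = img(10:60, 10:60);          % background where the signal is ~constant
I_area = mean(roi_area(:));
I_bg   = mean(roi_bg(:));
pct_contrast = (I_area - I_bg) / I_bg * 100;    % equation (1)
x  = roi_bg(:);                        % pixels used for the noise estimate
N  = numel(x);
mu = mean(x);
SD = sqrt(sum((x - mu).^2) / (N - 1)); % equation (2)
fprintf('contrast = %.2f%%, noise SD = %.2f\n', pct_contrast, SD);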

1.4. The Discrete Fourier Transform for image processing.
Definition. The 1-d DFT is defined in (1):

Fx(k) = \frac{1}{N} \sum_{n=0}^{N-1} x(n) e^{-j\frac{2\pi}{N}kn}   (1)

where x(n) is the discrete periodic signal in the time/spatial domain, N is the period of x(n) or the length of the sampled signal, and n, k are the time/space and frequency variables. The inverse DFT, or IDFT, is defined in (2):

x(n) = \sum_{k=0}^{N-1} Fx(k) e^{j\frac{2\pi}{N}kn}   (2)

The discrete Fourier pair defined in (1) and (2) may also be written as in (3):

Fx(k) = \frac{1}{N} \sum_{n=0}^{N-1} x(n) W^{-kn}   (3a)

x(n) = \sum_{k=0}^{N-1} Fx(k) W^{kn}   (3b)

where W = e^{j\frac{2\pi}{N}} = \cos\left(\frac{2\pi}{N}\right) + j\sin\left(\frac{2\pi}{N}\right) from Euler's relations.

Since Fx(k) is complex, we have

Fx(k) = |Fx(k)| e^{j\varphi_x(k)} = Rx(k) + jIx(k)   (4)

where

|Fx(k)| = \left( [Rx(k)]^2 + [Ix(k)]^2 \right)^{1/2}   ... amplitude

\varphi_x(k) = \tan^{-1}\left( \frac{Ix(k)}{Rx(k)} \right)   ... phase

Px(k) = |Fx(k)|^2   ... power

Relations (4) are used to generate the amplitude, phase, and power (frequency) spectra of the discrete signal x(n).
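To tie relations (1)-(4) to practice, the following MATLAB sketch generates the amplitude, phase, and power spectra of a simple two-tone test signal (the signal itself is an arbitrary illustration). Note that MATLAB's fft omits the 1/N factor of definition (1), so it is applied explicitly.

%Example: amplitude, phase, and power spectra per relations (4)
N = 64;
n = 0:N-1;
x = sin(2*pi*4*n/N) + 0.5*sin(2*pi*10*n/N);   % arbitrary test signal
Fx = fft(x) / N;                 % definition (1) includes the 1/N factor
amplitude = abs(Fx);             % |Fx(k)|
phase     = angle(Fx);           % phi(k) = arctan(Ix/Rx)
power     = amplitude.^2;        % Px(k) = |Fx(k)|^2
subplot(3,1,1); stem(n, amplitude); title('Amplitude spectrum');
subplot(3,1,2); stem(n, phase);     title('Phase spectrum');
subplot(3,1,3); stem(n, power);     title('Power spectrum');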

The 2-dimensional DFT. For an N1xN2 image matrix, the 2-d DFT pair is defined in (5), by analogy to (1) and (2):

Fx(k_1,k_2) = \frac{1}{N_1 N_2} \sum_{n_2=0}^{N_2-1} \left[ \sum_{n_1=0}^{N_1-1} x(n_1,n_2) W_1^{-k_1 n_1} \right] W_2^{-k_2 n_2}   (5a)

x(n_1,n_2) = \sum_{k_2=0}^{N_2-1} \left[ \sum_{k_1=0}^{N_1-1} Fx(k_1,k_2) W_1^{k_1 n_1} \right] W_2^{k_2 n_2}   (5b)

where W_1 = e^{j\frac{2\pi}{N_1}} and W_2 = e^{j\frac{2\pi}{N_2}}. From (5a) we get

Fx(k_1,k_2) = \frac{1}{N_2} \sum_{n_2=0}^{N_2-1} \left[ \frac{1}{N_1} \sum_{n_1=0}^{N_1-1} x(n_1,n_2) W^{-k_1 n_1} \right] W^{-k_2 n_2}

or

Fx(k_1,k_2) = \frac{1}{N_2} \sum_{n_2=0}^{N_2-1} Fx'(k_1,n_2) W^{-k_2 n_2}   (6a)

where

Fx'(k_1,n_2) = \frac{1}{N_1} \sum_{n_1=0}^{N_1-1} x(n_1,n_2) W^{-k_1 n_1}   (6b)

In (6b), n2 is considered constant and n1 is the variable. It is obvious that (6b) is the 1-dimensional DFT of image column n2. It follows that the first step in calculating the 2-dimensional DFT of an N1xN2 image is to form a new N1xN2 complex matrix of the 1-dimensional DFTs of the columns of the image matrix. In relation (6a), k1 is considered constant and n2 is the variable; (6a) is the 1-d DFT of the complex signal Fx'(k1,n2), which is row k1 of the new complex matrix found in the previous step by means of relation (6b). Thus, the second and final step in calculating the 2-d DFT of an N1xN2 image matrix is to take the 1-d DFT of the rows of the complex matrix found in step 1 and store the results in the final N1xN2 complex matrix. A similar procedure is followed for the computation of the 2-d IDFT. The Fourier Transform of an image x(n1,n2) is usually computed using the Fast Fourier Transform (FFT), as sketched below.
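The two-step procedure can be written directly in MATLAB. This sketch keeps the 1/(N1 N2) normalization of definition (5a) and checks the result against the built-in fft2, which omits that factor; the test matrix is arbitrary.

%Example: 2-d DFT as 1-d DFTs of columns (eq. 6b), then of rows (eq. 6a)
x = rand(8, 8);                          % arbitrary test "image"
[N1, N2] = size(x);
Fp = zeros(N1, N2);                      % step 1: column transforms, eq. (6b)
for n2 = 1:N2
    Fp(:, n2) = fft(x(:, n2)) / N1;
end
Fx = zeros(N1, N2);                      % step 2: row transforms, eq. (6a)
for k1 = 1:N1
    Fx(k1, :) = fft(Fp(k1, :)) / N2;
end
err = max(max(abs(Fx - fft2(x)/(N1*N2))))   % ~1e-16: agrees with fft2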

1.4.1. Properties of the DFT

Shift theorem.
1-d case: if 1-d DFT[x(n)] = Fx(k), then

1-d DFT[x(n-h)] = W^{-kh} Fx(k)   (1)

also, if 1-d IDFT[Fx(k)] = x(n), then

1-d IDFT[Fx(k-h)] = W^{kh} x(n)   (2)


2-d case: if 2-d DFT[x(n1,n2)] = Fx(k1,k2), then

2-d DFT[x(n_1-h_1, n_2-h_2)] = W^{-(k_1 h_1 + k_2 h_2)} Fx(k_1,k_2)   (3)

also, if 2-d IDFT[Fx(k1,k2)] = x(n1,n2), then

2-d IDFT[Fx(k_1-h_1, k_2-h_2)] = W^{(n_1 h_1 + n_2 h_2)} x(n_1,n_2)   (4)

Now, if h1 = h2 = N/2,

W^{(n_1 h_1 + n_2 h_2)} x(n_1,n_2) = (-1)^{n_1+n_2} x(n_1,n_2) = 2\text{-d IDFT}\left[ Fx\left(k_1 - \frac{N}{2}, k_2 - \frac{N}{2}\right) \right]   (5)

Thus, if we multiply the image matrix x(n1,n2) by the factor (-1)^{n1+n2}, we shift the origin in the frequency domain to the point (N/2,N/2). This is often useful when we attempt to display the amplitude or power spectrum of an image.
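A quick MATLAB check of this property; the built-in fftshift achieves the same centring (for even N) by swapping the quadrants of the spectrum. The test image is arbitrary.

%Example: centring the spectrum at (N/2,N/2) via the (-1)^(n1+n2) factor
x = rand(128);                            % arbitrary even-sized test image
[n1, n2] = ndgrid(0:size(x,1)-1, 0:size(x,2)-1);
Fc = fft2(x .* (-1).^(n1 + n2));          % origin moved to the centre, eq. (5)
err = max(max(abs(Fc - fftshift(fft2(x)))))   % ~1e-10: same as fftshift
imagesc(log(1 + abs(Fc))); colormap gray; % display the centred spectrum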

Periodicity.
1-d case: In the definition of the DFT (relation 1), the discrete signal x(n) was considered periodic with period equal to the length of the signal, i.e. x(n) = x(n+N), where n = 0,1,2,...,N-1 and N is the period. The DFT of x(n), that is Fx(k) with k = 0,1,2,...,N-1, is a periodic complex signal with period N, equal to the period N of the signal x(n). Also, one period of the signal x(n), or of the complex signal Fx(k), is enough to completely specify the DFT and the IDFT respectively.
2-d case: Similar considerations hold for the 2-d case, that is:

Fx(k_1,k_2) = Fx(k_1+N,k_2) = Fx(k_1,k_2+N) = Fx(k_1+N,k_2+N)   (6)

Complex Conjugate Theorem.
1-d case: For the DFT, the complex conjugate theorem states:

Fx\left(\frac{N}{2} + L\right) = Fx^{*}\left(\frac{N}{2} - L\right)   (7)

where L = 0,1,2,...,\frac{N}{2}-1 and Fx^{*}\left(\frac{N}{2} - L\right) is the complex conjugate. In other words, we only need to calculate half the DFT of a signal x(n), since the other half is the complex conjugate of the first N/2 values of Fx(k). Taking into consideration the fact that the magnitudes of Fx(k) and its complex conjugate Fx^{*}(k) are equal,


\left| Fx\left(\frac{N}{2} + L\right) \right| = \left| Fx^{*}\left(\frac{N}{2} - L\right) \right|   (8)

we can then demonstrate the symmetry property of Fx(k) given in (8). Usually, the way to display the frequency spectrum of a signal is to shift it by N/2 in the frequency domain, i.e. to take the DFT of (-1)^n x(n), as explained in the shift theorem paragraph.

2-d case: Similarly to the 1-d case, we get the 2-d complex conjugate theorem:

Fx\left(\frac{N}{2} + L_1, \frac{N}{2} + L_2\right) = Fx^{*}\left(\frac{N}{2} - L_1, \frac{N}{2} - L_2\right)   (9)

where L_1, L_2 = 0,1,2,...,\frac{N}{2}-1.

Also, for displaying the Fourier spectrum of a picture we use relation (10):

Fx(k_1,k_2) = 2\text{-d DFT}\left[ (-1)^{n_1+n_2} x(n_1,n_2) \right]   (10)

Figure (7) shows a picture and its Fourier spectrum displaced to (N/2,N/2).

[Figure 7]


Convolution/Correlation Theorem.
1-d case: The discrete convolution between two periodic signals x1(n) and x2(n) is defined as in (11):

z(n) = x_1(n) * x_2(n) = \frac{1}{N} \sum_{m=0}^{N-1} x_1(m) x_2(n-m)   (11)

where '*' denotes the convolution operation. Similarly, the discrete correlation between two periodic signals is defined as

z'(n) = x_1(n) \otimes x_2(n) = \frac{1}{N} \sum_{m=0}^{N-1} x_1(m) x_2(m+n)   (12)

where '\otimes' denotes the correlation operation. It can be proved that, taking the DFT of (11) and (12),

DFT[z(n)] = DFT[x_1(n) * x_2(n)] = Fx_1(k) Fx_2(k) = Fz(k)   (13)

DFT[z'(n)] = DFT[x_1(n) \otimes x_2(n)] = Fx_1^{*}(k) Fx_2(k) = Fz'(k)   (14)

where the term Fx_1^{*}(k) denotes the complex conjugate of Fx_1(k). Equations (13) and (14) indicate that we can obtain the convolution/correlation of two signals by taking the inverse DFT of their product in the frequency domain (in the case of correlation, the complex conjugate of the DFT of the first signal is used). Equations (13) and (14) are referred to as the convolution/correlation theorems.

Wrap-around error. This is an error associated with the convolution of two discrete signals x1(n) and x2(n) of length N (period). Their convolution is also a periodic signal of length N. It has been shown that, unless we pad the signals with zeros to a length M >= 2N-1, the periods of the convolution will overlap, giving rise to false components in the overlapping areas. However, in the case of medical images this error is not serious, and it can be overlooked to a good approximation if we take into consideration the increase in the number of calculations associated with doubling the size of the image matrices.

2-d case: Analogous to equations (13) and (14) are the 2-d versions of the convolution/correlation theorems:

z(n_1,n_2) = x_1(n_1,n_2) * x_2(n_1,n_2) = \frac{1}{N^2} \sum_{m=0}^{N-1} \sum_{l=0}^{N-1} x_1(m,l) x_2(n_1-m, n_2-l)   (15a)

z(n_1,n_2) = 2\text{-d IDFT}\left[ Fx_1(k_1,k_2) Fx_2(k_1,k_2) \right]   (15b)

z'(n_1,n_2) = x_1(n_1,n_2) \otimes x_2(n_1,n_2) = \frac{1}{N^2} \sum_{m=0}^{N-1} \sum_{l=0}^{N-1} x_1(m,l) x_2(m+n_1, l+n_2)   (16a)

z'(n_1,n_2) = 2\text{-d IDFT}\left[ Fx_1^{*}(k_1,k_2) Fx_2(k_1,k_2) \right]   (16b)

II. IMAGE ENHANCEMENT.

The purpose of the image enhancement techniques is to modify the image so that the resulting image is of superior quality to the eye of the observing physician. There are several image enhancement techniques, and their employment depends on the specific application problem and on the observer himself. They may be distinguished into two major categories: i/ grey scale manipulation, for increasing image contrast, and ii/ image matrix manipulation, for decreasing image noise and blur. Furthermore, image enhancement by image matrix modification can be achieved in either the spatial or the frequency domain, since the convolution process can be carried out in either domain. For this reason, in this text, image enhancement is examined in both domains. First, we examine the grey scale manipulation techniques.

2.1. Grey Scale Manipulation Techniques.
These are image enhancement techniques which, although simple and easy to implement, provide impressive results. They attempt to deal with the problem that in most medical images the range of values of the image matrix (quantization levels) is far larger than the available range of intensity or grey levels. Any attempt to display the whole of the image matrix range will result in the loss of image contrast. There are two types of grey scale manipulation techniques: a/ the windowing techniques and b/ the histogram modification techniques, both widely used in computerized Medical Imaging Systems.

2.1.1. Windowing Techniques.
In the windowing techniques, only part of the range of the image matrix values is displayed with the available grey scale at a time. The term "window" refers to the section of the image value range that is displayed, which of course can "slide" across the whole image value range, displaying different parts at a time. The "windowing" principle is shown in Figure (1).

[Figure 1(a): The windowing principle: display method z1 maps the whole value range M0..M255 onto the grey levels I0..I15, while method z2 maps only the sub-range [Ma, Mb] around Mm onto the full grey scale.]

[Figure 1(b): Mapping of the image-value window [Va, Vb] onto the grey levels q1..q16; an image value Vm inside the window maps to grey level qm.]

Let the image (quantization) values be 256 (2^8), from V1 to V256, and let the VDU grey levels be 16 (2^4), from q1 to q16, represented by the horizontal and vertical axes of Figure (1) respectively. One way to display the image (display method z1) is to assign the whole range of 256 values to the 16 grey levels. By this method, each one of the 16 grey levels represents 16 consecutive (256/16) image values on the screen, providing a low-contrast image. The other way (display method z2) is to choose a range of values [Va,Vb], such as 64 values, and display that range using the 16 grey levels. In the latter case, there will be 4 consecutive (instead of 16) image values assigned to each grey level; thus, part of the image is displayed highly contrasted. The algorithm that calculates the grey level qm to which each value Vm of the selected sub-range [Va,Vb] is assigned is given below (a minimal implementation is sketched after the remarks that follow). From the similar triangles in Figure (1b) it is easily deduced that:

\frac{q_{16} - q_1}{V_b - V_a} = \frac{q_m - q_1}{V_m - V_a}

or

q_m = \frac{q_{16} - q_1}{V_b - V_a}(V_m - V_a) + q_1, for V_a \le V_m \le V_b

q_m = q_1, for V_m < V_a; q_m = q_{16}, for V_m > V_b   (1)

Remarks: 1/ [Va,Vb] is called the window and Vb-Va is the window width, usually chosen by the user interactively as the first parameter for displaying a medical image. 2/ The mean value (Vb+Va)/2, or window level (or simply level), is the second parameter chosen by the user interactively; it determines the position (value) of the image-value sub-range, to the left and right of which the window will be equally stretched.

Figure (2) shows the same image displayed at a given window level.

Prof. D. Cavouras, Ph.D.

14

Figure 2.

3/ Following similar reasoning, in some cases we may have two windows at different levels, as shown in Figure (3a). This is particularly useful if the system's VDU has a large number of grey levels (e.g. 64 or 256). In that case, various parts of an image, such as the lungs and the spine of a CT slice, can be displayed at the same time and with good contrast.

[Figure 3(a): Double window: two windows of widths w1 and w2 centred at levels c1 and c2.]

Figure 3(b): Double window and histograms.

4/ The way image values are assigned to grey levels does not always have to be linear; it may follow other patterns, as in Figure (3c).

[Figure 3c: Non-linear windowing functions (curves z1, z2, z3) mapping image values M to intensities I.]
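The sketch promised above: a minimal MATLAB implementation of the linear windowing of equation (1). The file name, window level, and window width are hypothetical placeholders.

%Example: linear windowing, eq. (1) of section 2.1.1
img = double(imread('ct_slice.png'));    % hypothetical image
level = 50; width = 400;                 % window level and width (assumed)
Va = level - width/2;  Vb = level + width/2;
q1 = 0;  q16 = 15;                       % 16 grey levels, 0..15
q = (q16 - q1)/(Vb - Va) .* (img - Va) + q1;   % equation (1)
q(img < Va) = q1;                        % values below the window saturate
q(img > Vb) = q16;                       % values above the window saturate
imagesc(round(q)); colormap(gray(16)); axis image;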


2.1.2. Histogram Modification Techniques.
The histogram is a graph describing the number of pixels assigned to each grey level of a digital image. Each displayed medical image has a unique histogram, and the histogram modification techniques attempt to enhance image contrast by altering the image histogram. The new histogram may be that of a known picture, or a histogram with an equal number of pixels per grey level. The latter case, which is the most popular in medical systems, is called the histogram equalization procedure or technique.

Histogram equalization algorithm. Let hi be the histogram of an image digitized to 256 levels, as shown in Table 1, where we have assumed that the VDU screen can provide 8 distinct grey levels and that the image under consideration is chosen, for the sake of simplicity, to be part of a larger image matrix. Let ni be the new equalized image histogram, where an equal number of pixels is assigned to each grey level.

Table 1: Histogram equalization example.

Value range (look-up table of display routine):  00-31  32-63  64-95  96-127  128-159  160-191  192-224  225-256
i (grey level):                                   0      1      2      3       4        5        6        7
hi (original histogram):                          2      4      12     14      14       12       4        2
ni (equalized histogram):                         8      8      8      8       8        8        8        8

Algorithm description: As a general principle, we form ni by borrowing pixels from hi as follows:
1/ n0 = h0 + h1 + 2h2, where 2h2 denotes that we choose 2 pixels from the twelve pixels of h2 by one of the following principles: a/ we choose the pixels whose corresponding matrix value is nearest to the border value 63 of h1 (see Table 1); b/ we choose 2 pixels at random. All chosen pixels of h0, h1, and 2h2 are now assigned to grey level 0; thus, in effect, those image pixels are squashed or compressed under the same grey level.
2/ n1 = 8h2, that is, 8 of the remaining 10 pixels of h2 are used to form n1, and the last 2 pixels will be used for the formation of n2.
3/ n2 = 2h2 + 6h3, and so on.
We see that the 12 points of h2 have been spread or stretched amongst 3 grey levels of the equalized histogram, i.e. n0, n1, n2. This means that we can now distinguish those points on the screen, i.e. the image contrast has been improved. Figure (4) shows the result of applying histogram equalization techniques.


Figure 4

CDF-based histogram modification

CDF: Cumulative Distribution Function. The new grey-level assignment is chosen so that

\sum_{k=0}^{i} h(k) = \sum_{k=0}^{i} q(k), \quad i = 0,1,2,...,7

Grey level i | h(i)  | CDFh(i) | q(i) | CDFq(i) | Σ/Δ
0            | 1(0)  | 1       | 16   | 16      | Σ
1            | 7(0)  | 8       | 16   | 32      | Σ
2            | 21(1) | 29      | 16   | 48      | Δ
3            | 35(3) | 64      | 16   | 64      | Δ
4            | 35(5) | 99      | 16   | 80      | Δ
5            | 21(6) | 120     | 16   | 96      | Δ
6            | 7(7)  | 127     | 16   | 112     | Σ
7            | 1(7)  | 128     | 16   | 128     | Σ

(In the h(i) column, the value in parentheses is the new grey level to which those pixels are mapped.)

t = (int)\left( \frac{CDFh(i)}{16} - 1 \right) for an approximate but fast calculation. The result is shown in Figure 5.

Figure 5
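The CDF rule and the approximate formula above can be sketched in MATLAB; here for an image quantized to 8 grey levels with 16 pixels per level in the equalized histogram, as in the table (the test image itself is randomly generated, so its histogram will differ from the table's).

%Example: CDF-based histogram equalization with t = int(CDFh(i)/16 - 1)
img  = randi([0 7], 8, 16);      % hypothetical 128-pixel image, levels 0..7
h    = histc(img(:), 0:7);       % original histogram h(i)
CDFh = cumsum(h);                % cumulative distribution of h
t    = floor(CDFh / 16) - 1;     % approximate target level per grey level i
t    = max(t, 0);                % clamp so that level 0 is not negative
eq_img = t(img + 1);             % remap each pixel through the look-up table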


2.2. Time Domain Image Matrix Manipulation Techniques.
These techniques attempt to reduce one of the sources of image degradation described by the general image degradation model, given by equation (1):

x^{*}(n_1,n_2) = x(n_1,n_2) * h(n_1,n_2) + d(n_1,n_2)   (1)

1/ The convolution term x(n1,n2)*h(n1,n2) usually suppresses the medium-to-high frequency image content, resulting in the blurring of image boundaries and edges and in considerable loss of image detail (h(n1,n2) is the same as the Point Spread Function, or PSF, of the system).
2/ The noise degradation d(n1,n2) is of a statistical nature and, to a good approximation, is considered additive and signal independent (although signal dependency may sometimes be severe). Noise resides mostly in the high-frequency spectrum of the medical image, and when its magnitude is larger than the difference in intensity of two neighbouring areas (e.g. a metastatic area against normal tissue in the liver), distinguishing those two areas is difficult.

2.2.1. Image Smoothing.
Image smoothing concerns image processing techniques used for suppressing noise, which for various reasons (e.g. sampling, transmission, the statistical nature of radiation) resides in the image. Usually, medical image noise is statistical and occupies mainly the image's high-frequency band. Thus, techniques for image smoothing are in effect high-frequency suppression techniques, or low-pass filtering techniques. The obvious repercussion is the suppression of useful high-frequency image information, resulting in image boundary blurring and image detail degradation. Thus, caution should be exercised as to the amount of image smoothing.

Local Averaging.
1-dimensional case: Figure (1) presents the degradation process for signal x(n) passing through system h(n) with additive noise d(n). The output signal y(n) is described by equation (1):

y(n) = x(n) * h(n) + d(n) = x'(n) + d(n), where x'(n) = x(n) * h(n)   (1)

[Figure 1: Degradation model: the image x(n1,n2) is convolved with the PSF h(n1,n2), and the noise d(n1,n2) is added, giving the degraded image y(n1,n2).]

Let hf(n) be the digital filter impulse response; the degraded signal y(n) is then filtered as shown below. If

h_f(n) = \frac{1}{N}[1,1,1,...,1] \quad (N terms)

then the filtered output y'(n) is given by:

y'(n) = y(n) * h_f(n) = [x'(n) + d(n)] * h_f(n)   (2)

= \sum_{k=0}^{N-1} [x'(n-k) + d(n-k)] h_f(k)   (3)

Now, for N = 2, h_f(n) = \frac{1}{2}[1,1], and from (3):

y'(n) = \frac{1}{2}(x'(n) + x'(n-1)) + \frac{1}{2}(d(n) + d(n-1))   (4)

Similarly, for N = M:

y'(n) = \frac{1}{M}(x'(n) + x'(n-1) + ... + x'(n-M)) + \frac{1}{M}(d(n) + d(n-1) + ... + d(n-M))   (5)

Now, if the noise is considered white, we would expect its mean value to tend to zero; thus, the larger the M in (5), the closer to zero is the second term of (5). Therefore, when we want to filter out the noise from signal x(n), we replace each signal point by the average of M neighbouring points. Caution should be exercised as to the number of points M used, since a large M would smooth the image severely, suppressing the high-frequency signal content. Usually, equation (5) is used as in (6):

y'(n) = \frac{1}{M} \sum_{k=-\frac{M-1}{2}}^{\frac{M-1}{2}} x(n-k)   (6)

where M is odd, i.e. M = 3, 5, 7, etc. The filter is sometimes called a moving average, since it is applied locally by moving across the signal.
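Equation (6) in MATLAB; conv with the 'same' option keeps the original signal length and centres the M-point window. The noisy ramp is an arbitrary test signal.

%Example: M-point moving average, eq. (6)
n  = 0:99;
x  = n/10 + randn(1, 100);               % signal plus white noise
M  = 5;                                  % odd number of points
hf = ones(1, M) / M;                     % h_f(n) = (1/M)[1,1,...,1]
y  = conv(x, hf, 'same');                % centred local average
plot(n, x, n, y); legend('noisy', 'smoothed');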


Frequency response of 2-point averaging.
It is interesting to examine the frequency response of the local averaging filter, to prove that it is a low-pass filter. For this, we examine the simplest case, the 2-point average described by (7):

y(n) = \frac{1}{2}(x(n) + x(n-1)) + \frac{1}{2}(d(n) + d(n-1))   (7)

where n = 0,1,2,...,N-1. If the noise term in (7) is considered negligible, then

y(n) = \frac{1}{2}(x(n) + x(n-1))   (8)

Taking the DFT of (8):

Fy(k) = DFT\left[\frac{x(n)}{2}\right] + DFT\left[\frac{x(n-1)}{2}\right] = \frac{Fx(k)}{2} + W^{-k}\frac{Fx(k)}{2} = \frac{1}{2}\left(1 + e^{-j\frac{2\pi}{N}k}\right) Fx(k)   (9)

which means that Fy(k) = Fh(k) Fx(k), where

Fh(k) = \frac{1}{2}\left(1 + e^{-j\frac{2\pi}{N}k}\right)   (10)

Using the trigonometric relations (11),

\cos(\varphi) = 2\cos^2\left(\frac{\varphi}{2}\right) - 1, \quad \sin(\varphi) = 2\sin\left(\frac{\varphi}{2}\right)\cos\left(\frac{\varphi}{2}\right)   (11)

we get

Fh(k) = \cos\left(\frac{\pi k}{N}\right) e^{-j\frac{\pi k}{N}}   (12)

from where it is obvious that

|Fh(k)| = \left|\cos\left(\frac{\pi k}{N}\right)\right| \quad and \quad \varphi(k) = -\frac{\pi k}{N}   (13)

which is a low-pass filter.


2-dimensional case, or the Local Averaging Filter. Let the image degradation be described by (14):

y(n_1,n_2) = x(n_1,n_2) * h(n_1,n_2) + d(n_1,n_2)   (14)

Let, also, the filtering procedure be described by (15):

y'(n_1,n_2) = y(n_1,n_2) * h_f(n_1,n_2)   (15)

Now, let hf(n1,n2) be given by (16), an M x M all-ones mask normalized by the number of its points (e.g. 1/9 for M = 3):

h_f(n_1,n_2) = \frac{1}{M^2} \times
1 1 .... 1
1 1 .... 1
..........
1 1 .... 1
1 1 .... 1   (16)

Then equation (15) can be written as in (17):

y'(n_1,n_2) = y(n_1,n_2) * h_f(n_1,n_2) = \sum_{l=-\frac{M-1}{2}}^{\frac{M-1}{2}} \sum_{k=-\frac{M-1}{2}}^{\frac{M-1}{2}} y(n_1-l, n_2-k) h_f(l,k)   (17)

Let M = 3; equation (17) then says that a 3x3 matrix (or mask) moves over the entire image matrix, locally performing the 2-d convolution of (17). This is shown in Figure (3):

O7 O8 O1
O6 x  O2
O5 O4 O3

Figure 3


For clarity, the points of the image matrix around the current point x in Figure (3) have been labelled O1, O2, ..., O8; x is the point whose filtered value y is to be computed. The convolution operation of (17) is then simply described by (18):

y = \frac{1}{9}\left( \sum_{i=1}^{8} O_i + x \right)   (18)

corresponding to the mask

h_f(n_1,n_2) = \frac{1}{9} \times
1 1 1
1 1 1
1 1 1

There are various filtering masks that combine, with different weights, various points around x in Figure (3). The most popular masks are described in (19):

SM1 = \frac{1}{9} \times
1 1 1
1 1 1
1 1 1
Smoothing Mask 1

SM2 = \frac{1}{5} \times
0 1 0
1 1 1
0 1 0
Smoothing Mask 2

SM3 = \frac{1}{10} \times
1 1 1
1 2 1
1 1 1
Smoothing Mask 3

SM4 = \frac{1}{16} \times
1 2 1
2 4 2
1 2 1
Smoothing Mask 4   (19)


Figure (5) shows the smoothing effect of mask SM3 on the original image of Figure (4).

Figure 4

Figure 5
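The masks of (19) are applied with a 2-d convolution; the sketch below shows SM1 and SM3 (the others differ only in the kernel), on a hypothetical test image.

%Example: local-averaging (smoothing) masks of (19) via conv2
img = double(imread('mri_slice.png'));   % hypothetical image
SM1 = ones(3) / 9;                       % Smoothing Mask 1
SM3 = [1 1 1; 1 2 1; 1 1 1] / 10;        % Smoothing Mask 3
y1  = conv2(img, SM1, 'same');           % eq. (17)
y3  = conv2(img, SM3, 'same');
imagesc([img y3]); colormap gray; axis image;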

Selective Local Averaging. As mentioned before, local averaging may reduce image noise, but at the same time it reduces high-frequency image content, i.e. it blurs boundaries, edges, and image detail. To prevent this from happening, conditional local averaging may be introduced. The idea behind conditional, or selective, local averaging is that image points are left unchanged if some condition is satisfied. This methodology is clarified by the following example.

Image noise reduction without boundary or edge blurring. For this, some sort of edge or boundary detector is applied, and local averaging is performed only where such features are not present. Let local averaging be performed by the SM1 mask and equation (17). First, we form the edge-detector condition, which is tested at each point x of the image matrix (see Figure (3)):

if x' = \frac{1}{8}\sum_{i=1}^{8} O_i and |x - x'| \ge T, then an edge is present   (20)

where T is a threshold representing a mean magnitude of intensity change close to the image edges/boundaries. Thus, the selective local averaging algorithm is as in (21):

y = \frac{1}{9}\left( \sum_{i=1}^{8} O_i + x \right), if |x - x'| < T

y = x, if |x - x'| \ge T   (21)
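A vectorized MATLAB sketch of (20)-(21); the threshold T is a hypothetical value that would be tuned per image, and the file name is a placeholder.

%Example: selective local averaging, eq. (20)-(21)
img = double(imread('spect_slice.png')); % hypothetical image
T   = 20;                                % edge threshold (assumed)
k8  = [1 1 1; 1 0 1; 1 1 1] / 8;         % mean of the 8 neighbours: x'
xp  = conv2(img, k8, 'same');
avg = conv2(img, ones(3)/9, 'same');     % plain SM1 average, eq. (18)
edge_present = abs(img - xp) >= T;       % eq. (20)
y = avg;
y(edge_present) = img(edge_present);     % eq. (21): keep x at edges
imagesc(y); colormap gray; axis image;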

Negative values in the resulting image are dealt with by relation (29):

y(n_1,n_2) = \begin{cases} y(n_1,n_2), & y(n_1,n_2) > 0 \\ 0, & y(n_1,n_2) \le 0 \end{cases}   (29)

Figure (8) shows the result of applying the LM1 mask. Again, the edge-enhancement effect and the loss of low-frequency information are obvious. These techniques are mainly used for shape description, in image analysis algorithms, and for boundary detection.


Figure 8

High emphasis filtering. Edge enhancement may be of value in many cases of image processing, but equally important is image sharpening, that is, retaining the low-frequency image information while at the same time sharpening the edges and the overall image detail. This is achieved by the so-called high emphasis filter, which is best demonstrated by the following example.

1-d case: Let x(n) be a signal consisting of a ramp (2 4 6 8 10) and a step (edge: ... 10 10 15 15 ...), as shown below:

Table 1: Sharpening filters example.

Space:   0   1   2   3   4   5   6   7   8   9   10  11
Signal:  0   2   4   6   8   10  10  10  10  10  15  15 ...
Diff.:   2   2   2   2   2   0   0   0   0   5   0   ...
Lapl.:   ..  0   0   0   0   -2  0   0   0   5   -5  ...
High E:  ..  2   4   6   8   12  10  10  10  5   20  ...

where Diff. denotes the differencing filter: Δx(n) = x(n+1) - x(n)   (30)
Lapl. denotes the 1-d Laplacian filter: Δ²x(n) = x(n+1) + x(n-1) - 2x(n)   (31)
High E denotes the 1-d high emphasis filter: y(n) = x(n) - Δ²x(n)   (32)

First, for the sake of demonstration, we apply the differencing filter (relation 30); as can be observed (Diff. row), sharp changes occur near the edges (space intervals [4,5], [9,10], [14,15]), while valuable information, such as the ramp detail, is destroyed, revealing the characteristic effect the differencing filter has on the signal, as described in the previous section. Second, we apply the Laplacian filter (relation 31) to the original signal, and we observe (Lapl. row) that all signal information vanishes (becomes 0) except at the edges, revealing the edge-enhancement character of the filter. Third, the need for a filter which preserves low-frequency information and at the same time enhances edges is satisfied by the high emphasis filter (relation 32), which is simply defined as the difference between the original signal and the Laplacian (High E row). The example shows that, for the high emphasis filter, low-frequency information is retained (space intervals [5,9] and [10,14]), signal detail is preserved (the ramp in the interval [0,4]), and edges are emphasized (intervals [4,5], [9,10], [14,15]).


2-d High Emphasis Filter. Referring to equations (25) and (32), the 2-d high emphasis filter is given by:

y = x - (O_1 + O_2 + O_3 + O_4 - 4x) = 5x - (O_1 + O_2 + O_3 + O_4)   (33)

which is formally described by (34):

y(n_1,n_2) = \sum_{m=-1}^{1} \sum_{l=-1}^{1} x(n_1-m, n_2-l) h(m,l)   (34)

where

h(m,l) =
 0 -1  0
-1  5 -1
 0 -1  0

h(m,l) will be referred to as High Emphasis Mask 1, or HEM1, corresponding to the Laplacian mask LM1. Similarly, the HEM masks described in (35) are the ones corresponding to the LM masks given in (28):

HEM2 =
-1 -1 -1
-1  9 -1
-1 -1 -1
High Emphasis Mask 2

HEM3 =
-1 -2 -1
-2 13 -2
-1 -2 -1
High Emphasis Mask 3

HEM4 =
 1 -2  1
-2  5 -2
 1 -2  1
High Emphasis Mask 4   (35)

Again, negative values in the resulting image are dealt with by relation (29). Figure (9) shows the result of applying the HEM1 mask.

Figure (9)
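A sketch of high-emphasis sharpening with HEM1, clipping negative values per relation (29); the image file is a hypothetical placeholder.

%Example: high emphasis filtering with HEM1 (eq. 34), clipped per (29)
img  = double(imread('xray.png'));       % hypothetical image
HEM1 = [0 -1 0; -1 5 -1; 0 -1 0];
y = conv2(img, HEM1, 'same');            % eq. (34)
y(y <= 0) = 0;                           % relation (29)
imagesc([img y]); colormap gray; axis image;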


2.3. Frequency Domain Image Enhancement.
Image enhancement in the frequency domain, whether low-pass, high-pass, band-reject, or band-pass, is performed as described by equation (1):

if 2-d DFT[x(n_1,n_2)] = Fx(k_1,k_2) is the image DFT   (1a)
and 2-d DFT[h_f(n_1,n_2)] = Fh_f(k_1,k_2) is the filter DFT   (1b)
then y(n_1,n_2) = 2-d IDFT[Fx(k_1,k_2) Fh_f(k_1,k_2)] is the enhanced image   (1c)

Equation (1c) gives the convolution procedure, which in the frequency domain is a simple multiplication of the corresponding DFTs of the image and the filter. The filters described in this section are real functions in the frequency domain, the so-called zero-phase-shift filters. They affect the real and imaginary parts of the image transform Fx(k1,k2) equally, and they do not alter the phase of the transform. Several examples of popular filter transfer functions are presented in this section in their 1-dimensional version, since their 2-dimensional version is easily obtained using (2):

if \sqrt{k_1^2 + k_2^2} \le N, then Fh_f(k_1,k_2) = Fh_f\left(\sqrt{k_1^2 + k_2^2}\right); else Fh_f(k_1,k_2) = Fh_f(N)   (2)

where N is the dimension of the square image matrix.

2.3.1. Ideal Filters.
Equations (3) to (6) give the ideal filters, which can be implemented only by computer:

1/ Low-pass ideal:

Fh^{LP}(k) = 1 for k \le f_{co}; \quad 0 for k > f_{co}   (3)

2/ High-pass ideal:

Fh^{HP}(k) = 0 for k \le f_{co}; \quad 1 for k > f_{co}   (4)

3/ Band-reject ideal:

Fh^{BR}(k) = 1 for k \le d - \frac{w}{2}; \quad 0 for d - \frac{w}{2} < k \le d + \frac{w}{2}; \quad 1 for k > d + \frac{w}{2}   (5)

where d is the centre of the rejected band and w its width.

4/ Band-pass ideal (the complement of the band-reject filter):

Fh^{BP}(k) = 0 for k \le d - \frac{w}{2}; \quad 1 for d - \frac{w}{2} < k \le d + \frac{w}{2}; \quad 0 for k > d + \frac{w}{2}   (6)