Image enhancement is the first step of image preprocessing in this study. Contrast stretching was chosen because it performs comparatively well on grayscale images: the contrast is increased without distorting the relative gray-level intensities, so it does not produce the artificial-looking results associated with histogram equalization. Contrast stretching increases the contrast of the image by stretching its range of intensity values to span the desired range from 0 to 1, and it removes the ambiguity that may appear between different regions of the images in the dataset [8]. Fig. 5 shows the results of image enhancement for a test image from this study.
Figure 5. (A) Gray image, (B) Histogram of (A), (C) Histogram equalization, (D) Histogram of (C), (E) Contrast enhancement (contrast stretching), (F) Histogram of (E).
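As an illustration, the sketch below shows one way to implement such a linear stretch to the [0, 1] range in NumPy. The percentile-based clipping limits are an assumption introduced here for robustness to outlier pixels and are not specified by the study.

```python
import numpy as np

def contrast_stretch(gray, low_pct=1, high_pct=99):
    """Linearly stretch gray-level intensities to span [0, 1].

    The percentile limits are an assumption of this sketch; they make the
    stretch robust to a few very dark or very bright outlier pixels.
    """
    gray = gray.astype(np.float64)
    lo, hi = np.percentile(gray, (low_pct, high_pct))
    stretched = (gray - lo) / (hi - lo + 1e-12)   # map [lo, hi] -> [0, 1]
    return np.clip(stretched, 0.0, 1.0)
```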
Segmentation
Image segmentation is a major process in DIP, required to define the Region Of Interest (ROI) in an image. Segmentation can be performed manually, semi-automatically, or automatically. The drawback of manual segmentation is that it is very time consuming and its accuracy depends on the operator's knowledge, whereas automatic segmentation is free of these limitations [9,10]. Image-processing-based segmentation of brain MRI can be divided into several techniques, such as Otsu segmentation, K-means clustering, Fuzzy C-means, and other methods.
Otsu Segmentation
Otsu's method is one of the most effective processes for threshold selection and is well known for its low computational cost. The algorithm iterates through all possible threshold values and, for each one, computes a measure of spread for the pixel levels on each side of the threshold, i.e., for the pixels that fall in the foreground and those that fall in the background. The aim is to find the threshold value at which the sum of the foreground and background spreads is at its minimum. We can define the within-class variance as the weighted sum of the variance of each class:
\(\sigma_{w}^{2}\left(I\right)=W_{f}\sigma_{f}^{2}\left(I\right)+W_{b}\sigma_{b}^{2}\left(I\right)\)(2)
where \(\sigma_{w}^{2}\left(I\right)\) is the within-class variance, \(\sigma_{f}^{2}\left(I\right)\) is the variance of the foreground, \(\sigma_{b}^{2}\left(I\right)\) is the variance of the background, \(W_{f}\) is the weight of the foreground, and \(W_{b}\) is the weight of the background.
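A minimal NumPy sketch of this exhaustive search, written to follow Eq. (2), is given below; it is a generic illustration rather than the exact implementation used in the study.

```python
import numpy as np

def otsu_threshold(gray):
    """Search all gray levels for the threshold that minimizes the
    within-class variance of Eq. (2)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    prob = hist / hist.sum()
    levels = np.arange(256)

    best_t, best_var = 0, np.inf
    for t in range(1, 256):
        w_b, w_f = prob[:t].sum(), prob[t:].sum()        # class weights
        if w_b == 0 or w_f == 0:
            continue
        mu_b = (levels[:t] * prob[:t]).sum() / w_b       # background mean
        mu_f = (levels[t:] * prob[t:]).sum() / w_f       # foreground mean
        var_b = ((levels[:t] - mu_b) ** 2 * prob[:t]).sum() / w_b
        var_f = ((levels[t:] - mu_f) ** 2 * prob[t:]).sum() / w_f
        within = w_b * var_b + w_f * var_f               # Eq. (2)
        if within < best_var:
            best_var, best_t = within, t
    return best_t
```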
K-means Clustering
K-means is a widely used clustering algorithm to partition data into k
clusters. Clustering is the process for grouping data points with
similar feature vectors into a single cluster and for grouping data
points with dissimilar feature vectors into different clusters. Let the
feature vectors derived from l clustered data be X= {xi│i=1,2….,
l}. The generalized algorithm initiates k cluster centroids C=
{cj│j=1,2,….k} by randomly selecting k feature vectors are
grouped into k clusters using a selected distance measure such as
Euclidean distance so that,[11] :
\(d=||x_{i}-c_{j}||\) (3)
The next step is to recompute the cluster centroids from their group members and then regroup the feature vectors according to the new cluster centroids. The clustering procedure stops only when all cluster centroids have converged [11].
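The sketch below illustrates this iterative procedure with the Euclidean distance of Eq. (3); the random initialization and the stopping tolerance are assumptions made for the example, not details taken from the study.

```python
import numpy as np

def kmeans(X, k, max_iter=100, tol=1e-6, seed=0):
    """Plain k-means on feature vectors X of shape (l, d),
    using the Euclidean distance of Eq. (3)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # random init
    for _ in range(max_iter):
        # assign each feature vector to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centroids from the current group members
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.linalg.norm(new_centroids - centroids) < tol:   # converged
            break
        centroids = new_centroids
    return labels, centroids
```

For a grayscale MRI slice, X can simply be the pixel intensities reshaped to an (l, 1) array, so that each pixel is treated as a one-dimensional feature vector.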
Fuzzy C-means
The FCM algorithm [5,9] attempts to partition a finite collection of pixels into a collection of C fuzzy clusters with respect to some given criterion. Depending on the data and the application, different types of similarity measures may be used to identify classes. The algorithm is based on minimization of the following objective function [11]:
\(J=\sum_{i=1}^{N}{\sum_{j=1}^{C}\mu_{\text{ij}}^{m}}{||x_{i}-c_{j}||}^{2}\)(4)
where m is any real number greater than 1, \(\mu_{ij}\) is the degree of membership of \(x_{i}\) in cluster j, \(x_{i}\) is the i-th d-dimensional measured data point, and \(c_{j}\) is the d-dimensional center of cluster j.
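The sketch below minimizes the objective of Eq. (4) by alternating the standard membership and center updates. The fuzzifier m = 2, the initialization, and the update formulas shown are the conventional choices for FCM, not values or code taken from this study.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Alternating optimization of the FCM objective of Eq. (4)
    for feature vectors X of shape (N, d)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                    # memberships sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # fuzzy cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # membership update: mu_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        if np.linalg.norm(U_new - U) < tol:              # memberships converged
            break
        U = U_new
    return U, centers
```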