Applications of Computational Intelligence in Multi-Disciplinary Research
Ebook · 653 pages · 6 hours


About this ebook

Applications of Computational Intelligence in Multi-Disciplinary Research provides the readers with a comprehensive handbook for applying the powerful principles, concepts, and algorithms of computational intelligence to a wide spectrum of research cases. The book covers the main approaches used in computational intelligence, including fuzzy logic, neural networks, evolutionary computation, learning theory, and probabilistic methods, all of which can be collectively viewed as soft computing. Other key approaches included are swarm intelligence and artificial immune systems. These approaches provide researchers with powerful tools for analysis and problem-solving when data is incomplete and when the problem under consideration is too complex for standard mathematics and the crisp logic approach of Boolean computing.

  • Provides an overview of the key methods of computational intelligence, including fuzzy logic, neural networks, evolutionary computation, learning theory, and probabilistic methods
  • Includes case studies and real-world examples of computational intelligence applied in a variety of research topics, including bioinformatics, biomedical engineering, big data analytics, information security, signal processing, machine learning, nanotechnology, and optimization techniques
  • Presents a thorough technical explanation on how computational intelligence is applied that is suitable for a wide range of multidisciplinary and interdisciplinary research
Language: English
Release date: Feb 14, 2022
ISBN: 9780128241769


    Book preview

    Applications of Computational Intelligence in Multi-Disciplinary Research - Ahmed A. Elngar

    Chapter 1

    Iris feature extraction using three-level Haar wavelet transform and modified local binary pattern

    Prajoy Podder¹, M. Rubaiyat Hossain Mondal¹ and Joarder Kamruzzaman²,    ¹Bangladesh University of Engineering and Technology, Institute of Information and Communication Technology, Dhaka, Bangladesh,    ²School of Engineering and Information Technology, Federation University Australia, Churchill, VIC, Australia

    Abstract

    In this chapter, a novel feature extraction method is proposed for faster iris recognition. This new method is a hybrid process combining three-level Haar wavelet transform (HWT) and modified local binary pattern (MLBP). In this hybrid method, firstly, HWT is applied to the normalized iris image, resulting in four output images including the approximation image known as LL subband. This LL subband is then further decomposed using HWT into four subimages. The resultant second-level LL is decomposed using HWT into the third-level LL subband. The application of repeated HWT extracts the major information-containing region, reducing the information size. Next, MLBP is applied to the obtained LL, where MLBP includes local binary pattern and Exclusive OR operations. The output of MLBP is a binary iris template. The effectiveness of this proposed hybrid HWT–MLBP method is experimentally evaluated using three different datasets, namely CASIA-IRIS-V4, CASIA-IRIS-V1, and MMU. The proposed HWT–MLBP method can obtain a reduced feature vector length of 1×64. For instance, when applied to the CASIA-IRIS-V1 dataset, HWT–MLBP can obtain an average correct recognition rate of 98.30% and a false acceptance rate of 0.003%. Results indicate that the proposed HWT–MLBP outperforms existing methods in terms of reduced feature length, which ensures faster iris recognition.

    Keywords

    Haar wavelet transformation; modified local binary pattern; feature extraction; iris recognition; feature length

    Abbreviations

    ĉ(q1, q2) 2D cepstrum, with (q1, q2) representing the quefrency coordinates

    F(ω1, ω2) 2D discrete-time Fourier transform

    G(x, y) 2D Gabor function

    F(U, V) 2D discrete cosine transform (DCT) coefficient matrix

    W Angular frequency

    σx, σy Standard deviations of x and y

    xc The x-axis coordinate of the iris circle

    yc The y-axis coordinate of the iris circle

    r Radius of the iris circle

    gc Gray level of the center pixel, c

    gp Gray level of the neighboring pixel, p

    Binary iris code obtained as XOR output

    MLBP operator

    1.1 Introduction

    The demand for high security and surveillance in the present world has made the identification of people an increasingly important issue. Among various identification modes, biometrics have been relied upon over the last few decades for reliable and accurate identification [1–5]. Commonly used biometric features include the face, fingerprint, iris, retina, hand geometry, and DNA. Among them, iris recognition has attracted significant interest in research and commercialization [6–15]. Iris recognition has several applications in security systems for banks, border control, restricted areas, etc. [1–3]. One key part of such a system is the extraction of prominent texture information, or features, from the iris. This feature extraction step generates feature vectors or feature codes. The feature vector of an unknown image is matched against those of the stored, known images: the matching process compares the extracted feature code of a given image with the feature codes previously stored in the database. In this way, the identity of the given iris image can be established.

    A generalized iris recognition scheme is presented in Fig. 1.1. There are two major parts in Fig. 1.1: one shows feature extraction and the other describes the identification portion. The system starts with image acquisition and ends with matching, that is, the decision to accept or reject the identity. In between, there are two main stages: iris image preprocessing and feature extraction [3,4]. Iris image preprocessing in turn includes iris segmentation, normalization, and enhancement [5,11]. In the acquisition stage, cameras capture images of the iris. The acquired images are then segmented. In iris segmentation, the inner and outer boundaries are detected to separate the iris from the pupil and sclera. A circular edge detection method segments the iris region by finding the pixels of the image that have sharp intensity differences with neighboring pixels [3]. Estimating the center and radius of each of the inner and outer circles is referred to as iris localization. After iris segmentation, any image artifacts are suppressed. Next is the normalization step, in which the images are transformed from Cartesian to a pseudo-polar coordinate scheme. This is shown in Fig. 1.1, where boundary points are aligned at an angle. Image enhancement is then performed. As part of feature extraction, the important features are extracted and used to generate an iris code, or template. Finally, iris recognition is performed by calculating the difference between codes with a matching algorithm; for this purpose, the Hamming and Euclidean distances are well known and are also considered in this chapter [15]. The matching score is compared with a threshold to determine whether the given iris is authentic.
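Since the final matching step reduces to comparing binary templates, the Hamming-distance score can be sketched as follows. This is a minimal illustration, not the chapter's exact implementation; the optional occlusion masks are an assumption about how eyelid/eyelash bits would be excluded.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Fraction of disagreeing bits between two binary iris codes.

    The optional masks mark bits occluded by eyelids or eyelashes;
    only bits valid in both codes are compared (an assumption, not
    the chapter's exact scheme).
    """
    code_a = np.asarray(code_a, dtype=bool)
    code_b = np.asarray(code_b, dtype=bool)
    valid = np.ones_like(code_a, dtype=bool)
    if mask_a is not None:
        valid &= np.asarray(mask_a, dtype=bool)
    if mask_b is not None:
        valid &= np.asarray(mask_b, dtype=bool)
    disagree = np.logical_xor(code_a, code_b) & valid
    return disagree.sum() / valid.sum()

# Accept the identity when the score falls below a chosen threshold.
score = hamming_distance([1, 0, 1, 1], [1, 1, 1, 0])
```

A score of 0 means identical codes; the acceptance threshold trades off the false acceptance and false rejection rates discussed later.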

    Figure 1.1 Example of a typical iris recognition system: (A) process of feature extraction from an iris image; (B) identification of an iris.

    Despite significant research results so far [3–9,11,12,14], there are several challenges in iris recognition [13,15–26]. One problem is occlusion, that is, the hiding of the iris by eyelashes, eyelids, specular reflections, and shadows [21]. Occlusion can introduce irrelevant parts and hide useful iris texture [21]. The movement of the eye can also hamper iris region segmentation and thus accurate recognition. Another issue is the computation time of iris identification. For large populations, the matching time can become excessive for real-time applications, and the identification delay increases with the population size and the length of the feature codes. It has been reported in the recent literature [13,18,22] that existing iris recognition methods still suffer from long run times, apart from other factors. This is particularly true when the sample size is very large and the iris images are nonideal and captured with different types of cameras. Hence, devising a method that reduces the run time of iris recognition without compromising accuracy is still an important research problem. The identification delay can be reduced by shrinking the feature vector of the iris images. Thus, this chapter focuses on reducing the feature vector, which leads to a reduction in identification delay without lowering identification accuracy. To shorten the feature vector, the Haar wavelet is used along with the modified local binary pattern (MLBP) in this work. Note that in the context of face recognition [27–30] and fingerprint identification [31], the Haar wavelet transform demonstrates an excellent recognition rate at a low computation time. In Ref. [32], the Haar wavelet is also applied, but without MLBP.

    The main contributions of this chapter can be summarized as follows.

    1. A new iris feature extraction method is proposed. This new method is based on repeated Haar wavelet transformation (HWT) and MLBP. Note that MLBP is the local binary pattern (LBP) operation followed by Exclusive OR (XOR). This proposed method is different from the technique described in Ref. [30], which uses single-level HWT and LBP (without XOR) in the context of face recognition.

    2. The efficacy of the HWT–MLBP method is evaluated using three well-known benchmark datasets: CASIA-Iris-V4 [33], CASIA-Iris-V1 [34], and MMU iris database [35].

    3. A comparison is made of this new technique with the existing methods of feature extraction in terms of feature vector length, false acceptance rate (FAR), and false rejection rate (FRR). It is shown here that the proposed method outperforms the existing ones in terms of feature vector length.
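Contribution 1 above hinges on the MLBP operator, which the chapter defines only as LBP followed by Exclusive OR. The sketch below is one plausible reading of that definition; the specific choice of XORing each neighbor bit with its clockwise successor is an illustrative assumption, not the chapter's confirmed construction.

```python
import numpy as np

def mlbp_bits(img):
    """Sketch of an LBP-then-XOR operator.  The chapter states only
    that MLBP is the local binary pattern followed by an XOR step; the
    specific choice here (XORing each neighbour bit with its clockwise
    successor) is an illustrative assumption."""
    img = np.asarray(img, dtype=float)
    rows, cols = img.shape
    # Eight neighbour offsets, clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    out = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # Plain LBP: threshold each neighbour against the centre.
            lbp = [int(img[r + dr, c + dc] >= img[r, c])
                   for dr, dc in offsets]
            # XOR each bit with the next one (circularly).
            out.extend(lbp[i] ^ lbp[(i + 1) % 8] for i in range(8))
    return np.array(out, dtype=np.uint8)
```

Under this reading, the XOR stage makes the code sensitive to transitions between neighbor comparisons rather than to the raw comparisons themselves.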

    The remainder of this chapter is organized as follows. Section 1.2 provides a survey of the relevant literature. Section 1.3 covers iris localization, where the inner and outer boundaries of the iris are detected. Section 1.4 describes iris normalization. Section 1.5 illustrates our proposed approach for encoding iris features. Section 1.6 describes the iris recognition process based on matching scores. The effectiveness of the new method is evaluated in Section 1.7. Finally, Section 1.8 provides a summary of the research work, followed by the challenges and future work.

    1.2 Related works

    A number of research papers describe iris feature extraction techniques, which are discussed in the following.

    Ma et al. [3] applied a bank of spatial filters to acquire local details of the iris. These spatial filters generate discriminating texture features for an iris image based on the characteristics of the iris. Ma et al. [4] considered a bank of circular symmetric filters for iris feature extraction. These filters [4] are modulated by a circular symmetric sinusoidal function, which is different from the Gabor filter modulated by an orientated sinusoidal function. Monro et al. [5] used discrete cosine transform (DCT) for iris recognition. Daugman [6] introduced the idea of using a 2D Gabor wavelet filter for extracting features of an iris image. Furthermore, Masek et al. [9] used 1D and 2D Log-Gabor filters for feature extraction. Li et al. [8] used a convolutional neural network (CNN) algorithm, which is a form of deep learning, to extract iris features. Umer et al. [12] used a novel texture code defined over a small region at each pixel. This texture code was developed with vector ordering based on the principal component of the texture vector space. Soliman et al. [11] considered feature extraction using the Gabor filter, where the original Gabor features were masked via a random projection scheme. The masking was performed to increase the level of security. In this scheme, the effects of eyelids and eyelashes were removed. An iris feature extraction method using wavelet-based 2D mel-cepstrum was proposed in Ref. [14], where the cepstrum of a signal is the inverse Fourier transform of the logarithm of the estimated signal spectrum. The 2D cepstrum of an image can be defined by the following expression:

    ĉ(q1, q2) = IDFT{ log | F(ω1, ω2) | }

    where ĉ(q1, q2) is the 2D cepstrum, with (q1, q2) representing the quefrency coordinates; IDFT represents the inverse discrete Fourier transform; and F(ω1, ω2) is the 2D discrete-time Fourier transform of the image. This scheme applied the Cohen–Daubechies–Feauveau 9/7 filter bank for extracting features. In the wavelet cepstrum, nonuniform weights are assigned to the frequency bins. In this way, the high-frequency components of the iris image are emphasized, resulting in greater recognition reliability. Furthermore, this wavelet cepstrum method helps to reduce the feature set.
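As a rough illustration of that definition (inverse Fourier transform of the log spectrum), the 2D cepstrum can be computed with plain FFTs. This is a sketch of the textbook definition only; the wavelet-based variant of Ref. [14] with the 9/7 filter bank is not reproduced here.

```python
import numpy as np

def cepstrum_2d(img, eps=1e-8):
    """2D cepstrum of an image: the inverse DFT of the log magnitude
    spectrum.  eps guards the logarithm against zero-magnitude
    frequency bins."""
    spectrum = np.fft.fft2(np.asarray(img, dtype=float))
    return np.real(np.fft.ifft2(np.log(np.abs(spectrum) + eps)))

ceps = cepstrum_2d(np.random.default_rng(0).random((8, 8)))
```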

    Barpanda et al. [15] used a tunable filter bank to extract region-based iris features. These filters were used for recognizing noncooperative images instead of high-quality images collected in cooperative scenarios. The filters in this filter bank were based on the halfband polynomial of 14th order where the filter coefficients were extracted from the polynomial domain. To apply the filter bank, the iris template was divided into six equispaced parts and the features were extracted from all the parts except the second one, which mainly contains artifacts. Betancourt et al. [13] proposed a robust key points–based feature extraction method. To identify distinctive key points, three detectors, namely Harris–Laplace, Hessian–Laplace, and Fast-Hessian detectors, were used. This method is suitable for iris recognition under variable image quality conditions.

    For iris feature extraction, Sahua et al. [22] used the phase intensive local pattern (PILP), which consists of density-based spatial clustering and key-point reduction. This technique groups closely placed key points into a single key point, leading to high-speed matching. Jamaludin et al. [18] used a 1D Log-Gabor filter, which has a symmetrical frequency response on the log axis, and considered the subiris region for feature extraction. In this case, only the lower iris regions, which are free from noise as well as occlusions, are considered.

    In Ref. [17], combined discrete wavelet transform (DWT) and DCT were used for the extraction of iris features. Firstly, DWT was performed, where the output of this stage was in the spatial domain. Next, DCT was performed to transform the spatial-domain signal to the frequency domain and to obtain better discriminatory features. Another feature extraction method is the discrete dyadic wavelet transform reported in Ref. [16]. In the dyadic wavelet transform, the decomposition at each level is done in such a way that the bandwidth of the output signal is half that of the input. In Ref. [26], a PILP technique is used for feature extraction, obtaining a feature vector of size 1×128. This PILP method has four stages: key-point detection via phase-intensive patterns, removal of edge features, computation of an oriented histogram, and formation of the feature vector. In Ref. [21], iris features were extracted using 1D DCT and a relational measure (RM), where RM encodes the difference in intensity levels between local regions of iris images. The matching scores of these two approaches were fused using a weighted average; this score-level fusion compensates for images that are rejected by one method but accepted by the other [21]. Another way of extracting feature vectors from iris images is the use of linear predictive coding coefficients (LPCC) and linear discriminant analysis (LDA) [24]. Llano et al. [19] used a 2D Gabor filter for feature extraction. Before applying this filter, the fusion of three different algorithms was performed at the segmentation level (FSL) of the iris images to improve the textural information of the images. Oktiana et al. [36] proposed an iris feature extraction system using an integration of Gradientface-based normalization (GRF), where GRF uses the image gradient to remove variation in the illumination level. Furthermore, the work in Ref. [19] concatenated the GRF with a Gabor filter, a difference of Gaussian (DoG) filter, binary statistical image features (BSIF), and LBP for iris feature extraction in a cross-spectral system. Shuai et al. [37] proposed an iris feature extraction method based on multiple-source feature fusion, performed with a Gaussian smoothing filter and texture histogram equalization. Besides these, there have been some recent studies in the field of iris recognition [38–49], with some focusing on iris feature extraction methods [38,40–42,45,49] and some on iris recognition tasks [39,44,46,48].

    The 2D Gabor function can be described mathematically by the following expression:

    G(x, y) = exp( −(1/2) [ x²/σx² + y²/σy² ] ) · cos(W x)

    and the 2D DCT can be defined as:

    F(U, V) = α(U) α(V) Σ_{X=0}^{M−1} Σ_{Y=0}^{N−1} f(X, Y) cos[ (2X + 1)Uπ / (2M) ] cos[ (2Y + 1)Vπ / (2N) ]

    where f(X, Y) is the image space matrix of size M×N; (X, Y) is the position of the current image pixel; F(U, V) is the transform coefficient matrix; α(U) and α(V) are the usual DCT normalization factors; W is the angular frequency; and σx and σy are the standard deviations of x and y, respectively.
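A sketch of both operators follows, using the parameter names above. The Gabor kernel uses a common even-symmetric form, and the 2D DCT is evaluated directly from the type-II definition; both are illustrations under those assumptions, not the cited papers' exact implementations.

```python
import numpy as np

def gabor_2d(size, W, sigma_x, sigma_y):
    """Even-symmetric 2D Gabor kernel: a Gaussian envelope with
    standard deviations sigma_x and sigma_y, modulated by a cosine of
    angular frequency W along x (a common form; the cited papers'
    exact parameterization may differ)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-0.5 * ((x / sigma_x) ** 2 + (y / sigma_y) ** 2))
    return envelope * np.cos(W * x)

def dct_2d(f):
    """Orthonormal type-II 2D DCT computed directly from the
    definition (O(N^4); real implementations use fast transforms)."""
    f = np.asarray(f, dtype=float)
    M, N = f.shape
    X = np.arange(M)[:, None]
    Y = np.arange(N)[None, :]
    F = np.empty((M, N))
    for U in range(M):
        for V in range(N):
            basis = (np.cos(np.pi * (2 * X + 1) * U / (2 * M)) *
                     np.cos(np.pi * (2 * Y + 1) * V / (2 * N)))
            F[U, V] = (f * basis).sum()
    # Orthonormal scaling factors alpha(U), alpha(V).
    alpha_m = np.where(np.arange(M) == 0, np.sqrt(1.0 / M), np.sqrt(2.0 / M))
    alpha_n = np.where(np.arange(N) == 0, np.sqrt(1.0 / N), np.sqrt(2.0 / N))
    return F * alpha_m[:, None] * alpha_n[None, :]
```

In practice a fast transform (e.g. `scipy.fft.dctn`) replaces the double loop; the direct form is shown only to mirror the equation.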

    Machine learning (ML)-driven methods, for example, neural networks and genetic algorithms, have been reported [46], while deep CNNs have also been applied [40]. Moreover, researchers are now investigating the effectiveness of multimodal biometric recognition systems [43,47].

    A comparative summary of some of the most relevant works on iris feature extraction is shown in Table 1.1. It can be seen that there are several algorithms and these are applied to different datasets, achieving varying performance results.

    Table 1.1

    1.3 Iris localization

    This section discusses the iris localization step, which employs the circular Hough transform, a method capable of properly detecting circles in images. The Hough transform searches for a triplet of parameters (xc, yc, r) determining a circle, where xc, yc, and r represent the x-axis coordinate, the y-axis coordinate, and the radius of the iris circle, respectively. In this case, (xi, yi) represents the coordinates of any of the i edge points on the circle. With this consideration, the Hough transform can be defined as follows:

    H(xc, yc, r) = Σi h(xi, yi, xc, yc, r), where h(xi, yi, xc, yc, r) = 1 if (xi − xc)² + (yi − yc)² = r², and 0 otherwise. (1.3)

    In this regard, edge points are detected first. For each edge point, a circle of the desired radius is drawn centered on that point, so each edge point generates candidate circles. Next, an accumulator matrix is formed to track the intersection points of these circles in the Hough space: each accumulator cell counts the number of circles passing through it. The cell with the largest count in the Hough space points to the center of the image circle. Several radius values are considered and the best one is selected.
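The voting procedure described above can be sketched as follows. This is a simplified accumulator; the number of angular samples per circle and the radius search strategy are illustrative assumptions.

```python
import numpy as np

def circular_hough(edge_points, shape, radii):
    """Vote for circle centres: every edge point votes for all centres
    lying at distance r from it; the accumulator cell with the most
    votes identifies the circle.  (A simplified sketch; 100 angular
    samples per circle is an illustrative choice.)"""
    acc = np.zeros((len(radii), shape[0], shape[1]), dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    for k, r in enumerate(radii):
        for (x, y) in edge_points:
            # Possible centres for this point at radius r.
            xc = np.round(x - r * np.cos(thetas)).astype(int)
            yc = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (xc >= 0) & (xc < shape[1]) & (yc >= 0) & (yc < shape[0])
            np.add.at(acc, (k, yc[ok], xc[ok]), 1)
    # The most-voted cell gives the radius and centre estimate.
    k, yc, xc = np.unravel_index(acc.argmax(), acc.shape)
    return radii[k], xc, yc
```

Production detectors (e.g. OpenCV's `HoughCircles`) use gradient information to prune votes; the brute-force version above only mirrors the accumulator idea.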

    1.4 Iris normalization

    This section describes the iris normalization step. The size of the acquired iris images varies because of variation in the distance from the camera, the angle of image capture, the illumination level, etc. For the purpose of extracting image features, the iris image is to be segmented, and the resultant segments must not be sensitive to the orientation, size, and position of the patterns. For this, after segmentation, the resulting iris region is transformed from Cartesian to polar coordinates. In other words, the circular iris image is unwrapped into a rectangle of fixed dimensions.
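The unwrapping to fixed dimensions can be sketched with Daugman's rubber-sheet idea. Nearest-neighbor sampling is a simplification here, and the `pupil` and `iris` circle parameters are assumed to come from the localization step.

```python
import numpy as np

def normalize_iris(img, pupil, iris, out_h=64, out_w=512):
    """Daugman-style rubber-sheet sketch: sample the annulus between
    the pupil circle and the iris circle on a fixed polar grid, so any
    iris maps to an out_h x out_w rectangle.  `pupil` and `iris` are
    (xc, yc, r) triplets from localization; nearest-neighbour
    sampling is a simplification."""
    img = np.asarray(img, dtype=float)
    out = np.zeros((out_h, out_w))
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radii = np.linspace(0.0, 1.0, out_h)
    px, py, pr = pupil
    ix, iy, ir = iris
    for i, rho in enumerate(radii):
        for j, th in enumerate(thetas):
            # Interpolate linearly between the two boundary circles.
            x = (1 - rho) * (px + pr * np.cos(th)) + rho * (ix + ir * np.cos(th))
            y = (1 - rho) * (py + pr * np.sin(th)) + rho * (iy + ir * np.sin(th))
            yi = int(np.clip(round(y), 0, img.shape[0] - 1))
            xi = int(np.clip(round(x), 0, img.shape[1] - 1))
            out[i, j] = img[yi, xi]
    return out
```

Because every iris is resampled onto the same (rho, theta) grid, the output size is fixed regardless of pupil dilation or camera distance.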

    Fig. 1.2 illustrates the normalization of iris images from the three datasets. For each dataset, one original input image is shown, followed by its inner and outer boundary detection, its segmented version, and finally its normalized version. Fig. 1.2A depicts Daugman's rubber sheet model for iris recognition. Three original images from the three datasets are shown in Fig. 1.2B, F, and J. First, Fig. 1.2B is an original image from the CASIA-Iris-V4 dataset [33]; Fig. 1.2C–E show the corresponding inner and outer boundaries, the segmented version, and the normalized version, respectively. Second, Fig. 1.2F is an original image from the CASIA-Iris-V1 dataset [34]; Fig. 1.2G–I show the corresponding inner and outer boundaries, the segmented version, and the normalized version, respectively. Third, Fig. 1.2J is an original image from the MMU iris database [35]; Fig. 1.2K–M show the corresponding inner and outer boundaries, the segmented version, and the normalized version, respectively.

    Figure 1.2 Illustrations of (A) Daugman’s rubber sheet model; (B, F, J) original input images; (C, G, K) images with inner and outer boundary detection; (D, H, L) segmented iris regions, and (E, I, M) iris images after normalization.

    1.5 The proposed feature extraction scheme

    This section describes the proposed iris feature extraction method. Fig. 1.3 shows the block diagram of the proposed three-level HWT and MLBP. Decomposing the image three times with HWT reduces the feature size without significant loss of image quality or important attributes. The use of MLBP further reduces the feature vector size without loss of image attributes. Fig. 1.4 shows the three-level HWT. It can be seen from the figure that at each level of HWT, the input image is divided into four output images, denoted as the horizontal detail (HL), vertical detail (LH), diagonal detail (HH), and approximation (LL) images. The LL subimage, also known as the LL subband, contains the significant information of the original image; in other words, the LL subband is a coarse approximation of the image and does not contain high-frequency information. Next, the three-level HWT algorithm is discussed.

    Figure 1.3 Block diagram of the proposed approach for iris feature extraction.

    Figure 1.4 Three-level HWT.

    Algorithm 1: HWT

    Input: Normalized iris image

    Output: Approximation part of level three

    Main Process:

    Step 1: Apply first-level HWT to the normalized iris image to generate its wavelet coefficients.

    Step 2: Apply second-level HWT on the approximation part obtained from Step 1 to generate its wavelet coefficients.

    Step 3: Apply third-level HWT on the approximation part obtained from Step 2 to generate its wavelet coefficients.

    Step 4: Get the level three approximation part obtained from Step 3.

    The main idea of using HWT is that wavelet decomposition can transform a detailed image into approximation images. The approximation parts contain a major portion of the energy of the image. The HWT is repeatedly executed to shrink the information size. The three-level decomposition produces a reduced characteristic region with little information loss, as shown in Fig. 1.5. Note that most of the information of the iris image is contained in the extracted LL (low-frequency) region of the multidivided iris image, as indicated by Fig. 1.5; the other regions carry less information, as indicated by their low intensity (dark) levels. Fig. 1.6 illustrates the size of each level of the three-level HWT. Applying level 1 HWT to the normalized image of size 64×512 produces the wavelet coefficients LL1, LH1, HL1, and HH1; the approximation part of level 1, denoted LL1, is of size 32×256. Next, level 2 HWT is applied to LL1, generating the wavelet coefficients LL2, LH2, HL2, and HH2; the approximation part of level 2 (LL2) is of size 16×128. After that, level 3 HWT is applied to LL2 to generate the wavelet coefficients LL3, LH3, HL3, and HH3; the approximation part of level 3 (LL3) is of size 8×64. Hence a major distinctive region, LL3, is obtained by performing the wavelet transformation three times. Next, the LL3 region is used as the input to the MLBP operation.
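The size reduction traced above (64×512 → 32×256 → 16×128 → 8×64) can be reproduced with a minimal Haar LL sketch. Only the approximation subband is computed here, with the standard 1/2 Haar scaling; a wavelet library such as PyWavelets would normally supply all four subbands.

```python
import numpy as np

def haar_ll(img):
    """One level of the Haar wavelet transform, keeping only the LL
    (approximation) subband: each 2x2 block is reduced to its scaled
    average, halving both dimensions."""
    img = np.asarray(img, dtype=float)
    a = img[0::2, 0::2]   # top-left pixel of every 2x2 block
    b = img[0::2, 1::2]   # top-right
    c = img[1::2, 0::2]   # bottom-left
    d = img[1::2, 1::2]   # bottom-right
    return (a + b + c + d) / 2.0

# Three-level decomposition of a 64x512 normalized iris image:
# 64x512 -> 32x256 -> 16x128 -> 8x64 (the LL3 region).
ll = np.random.default_rng(0).random((64, 512))
for _ in range(3):
    ll = haar_ll(ll)
```

The resulting 8×64 LL3 array is what the MLBP stage would operate on.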
