MODIFIED KEYWORD-BASED RETRIEVAL ON FABRIC IMAGES

BIRJANDI, M.1 – MOHANNA, F.1*

1 Department of Communication Engineering, University of Sistan and Baluchestan, Zahedan, Iran.

*Corresponding author e-mail: [email protected]

QUANTUM JOURNAL OF ENGINEERING, SCIENCE AND TECHNOLOGY 1(3): 1-14.
http://www.qjoest.com
© 2020 Quantum Academic Publisher

(Received 05th November 2020; accepted 27th November 2020)

Abstract. Considering the diversity of fabric patterns and textures in global markets, keyword-based image retrieval has gained interest due to its efficiency. Recent research shows that user-defined keywords alone do not yield sufficient precision because of human error, so these keywords should be revised before image retrieval. The success of keyword-based image retrieval therefore depends on the rate at which user-specified keywords are corrected. In this paper, a method is presented for keyword-based retrieval of fabric images that improves the user-defined keywords by eliminating wrong keywords and adding new keywords to the fabric images. The proposed approach was implemented on 1000 images with different textures and patterns, and the results showed 91% to 100% retrieval precision.
Keywords: patterns, texture, fabrics, global market

Introduction

Fabrics with different textures and patterns circulate in the global markets, so an efficient system is needed for their retrieval. The most important image retrieval methods are content-based image retrieval (CBIR) (Nazir et al., 2018; Dharani and Aroquiaraj, 2013; Rajam and Valli, 2013; Datta et al., 2008) and tag-based image retrieval (TBIR) (Wakchaure Sujit and Shamkuwar Devendra, 2014; Yang et al., 2011). Retrieved results of a CBIR system are based on visual-feature similarity between the query image and the searched images. However, the main disadvantage of CBIR is that its extracted low-level visual features cannot describe the image properly, which limits retrieval precision. The Improved Completed Robust Local Binary Pattern (ICRLBP) (Kurniawardhani et al., 2015) is one robust, rotation-invariant texture extraction method for image retrieval; however, it extracts a large number of features during the recognition process, leading to high computation time and the curse of dimensionality. Kurniawardhani et al. (2016) tried to overcome these issues by removing insignificant or unnecessary ICRLBP features and examining the effect of these reductions on the precision and recall of the retrieval process.

The TBIR system can solve this problem by applying user-defined keywords that are assigned to each image in the database. In a TBIR system, the user enters some keywords describing the query image, and retrieval results are extracted based on these keywords. The efficiency of TBIR is much higher than that of CBIR (Zhu et al., 2010), because it eliminates the semantic distance. On the other hand, the precision of TBIR depends strongly on the correctness of the user keywords, which suffer from human mistakes. These mistakes fall into two general classes,


including the user overlooking some actual properties of an image, and misspelling some image features.

One way to cope with these problems is automatic annotation, in which keywords are created from the image features (Bhargava et al., 2014; Li and Wang, 2008; Akbas and Yarman Vural, 2007). Implementing an annotation method requires classifying images with respect to a set of training data: the classifier detects the keywords present in any test image based on the training data, so the system learns the features of each keyword (Guillaumin et al., 2009; Grangier and Bengio, 2008; Chang et al., 2007). In such a system, the number of trainable keywords is limited by the time and human cost of annotating correct keywords. Methods such as an optimized annotation algorithm (Wang et al., 2007) and distance-based training (Wang et al., 2008) improved annotation efficiency, and graph-based methods were also presented for automatic annotation (Zha et al., 2009). In Wu et al. (2013), keyword acquisition was improved by first creating a keywords matrix from user annotations, whose entry (i, j) is 1 if keyword j is assigned to image i and 0 otherwise. Then a second matrix, the visual features matrix, containing the visual features of each image, was used to complete the keywords matrix automatically by updating the relevance score of all image keywords. Finally, the completed keywords matrix was used in the TBIR system.

In all previous annotation methods, one fixed set of visual features was extracted for every user query, the same for all keywords. In this paper, an algorithm is proposed to revise faulty keywords and add missing keywords in order to improve retrieval speed. In the proposed algorithm, user keywords are divided into different classes, and some predetermined visual features are specified for each class. For example, for automatic annotation of striped fabric images, it is not necessary to use texture features extracted from these images; only the shape features related to the striped design are applied for annotation. Meanwhile, for each keyword class, methods optimized for precision and speed are used to improve annotation speed and performance, which is another advantage of classifying the keywords searched by the user. Beyond the retrieval speed improvement, applying a limited number of keywords in fabric image annotation eliminates the need for training data, which reduces annotation time and human cost. Finally, the proposed algorithm is implemented on 1000 fabric images with a diversity of patterns.

Materials and Methods

Proposed algorithm steps

A block diagram of the proposed algorithm is shown in Figure 1. In the first step, the keywords of each fabric image are collected in a matrix, as shown in Figure 2 (Wu et al., 2013). In this matrix, all the keywords and all the fabric images in our database are aligned in columns and rows, respectively. An entry of this matrix is 1 if image i has keyword j, and 0 otherwise.


Figure 1. Block diagram of the proposed algorithm.


Figure 2. A keywords Matrix.

A value of 1 indicates the presence of a keyword in the corresponding image. Note that, when producing the keywords matrix, a keyword may be repeated across several images; no new column should be created for a repeated keyword. Algorithm 1 produces the keywords matrix automatically.

Algorithm 1: Keywords matrix production
1. Input the user keywords for all images.
2. Count the number of images, n, and of all keywords, m.
3. Eliminate repeated keywords and retain the rest.
4. Create an n*m matrix T filled with zeros.
5. Set to 1 the positions in T whose keywords exist in the corresponding images.
6. Output the keywords matrix T.
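The steps of Algorithm 1 can be sketched in a few lines. The following is a minimal Python illustration (the paper's implementation was in MATLAB); the function name and the example keywords are hypothetical:

```python
def build_keyword_matrix(image_keywords):
    """Build the binary keywords matrix T of Algorithm 1.

    image_keywords: one list of user keywords per image.
    Returns (T, vocabulary), where T is n*m with T[i][j] == 1 iff image i
    carries vocabulary[j]; a repeated keyword gets no extra column (step 3).
    """
    vocabulary = []
    for words in image_keywords:          # step 3: keep each keyword once
        for w in words:
            if w not in vocabulary:
                vocabulary.append(w)
    n, m = len(image_keywords), len(vocabulary)
    T = [[0] * m for _ in range(n)]       # step 4: n*m zero matrix
    for i, words in enumerate(image_keywords):
        for w in words:                   # step 5: mark present keywords
            T[i][vocabulary.index(w)] = 1
    return T, vocabulary
```

For instance, three images annotated "striped, red", "spotted", and "striped" give the vocabulary ["striped", "red", "spotted"] and the matrix [[1, 1, 0], [0, 0, 1], [1, 0, 0]].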

In the second step, after establishing the keywords matrix, the user enters a keyword as a query. The entered keyword is then automatically classified into one of the pre-defined classes: line, circle, simple, mixed, pattern, and texture. Determining the class of the entered keyword also determines which visual features of that class are extracted automatically. Producing the visual features matrix V is the next step. V is an n*z matrix, where n is the number of images in the database and z is the number of keywords in the determined class of the query keyword. The position of the query keyword among the keywords of its class is k. For example, if the query keyword is "striped", z is the number of keywords belonging to the line class and k is the position of "striped" among the line-class keywords.

For any keyword k ≤ z, the corresponding positions of all images that have this keyword are set in matrix V, and the rest remain zero. Algorithm 2 produces the visual features matrix.

Algorithm 2: Visual features matrix production
1. Input the query keyword.
2. Classify the query keyword automatically, and count the number of keywords in its determined class (z).


3. Extract automatically the visual features of that class from all images in the database.
4. Create the n*z matrix V filled with zeros.
5. Find k, the position of the query keyword among the keywords of the determined class.
6. Set to 1 the positions in column V(:,k) whose corresponding images have keyword k.
7. Output the visual features matrix V.

Comparing these two matrices yields the TBIR results in the following order:
1. All the images corresponding to "1" in both T and V.
2. All the images corresponding to "1" in T.
3. All the images corresponding to "1" in V.

Note that repeated images are not shown again in steps 2 and 3. The user can also select the number of displayed results.
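The three-tier ordering above can be sketched as follows; this is a Python illustration (the paper used MATLAB), taking one column of T and the matching column of V and returning deduplicated image indices:

```python
def rank_results(T_col, V_col):
    """Order image indices by the three-tier comparison rule: images
    flagged in both the keywords column and the visual-features column
    come first, then the remaining T-only hits, then the remaining
    V-only hits. Repeated images are suppressed, as in steps 2 and 3."""
    both = [i for i, (t, v) in enumerate(zip(T_col, V_col)) if t and v]
    t_only = [i for i, (t, v) in enumerate(zip(T_col, V_col)) if t and not v]
    v_only = [i for i, (t, v) in enumerate(zip(T_col, V_col)) if v and not t]
    return both + t_only + v_only
```

For example, with T column [1, 1, 0, 0, 1] and V column [1, 0, 1, 0, 0], image 0 is ranked first (confirmed by both matrices), then images 1 and 4, then image 2.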

Overall, the proposed algorithm is expected to extract fewer visual features, which reduces retrieval time. In most cases there is no need for training data, because of the keyword classification, which minimizes human cost. Furthermore, step 3 adds images to the results that do not even carry the query keyword, which reduces the effect of human errors in the annotation process. In the following, the six classes defined above are discussed in detail.

Keywords classification

Line class

In the line class, fabric images with striped and checkered patterns are retrieved from among all other fabric images using the line feature. In the retrieval process, each fabric image is first convolved with the masks shown in Figure 3, giving four outputs. Next, the Hough transform (Duda and Hart, 1972) is applied to these four outputs, and the number of detected lines is counted in each transformed image. If only one mask detects lines by thresholding while all the others detect none with the same threshold, the fabric image is striped; otherwise it is checkered. The threshold value and the Hough transform parameters were selected by trial and error: the minimum line size, the minimum distance between two lines, and the threshold value were chosen as 27, 2, and 3, respectively. Combining the Hough transform with the masks of Figure 3 increases the speed and precision of the proposed algorithm. Table 1 shows the results of applying this combination to striped, simple, and mixed fabric images; the proposed algorithm detected lines only in the striped fabric images.

Table 1. Results of applying the combination of the Hough transform and the masks of Figure 3 to striped, simple, and mixed fabric images.


Algorithm detection: mixed fabric, lineless; simple fabric, lineless; striped fabric, lined.

Figure 3. Masks for line detection; a) horizontal, b) vertical, c) +45˚, d) -45˚
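The striped/checkered decision can be illustrated with a simplified Python sketch (the paper used MATLAB and a full Hough transform). Here the Hough step is replaced by a crude response-energy test, and the 3x3 directional masks are assumed forms standing in for the paper's Figure 3 masks; the threshold is illustrative, not the paper's value:

```python
MASKS = {                              # assumed 3x3 directional masks
    "horizontal": [[-1, -1, -1], [2, 2, 2], [-1, -1, -1]],
    "vertical":   [[-1, 2, -1], [-1, 2, -1], [-1, 2, -1]],
    "diag_p45":   [[-1, -1, 2], [-1, 2, -1], [2, -1, -1]],
    "diag_m45":   [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]],
}

def convolve(img, mask):
    """Valid 3x3 convolution of a 2D list with a 3x3 mask."""
    h, w = len(img), len(img[0])
    return [[sum(img[y + j][x + i] * mask[j][i]
                 for j in range(3) for i in range(3))
             for x in range(w - 2)] for y in range(h - 2)]

def detects_lines(img, mask, thresh=4.0):
    """Crude stand-in for the Hough step: does this mask fire strongly?"""
    resp = convolve(img, mask)
    return any(abs(v) > thresh for row in resp for v in row)

def classify_line_class(img):
    firing = [name for name, m in MASKS.items() if detects_lines(img, m)]
    if len(firing) == 1:
        return "striped"               # exactly one direction responds
    if len(firing) >= 2:
        return "checkered"             # several directions respond
    return "other"
```

On a synthetic image of horizontal stripes only the horizontal mask fires, so the image is labelled striped; on a grid-like image both the horizontal and vertical masks fire, giving checkered.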

Circle class

Spotted fabric images are retrieved from among all other fabric images using the circle feature. To find circles in a fabric, the striped and checkered fabric images are first separated from all other images. Next, the Circle Hough transform is applied to all remaining fabric images, and the spotted fabrics are detected by thresholding. Spotted fabrics include light and dark circles, which must be accounted for by a parameter of the Hough transform: the radius range, selected between 10 and 30 pixels. The threshold value is 10, and the sensitivity of the Hough transform is set to 0.84 by trial and error. If the Circle Hough transform detects overlapping circles, only one of them is kept and the others are eliminated by thresholding. Note that patterns such as petals and semi-circles can mislead the Hough transform, but the selected threshold values cope with this problem. Table 2 shows the results of applying the proposed algorithm to spotted and petal fabric images.

Table 2. Results of applying the proposed algorithm to spotted and petal fabric images.

Algorithm detection: the three spotted fabrics were detected as spotted; the petals fabric as non-spotted.
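The Circle Hough transform can be sketched with a minimal single-radius accumulator in Python (the paper used MATLAB with a radius range and a sensitivity parameter; both are simplified away here, and the voting step and threshold are illustrative):

```python
import math

def circle_hough(edges, radius):
    """Minimal single-radius circle Hough accumulator: every edge pixel
    votes for all candidate centres lying at `radius` from it."""
    h, w = len(edges), len(edges[0])
    acc = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not edges[y][x]:
                continue
            for deg in range(0, 360, 5):       # vote around the circle
                cy = round(y - radius * math.sin(math.radians(deg)))
                cx = round(x - radius * math.cos(math.radians(deg)))
                if 0 <= cy < h and 0 <= cx < w:
                    acc[cy][cx] += 1
    return acc

def find_circle(edges, radius, thresh):
    """Return the accumulator peak position if it exceeds thresh, else None."""
    acc = circle_hough(edges, radius)
    peak = max(v for row in acc for v in row)
    if peak < thresh:
        return None
    for y, row in enumerate(acc):
        for x, v in enumerate(row):
            if v == peak:
                return (y, x)
```

A ring of edge pixels produces a sharp vote peak at its centre, which thresholding then accepts as a detected circle; a blank edge map produces no peak above the threshold.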

Simple class

First, the edges of all the fabric images are extracted using the LoG filter (Maini and Aggarwal, 2009). Next, the mean of the edge pixels is calculated, and by thresholding, the simple fabric images are separated from the other fabrics. The threshold value is selected as 7


by trial and error. The LoG filter output is almost black on simple fabric images. Table 3 shows the results of applying the proposed algorithm to simple, mixed, and checkered fabric images.

Table 3. Results of applying the proposed algorithm to simple, mixed, and checkered fabric images.

Algorithm detection: simple fabric, simple; mixed fabric, not simple; checkered fabric, not simple.
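The simple-class test can be illustrated in Python (the paper used MATLAB). For brevity this sketch substitutes a plain 3x3 Laplacian kernel for the LoG filter, so the kernel and the default threshold are assumptions, not the paper's exact values:

```python
LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]   # stand-in for the LoG filter

def edge_mean(img):
    """Mean absolute edge response over the image interior."""
    h, w = len(img), len(img[0])
    total, count = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = sum(img[y + j - 1][x + i - 1] * LAPLACIAN[j][i]
                    for j in range(3) for i in range(3))
            total += abs(v)
            count += 1
    return total / count

def is_simple(img, thresh=7):
    """Simple fabrics give a near-black edge map, i.e. a tiny edge mean."""
    return edge_mean(img) < thresh
```

A flat (plain) image has an edge mean of zero and is classified as simple, while a high-contrast patterned image far exceeds the threshold.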

Mixed class

If a fabric image belongs to none of the line, circle, and simple classes, it belongs to the mixed class.

Pattern class

Our database includes fabric images with various patterns, such as pictures of women, men, animals, mathematical expressions, skeletons, hearts, etc. The size and direction of these patterns are fixed. Different methods can be applied to find these patterns, each with its own advantages and disadvantages (Mahalakshmi et al., 2012). In the proposed algorithm, the cross-correlation method is applied to detect the pattern of interest as follows.

1. Calculate the local average of the patterned fabric image I under the M×N mask, according to formula (1):

   Ī(x, y) = (1 / MN) Σ_(i,j) I(x + i, y + j)    (1)

2. Calculate the local intensity deviation of the image by formula (2):

   σ_I(x, y) = sqrt( Σ_(i,j) [I(x + i, y + j) − Ī(x, y)]² )    (2)

3. Calculate the normalized cross-correlation R(x, y) by formula (3):

   R(x, y) = Σ_(i,j) [I(x + i, y + j) − Ī(x, y)] · [P(i, j) − P̄] / (σ_I(x, y) · σ_P)    (3)

   where P is the M×N pattern template, P̄ its mean, and σ_P its deviation.

4. Select the maximum peak of R(x, y) by thresholding, which indicates the presence of the pattern of interest. The threshold value is selected as 0.77. Note that the patterns of all


the fabric images in the database are established in the off-line stage of the

proposed algorithm.
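Normalized cross-correlation template matching can be sketched directly in Python (the paper used MATLAB); the function name is hypothetical, and the exhaustive double loop is kept deliberately plain:

```python
import math

def ncc_match(image, template):
    """Normalised cross-correlation between an image and a template.
    Returns the best score (in [-1, 1]) and its top-left position; the
    peak is then thresholded (0.77 in the paper) to decide whether the
    pattern of interest is present."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    t_mean = sum(sum(r) for r in template) / (th * tw)
    t_dev = [[template[j][i] - t_mean for i in range(tw)] for j in range(th)]
    t_norm = math.sqrt(sum(v * v for r in t_dev for v in r))
    best, pos = -2.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            win = [[image[y + j][x + i] for i in range(tw)] for j in range(th)]
            w_mean = sum(sum(r) for r in win) / (th * tw)
            num = sum((win[j][i] - w_mean) * t_dev[j][i]
                      for j in range(th) for i in range(tw))
            w_norm = math.sqrt(sum((win[j][i] - w_mean) ** 2
                                   for j in range(th) for i in range(tw)))
            if w_norm * t_norm == 0:
                continue               # skip constant (featureless) windows
            score = num / (w_norm * t_norm)
            if score > best:
                best, pos = score, (y, x)
    return best, pos
```

Embedding a small template into a blank image yields a correlation peak of 1.0 at the embedding position, comfortably above the 0.77 threshold.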

Texture class

Texture features can be extracted with various methods, such as the GLCM (Mohanaiah et al., 2013), which applies the grey-level co-occurrence matrix, and the GLDM (Conners and Harlow, 1980), which computes the grey-level difference probability density function of an image. The RABGLD (Yang and Guo, 2011) is another method, extracting texture based on the regional average binary grey-level difference co-occurrence matrix. The wavelet transform (Ruiz et al., 2004) is another feature extraction method, which extracts texture robustly at different scales, and the 2D DFT method (Tao et al., 2003) is also common for feature extraction. However, the Gabor transform (Roslan and Jamil, 2012) is widely reported for texture extraction with high precision and efficiency, as it preserves image information in both the spatial and frequency domains in different directions (Hammouda and Jernigan, 2000).

The Gabor filter bank

The 2D Gabor filter bank is defined by formula (4):

g(x, y) = exp(−(x′² / (2σx²) + y′² / (2σy²))) · cos(2πF x′)    (4)

where x′ = x cos θ + y sin θ and y′ = −x sin θ + y cos θ; F is the central frequency, at which the filter produces its greatest response; σx and σy are the standard deviations of the Gaussian function along the x and y directions; and (x, y) is the position of an image pixel. The values of the Gabor filter bank parameters, selected by trial and error, are shown in Table 4.

Table 4. Experimental frequencies and orientations of the 2D Gabor filter on the database (orientations θ = 0, θ = 120, θ = 240 at frequency F1).
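A real-valued Gabor kernel of the standard form assumed for formula (4) can be generated in Python (the paper worked in MATLAB); the kernel size and parameter values below are illustrative, not the paper's trial-and-error settings:

```python
import math

def gabor_kernel(half, F, theta, sigma_x, sigma_y):
    """Real 2D Gabor kernel: a cosine carrier at central frequency F,
    oriented by theta, modulated by an anisotropic Gaussian envelope.
    `half` is the kernel half-width, so the kernel is (2*half+1) square."""
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)   # rotated axes
            yr = -x * math.sin(theta) + y * math.cos(theta)
            gauss = math.exp(-0.5 * (xr ** 2 / sigma_x ** 2
                                     + yr ** 2 / sigma_y ** 2))
            row.append(gauss * math.cos(2 * math.pi * F * xr))
        kernel.append(row)
    return kernel
```

The kernel peaks at its centre (value 1) and, at θ = 0, is symmetric about the central column, since both the Gaussian envelope and the cosine carrier are even functions.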

Data training

None of the classes introduced so far needs training data; for the texture class, however, training data is essential. The training data comprise 20 images of jean fabrics, 12 images of cotton fabrics, and 8 images of knotted fabrics; a number of these images are shown in Table 5. In this stage, the Gabor transform is applied to all training fabric images, and the extracted Gabor feature vectors in each training category are combined into a single Gabor feature vector for that category.


Table 5. A number of the training data images (three samples each of jean, cotton, and knotted fabrics).

Texture classification

Various classification methods are available. One is the Euclidean distance between the feature vectors of the input image and a training image (Rajam and Valli, 2013): the lower the distance between two images, the higher the similarity. However, this distance is too sensitive for our classification; for example, the Euclidean distances for cotton and jean fabrics come out too close to each other, so the distance alone cannot separate these two fabrics. A more efficient classification method is the K-Nearest Neighbour (KNN) classifier (Kotsiantis et al., 2007), which is used in the proposed algorithm. The parameter k of the KNN classifier is selected as 3 by trial and error.
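A KNN vote on Euclidean distances between feature vectors can be sketched in Python (the paper used MATLAB); the toy feature vectors in the example are hypothetical, standing in for the combined Gabor features:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """k-nearest-neighbour majority vote on Euclidean distance between
    feature vectors; k = 3 as selected in the paper by trial and error.
    train: list of (feature_vector, label) pairs."""
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

With three "jean" vectors clustered near the origin and three "cotton" vectors clustered far away, a query near either cluster is assigned the cluster's label by the 3-neighbour vote.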

Results and Discussion

Experimental results were obtained for 1000 fabric images in the database, comprising 300 patterned, 149 striped, 216 checkered, 25 circled, 99 mixed, 100 jean, 52 cotton, 24 knotted, and 35 simple images. Some of the fabric images were captured by a Canon digital camera #7035, some by a Nokia Lumia 525, and the remainder by a smart phone; a number of the patterned fabric images were also taken from the internet. Images captured by smart phone were resized to 640*359 resolution, and the others to 640*480. The annotation of all database fabric images was carried out in the off-line stage of the proposed algorithm. There were 70 keywords in total in the database, and each fabric image carried between 1 and 7 keywords. The proposed algorithm was implemented on an Intel Core i3 3217U 1.80 GHz laptop with 4 GB DDR3 RAM and programmed in MATLAB 2018a. Table 6 shows the retrieval results of the proposed algorithm for a number of keywords in the pattern class, and Table 7 shows the first three retrieval results for each of the mentioned classes.


Table 6. Retrieval results of the proposed algorithm for a number of keywords in the pattern class (two retrieved samples each for the keywords dog, heart, x number, and cat).

Table 7. The first three retrieval results of the proposed algorithm for each mentioned keyword class. Classes shown: striped, checkered, circled, floral, simple, jean,


cotton, knotted.

Experimental result evaluation

Retrieval evaluation criteria are recall and precision, according to formulas (5) and (6):

Recall = A / (A + B)    (5)

Precision = A / (A + C)    (6)

where A is the number of relevant images retrieved, B the number of relevant images not retrieved, and C the number of irrelevant images retrieved.
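Formulas (5) and (6) translate directly into code; a minimal Python version:

```python
def recall(a, b):
    """Recall = relevant retrieved / all relevant images, formula (5)."""
    return a / (a + b)

def precision(a, c):
    """Precision = relevant retrieved / all retrieved images, formula (6)."""
    return a / (a + c)
```

For example, retrieving 9 of 10 relevant images with no irrelevant hits gives a recall of 0.9 and a precision of 1.0.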

Table 8 shows the retrieval results of the proposed algorithm in terms of precision, recall, and the average retrieval time per image. The pattern class contains 24 different patterns, sorted into three groups: animals, humans, and others. Some fabric images belong to several classes, depending on their pattern. Table 9 shows the retrieval results of the proposed algorithm in terms of precision, recall, and average retrieval time per image for the different groups of the pattern class.

Table 8. Retrieval results of the proposed algorithm based on precision, recall, and average time criteria for each image retrieval.

Kind/pattern   Recall    Precision   Average time/picture (s)
Striped        97.31%    100%        0.2211
Checkered      99.53%    100%        0.2109
Circled        96%       100%        0.5024
Mixed          95.95%    100%        0.6361
Simple         97.14%    100%        0.5142
Jean           90%       98.90%      0.1228
Cotton         98.07%    91.07%      0.1228
Knotted        91.66%    91.66%      0.1228

Table 9. Retrieval results of the proposed algorithm based on precision, recall, and average time for each retrieved image according to different groups in the pattern class.

Group name         Animals   Man       Others
Recall             100%      91.42%    94.65%
Precision          100%      100%      100%
Average time (s)   0.2958    0.1495    0.3518
Patterns (#)       11        3         10
Fabric images (#)  149       35        131



Conclusion

In this study, an algorithm is proposed for fabric image retrieval. A TBIR system is used to eliminate the semantic distance. To improve the user keyword annotation, the precision of the user annotation is automatically confirmed through the retrieval algorithm: when the user keyword annotation contains errors, the display order of the retrieval results changes so that fabric images with confirmed keywords come first, followed by the other retrieved images. To reduce retrieval time, user keywords were divided into different classes, and some predetermined visual features were specified for each class. For each keyword class, methods optimized for precision and speed were employed to improve annotation speed and performance. To improve speed further, no training data was needed when applying a number of keywords to fabric image annotation, and eliminating the training data reduced annotation time and human cost. Finally, the proposed algorithm was implemented on 1000 fabric images with a diversity of patterns. Retrieval results showed a minimum precision of 91% on the cotton and knotted fabric images and 100% precision on the other fabric images, representing the high flexibility of the algorithm against the high diversity of the fabric images. In summary, the proposed fabric image retrieval algorithm achieved high precision and a retrieval time suitable for a real-time system.

REFERENCES

[1] Akbas, E., Yarman Vural, F.T. (2007): Automatic image annotation by ensemble of visual descriptors. – International Conference on Computer Vision and Pattern Recognition (CVPR'07) 8p.
[2] Bhargava, A., Shekhar, S., Arya, K.V. (2014): An object based image retrieval framework based on automatic image annotation. – International Conference on Industrial and Information Systems (ICIIS) 6p.
[3] Chang, S.F., Ellis, D., Jiang, W., Lee, K., Yanagawa, A., Loui, A.C., Luo, J. (2007): Large-scale multimodal semantic concept detection for consumer video. – In Proceedings of the int. workshop on multimedia information retrieval 9p.
[4] Conners, R.W., Harlow, C.A. (1980): A theoretical comparison of texture algorithms. – Pattern Analysis and Machine Intelligence 3: 204-222.
[5] Datta, R., Joshi, D., Li, J., Wang, J.Z. (2008): Image retrieval: Ideas, influences, and trends of the new age. – ACM Computing Surveys (CSUR) 40(2): 1-60.
[6] Dharani, T., Aroquiaraj, I.L. (2013): A survey on content based image retrieval. – International Conference on Pattern Recognition, Informatics and Mobile Engineering (PRIME) 5p.
[7] Duda, R.O., Hart, P.E. (1972): Use of the Hough transformation to detect lines and curves in pictures. – Communications of the ACM 15(1): 11-15.
[8] Grangier, D., Bengio, S. (2008): A discriminative kernel-based approach to rank images from text queries. – IEEE Transactions on Pattern Analysis and Machine Intelligence 30(8): 1371-1384.
[9] Guillaumin, M., Mensink, T., Verbeek, J., Schmid, C. (2009): Tagprop: Discriminative metric learning in nearest neighbor models for image auto-annotation. – International Conference on Computer Vision 7p.
[10] Hammouda, K., Jernigan, E. (2000): Texture segmentation using gabor filters. – Canada: Center for Intelligent Machines, McGill University. Available at: https://www.mathworks.com/help/images/texture-segmentation-using-gabor-filters.html


[11] Kotsiantis, S.B., Zaharakis, I., Pintelas, P. (2007): Supervised machine learning: A review of classification techniques. – Informatica 31: 249-268.
[12] Kurniawardhani, A., Minarno, A.E., Bimantoro, F. (2016): Efficient texture image retrieval of improved completed robust local binary pattern. – International Conference on Advanced Computer Science and Information Systems (ICACSIS) 6p.
[13] Kurniawardhani, A., Suciati, N., Arieshanti, I. (2015): Texture feature extraction using improved completed robust local binary pattern for batik image retrieval. – Int. Journal of Advancements in Computing Technology 7(6): 69p.
[14] Li, J., Wang, J.Z. (2008): Real-time computerized annotation of pictures. – IEEE Transactions on Pattern Analysis and Machine Intelligence 30(6): 985-1002.
[15] Mahalakshmi, T., Muthaiah, R., Swaminathan, P., Nadu, T. (2012): Review article: an overview of template matching technique in image processing. – Res. J. Appl. Sci. Eng. Technol. 4(24): 5469-5473.
[16] Maini, R., Aggarwal, H. (2009): Study and comparison of various image edge detection techniques. – Int. Journal of Image Processing (IJIP) 3(1): 1-11.
[17] Mohanaiah, P., Sathyanarayana, P., GuruKumar, L. (2013): Image texture feature extraction using GLCM approach. – Int. Journal of Scientific and Research Publications 3(5): 1-3.
[18] Nazir, A., Ashraf, R., Hamdani, T., Ali, N. (2018): Content based image retrieval system by using HSV color histogram, discrete wavelet transform and edge histogram descriptor. – International Conference on Computing, Mathematics and Engineering Technologies (iCoMET) 6p.
[19] Rajam, I.F., Valli, S. (2013): A survey on content based image retrieval. – Life Science Journal 10(2): 2475-2487.
[20] Roslan, R., Jamil, N. (2012): Texture feature extraction using 2-D Gabor filters. – International Conference on Computer Applications and Industrial Electronics (ISCAIE) 6p.
[21] Ruiz, L.A., Fdez-Sarría, A., Recio, J.A. (2004): Texture feature extraction for classification of remote sensing data using wavelet decomposition: a comparative study. – International Archives of Photogrammetry and Remote Sensing 35(part B): 1109-1115.
[22] Tao, Y., Muthukkumarasamy, V., Verma, B., Blumenstein, M. (2003): A texture extraction technique using 2D-DFT and Hamming distance. – International Conference on Computational Intelligence and Multimedia Applications (ICCIMA) 5p.
[23] Wakchaure Sujit, R., Shamkuwar Devendra, O. (2014): A survey of tag completion for efficient image retrieval based on TBIR. – Int. Journal of Advanced Research in Computer Eng. & Technology (IJARCET) 3(3): 996-1000.
[24] Wang, C., Zhang, L., Zhang, H.J. (2008): Learning to reduce the semantic gap in web image retrieval and annotation. – International Conference in Proceedings on Research and Development in Information Retrieval 7p.
[25] Wang, C., Jing, F., Zhang, L., Zhang, H.J. (2007): Content-based image annotation refinement. – International Conference on Computer Vision and Pattern Recognition (CVPR'07) 8p.
[26] Wu, L., Jin, R., Jain, A.K. (2013): Tag completion for image retrieval. – Pattern Analysis and Machine Intelligence 35(3): 716-727.
[27] Yang, J., Guo, J. (2011): Image texture feature extraction method based on regional average binary gray level difference co-occurrence matrix. – International Conference on Virtual Reality and Visualization (ICVRV) 3p.
[28] Yang, K., Hua, X.S., Wang, M., Zhang, H.J. (2011): Tag tagging: towards more descriptive keywords of image content. – IEEE Transactions on Multimedia 13(4): 662-673.
[29] Zha, Z.J., Mei, T., Wang, J., Wang, Z., Hua, X.S. (2009): Graph-based semi-supervised learning with multiple labels. – Journal of Visual Communication and Image Representation 20(2): 97-103.


[30] Zhu, G., Yan, S., Ma, Y. (2010): Image tag refinement towards low-rank, content-tag prior and error sparsity. – Proceedings of the int. conference on Multimedia 9p.