Open Access

ARTICLE

A Pattern Classification Model for Vowel Data Using Fuzzy Nearest Neighbor

Monika Khandelwal1, Ranjeet Kumar Rout1, Saiyed Umer2, Kshira Sagar Sahoo3, NZ Jhanjhi4,*, Mohammad Shorfuzzaman5, Mehedi Masud5

1 Department of Computer Science and Engineering, National Institute of Technology Srinagar, Hazratbal, 190006, Jammu and Kashmir, India
2 Department of Computer Science and Engineering, Aliah University, Kolkata, India
3 Department of Computer Science and Engineering, SRM University, Amaravati, 522240, AP, India
4 School of Computer Science SCS, Taylor’s University, Subang Jaya, 47500, Malaysia
5 Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif, 21944, Saudi Arabia

* Corresponding Author: NZ Jhanjhi. Email: email

Intelligent Automation & Soft Computing 2023, 35(3), 3587-3598. https://doi.org/10.32604/iasc.2023.029785

Abstract

Classification of patterns is a crucial area of research and applications. Pattern classification using fuzzy set theory has attracted great interest because of its ability to interpret the parameters. One problem observed in the fuzzification of an unknown pattern is that importance is given only to the known patterns, not to their features; yet the features of the patterns play an essential role when the patterns overlap. In this paper, an optimal fuzzy nearest neighbor model is introduced in which a fuzzification process is carried out for the unknown pattern using its k nearest neighbors. The fuzzification process yields a membership matrix in which the features of the unknown pattern are fuzzified. Classification results are verified on a completely labelled Telugu vowel data set, and the accuracy is compared with different models and with the fuzzy k-nearest neighbor algorithm. The proposed model gives 84.86% accuracy on a 50% training data set and 89.35% accuracy on an 80% training data set. The proposed classifier learns well from a small amount of training data, resulting in an efficient and faster approach.

Keywords


1  Introduction

Pattern classification has been a challenging task for decades. It is used in many practical applications (such as pattern recognition, artificial intelligence, statistics, financial gaming, organizational data, vision analysis, and medicine) [1]. There are many critical aspects of the pattern classification problem, such as classification accuracy, computational time, learnability, generality, and interpretation of parameters. Many approaches exist for creating pattern classifiers, such as neural networks, statistical models, fuzzy logic systems, and evolutionary systems [2–5]. In the applications mentioned above, the classification of patterns is essential. But whenever the classification data sets are highly overlapping and the boundaries of the classes are imprecisely defined, classification becomes a challenging task, for example, in land cover classification, remote sensing images, or vowel classification [6,7].

In various pattern recognition problems, the categorization of an input pattern depends upon the dataset, where the actual sample size for every class is limited and perhaps not indicative of the actual probability distributions, regardless of whether they are known. In these conditions, numerous techniques rely on distance or similarity in feature space, for example, discriminant analysis and clustering [8–10]. In other problems, machine learning methods such as neural networks [11], the k-nearest neighbour algorithm [12], support vector machines [13], and convolutional neural networks [14,15] are used for classification. Various fuzzy classifiers for different problems have been developed. Das et al. [16] developed a neuro-fuzzy model to classify medical diseases, i.e., liver diseases, cardiovascular diseases, thyroid disorders, diabetes, cancer, and heart diseases, with the help of a neural network. A feature reduction model with fuzzification was developed by Das et al. [17] to address the problem of data classification. Other methods combining machine learning with neuro-fuzzy models are surveyed by Shihabudheen et al. [18]. Patel et al. [19] presented a hybrid approach for imbalanced data classification by combining the fuzzy k-nearest neighbor with an adaptive k-nearest neighbor approach. Their method assigns different k values to different classes based on their sizes.

Meher [1] proposed a model for pattern classification by combining neighborhood rough sets and Pawlak’s rough set theory with fuzzy sets. Ghosh et al. [20] proposed a model based on neuro-fuzzy classification for fully and partially labeled data utilizing the feed-forward neural network algorithm. A neuro-fuzzy system was presented by Meher [21] for pattern classification by extracting features using rough set theory. An extreme learning machine was then used to efficiently classify partially and fully labeled data and remote sensing pictures. Pal et al. [22] developed a rough-fuzzy model depending on granular computing to classify fully and partially labeled data using rough set theory. The neuro-fuzzy model was also used in various other problems, i.e., to analyse biomedical data [23], Parkinson’s disease diagnosis [24], and analysis of gene expression data [25].

According to the k-nearest neighbor algorithm, the class labels of the k closest patterns decide the input pattern's label. The k closest patterns are selected based on a distance such as the Euclidean or Manhattan distance [26]. The k-nearest neighbor method is suboptimal, although it has been demonstrated that, in the infinite-sample condition, the error of the 1-nearest neighbor rule is upper bounded by no more than double the optimal Bayes error, and that as k rises this error approaches the optimal value [27]. Keller et al. [26] identified two problems in the k-nearest neighbor rule. The first is that all k nearest neighbors are considered equally important in assigning a class label to the unknown pattern, which causes a problem where the classification data set overlaps, because atypical patterns are given the same weight as true representatives of the classes. The second is that after an input pattern is allocated to a class, there is no indication of the "strength" of its membership in that class. These problems were addressed by Keller et al. [26] and resolved by using fuzzy set theory in the k-nearest neighbor rule [26,28]: the strength of the input pattern is calculated for each class, and the class with maximum strength is assigned to the input pattern.

In this paper, a model is proposed for pattern classification. First, the model finds the nearest neighbors of the input pattern using the k-nearest neighbor algorithm. Next, it computes membership values for the features of the input pattern using fuzzy sets. Then, it applies the product reasoning rule followed by a MAX operation to find the class label of the input pattern. The proposed model's performance is verified against various classification models on the vowel data set, and is also compared with the fuzzy k-nearest neighbor algorithm on the 50% and 80% training data sets. The motivation of this work is to utilize a fuzzification process that captures the importance of the features of input patterns with respect to all classes rather than just one class.

The main contributions of this paper are as follows:

•   A particular problem in the fuzzy k-nearest neighbor algorithm is addressed, i.e., when the data has highly overlapping classes.

•   The identified problem is resolved by using a membership matrix that considers the importance of each feature of a pattern rather than the significance of the pattern alone.

•   A pattern classification model is developed using the k-nearest neighbor algorithm. The model’s accuracy is verified using different classification models with the vowel data set.

The organization of the paper is as follows: Section 2 presents the steps of the proposed model; Section 3 describes the data set, presents the results and analysis, and compares the proposed model with the fuzzy k-nearest neighbor algorithm and with five other classification models; Section 4 draws the conclusion.

2  Framework of the Proposed Model

In this section, a model is proposed to classify unknown patterns; its steps are shown in Fig. 1. Initially, the proposed model finds the nearest neighbors of the unknown pattern using the k-nearest neighbor algorithm. The selected nearest neighbors are then provided as input to the fuzzification process. Finally, the reasoning rule and the defuzzification process are carried out to find the class label of the unknown pattern. The proposed model is implemented in MATLAB. The succeeding subsections describe the classification process and the advantages of using it.


Figure 1: The proposed model flow chart for pattern classification

2.1 Nearest Neighbors of the Input Pattern

In this section, the nearest neighbours of the input pattern are chosen using the $k$ nearest neighbors algorithm, as shown in the first step of Fig. 1, where $k$ is a positive integer. Let $S = \{p_1, p_2, \ldots, p_n\}$ be a set of $n$ completely labelled patterns, where each pattern has $l$ features and a class label. Since the class labels of these patterns are known, they are called known patterns. A pattern $p_i$ is represented as $p_i = \{f_{i,1}, f_{i,2}, f_{i,3}, \ldots, f_{i,l}\}$, where $f_{i,j}$ is the $j$th feature of the pattern $p_i$. A pattern $x$ whose class label is not known is called an unknown pattern, where $x$ is represented as a feature vector $x = \{f_1, f_2, \ldots, f_l\}$. $D = \{d_1, d_2, \ldots, d_n\}$ is a distance vector, where $d_i$ is the Euclidean distance between the $i$th pattern $p_i$ and the unknown pattern $x$:

$$ d_i = \sqrt{\sum_{j=1}^{l} \left(f_{i,j} - f_j\right)^2} \tag{1} $$

The pattern $p_i$ is among the $k$ nearest neighbors of $x$ iff $d_i \le d_j$ is satisfied for at least $n-k$ values of $j$, $1 \le j \le n$. The patterns so selected among the $n$ known patterns are the nearest neighbor patterns. This is also expressed by the $k$ nearest neighbor algorithm, which is as follows:

[Algorithm image: selection of the k nearest neighbors of the input pattern]
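As an illustration, this neighbor-selection step can be sketched in Python (a minimal sketch with hypothetical helper names; the paper's own implementation is in MATLAB):

```python
import numpy as np

def k_nearest_neighbors(S, x, k):
    """Indices of the k known patterns in S closest to x, per Eq. (1)."""
    D = np.sqrt(((S - x) ** 2).sum(axis=1))  # Euclidean distance vector d_1..d_n
    return np.argsort(D)[:k]                 # indices of the k smallest distances

# Toy example: n = 5 known patterns with l = 2 features each
S = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [0.5, 0.0], [4.0, 4.5]])
x = np.array([0.2, 0.1])                     # unknown pattern
idx = k_nearest_neighbors(S, x, k=3)         # indices of the 3 nearest patterns
```

Sorting the full distance vector is the simplest way to realize the "at least n−k times" condition above; for large n a partial selection (e.g., `np.argpartition`) would be faster.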

2.2 Fuzzification of Features of Input Pattern

In this section, fuzzification of the features of a pattern is carried out using the $k$ nearest neighbors; the output of this step is the membership matrix. In the fuzzification method, the membership value of a feature of a known pattern is represented as $\mu_{l,i}^{j}$, the membership value of the $l$th feature of the $i$th pattern for the $j$th class. If $\mu_{l,i}^{j} = 0$, the $l$th feature of the $i$th pattern does not belong to the $j$th class; if $\mu_{l,i}^{j} = 1$, it fully belongs to the $j$th class; and if $0 < \mu_{l,i}^{j} < 1$, it partially belongs to the $j$th class [29–31]. The membership function given by Keller et al. [26] is:

$$ \mu_i(x) = \frac{\sum_{j=1}^{k} \mu_{i,j}\,\bigl(1/\lVert x - x_j \rVert^{2/(m-1)}\bigr)}{\sum_{j=1}^{k} \bigl(1/\lVert x - x_j \rVert^{2/(m-1)}\bigr)} \tag{2} $$

where $\mu_i(x)$ is the membership value of the unknown pattern $x$ for the $i$th class and $\mu_{i,j}$ is the membership value of the $j$th neighbor pattern for the $i$th class. $\lVert x - x_j \rVert$ is the Euclidean distance between $x$ and the neighbor $x_j$, and the variable $m$ decides how strongly the distance is weighted. In this function, however, the importance of the features of the nearest neighbors is neglected; this is overcome by the membership matrix, which gives the membership degrees of the features of an input pattern to the different classes by utilizing fuzzy sets.
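Keller's rule in Eq. (2) can be sketched as follows (an illustrative Python sketch, not the authors' code; crisp one-hot neighbor memberships are assumed for the example):

```python
import numpy as np

def keller_membership(neighbors, u, x, m=1.1):
    """Class memberships of unknown pattern x per Keller's rule, Eq. (2).

    neighbors : (k, l) feature vectors of the k nearest neighbors.
    u         : (k, c) membership of each neighbor in each of c classes.
    Assumes x does not coincide exactly with a neighbor (nonzero distances).
    """
    d = np.linalg.norm(neighbors - x, axis=1)  # ||x - x_j|| for each neighbor
    w = 1.0 / d ** (2.0 / (m - 1.0))           # inverse-distance weights
    return (u * w[:, None]).sum(axis=0) / w.sum()

# Two neighbors of class 0 close to x, one neighbor of class 1 far away
neighbors = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0]])
u = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # crisp (one-hot) labels
mu = keller_membership(neighbors, u, x=np.array([0.05, 0.05]))
```

With a small m such as 1.1 the exponent 2/(m−1) is large, so the nearest neighbors dominate the weighted average almost completely.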

$$ \lambda_{r,s}(x) = \frac{\sum_{i=1}^{k} \mu_{r,i}^{s}\,\bigl(1/\lvert f_{i,r} - f_{x,r} \rvert^{2/(m-1)}\bigr)}{\sum_{i=1}^{k} \bigl(1/\lvert f_{i,r} - f_{x,r} \rvert^{2/(m-1)}\bigr)} \tag{3} $$

where $\lvert f_{i,r} - f_{x,r} \rvert$ is the absolute difference between the $r$th feature of the $i$th pattern among the $k$ nearest neighbors and the $r$th feature of the unknown pattern $x$, and $\mu_{r,i}^{s}$ is the membership value of the $r$th feature of the $i$th neighbor for the $s$th class. $\lambda_{r,s}(x)$ is the membership value of the $r$th feature of the unknown pattern $x$ for the $s$th class. Therefore, if the pattern has $l$ features and there are $c$ classes, the membership matrix has $l$ rows and $c$ columns. The membership matrix $M$ of fuzzified inputs is represented as:

$$ M = \begin{bmatrix} \lambda_{1,1}(x) & \lambda_{1,2}(x) & \cdots & \lambda_{1,c}(x) \\ \lambda_{2,1}(x) & \lambda_{2,2}(x) & \cdots & \lambda_{2,c}(x) \\ \vdots & \vdots & \ddots & \vdots \\ \lambda_{l,1}(x) & \lambda_{l,2}(x) & \cdots & \lambda_{l,c}(x) \end{bmatrix} \tag{4} $$

For mathematical tractability, the sum of a feature's membership values over the $c$ classes must be equal to one. Therefore, for the $r$th feature:

$$ \sum_{j=1}^{c} \lambda_{r,j}(x) = 1 \tag{5} $$

For example, when c = 2, membership values near 0.5 indicate that the feature has a high level of membership in both classes, i.e., that it lies in the "bounding area" which separates the classes from each other.
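The feature-wise fuzzification of Eqs. (3)–(5) might look like this in Python (an illustrative sketch; the (k, l, c) encoding of the neighbors' feature memberships is an assumption made for the example, and no guard is included for an exact feature match, i.e., a zero difference):

```python
import numpy as np

def membership_matrix(neighbors, fm, x, m=1.1):
    """l x c membership matrix M of Eq. (4) for unknown pattern x.

    neighbors : (k, l) features of the k nearest neighbors.
    fm        : (k, l, c) membership of each neighbor's r-th feature in
                each class (an assumed encoding of mu_{r,i}^s).
    """
    k, l = neighbors.shape
    c = fm.shape[2]
    M = np.zeros((l, c))
    for r in range(l):
        diff = np.abs(neighbors[:, r] - x[r])      # |f_{i,r} - f_{x,r}|
        w = 1.0 / diff ** (2.0 / (m - 1.0))        # Eq. (3) weights
        M[r] = (fm[:, r, :] * w[:, None]).sum(axis=0) / w.sum()
    return M

# k = 3 neighbors, l = 2 features, c = 2 classes, crisp feature memberships
neighbors = np.array([[0.0, 1.0], [0.2, 0.9], [1.0, 0.0]])
labels = np.array([0, 0, 1])
fm = np.zeros((3, 2, 2))
for i, cls in enumerate(labels):
    fm[i, :, cls] = 1.0        # each feature inherits its pattern's class
M = membership_matrix(neighbors, fm, x=np.array([0.1, 0.95]))
```

Because each row of `fm` sums to one over the classes, each row of the resulting matrix also sums to one, matching the constraint in Eq. (5).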

2.3 Reasoning Rule

The output of the fuzzification process is a membership matrix M (as described in [7,32]), which assigns membership degrees of the features of a pattern to the distinct classes using fuzzy sets [28]. Aggregation operations on fuzzy sets merge multiple fuzzy sets in a customized manner to form a single fuzzy set. For problems in which all features contribute adequately to the desired class and cooperate in the decision-making process, the basic aggregation operations of union and intersection are often unsatisfactory [33]. One option is the minimum reasoning rule (RR) applied over the attributes of the membership matrix, as described in Li et al. [7]. However, Ghosh et al. [32] have explained and illustrated the advantage of the product RR over the minimum RR on different real-life datasets, so in this work the product RR is used to find the class label. After applying the product RR, the output is obtained in the form of a vector given by:

$$ M' = [\delta_1, \delta_2, \ldots, \delta_c], \qquad \delta_j = \prod_{r=1}^{l} \lambda_{r,j}(x) \tag{6} $$

for $j = 1, 2, \ldots, c$, where $\lambda_{r,j}(x)$ is the membership value of the $r$th feature of the unknown pattern $x$ for the $j$th class.
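As a tiny numeric illustration of the product rule in Eq. (6), with arbitrarily chosen membership values:

```python
import numpy as np

# Membership matrix M with l = 3 features (rows) and c = 2 classes (columns)
M = np.array([[0.8, 0.2],
              [0.6, 0.4],
              [0.9, 0.1]])

# Product reasoning rule: delta_j is the product of column j over all features
delta = M.prod(axis=0)   # [0.8*0.6*0.9, 0.2*0.4*0.1] = [0.432, 0.008]
```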

2.4 Rescaling and Defuzzification

Finally, the rescaled vector $M_{op}$ is obtained, and a hard decision is made by applying a MAX function to defuzzify the class associated with the vector: the input pattern is assigned the class label of the nearest neighbors that has the highest membership value.

$$ M_{op} = [\delta'_1, \delta'_2, \ldots, \delta'_c], \qquad \delta'_j = \frac{\delta_j}{\sum_{j=1}^{c} \delta_j} \tag{7} $$

If $\delta'_i \ge \delta'_j$ for $j = 1$ to $c$ and $j \ne i$, then the unknown pattern belongs to the $i$th class, where $1 \le i \le c$. Here, $\delta'_j$ is the membership value for the $j$th class. The MAX defuzzification technique is commonly used in classification problems to provide a hard class label. Other defuzzification techniques, such as the mean of maximum and the centroid of area, are employed in other problems (for example, in control system problems [34]). The fuzzy class label can also be used for higher-level analysis, though normalization of the result may then be required.
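Continuing the same arbitrary numbers, the rescaling and MAX defuzzification of Eq. (7) amount to:

```python
import numpy as np

delta = np.array([0.432, 0.008])     # product-RR output delta_j from Eq. (6)
delta_op = delta / delta.sum()       # rescaled vector M_op of Eq. (7)
label = int(np.argmax(delta_op))     # MAX defuzzification: hard class label
```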

3  Result and Discussion

In this section, we discuss the data set and the performance of the proposed model. The performance of the presented model is reported as percentage accuracy (PA), where percentage accuracy is the proportion of the testing data that the proposed model correctly classifies: the known class labels of the testing data are compared with the classified results from the proposed model. The training and testing data are selected at random by partitioning the data set into two parts; the testing data is independent of the training data.

3.1 Data Set

This paper verifies the proposed model on the benchmark Telugu vowel data set [35]. The data set is completely labeled and comprises 871 patterns with three features and six highly overlapping classes. The features of the patterns are derived from sounds uttered by human beings. The overlapping nature of these classes can be seen in Fig. 2 of the vowel data set [1]. It has been observed that about 50% of the boundary region of class 5 (/e/) overlaps with neighboring class boundaries, for example, class 1 (//), class 3 (/i/) and class 6 (/o/).


Figure 2: Visualization of overlapping classes by using projection over F1-F2 plane of vowel data [1]

3.2 Proposed Model’s Performance at the Varying Percentage of Training Data

The performance evaluated for different percentages of training data is illustrated in Tab. 1: for each percentage of training data, the percentage accuracy of classification has been evaluated at m = 1.1 and k = 5. For better visualization, the corresponding bar chart is shown in Fig. 3. From Fig. 3, it is evident that as the percentage of training data increases, the accuracy in classifying the testing data also increases.

[Table 1: percentage accuracy of the proposed model at different percentages of training data (image)]


Figure 3: The proposed model performance with vowel data set

3.3 Comparison of the Proposed Model with Various Classification Models

The proposed model's performance is compared with various classification models. The models listed below have benchmark accuracies for the vowel data set at 50% and 80% training data [35–37]. Hence, the same benchmark models have been used for the performance analysis on the 50% and 80% training data sets:

(a)   Models used on 50% training data set

Model 1: Low, medium, and high (LMH) fuzzification (Meher [1]),

Model 2: LMH with fuzzy product aggregation reasoning rule (FPARR) classification (Meher [1]),

Model 3: Neuro-fuzzy (NF) classifier (Ghosh et al. [20]),

Model 4: LMH and Pawlak’s rough set theory with FPARR (Meher [1]),

Model 5: LMH and neighborhood rough set with FPARR (Meher [1]),

Model 6: A pattern classification model for vowel data using fuzzy nearest neighbor (This model).

(b)   Models used on 80% training data set

Model 1: Neuro-fuzzy (NF) classifier (Ghosh et al. [20]),

Model 2: Class dependent fuzzification with Pawlak's rough set feature selection (Pal et al. [22]),

Model 3: Class dependent fuzzification with neighborhood rough set (NRS) feature selection (Pal et al. [22]),

Model 4: NRS fuzzification and neural network classifier with extreme learning machine algorithm (Meher [21]),

Model 5: SSV decision tree (Duch et al. [37]),

Model 6: A pattern classification model for vowel data using fuzzy nearest neighbor (This model).

The performance of all the classification models on the 50% and 80% training data sets is shown in Tab. 2, where percentage accuracies have been calculated for each model at the respective percentages of training data. From Figs. 4a–4b, it is visible that the percentage accuracy of model 6 is the highest of the six models on both the 50% and 80% training data sets. The experimental analysis demonstrates the efficiency of the models and shows that the accuracy of the proposed model is superior to the previous models at m = 1.1 and k = 5 on the vowel data set.

[Table 2: percentage accuracy of the six classification models on the 50% and 80% training data sets (image)]


Figure 4: The comparison of proposed model with previous classification models: (a) on 50% training data set (b) on 80% training data set

3.4 Comparison of the Proposed Model with Fuzzy k-Nearest Neighbor Algorithm

The percentage accuracy of the presented model is compared with the fuzzy k-nearest neighbor algorithm proposed by Keller et al. [26] on the 50% and 80% training data sets. The accuracy of the presented model is calculated using a random subsampling technique, which randomly splits the data set into training and test data. For each split, the model is trained on the training data and its accuracy is estimated on the test data; the resulting accuracies are then averaged over the splits. A comparison of the proposed model with the fuzzy k-nearest neighbor algorithm over such splits is illustrated in Tab. 3.
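The random-subsampling protocol described above can be sketched as follows (a generic sketch; `classify` is a hypothetical stand-in for any classifier, here a simple 1-nearest-neighbor rule on toy data, not the authors' model):

```python
import numpy as np

def subsampling_accuracy(classify, X, y, train_frac=0.5, n_splits=10, seed=0):
    """Average percentage accuracy over random train/test splits."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_splits):
        perm = rng.permutation(len(X))        # random partition of the data
        cut = int(train_frac * len(X))
        tr, te = perm[:cut], perm[cut:]
        pred = classify(X[tr], y[tr], X[te])
        accs.append(np.mean(pred == y[te]) * 100.0)
    return float(np.mean(accs))               # averaged over the splits

def one_nn(X_train, y_train, X_test):
    """Predict each test point's label from its single nearest training point."""
    return np.array([y_train[np.argmin(np.linalg.norm(X_train - p, axis=1))]
                     for p in X_test])

# Two well-separated toy classes: 1-NN classifies them almost perfectly
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(10, 0.1, (10, 2))])
y = np.array([0] * 10 + [1] * 10)
pa = subsampling_accuracy(one_nn, X, y, train_frac=0.5, n_splits=5)
```

Averaging over several random splits, rather than relying on a single split, reduces the variance of the reported percentage accuracy.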

[Table 3: split-wise percentage accuracy of the proposed model and the fuzzy k-nearest neighbor algorithm (image)]

Tab. 3 shows that the average accuracy of the fuzzy k-nearest neighbor algorithm at 50% and 80% training data is 84.72477% and 85.05747%, respectively, while the average accuracy of the proposed model on the 50% and 80% training data sets is 85.27523% and 88.50575%, respectively. The proposed model thus has better accuracy than the fuzzy k-nearest neighbor algorithm on both data sets. The results of the individual splits are shown in Figs. 5a–5b. The experimental analysis demonstrates the model's efficiency: the performance accuracy of the proposed model is superior to that of the fuzzy k-nearest neighbor algorithm at m = 1.1 and k = 5 on the vowel data set.


Figure 5: Performance comparison of the presented classification model and fuzzy k-nearest neighbor algorithm with the vowel data set: (a) for 50% training data set at k = 5 (b) for 80% training data set at k = 5

4  Conclusion

A pattern classification model for vowel data using fuzzy set theory has been proposed, exploiting the advantages of an explicit fuzzy classification technique to improve performance. The model explores the collective benefits of these techniques, which provide better class-partition detail, helpful for significantly overlapping data sets. The proposed model generates a membership matrix that represents the importance of the features of input patterns with respect to all classes rather than just one class; as a result, the ability to generalize is improved. The efficiency of the proposed model was measured through percentage accuracy (PA) on a completely labeled vowel data set. The classification accuracy of the proposed model was also compared with previous classification models and with the fuzzy k-nearest neighbor algorithm. The proposed model gives 84.86% accuracy on the 50% training data set and 89.35% accuracy on the 80% training data set. The ability of the proposed model to learn from a small fraction of the training data makes it applicable to tasks with large numbers of features and classes. This work can also be extended to organizational data, financial gaming, statistics, etc.

Data Availability Statement: The benchmark Telugu vowel data set used in this paper is taken from [35] and is available in the GitHub repository https://github.com/Monika01p/Telugu-Vowel-Data-set.

Funding Statement: This work was supported by the Taif University Researchers Supporting Project Number (TURSP-2020/79), Taif University, Taif, Saudi Arabia.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

1. S. K. Meher, "Explicit rough–fuzzy pattern classification model," Pattern Recognition Letters, vol. 36, pp. 54–61, 2014.

2. S. Haykin, "Neural networks: A comprehensive foundation," 2004.

3. L. Kuncheva, "Fuzzy classifier design," Springer Science & Business Media, vol. 49, 2000.

4. S. Derivaux, G. Forestier, C. Wemmert and S. Lefevre, "Supervised image segmentation using watershed transform, fuzzy classification and evolutionary computation," Pattern Recognition Letters, vol. 31, no. 15, pp. 2364–2374, 2010.

5. S. Umer, P. P. Mohanta, R. K. Rout and H. M. Pandey, "Machine learning method for cosmetic product recognition: A visual searching approach," Multimedia Tools and Applications, vol. 80, no. 28, pp. 34997–35023, 2021.

6. F. Melgani, B. A. Al Hashemy and S. M. Taha, "An explicit fuzzy supervised classification method for multispectral remote sensing images," IEEE Transactions on Geoscience and Remote Sensing, vol. 38, no. 1, pp. 287–295, 2000.

7. C. Li, J. Zhou, Q. Li and X. Xiang, "A fuzzy cluster algorithm based on mutative scale chaos optimization," in Int. Symp. on Neural Networks, Berlin, Germany, pp. 259–267, 2008.

8. S. Russell and P. Norvig, "Artificial intelligence: A modern approach," Prentice Hall, 2002.

9. A. Iosifidis, A. Tefas and I. Pitas, "Multi-view action recognition based on action volumes, fuzzy distances and cluster discriminant analysis," Signal Processing, vol. 93, no. 6, pp. 1445–1457, 2013.

10. R. K. Rout, S. S. Hassan, S. Sindhwani, H. M. Pandey and S. Umer, "Intelligent classification and analysis of essential genes using quantitative methods," ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), vol. 16, no. 1s, pp. 1–21, 2020.

11. M. Khandelwal, D. K. Gupta and P. Bhale, "DoS attack detection technique using back propagation neural network," in Int. Conf. on Advances in Computing, Communications and Informatics (ICACCI), Jaipur, Rajasthan, India, pp. 1064–1068, 2016.

12. T. Cover and P. Hart, "Nearest neighbor pattern classification," IEEE Transactions on Information Theory, vol. 13, no. 1, pp. 21–27, 1967.

13. M. Khandelwal, R. K. Rout and S. Umer, "Protein-protein interaction prediction from primary sequences using supervised machine learning algorithm," in 12th Int. Conf. on Cloud Computing, Data Science & Engineering (Confluence), Noida, India, pp. 268–272, 2022.

14. S. Umer, R. Mondal, H. M. Pandey and R. K. Rout, "Deep features based convolutional neural network model for text and non-text region segmentation from document images," Applied Soft Computing, vol. 113, pp. 107917, 2021.

15. S. Umer, A. Sardar, B. C. Dhara, R. K. Rout and H. M. Pandey, "Person identification using fusion of iris and periocular deep features," Neural Networks, vol. 122, pp. 407–419, 2020.

16. H. Das, B. Naik and H. S. Behera, "Medical disease analysis using neuro-fuzzy with feature extraction model for classification," Informatics in Medicine Unlocked, vol. 18, pp. 100288, 2020.

17. H. Das, B. Naik and H. S. Behera, "A hybrid neuro-fuzzy and feature reduction model for classification," Advances in Fuzzy Systems, vol. 2020, pp. 4152049:1–4152049:15, 2020.

18. K. V. Shihabudheen and G. N. Pillai, "Recent advances in neuro-fuzzy system: A survey," Knowledge-Based Systems, vol. 152, pp. 136–162, 2018.

19. H. Patel and G. S. Thakur, "An improved fuzzy k-nearest neighbor algorithm for imbalanced data using adaptive approach," IETE Journal of Research, vol. 65, no. 6, pp. 780–789, 2019.

20. A. Ghosh, B. U. Shankar and S. K. Meher, "A novel approach to neuro-fuzzy classification," Neural Networks, vol. 22, no. 1, pp. 100–109, 2009.

21. S. K. Meher, "Efficient pattern classification model with neuro-fuzzy networks," Soft Computing, vol. 21, no. 12, pp. 3317–3334, 2017.

22. S. K. Pal, S. K. Meher and S. Dutta, "Class-dependent rough-fuzzy granular space, dispersion index and classification," Pattern Recognition, vol. 45, no. 7, pp. 2690–2707, 2012.

23. H. Das, B. Naik, H. S. Behera, S. Jaiswal, P. Mahato et al., "Biomedical data analysis using neuro-fuzzy model with post-feature reduction," Journal of King Saud University-Computer and Information Sciences, 2020.

24. H. L. Chen, C. C. Huang, X. G. Yu, X. Xu, X. Sun et al., "An efficient diagnosis system for detection of Parkinson's disease using fuzzy k-nearest neighbor approach," Expert Systems with Applications, vol. 40, no. 1, pp. 263–271, 2013.

25. M. Khashei, A. Z. Hamadani and M. Bijari, "A fuzzy intelligent approach to the classification problem in gene expression data analysis," Knowledge-Based Systems, vol. 27, pp. 465–474, 2012.

26. J. M. Keller, M. R. Gray and J. A. Givens, "A fuzzy k-nearest neighbor algorithm," IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-15, no. 4, pp. 580–585, 1985.

27. A. Kontorovich and R. Weiss, "A Bayes consistent 1-NN classifier," in Artificial Intelligence and Statistics, PMLR, San Diego, California, USA, pp. 480–488, 2015.

28. L. A. Zadeh, "Fuzzy sets," in Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems: Selected Papers by Lotfi A Zadeh, USA: World Scientific, vol. 6, pp. 19–34, 1996.

29. S. S. Hassan, R. K. Rout, K. S. Sahoo, N. Jhanjhi, S. Umer et al., "A vicenary analysis of SARS-CoV-2 genomes," Computers, Materials & Continua, pp. 3477–3493, 2021.

30. R. K. Rout, S. S. Hassan, S. Sheikh, S. Umer, K. S. Sahoo et al., "Feature-extraction and analysis based on spatial distribution of amino acids for SARS-CoV-2 protein sequences," Computers in Biology and Medicine, vol. 141, pp. 105024, 2022.

31. S. K. Sahu, D. P. Mohapatra, J. K. Rout, K. S. Sahoo, Q. Pham et al., "A LSTM-FCNN based multi-class intrusion detection using scalable framework," Computers & Electrical Engineering, vol. 99, pp. 107720, 2022.

32. A. Ghosh, S. K. Meher and B. U. Shankar, "A novel fuzzy classifier based on product aggregation operator," Pattern Recognition, vol. 41, no. 3, pp. 961–971, 2008.

33. H. J. Zimmermann, "Fuzzy set theory and its applications," Springer Science & Business Media, 2011.

34. A. V. Patel, "Simplest fuzzy PI controllers under various defuzzification methods," International Journal of Computational Cognition, vol. 3, no. 1, pp. 21–34, 2005.

35. S. K. Pal and D. D. Majumder, "Fuzzy sets and decision making approaches in vowel and speaker recognition," IEEE Transactions on Systems, Man, and Cybernetics, vol. 7, no. 8, pp. 625–629, 1977.

36. S. K. Pal and S. Mitra, "Multilayer perceptron, fuzzy sets, and classification," IEEE Transactions on Neural Networks, vol. 3, no. 5, 1992.

37. W. Duch and Y. Hayashi, "Computational intelligence methods and data understanding," Studies in Fuzziness and Soft Computing, vol. 54, pp. 256–270, 2000.




This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.