Quantum Particle Swarm Optimization Based Convolutional Neural Network for Handwritten Script Recognition

Abstract: Even though several advances have been made in recent years, handwritten script recognition remains a challenging task in the pattern recognition domain. This field has gained much interest lately due to its diverse application potential. Nowadays, different methods are available for automatic script recognition. Among the reported script recognition techniques, deep neural networks have achieved impressive results and outperformed classical machine learning algorithms. However, designing such networks from scratch incurs a significant amount of trial and error, which renders the process infeasible in practice: it often requires manual intervention and domain expertise, consuming substantial time and computational resources. To alleviate this shortcoming, this paper proposes a new neural architecture search approach based on meta-heuristic quantum particle swarm optimization (QPSO), which is capable of automatically evolving meaningful convolutional neural network (CNN) topologies. Computational experiments have been conducted on eight different datasets belonging to three popular Indic scripts, namely Bangla, Devanagari, and Dogri, consisting of handwritten characters and digits. The results show that the proposed QPSO-CNN algorithm outperforms the classical and state-of-the-art methods with faster prediction and higher accuracy.


Introduction
With the rapid development and exponential usage of imaging technology, digital cameras, and other intelligent devices, the need for automatic character recognition in document images has drawn the attention of many researchers in this domain. Extensive research is available on printed character recognition, and the recognition of printed characters is now widely considered a solved problem. However, the recognition of handwritten characters is still a challenging task in the field of pattern recognition. The challenging part of handwritten character recognition is the diversity in individual writing styles, patterns, and in the size and thickness of characters [1]. Even the handwriting style of the same person may vary at different times. The already difficult task of handwritten character recognition becomes more complicated in the case of stylistically distinct Indic scripts. Over the past decades, a large number of research studies have been published on the recognition of handwritten characters in scripts like Arabic, Chinese, Japanese, and Latin. However, research on handwritten Indic scripts is still in its infancy due to several challenging issues [2]: most Indic scripts have an unconstrained language domain, complex character structures, large character sets, and many similarly shaped characters.
Deep neural networks (DNN) have demonstrated remarkable performance in recent years on pattern recognition problems [3], for instance, handwritten character recognition. Specifically, deep convolutional neural networks (CNN) have witnessed tremendous achievements and proven to be particularly powerful in this domain [4]. Nonetheless, designing a meaningful CNN topology remains a cumbersome, error-prone, and time-consuming process. Most successful CNN architectures, such as DenseNet [5], ResNet [6], Inception [7], and AlexNet [8], were manually handcrafted. Designing a successful CNN topology from scratch is a complex and meticulous process that requires specialists with extensive problem domain knowledge. To deal with this unwieldy process, one can take inspiration from nature and automatically evolve meaningful CNN representations. This approach belongs to a field known as neuro-evolution [9], which uses evolutionary computation methods to automatically search for and design CNN topologies without any human expertise.
The rest of this article is organized as follows. Section 2 briefly describes the background of CNN, Binary Particle Swarm Optimization (BPSO), Quantum Computing (QC) and summarizes the related work. Section 3 delineates the proposed algorithm. Section 4 outlines the experimental design, which includes a brief description of the datasets, algorithm parameters, and implementation details. Section 5 presents the computational results and compares the performance of the proposed algorithm with the traditional state-of-the-art techniques. Finally, Section 6 concludes the article.

Background

Convolutional Neural Networks
CNN was first introduced by LeCun et al. [10] in 1998 and has been used in many applications [11][12][13][14]. In a CNN, the feature learning unit replaces the traditional feature engineering stage; in fact, CNNs are designed to handle raw data, i.e., data with very little or no pre-processing. The layers of a CNN are arranged consecutively in such a way that the output of one layer is supplied as input to the next. A CNN classifier generally contains three kinds of layers, viz., the convolutional layer, the pooling layer, and the fully-connected layer.

Binary Particle Swarm Optimization
The BPSO, suggested by Kennedy et al. [15] in 1997, allows the conventional Particle Swarm Optimization (PSO) to work in binary spaces. BPSO has essentially the same structure as real-valued PSO, except that the position vector of each particle has a binary representation. Furthermore, the velocity of the m-th particle in the d-th dimension determines the probability that the position of the m-th particle in the d-th dimension takes the value 0 or 1. The velocity is mapped through a logistic sigmoid limiting transformation function S(V_m^d(k)), as illustrated in Eq. (1).

S(V_m^d(k)) = 1 / (1 + exp(−V_m^d(k)))   (1)

In BPSO, the position vector is determined by selecting a number rand uniformly at random from the range [0, 1], as in Eq. (2).
If rand ≥ S(V_m^d(k)), the position of the m-th particle in the d-th dimension at the k-th iteration is set to 0; otherwise, it is set to 1.
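The update rule above can be sketched in a few lines of Python. The function names are illustrative, not from the paper; the sigmoid and sampling rule follow Eqs. (1) and (2) as described in the text.

```python
import math
import random

def sigmoid(v):
    """Logistic limiting transformation S(v) = 1 / (1 + e^(-v))  (Eq. 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def bpso_position_update(velocity):
    """Sample a binary position vector from a velocity vector (Eq. 2).

    For each dimension d: draw rand ~ U[0, 1]; the bit is set to 0 when
    rand >= S(v_d) and to 1 otherwise, matching the rule in the text.
    """
    return [0 if random.random() >= sigmoid(v) else 1 for v in velocity]

# Strongly positive velocities almost always yield 1-bits,
# strongly negative ones almost always yield 0-bits.
bits = bpso_position_update([100.0, -100.0, 0.0])
```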

Quantum Computing
The concept of QC evolved from quantum physics. A quantum bit (Q-bit) is the smallest unit of information in QC [16]. Unlike conventional computing, in which a bit is observed in either state 1 or state 0, a Q-bit may reside in state 1, in state 0, or in a superposition of 0 and 1. With this ability to represent more than two states, QC provides faster and better local exploitation and global exploration of the search space, so algorithms designed using QC are more efficient and powerful.
A Q-bit is denoted by a pair of complex numbers (α, β), in which |α|² and |β|² determine the probabilities of observing the quantum bit in state 0 and state 1, respectively; naturally, the condition |α|² + |β|² = 1 must hold. A Q-bit individual with d dimensions is composed of a vector of d Q-bits.
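A minimal sketch of this representation (function names are illustrative), using the equal-superposition initialization α = β = 1/√2 described later in Section 3.3:

```python
import math

# A Q-bit is a pair (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
# A d-dimensional Q-bit individual is simply a list of d such pairs.
def make_qbit_individual(d):
    """Initialize d Q-bits in equal superposition: alpha = beta = 1/sqrt(2)."""
    a = 1.0 / math.sqrt(2.0)
    return [(a, a) for _ in range(d)]

def prob_of_one(qbit):
    """|beta|^2 -- the probability of observing state 1."""
    _, beta = qbit
    return beta ** 2

q = make_qbit_individual(4)
# Each Q-bit starts with a 50% chance of collapsing to state 1.
```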

Related Work
In the literature, neuro-evolution in its early inception was applied to encode the connection weights and topologies of artificial neural networks (ANN). Stanley et al. [17] initially proposed the Neuro-Evolution of Augmenting Topologies (NEAT) scheme for learning weights and evolving efficient ANN topologies using variable-length chromosomes. Later, Stanley et al. [18] introduced HyperNEAT to overcome the deficiencies of NEAT. Along with the indirect encoding strategy of NEAT, the HyperNEAT approach was also built using connective compositional pattern producing networks (CPPN) [19]. Fernando et al. [20] proposed the differentiable CPPN (DPPN) using a microbial genetic algorithm (GA) in 2016, capable of searching for feasible CNN representations. A large-scale evolution technique for image classifiers, developed by Google in 2017, evolved CNN topologies that achieved state-of-the-art results on several benchmark datasets [21]. In recent times, neuro-evolution based techniques, influenced by the fundamental theory of PSO, have also been used as solution mechanisms for regression [22,23] and image classification tasks [24]. Among these techniques, Wang et al. [25] designed a PSO-based neurocomputing approach with a variable-length encoding strategy to evolve CNN topologies for handwritten recognition problems. Sun et al. [26] designed an algorithm using PSO to create flexible convolutional autoencoders for the classification of images.
CMC, 2022, vol.71, no.3

The Proposed Methodology
In this section, we describe the proposed quantum particle swarm optimization based convolutional neural network (QPSO-CNN) technique in detail. To be specific, the framework of the proposed QPSO-CNN is elucidated in Section 3.1.

Framework
The proposed QPSO-CNN algorithm applies quantum PSO to automatically evolve meaningful CNN architectures. Algorithm 1 manifests the overall framework of the proposed technique. First, the algorithm randomly initializes the position of each particle and the Q-bit individuals. The particles then evolve until the termination condition, for instance, a given number of iterations, is met. Finally, the global best solution is selected and decoded into the corresponding CNN architecture for the final deep training in order to perform the job at hand. During the evolution, each particle is evaluated, and its recognition accuracy is employed as its fitness measure. The pbest and gbest solutions are then updated based on the evaluated fitness. After that, the rotation operator is employed to modify each Q-bit individual, and the position of each particle is updated using the probability amplitudes stored in its Q-bit individual. The initialization fragment of Algorithm 1 reads:

    pbest_m ← X_m  // initialize the personal best of each particle
    X_m_acc, pbest_m_acc ← evaluate_Fitness(X_m, ds_train, t_epoch)
    end while
    gbest ← X_m such that, for all particles n ≠ m, evaluate_Fitness(X_m, ds_train, t_epoch) > evaluate_Fitness(X_n, ds_train, t_epoch)  // initialize the global best of the swarm
    gbest_acc ← X_m_acc
    k ← 1
    while k ≤ iter do ...

Encoding Strategy
In the proposed QPSO-CNN, a binary encoding strategy is used to encode potential CNN architectures into particle vectors. Each particle is composed of a different number of convolutional, pooling, and fully-connected layers; therefore, these layers must be encoded in a single particle vector for the evolution process to proceed. Each particle vector with D dimensions accommodates the details of the CNN layers. More specifically, in the binary encoding scheme, the particle vector consists of x fixed-length binary strings, where each string represents the configuration of a single CNN layer, i.e., its parameters. The parameters of the convolutional layer are the kernel size, the stride size, and the number of feature maps. The parameters of the pooling layer are the pooling window size, the stride size, and the pooling type (maximal or average pooling). Finally, the parameter of the fully-connected layer is the number of neurons.
Depending on the size of the chosen benchmark datasets and the conventions used in the traditional deep learning community, the ranges of all the parameters are elucidated in Tab. 1. On the basis of these ranges, the maximum number of bits required for binary encoding can also be found in Tab. 1; the maximum number of bits defines the length of the binary strings. Furthermore, since every parameter is non-zero, one must be added to the decimal value derived from the binary string during decoding (and, correspondingly, subtracted before encoding). For instance, as depicted in Tab. 1, for the convolutional layer, a kernel size of 4, 64 feature maps, and a stride size of 2 are transformed into the binary strings 011, 00111111, and 1, respectively. After transforming the parameter values into individual binary strings, these strings are concatenated, as illustrated in the summary row of the convolutional layer. The sample binary strings for the other layers are obtained by following the same series of steps, as depicted in Tab. 1. Since the maximum number of bits used to encode a single layer is 12, the binary string of each layer is padded with zeros until its length reaches 12 bits, as illustrated in Tab. 2. The length of the particles is defined in advance during initialization; therefore, to obtain variable-length particles, a disabled layer is employed in the encoded particle. The disabled layer is similar to the other three kinds of layers, except that it has no parameters.
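The offset-by-one encoding can be sketched as follows. The bit widths (3 for the kernel size, 8 for the feature maps, 1 for the stride) are inferred from the worked example in Tab. 1 and are assumptions here; the function names are illustrative.

```python
def encode_param(value, n_bits):
    """Encode a layer parameter as a fixed-width binary string.

    Following the scheme in Tab. 1, one is subtracted before encoding
    (and added back on decoding) so that the all-zero string maps to the
    smallest non-zero parameter value.
    """
    return format(value - 1, "0{}b".format(n_bits))

def decode_param(bits):
    return int(bits, 2) + 1

def encode_conv_layer(kernel_size, n_feature_maps, stride):
    """Concatenate the three parameter strings of a convolutional layer
    (3 + 8 + 1 = 12 bits, the maximum single-layer length)."""
    return (encode_param(kernel_size, 3)
            + encode_param(n_feature_maps, 8)
            + encode_param(stride, 1))

# The worked example from Tab. 1: kernel 4, 64 feature maps, stride 2
# yields "011" + "00111111" + "1".
s = encode_conv_layer(4, 64, 2)
```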

Swarm Initialization
The initialization of the swarm begins with creating individual particles based on the preceding encoding strategy until the pre-determined population size is reached. This process generates M particles with arbitrary CNN architectures, each containing a random number of layers. Following deep learning convention, the first element of each particle is always a convolutional layer. The remaining elements may be any number of convolutional, pooling, and disabled layers until the first fully-connected layer is added. The algorithm ensures that once a fully-connected layer appears, every succeeding layer must be a fully-connected layer or a disabled layer. Finally, the last element of every particle is always a fully-connected layer. Furthermore, during the initialization phase, the values of α and β of each Q-bit individual are specified to be 1/√2 [16]. At initialization, this represents a linear superposition of all states with equal probability and guarantees that each quantum bit is normalized to unity.
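The layer-ordering rules above can be sketched as follows; the layer labels and function name are illustrative stand-ins for the binary-encoded layers of Section 3.2.

```python
import random

def init_particle(min_len, max_len):
    """Sketch of one particle's layer sequence under the ordering rules:
    the first layer is convolutional, the last is fully-connected, and once
    a fully-connected layer appears only 'fc' or 'disabled' may follow."""
    n = random.randint(min_len, max_len)
    layers = ["conv"]
    fc_seen = False
    for _ in range(n - 2):
        if fc_seen:
            layers.append(random.choice(["fc", "disabled"]))
        else:
            layers.append(random.choice(["conv", "pool", "disabled", "fc"]))
            fc_seen = layers[-1] == "fc"
    layers.append("fc")   # the last element is always fully-connected
    return layers
```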

Evaluating Fitness
Once the positions of the particles are obtained, the fitness evaluation is performed by training the particles, each representing a full-fledged CNN architecture, on the training dataset (ds_train) for t_epoch training epochs. CNNs are deep learning models, so exhaustively training them to their final recognition performance would require a large number of training epochs, typically more than 100. This would take considerable time and consume substantial computational resources, and in population-based algorithms this computational burden is even worse. Therefore, in the current work, each particle is trained for only a very small number of epochs, for instance, 2 to 10, and the trained particles are then batch-evaluated using the model_Evaluate(ds_fitness, b_size) function on the dataset ds_fitness, as shown in Algorithm 2. Each evaluated particle returns two metrics, i.e., the loss value and the accuracy. The obtained recognition accuracy is assigned as the fitness value of the particle. Based on the computed fitness, the pbest and gbest solutions are updated to steer the search towards the optimal solution. The evaluation fragment of Algorithm 2 reads:

    acc, loss ← model_Evaluate(ds_fitness, b_size)
    fitness ← acc  // set the accuracy metric as the fitness value of the corresponding particle
    X_m ← update fitness of individual particle X_m in population X
    end for
    return X
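A minimal sketch of this scheme, with `train_model` and `model_evaluate` as hypothetical stand-ins for the framework-specific training and batch-evaluation calls:

```python
def evaluate_fitness(particles, train_model, model_evaluate, t_epoch=2):
    """Sketch of Algorithm 2: each particle's decoded CNN is trained for
    only a few epochs (t_epoch) and then batch-evaluated; the resulting
    accuracy -- not the loss -- becomes the particle's fitness."""
    fitness = []
    for particle in particles:
        model = train_model(particle, epochs=t_epoch)   # cheap, partial training
        acc, loss = model_evaluate(model)               # returns (accuracy, loss)
        fitness.append(acc)                             # accuracy is the fitness
    return fitness
```

The design choice is that a few epochs suffice to predict the *tendency* of an architecture's quality, which is revisited in the Discussion section.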

Updating Q-Bit Individual
The proposed QPSO-CNN algorithm uses a rotation operator to update the Q-bit individual. The rotation operator employs a rotation angle (ϕ) for modifying the position of particles. The magnitude of ϕ directly impacts the convergence speed and search efficiency of the algorithm. Therefore, an appropriate selection of ϕ controls and upholds a good balance between exploration (i.e., global search) and exploitation (i.e., local search) of the search space, and also results in fewer iterations to obtain the optimal solution. Conventionally, the magnitude of ϕ is confined to the range [0.001π, 0.05π] [16]. The magnitude of ϕ is decreased continually from ϕ_max to ϕ_min with each iteration using Eq. (3) to achieve fast convergence:

ϕ(k) = ϕ_max − ((ϕ_max − ϕ_min) / iter) · k   (3)

where iter indicates the maximum number of iterations, k denotes the current iteration, and the values of ϕ_min and ϕ_max are set to 0.001π and 0.05π, respectively. After computing the value of ϕ using Eq. (3), the rotation angle Δϕ is evaluated for each Q-bit using Eq. (4),
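The decay schedule can be sketched as follows; the linear form is an assumption consistent with the description that ϕ decreases from ϕ_max to ϕ_min with each iteration, and the function name is illustrative.

```python
import math

PHI_MIN, PHI_MAX = 0.001 * math.pi, 0.05 * math.pi

def rotation_magnitude(k, iter_max):
    """Linear decay of the rotation magnitude phi from phi_max (at k = 0)
    to phi_min (at k = iter_max) -- a sketch of Eq. (3)."""
    return PHI_MAX - (PHI_MAX - PHI_MIN) * k / iter_max
```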
where pbest_m(k) denotes the personal best of the m-th particle at the k-th iteration, gbest(k) represents the global best position at the k-th iteration, and γ_1m(k) and γ_2m(k) are stipulated using Eq. (5) and Eq. (6).
Finally, the rotation operator is applied to update the D-dimensional Q-bit individual by transforming the α and β values corresponding to each quantum bit, as illustrated in Eq. (7).
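A sketch of this update, assuming the standard quantum rotation gate usually written as Eq. (7) in quantum-inspired algorithms (the exact form in the paper is not reproduced here):

```python
import math

def rotate_qbit(alpha, beta, delta_phi):
    """Rotate the (alpha, beta) amplitudes of one Q-bit by the angle
    delta_phi; normalization |alpha|^2 + |beta|^2 = 1 is preserved."""
    new_alpha = math.cos(delta_phi) * alpha - math.sin(delta_phi) * beta
    new_beta = math.sin(delta_phi) * alpha + math.cos(delta_phi) * beta
    return new_alpha, new_beta
```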

Update Position of Particles
The position vector X_m^d(k + 1) of every m-th particle in the d-th dimension at iteration k + 1 is updated using the probability |β_m^d(k + 1)|² stored in the m-th quantum bit individual:

X_m^d(k + 1) = 1 if r < |β_m^d(k + 1)|², and 0 otherwise,

where r stands for a uniformly distributed pseudo-random number from [0, 1].
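This probabilistic collapse can be sketched as follows (the function name is illustrative):

```python
import random

def sample_position(qbit_individual):
    """Collapse a Q-bit individual (a list of (alpha, beta) pairs) to a
    binary position: dimension d is set to 1 when a uniform draw r falls
    below |beta_d|^2, and to 0 otherwise."""
    return [1 if random.random() < beta ** 2 else 0
            for _, beta in qbit_individual]
```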

Deep Training on Gbest
After the evolution of QPSO-CNN is completed, the global best solution, obtained by picking the best particle in the swarm, is selected for the final deep training. The deep training process is similar to the fitness evaluation process discussed in Section 3.4, apart from the fact that a substantially larger number of epochs, for example, 100 or 200, is employed for training the optimal CNN architecture.

Experimental Design

Datasets Used in the Present Work
The designed QPSO-CNN algorithm has been evaluated on benchmark handwritten character datasets belonging to three popular Indic scripts (Devanagari, Bangla, and Dogri). These scripts are genealogically different from each other and widely used by a majority of people in India [27]. Some typical samples from each chosen dataset are depicted in Fig. 1 for reference, and a summary enclosing the dataset name, category, script type, and training and test set sizes is elucidated in Tab. 3.

Pre-Processing of Datasets
The original handwritten character images in the datasets are not normalized to a uniform size and vary in pixel resolution. Therefore, some pre-processing steps are applied to the datasets before training and testing. The grayscale isolated handwritten character images are first transformed into binary format. Then the handwritten character images of datasets D2, D3, D4, and D7 are normalized to a size of 32 × 32 pixels with the aspect ratios preserved.
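A minimal numpy-only sketch of these two steps. The paper does not specify the binarization threshold or padding scheme, so both are illustrative assumptions, as is nearest-neighbour interpolation.

```python
import numpy as np

def preprocess(img, size=32, threshold=128):
    """Binarize a grayscale character image, then scale its longer side to
    `size` pixels (aspect ratio preserved) and centre it on a size x size
    canvas. Threshold, padding, and interpolation are assumptions."""
    binary = (img >= threshold).astype(np.uint8)          # 1 = foreground
    h, w = binary.shape
    scale = size / max(h, w)
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    rows = np.arange(nh) * h // nh                        # nearest-neighbour
    cols = np.arange(nw) * w // nw
    resized = binary[rows][:, cols]
    canvas = np.zeros((size, size), dtype=np.uint8)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas
```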

Algorithm Parameters
In the proposed QPSO-CNN algorithm, the parameters are primarily classified into three groups, i.e., parameters related to QPSO, CNN initialization, and CNN training. The parameters used in the present investigation are compiled in Tab. 4. The parameters of the first group are set according to the conventions of the evolutionary algorithms community. This group includes seven parameters, namely, the swarm size (M), the minimum dimension of the particle vector, the maximum dimension of the particle vector, the maximum number of iterations (iter), the minimum magnitude of the rotation angle (ϕ_min), the maximum magnitude of the rotation angle (ϕ_max), and the initial values of the probability amplitudes (α, β) stored in the Q-bit individual. The swarm size determines the total number of particles used in the QPSO algorithm. The quality of the solution improves continuously but marginally as the swarm size increases, while the computation time grows linearly. Consequently, the swarm size is set to 30 after some preliminary experiments investigating the trade-off between solution quality and computation time. The minimum and maximum numbers of layers are specified as 3 and 20, respectively. The maximum number of iterations used to search for the optimal CNN architecture is set to 30. The magnitude of the rotation angle influences the convergence speed and search efficiency of the algorithm; its proper selection therefore plays a critical role in the rapid convergence of the swarm towards the optimal solution. Conventionally, the minimum and maximum magnitudes of the rotation angle, i.e., ϕ_min and ϕ_max, are set to 0.001π and 0.05π [16], respectively. The value of ϕ gradually decreases from ϕ_max to ϕ_min with each iteration according to Eq. (3), which guides the particles from global search to local search towards an optimal solution and achieves fast convergence. While initializing the Q-bit individuals, the values of α and β of each Q-bit are specified to be 1/√2 [35], so that each quantum bit individual depicts a linear superposition of all states with equal probability.
The parameters of the second group regulate the diversity of the initial particles' architectures. This group includes seven parameters, namely, the minimum number of feature maps, the maximum number of feature maps, the minimum size of a convolutional filter, the maximum size of a convolutional filter, the convolutional filter stride size, the pooling window size, and the pooling window stride size. An improper setting of the parameters in the convolutional and pooling layers would make the CNN architecture incompetent and incur unaffordable computational costs. Therefore, according to deep learning conventions, the minimum and maximum numbers of feature maps are set to [32, 256]. Following the conventions of state-of-the-art CNNs, squared convolutional filters are used with a filter size ranging from 3 × 3 to 5 × 5. In the convolutional layer, the (width, height) of the stride is taken as (1, 1). Analogously, the pooling layer uses squared kernels with a pooling window of size 2 × 2, and the (width, height) of the stride in the pooling layer is taken as (2, 2).
Finally, the parameters of the third group regulate the training process of each particle in the swarm. This group includes five parameters, viz., the number of training epochs for particle evaluation, the learning rate, the dropout rate, the batch size, and the number of epochs for training the optimal CNN architecture (gbest). Fitness evaluation is a computationally expensive process because it requires training and evaluating a large number of particles representing different CNN architectures; therefore, for fitness evaluation, the particles are trained for only two epochs on the training dataset. The Xavier weight initialization [36] is used in this work, as it has proven to be one of the most efficient weight initialization techniques and has been adopted in several deep learning architectures. The training process is carried out by the Adam optimizer [37] with a learning rate of 0.001. A dropout regularization [38] of 50% is deployed to reduce the chances of overfitting while training the particles. Batch normalization [39] with a mini-batch size of 50 is applied to each particle to speed up the training process. Furthermore, at the end of the optimization, the gbest solution obtained by the QPSO-CNN algorithm, representing the potential CNN topology, is trained for 100 epochs.
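The three parameter groups described above can be collected as plain Python constants; the key names are illustrative, the values are those stated in the text.

```python
import math

# Group 1: QPSO search parameters (Tab. 4).
QPSO_PARAMS = {
    "swarm_size": 30, "min_layers": 3, "max_layers": 20, "iterations": 30,
    "phi_min": 0.001 * math.pi, "phi_max": 0.05 * math.pi,
    "alpha_beta_init": 1 / math.sqrt(2),   # equal-superposition Q-bits
}

# Group 2: CNN initialization ranges.
CNN_INIT_PARAMS = {
    "feature_maps": (32, 256),   # (min, max)
    "conv_filter": (3, 5),       # squared filters, 3x3 to 5x5
    "conv_stride": (1, 1), "pool_window": (2, 2), "pool_stride": (2, 2),
}

# Group 3: training parameters.
TRAIN_PARAMS = {
    "t_epoch": 2, "learning_rate": 0.001, "dropout": 0.5,
    "batch_size": 50, "gbest_epochs": 100,
}
```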

Implementation Details
The experiments on the proposed QPSO-CNN model have been performed using a Nvidia Tesla V100 GPU with 16 GB of memory and Ubuntu 16.04.6 LTS operating system. Due to the stochastic nature of the proposed QPSO-CNN algorithm, 10 independent experimental runs are conducted on each handwritten dataset in order to maintain the consistency in the results.

Overall Performance
The overall recognition performance of the proposed QPSO-CNN algorithm, in terms of the best and mean recognition accuracies obtained from 10 independent experimental runs on each chosen benchmark dataset, is outlined in Tab. 5. The experimental results illustrate that the proposed algorithm obtains promising recognition accuracies on all the conventional Indic script benchmark datasets (i.e., D1, D2, D3, D4, D5, D6, D8), while it exhibits moderately inferior recognition accuracy on the collected DOGRA C-64 dataset (D7). This difference in recognition accuracy demonstrates the complexities and challenges associated with the D7 dataset, which arise from the complex structures and the potentially large number of visually similar shaped characters in the Dogra script; these, in turn, introduce ambiguities and lead to misrecognition errors. Overall, the experimental results clearly reveal that the proposed algorithm yields satisfactory performance on all the chosen datasets.

Comparative Analysis
In this section, to demonstrate the effectiveness of the proposed algorithm, its overall performance is compared with the conventional techniques widely used for handwritten Indic script recognition. The existing state-of-the-art techniques that have claimed propitious recognition accuracy on the chosen benchmark Indic script datasets are considered as peer competitors. The evaluation results of the QPSO-CNN algorithm, along with those of the existing peer competitors on all the benchmark datasets, are compiled in Tab. 6. For the Bangla script datasets, viz., D1, D2, D3, and D4, the deep learning based techniques used for comparison are the unified CNN [40], AlexNet [41], DenseNet [43], the Multi-column Multi-scale CNN (MMCNN) [45], the Residual Network (ResNet-50) [47], BornoNet [48], and the modified ResNet-18 [49]. Moreover, for the Devanagari script datasets, viz., D5 and D6, the deep learning based techniques used for comparison are MMCNN [45], CNN [50], ResNet-50 [51], and Inception V3 [52]. Furthermore, the classical machine learning based techniques used for comparison are Sparse Concept Coding with the Tetrolet transform and the Nearest Neighbor method [42], the Opposition-based Multiobjective Harmony Search with SVM [44], and Advanced Feature Sets with SVM [46]. Additionally, to comprehensively evaluate the robustness of the proposed neuroevolutionary algorithm, the standard MNIST dataset is also used, since it is a well-known dataset specifically for investigating population-based neural architecture search (NAS) approaches.
For the MNIST handwritten digit recognition dataset, viz., D8, the NAS approaches used for comparison are the Internet Protocol based variable-length PSO-CNN [25], the Flexible Convolutional Auto-Encoder [26], the Deep Convolutional Variational Autoencoder [53], and the genetic algorithm based EvoCNN [54]. Tab. 6 clearly shows that the proposed algorithm achieves the best classification performance on all the chosen benchmark datasets, thereby outperforming all the existing peer competitors for handwritten Indic script recognition.

Failure Case Analysis
In this section, we show some typical examples of misclassified samples on each chosen benchmark dataset, as delineated in Fig. 2.
From the experimental analysis, we have noticed two prominent reasons that contribute to the failure cases in handwritten Indic script recognition.
• The main reason is confusion between similarly shaped characters: some character pairs are written in such a manner that they have similar structural constructs, which are quite challenging to recognize even for humans.
• We also observed that cursive and non-standard writing habits in hooks and circles are difficult to identify correctly. Furthermore, erroneous character samples that are degraded or severely polluted exhibit broken structures, which introduce great ambiguity and directly lead to misclassification.

Discussion
The experimental results in this paper clearly indicate that the meta-heuristic evolutionary approach is a feasible way of designing promising CNN architectures for handwritten Indic script recognition. The proposed algorithm provides competitive performance without using complicated architectures or data augmentation, and the results are comparable to those of existing handcrafted models. In the past, most works have contributed hand-designed CNN architectures, explicitly tailored to a particular problem. This process of manually creating an architecture is expensive and entails a significant amount of trial and error in determining the solution quality. Therefore, the proposed neuro-evolution approach is more robust and simpler than the existing state-of-the-art techniques.
The proposed algorithm integrates traditional PSO with the principles of quantum computing. Unlike classical computing, in which a bit may exist in either state 0 or state 1, in quantum computing a Q-bit may exist in state 0, in state 1, or in a superposition of the two states. This ability to represent more than two states contributes to better and faster exploration and exploitation of the search space. Since exploration and exploitation should complement each other, appropriately balancing them can improve performance. In this regard, the rotation angle is introduced for updating the position of particles. The proper selection of the rotation angle controls and upholds a good balance between exploitation and exploration of the search space and obtains competitive solutions with shorter computation time and a smaller swarm size. In consequence, this approach remarkably improves the computational efficiency of the proposed algorithm. Furthermore, QPSO with the D-dimensional Q-bit representation provides better population diversity, as it covers the search space faster than traditional PSO. Thus, quantum computing contributes substantially to the performance of PSO, which further intensifies the efficacy of the technique. In addition, the training curves of the global best solution, representing the optimal CNN topology for dataset D3, are illustrated in Fig. 3. For this experiment, 15% of the data in the training set was randomly sampled to create the validation set. We noticed that the training and validation accuracies do not exhibit any indications of overfitting and improve steadily and smoothly over time. This clearly corroborates that the proposed QPSO-CNN algorithm is indeed capable of discovering a promising CNN architecture for any given script recognition dataset.
Moreover, in the existing neural architecture search approaches, the final recognition accuracy is used as the fitness measure when evaluating the particles. Reaching the final recognition accuracy usually requires a large number of training epochs, so this process takes a considerable amount of time. Designing the complete architecture with such a fitness evaluation plan requires a significant amount of computational resources to speed up the process, and also demands further professional assistance, for example, in task scheduling and synchronization, which is beyond the expertise of most researchers. Hence, during the evolution, the particles do not need to reach their final recognition accuracy; it is sufficient to predict the tendency that reveals the future quality of the solution. In this context, the particles in the proposed scheme are trained with small numbers of epochs during evaluation. In summary, the proposed technique, with its simplistic fitness evaluation scheme and well-designed encoding strategy, enables researchers to discover potential CNN architectures without prior domain knowledge.

Conclusion
In this paper, a QPSO-CNN algorithm has been proposed for the recognition of handwritten Indic scripts. The proposed hybrid neuroevolutionary approach integrates particle swarm optimization with the concept of quantum computing to automatically evolve promising CNN architectures. The QPSO has a different operational procedure and is an amended version of conventional PSO, strengthened by an additional operator, i.e., the rotation angle. The proper selection of the rotation angle controls and upholds a good balance between exploitation and exploration of the search space and obtains competitive solutions, even with a smaller swarm size. We also deduce that, with the effective use of heuristics, the proposed algorithm avoids wasting computational time on vain search and hence provides enhanced search efficiency. The superiority of the proposed QPSO-CNN algorithm has been evaluated on a variety of Indic script datasets. The comprehensive experimental results demonstrate that the proposed algorithm performs significantly better than the existing state-of-the-art techniques.
Funding Statement: This research received no external funding.

Conflicts of Interest:
The authors declare that they have no conflicts of interest.