A Hybrid Deep Learning-Based Unsupervised Anomaly Detection in High Dimensional Data

Abstract: Anomaly detection in high-dimensional data is a critical research issue with serious implications for real-world problems. Many issues in this field remain unsolved, and several modern anomaly detection methods struggle to maintain adequate accuracy because of the highly descriptive nature of big data. This phenomenon, known as the "curse of dimensionality", degrades both the accuracy and the performance of traditional techniques. This research therefore proposes a hybrid model based on a Deep Autoencoder Neural Network (DANN) with five layers that minimizes the difference between its input and output. The proposed model was applied to a real-world gas turbine (GT) dataset that contains 87620 columns and 56 rows. During the experiments, two issues were investigated and resolved to enhance the results. The first is class imbalance in the dataset, which was solved using the SMOTE technique. The second is poor performance, which was addressed with an optimization algorithm. Several optimization algorithms were investigated and tested, including stochastic gradient descent (SGD), RMSprop, Adam, and Adamax; the Adamax optimization algorithm showed the best results when employed to train the DANN model. The experimental results show that the proposed model can detect anomalies by efficiently reducing the high dimensionality of the dataset, with an accuracy of 99.40%, an F1-score of 0.9649, an Area Under the Curve (AUC) rate of 0.9649, and a minimal loss function during the hybrid model training.


Introduction
Nowadays, a huge amount of data is produced continuously at an unparalleled speed from diverse and composite origins such as social media, sensors, telecommunication, financial transactions, etc. [1,2]. Such acceleration in data generation gives rise to the concept of big data, which can be attributed to the multitude and dynamism of technological advancements. For instance, the emergence of the Internet of Things (IoT) and the increase in smart device usage (wearable and non-wearable) have contributed to the upsurge in the continuous generation of data [3]. As defined by Gandomi et al. [4], big data describes high-volume, high-velocity, and high-variety datasets from which knowledge or insights can be derived using data analytic tools. Moreover, big data is conceptualized as the 5 Vs (Value, Veracity, Variety, Velocity and Volume) [5]. As shown in Fig. 1, Value depicts the advantage of data analysis; Veracity shows the level of accuracy, while Variety represents the different kinds of data (structured, semi-structured, and unstructured) present in big data [6]. Volume refers to the magnitude of data being processed or stored. However, an increase in the volume of data leads to an increase in its dimensionality, where dimensionality is the number of features or attributes present in a dataset. Velocity, on the other hand, represents the rate at which data are produced, and such data may consist of several dimensions. The preceding statements show how the 5 Vs of big data underline its characteristics and limitations [7]. Nonetheless, the dimensionality of the data, which is proportional to its volume, is somewhat overlooked. Large data dimensions can negatively affect the extraction of knowledge from a dataset. That is, high dimensionality can hinder data analytics such as anomaly detection in a large dataset.
Figure 1: High dimensionality problem in big data [5]

Anomaly detection refers to the challenge of detecting trends in data that do not correspond to anticipated behavior [8]. In various application domains, these non-conforming patterns are referred to as deviations, outliers, discordant observations, variations, aberrations, shocks, peculiarities, or pollutants [9,10]. The existence of anomalies in a dataset can be seen as a data quality problem, as it can lead to undesired outcomes if not removed [11,12]. As such, the removal of anomalous points from a dataset improves data quality, which makes cleaning the given dataset imperative [13]. Besides, in high-dimensional datasets the data objects become close to one another, which leads to ambiguity in the respective data distances [14]. Although there are several detection techniques that employ sophisticated and efficient computational approaches [8,15], conventional anomaly detection techniques cannot adequately handle or address the high-dimensionality issue. In addition, many of these conventional techniques assume that the data have uniform attributes or features. On the contrary, real-life datasets in most cases have diverse types of attributes. This observation points to a heightened problem in anomaly detection [5,8,15].
Several anomaly detection techniques have been proposed across different application domains [14][15][16][17][18]. Neighbor-based anomaly detection techniques such as LOF, kNNW, and ODIN detect anomalies using neighborhood information from data points [19][20][21]. These methods record poor performance and are somewhat sensitive to their parameter settings. Another instance is the recent ensemble-based anomaly detection techniques: Zimek et al. [22] and Pasillas-Díaz et al. [23] proposed ensemble-based anomaly detection methods with good performance. However, ensemble methods are black-box mechanisms that lack explainability, and selecting the most applicable and appropriate meta-learner is an open research problem. As another example, Wilkinson [24] proposed an unsupervised algorithm known as HDoutliers that can detect anomalies in high-dimensional datasets. Findings from the comparative study by Talagala et al. [25] also corroborate the efficiency of the HDoutliers algorithm. However, the tendency of HDoutliers to increase the false-negative rate is its drawback. Chalapathy et al. [26], Wu et al. [27], and Favarelli et al. [28], in their respective studies, proposed One-Class Neural Network anomaly detection methods for small and large-scale datasets. Also, Malhotra et al. [29], Nguyen et al. [30], Zhou et al. [17], and Said Elsayed et al. [31] developed anomaly detection methods based on long short-term memory (LSTM). However, these existing methods cannot handle the class imbalance problem.
Instigated by the preceding problems, this study proposes a novel hybrid deep learning-based approach for anomaly detection in large-scale datasets. Specifically, a data sampling method and a multi-layer deep autoencoder with the Adamax optimization algorithm are proposed. The Synthetic Minority Over-sampling Technique (SMOTE) is used as the data sampling method to resolve the inherent class imbalance problem by augmenting the number of minority class instances to the level of the majority class label. A novel deep autoencoder neural network (DANN) with the Adamax optimization algorithm is used for detecting anomalies and reducing dimensionality. The primary contributions of this work are summarized as follows:
• A novel DANN approach that detects anomalies in time series in an unsupervised mode.
• Hybridization of SMOTE data sampling and DANN to overcome the inherent class imbalance problem.
• Addressing the curse of dimensionality by applying a multi-layer autoencoder model that finds optimal parameter values and minimizes the difference between the input and the output using the deep reconstruction error during model training.
The rest of this paper is structured as follows. Section 2 highlights the background and related work. Section 3 outlines this work's research methodology, while Section 4 describes the experimental findings. Lastly, Section 5 concludes the paper and highlights future work.

Background and Related Work
Anomaly detection is a well-known issue in a variety of fields, and different approaches have been proposed recently to mitigate it. Further information on this issue can be found in [5,32-35]. In this section, we review some of the more common anomaly detection techniques and their relevant enhancements.
One of the commonly used anomaly detection techniques is the neighbor-based technique, whereby outliers are identified based on neighborhood information. The anomaly score is the average or weighted distance between a data object and its k nearest neighbors [19,21]. Another option is the local outlier factor (LOF), which determines the degree of anomaly by calculating the anomaly score with respect to the object's neighborhood [36]. Likewise, Hautamaki et al. [20] proposed Outlier Detection using Indegree Number (ODIN), based on the kNN graph, whereby data instances are segregated based on their respective influence within their neighborhood. It is worth mentioning that all the above-mentioned neighbor-based detection methods are independent of data distributions and can detect isolated entities. However, their success is heavily reliant on distance measures, which are unreliable or insignificant in high-dimensional spaces. Considering the ranking of neighbors is a viable solution to overcome this issue, as the ranking of each object's nearest neighbors remains meaningful even in high-dimensional data. The underlying assumption is that if two objects were created by the same process, they would most likely become nearest neighbors or have similar neighbors [37].
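As an illustration of the neighbor-based scoring idea described above (a minimal sketch, not the implementation from any of the cited methods), each point can be scored by the average distance to its k nearest neighbors; distant, isolated points receive the highest scores:

```python
import math

def knn_anomaly_scores(points, k=2):
    """Score each point by the average distance to its k nearest neighbors.

    Higher scores indicate points far from their neighborhood, i.e. likely
    anomalies (the kNN average-distance scheme described above).
    """
    scores = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

# A tight cluster plus one isolated point: the outlier gets the top score.
data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
scores = knn_anomaly_scores(data, k=2)
print(max(range(len(data)), key=lambda i: scores[i]))  # → 4 (the outlier)
```

As the text notes, such raw distance scores lose meaning in high-dimensional spaces, which motivates rank-based variants.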
Another applicable approach is the subspace learning method. Subspace-based anomaly detection approaches try to locate anomalies by sifting through various subsets of dimensions in an orderly manner. According to Zimek et al. [22], only a subset of relevant features for an object in a high-dimensional space provides useful information, while the rest are irrelevant to the task. The presence of irrelevant features can make anomalies harder to distinguish. Another direction is the sparse subspace technique, a kind of subspace technique in which points in a high-dimensional space are projected onto one or more low-dimensional subspaces, called sparse subspaces in this case [38,39]. As a result, objects that fall into sparse subspaces are considered anomalies due to their abnormally low densities. It should be noted, however, that such examination of feature vectors from the whole high-dimensional space is time-consuming [38,40]. Therefore, to improve exploration results, Aggarwal et al. [41] used an evolutionary algorithm in which a space projection was described as a subspace with the most negative sparsity coefficients. However, certain factors, such as the initial population, fitness function, and selection process, have a substantial effect on the results of the evolutionary algorithm. The disadvantage of this method is its reliance on a large amount of data to identify the variance trend.
Ensemble learning is another feasible anomaly detection approach, owing to its efficiency over baseline methods [22,42,43]. Specifically, feature bagging and subsampling have been deployed to aggregate anomaly scores and pick the optimal value. For instance, Lazarevic et al. [44] randomly selected feature samples from the initial feature space; an anomaly detection algorithm was then used to estimate the score of each item on each feature subset, and the scores for the same item were added together to form the final score. On the other hand, Nguyen et al. [45] estimated anomaly scores for objects on random subspaces using multiple detection methods rather than the same one. Similarly, Keller et al. [46] suggested a modular anomaly detection approach that splits the anomaly mining mechanism into two parts: subspace search and anomaly ranking. Using the Monte Carlo sampling method, the subspace search aims to obtain high contrast subspaces (HiCS), and then the LOF scores of objects are aggregated on the obtained subspaces. Van Stein et al. [40] took this a step further by accumulating similar HiCS subspaces and then measuring the anomaly scores of entities using local anomaly probabilities in the global feature space. Zimek et al. [22] used the random subsampling method to find each object's closest neighbors and then estimate its local density; this ensemble approach, when used in conjunction with an anomaly detection algorithm, is more efficient and yields more diverse results. There are also approaches for detecting anomalies that consider both attribute bagging and subsampling. Pasillas-Díaz et al. [23], for example, used bagging to collect different features at each iteration and then used subsampling to measure anomaly scores for different subsets of data. With bagging ensembles, however, it is difficult to achieve diversity among entities, and the results are sensitive to the size of the subsampled datasets.
It is important to note that most of the above-mentioned anomaly detection methods can only process numerical data, leading to low efficacy. Moreover, most of the preceding studies failed to investigate class imbalance, an inherent problem in machine learning that is present in most datasets. Thus, this study proposes a novel hybrid deep learning-based approach for anomaly detection in large-scale datasets.

Materials and Methods
This section describes the gas turbine (GT) dataset, the real-world data utilized for anomaly detection in a high-dimensional setting. It also discusses the various techniques used for dimensionality reduction and feature optimization, and the different stages of the proposed hybrid model.

Dataset Description
The dataset used in this research is real, high-dimensional industrial data from a gas turbine. The data contain 87620 columns and 56 rows. In this study, the data were split into a training set and a testing set with a ratio of 60:40. Detecting anomalies in real-world high-dimensional data is a theoretical and practical challenge due to the "curse of dimensionality", which is widely discussed in the literature [47,48]. Therefore, we utilized a deep autoencoder algorithm composed of two symmetrical deep belief networks comprising four shallow layers: half of the network is responsible for encoding, and the other half for decoding. The autoencoder learns the significant features present in the data by minimizing the reconstruction error between the input and output data. Because high-dimensional data are commonly noisy, the first step is to reduce the dimension of the data. During this process, the data are projected onto a lower-dimensional space, so the noise is eliminated and only the essential information is preserved. Accordingly, the Deep Autoencoder Neural Network (DANN) algorithm is used in this paper to reduce the data noise.
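The 60:40 split mentioned above can be sketched as follows (a generic illustration, not the paper's code; the seed and sample values are hypothetical):

```python
import random

def train_test_split(rows, train_ratio=0.6, seed=42):
    """Shuffle the samples and split them train:test (here 60:40)."""
    rng = random.Random(seed)
    idx = list(range(len(rows)))
    rng.shuffle(idx)
    cut = int(len(rows) * train_ratio)
    train = [rows[i] for i in idx[:cut]]
    test = [rows[i] for i in idx[cut:]]
    return train, test

samples = [[float(i)] for i in range(10)]
train, test = train_test_split(samples)
print(len(train), len(test))  # → 6 4
```

Shuffling before splitting avoids any ordering bias in the recorded turbine data.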

The Proposed Deep Autoencoder Neural Network (DANN) Algorithm
An autoencoder is a particular type of artificial neural network utilized primarily for unsupervised machine learning tasks [49][50][51]. Like the works in [52][53][54][55][56], this study utilizes autoencoders for both dimensionality reduction and anomaly detection. An autoencoder is composed of two components: an encoder and a decoder. The encoder's output is a compressed representation of the input pattern, described in terms of a vector function. First, the autoencoder learns a representation (encoding) of the dataset through network training so as to ignore the "noise"; the goal of this process is to reduce dimensionality [57,58]. Second, the autoencoder tries to reproduce, from the reduced encoding, an output that is as close as possible to its original input. As depicted in Fig. 2, the input, mapping, and bottleneck layers of the DANN estimate the mapping functions that bring the original data into the principal component space of lower dimension [59], whereas the demapping and output layers estimate the demapping functions that carry the projected data back to the original data space. The proposed DANN has the following mathematical model form:

y_m = a(W_1 x + b_1), t = a(W_2 y_m + b_2), y_d = a(W_3 t + b_3), x̂ = W_4 y_d + b_4 (1)

where x denotes the input vector, y_m the mapping layer, t the bottleneck layer, y_d the demapping layer, and x̂ the output layer. b and W are the bias vectors and weight matrices, respectively, and a denotes the non-linear activation function. Fig. 2 summarizes the dimensions of the matrices and vectors. The objective of auto-associative neural network training is to determine the optimal parameter values (i.e., "optimal values of W and b") that minimize the difference between the input and the output, computed as given in Eq. (2):

E = ||x − x̂||² (2)

which is also called the reconstruction error.
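The forward pass through the five layers and the reconstruction error can be sketched in plain Python (a minimal illustration of the structure described above, with hypothetical toy weights and tanh assumed as the non-linear activation; the paper's trained parameters are not shown):

```python
import math

def affine(W, b, x):
    """Compute W x + b for a weight matrix W (list of rows) and bias b."""
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def tanh_vec(v):
    return [math.tanh(u) for u in v]

def forward(x, params):
    """Encoder: input -> mapping -> bottleneck; decoder: -> demapping -> output."""
    (W1, b1), (W2, b2), (W3, b3), (W4, b4) = params
    ym = tanh_vec(affine(W1, b1, x))   # mapping layer
    t = tanh_vec(affine(W2, b2, ym))   # bottleneck (reduced representation)
    yd = tanh_vec(affine(W3, b3, t))   # demapping layer
    xhat = affine(W4, b4, yd)          # linear output layer
    return xhat

def reconstruction_error(x, xhat):
    """Eq. (2): squared difference between input and reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(x, xhat))

# Toy 3-2-1-2-3 network with arbitrary (untrained) weights.
params = [
    ([[0.2, -0.1, 0.4], [0.5, 0.3, -0.2]], [0.0, 0.1]),
    ([[0.7, -0.6]], [0.0]),
    ([[0.9], [-0.4]], [0.1, 0.0]),
    ([[0.3, 0.2], [-0.5, 0.1], [0.4, 0.6]], [0.0, 0.0, 0.0]),
]
x = [0.5, -0.2, 0.1]
xhat = forward(x, params)
err = reconstruction_error(x, xhat)
print(len(xhat), err >= 0.0)  # → 3 True
```

Training would adjust W and b to drive this error down; samples with unusually large reconstruction error are then flagged as anomalies.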

Objective Functions for Autoencoder Neural Network Training
Apart from the reconstruction error specified in Eq. (2), two alternative objective functions can be used to train autoencoder neural networks, and we describe both in this section: the hierarchical error and the denoising criterion. The authors in [60] proposed the concept of hierarchical error to establish a hierarchy (i.e., relative importance) amongst the non-linear principal components, analogous to principal component analysis (PCA), which uses the reconstruction error as the objective function [61]. It was demonstrated that maximizing the variance of the principal components is equivalent to minimizing the residual variance in linear PCA. Accordingly, the hierarchical error accumulates the reconstruction errors obtained when only the first k bottleneck components are retained, over all k. The authors in [62] suggested the denoising criterion to derive more stable principal components. To employ the denoising criterion, a corrupted input x̃ is produced by adding noise, such as masking noise or Gaussian noise, to the original input x. Subsequently, the autoencoder neural network is trained to retrieve the original input using the corrupted data as input. The denoising criterion was shown to enable autoencoder neural networks to learn a lower-dimensional manifold and thereby capture more essential patterns in the original data. Fig. 3 summarizes the three objective functions schematically. Based on the above, we designed a similar procedure for dimensionality reduction utilizing the DANN model. First, the matrix of the original data, containing only normal operating data, is partitioned into two sets: one for training and another for testing the DANN model. Second, the autoencoder neural network is trained using the training dataset. Once trained, the autoencoder neural network computes the principal components and residuals when fed a new data sample.
This is followed by determining the T² and Q statistics as follows:

T² = Σ_k (t_k / σ_k)², Q = ||x − x̂||²

where t_k denotes the value of the k-th principal component in the latest data sample, and σ_k denotes the standard deviation of the k-th principal component as determined from the training dataset. It is worth mentioning that upper control limits are conventionally set by assuming that the data comply with a multivariate normal distribution. A different approach was followed in this work: the upper control limits for the two statistics were calculated directly from the given large dataset without assuming any particular distribution. For instance, with a hundred samples of normal training data, the second-largest T² (or Q) value is chosen as the upper control limit to attain a false alarm rate of 0.01.
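The empirical control-limit procedure just described can be sketched as follows (an illustrative reading of the text, not the paper's code; the training values are synthetic):

```python
def t2_statistic(t_scores, sigmas):
    """T^2: sum of squared component scores standardized by training sigmas."""
    return sum((t / s) ** 2 for t, s in zip(t_scores, sigmas))

def q_statistic(x, xhat):
    """Q statistic: squared reconstruction residual for one sample."""
    return sum((a - b) ** 2 for a, b in zip(x, xhat))

def empirical_limit(train_stats, false_alarm_rate=0.01):
    """Pick the control limit directly from the training statistics so that
    roughly `false_alarm_rate` of the normal samples exceed it, with no
    distributional assumption (e.g. second-largest value for 100 samples
    at rate 0.01)."""
    ordered = sorted(train_stats)
    cut = max(int(len(ordered) * (1.0 - false_alarm_rate)) - 1, 0)
    return ordered[cut]

# 100 normal training statistics: the limit is the second-largest value,
# so exactly 1 of 100 (1%) exceeds it.
train_stats = [float(i) for i in range(1, 101)]
print(empirical_limit(train_stats, 0.01))  # → 99.0
```

New samples whose T² or Q exceed these limits are flagged as anomalous.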

Synthetic Minority Oversampling Technique (SMOTE)
Resampling the data, including undersampling and oversampling, is one of the prominent approaches to relieve the issue of imbalanced datasets [63]. Oversampling techniques are preferable to undersampling techniques in most circumstances [64]. The Synthetic Minority Oversampling Technique (SMOTE) is a well-known oversampling technique whereby synthetic samples for the minority class are produced. SMOTE helps overcome the overfitting issue caused by random oversampling. The technique works in the feature space, creating new instances by interpolating between positive instances that lie close together [65].
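The interpolation idea behind SMOTE can be sketched in a few lines (a simplified illustration of the technique, not the library implementation used in practice; the minority points are hypothetical):

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic minority samples by interpolating between a
    randomly chosen sample and one of its k nearest minority neighbors."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbors of x within the minority class (squared Euclidean)
        neighbors = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbors)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([a + gap * (b - a) for a, b in zip(x, nb)])
    return synthetic

minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
new_points = smote(minority, n_new=3)
print(len(new_points))  # → 3
```

Because each synthetic point lies on a segment between two existing minority samples, the new instances stay inside the minority region rather than duplicating points, which is what mitigates the overfitting of random oversampling.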

Adam Optimizer
Adam [66] is an adaptive learning rate optimization algorithm designed specifically for training deep neural networks. It was first introduced in 2014 and has attracted great attention from researchers due to its high performance compared to SGD and RMSprop.
The algorithm makes use of adaptive learning rate techniques to determine an individual learning rate for each parameter. Adam is extremely efficient when dealing with complex problems involving a large number of variables or records; it is reliable and needs less memory. It is a combination of the 'gradient descent with momentum' and 'RMSprop' methods. The momentum method accelerates the gradient descent algorithm by taking the exponentially weighted average of the gradients into account. In addition, Adam utilizes the advantage of Adagrad [67] to perform well in environments with sparse gradients, although Adagrad alone struggles with the non-convex optimization of neural networks. It also uses Root Mean Square Propagation (RMSprop) [68] to address some of Adagrad's shortcomings and to perform well in online settings. Utilizing these averages causes the method to converge to the minimum more quickly. Hence, the momentum update is:

m_t = β m_{t−1} + (1 − β) (∂L/∂W_t), W_{t+1} = W_t − α_t m_t

where m_t denotes the aggregate of gradients at time t (present), m_{t−1} is the aggregate of gradients at time t−1 (prior), W_t and W_{t+1} are the weights at times t and t+1, α_t is the learning rate at time t, ∂L/∂W_t is the derivative of the loss function with respect to the weights at time t, and β is the moving average parameter.
RMSprop is an adaptive learning method that attempts to improve on AdaGrad. Rather than computing the cumulative sum of squared gradients as AdaGrad does, it computes an exponential moving average:

V_t = β V_{t−1} + (1 − β) (∂L/∂W_t)², W_{t+1} = W_t − (α_t / √(V_t + ε)) (∂L/∂W_t)

where W_t and W_{t+1} are the weights at times t and t+1, α_t is the learning rate at time t, ∂L/∂W_t is the derivative of the loss function with respect to the weights at time t, V_t is the exponential moving average of the squared past gradients, β is the moving average parameter, and ε is a small positive constant. Thus, the strengths of the RMSprop and AdaGrad techniques are inherited by the Adam optimizer, which builds on them to provide a more optimized gradient descent. Combining the equations of the two techniques gives the final representation of the Adam optimizer:

m_t = β_1 m_{t−1} + (1 − β_1)(∂L/∂W_t), v_t = β_2 v_{t−1} + (1 − β_2)(∂L/∂W_t)²
m̂_t = m_t / (1 − β_1^t), v̂_t = v_t / (1 − β_2^t)
W_{t+1} = W_t − α m̂_t / (√v̂_t + ε)

where β_1 and β_2 are the average decay rates of the gradients in the two techniques, m̂_t and v̂_t are the bias-corrected moment estimates, and α is the step size parameter/learning rate (0.01).
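The combined update can be demonstrated on a toy problem (a standard Adam sketch minimizing a simple quadratic, not the DANN training loop; the loss function and step counts are illustrative):

```python
import math

def adam_step(w, grad, m, v, t, alpha=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update combining momentum (beta1) with RMSprop-style
    squared-gradient averaging (beta2), plus bias correction."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment (RMSprop)
    m_hat = m / (1 - beta1 ** t)                # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    w = w - alpha * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

# Minimize L(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 3001):
    grad = 2 * (w - 3.0)
    w, m, v = adam_step(w, grad, m, v, t)
print(round(w, 2))  # converges near the minimum at w = 3
```

Adamax, the variant that performed best in this paper, replaces the √v̂_t denominator with an infinity-norm-based term, which makes the step size less sensitive to rare large gradients.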

Results and Discussion
This section summarizes the experimental findings and discusses their significance for the different approaches, including DANN with the Adam, SGD, RMSprop, and Adamax optimizers. Tab. 1 shows the experimental results for the proposed DANN model with the different optimization methods.

Deep Autoencoder with Adam Optimizer
As depicted in Tab. 1, the DANN model was first tested without any optimization method, as shown in the column labeled "Deep autoencoder"; it achieved an average accuracy of 95.91% over 10 iterations. To improve this result, the Adam optimizer was integrated with the proposed DANN model. As shown in the third column, the model performs better and was able to detect the anomalies in the dataset with an accuracy of 97.36%. Fig. 4a depicts the anomaly detection accuracy of the autoencoder neural network with the Adam optimizer, and Fig. 4b shows the proposed model's loss. Fig. 5 shows the accuracy and loss function of the autoencoder neural network with the RMSprop optimizer for both training and testing: Fig. 5a presents the accuracy of the proposed hybrid model, and Fig. 5b presents its loss function.

Deep Autoencoder with the Adamax Optimizer
The v_t element in the Adam update rule scales the gradient inversely proportionally to the ℓ2 norm of the past gradients (through the v_{t−1} term) and the current gradient g_t², as presented in Eq. (9). Figs. 6a and 6b show the accuracy and loss function results for the deep autoencoder neural network with the Adamax optimizer method. This approach surpasses the other proposed models, with an accuracy of 99.40% and minimal loss, as shown in Fig. 6b.

Performance Evaluation
Five measurement metrics are utilized to evaluate the performance of our experiment: Accuracy, Precision, Recall, F1-score, and the receiver operating characteristic (ROC). Accuracy is defined as the proportion of correctly classified samples and has the following formula:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Precision is the proportion of samples classified as Category-A that truly belong to Category-A:

Precision = TP / (TP + FP)

In general, the higher the Precision, the lower the system's False Alarm Rate (FAR).
The recall rate indicates the proportion of all samples belonging to Category-A that are ultimately classified as such:

Recall = TP / (TP + FN)

The recall rate measures a system's capability to detect anomalies: the greater it is, the more anomalous traffic is correctly observed.
The F1-score enables the combination of precision and recall into a single metric that encompasses both properties.
TP, FP, TN, FN represent True Positive, False Positive, True Negative and False Negative, respectively.
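The four metrics follow directly from these confusion-matrix counts, as the following sketch shows (the counts are hypothetical, chosen only to exercise the formulas):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics(tp=90, fp=10, tn=95, fn=5)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
# → 0.925 0.9 0.947 0.923
```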
Accuracy is the most widely used metric for models trained on balanced datasets. It indicates the fraction of correctly predicted samples out of the overall number of samples under evaluation. Fig. 7 shows the accuracy scores of the proposed anomaly detection models, determined from an independent test set. As depicted in Fig. 7, out of the five proposed models, the DANN-based Adamax optimizer model achieved the best accuracy score of 99.40%, followed by a 90.36% score for the DANN-based Adam optimizer and the DANN-based objective function model. Although accuracy is a popular standard measure, it has drawbacks, mainly when there is a class imbalance in the samples; it is therefore often used along with other measures such as the F1-score or the Matthews correlation coefficient.
The F1-score is frequently employed in circumstances where an optimal combination of precision and recall is necessary. It is the harmonic mean of the precision and recall scores of a model. Thus, the F1-score can be defined as given in Eq. (13):

F1 = 2 × (Precision × Recall) / (Precision + Recall) (13)

Fig. 7 shows the F1 scores for the anomaly detection models based on the five DANNs, which confirm the earlier performance validated using the AUC ratings. The DANN-based Adam optimizer model achieved the best F1-score of 0.9811, while the DANN-based Adamax optimizer model obtained second place with an F1-score of 0.9649. The DANN and DANN-based SGD optimizer models showed comparable performance, achieving F1-scores of 0.9376 and 0.8823, respectively. The DANN with RMSprop optimizer was not far behind but earned the last place, with an F1-score of 0.8280.

Figure 7: Precision, recall, F1-score and AUC achieved by DANN-based anomaly detection models

A receiver operating characteristic (ROC) curve is a method for organizing, visualizing, and selecting classification models based on their performance [69]. It is also a valuable performance evaluation measure: ROC curves are insensitive to changes in class distribution and are especially useful for problems involving skewed class distributions [69]. The ROC curve illuminates, in a sense, the cost-benefit trade-off of the classifier under evaluation. The false positive (FP) rate, defined as the ratio of false positives to the total number of negative samples, measures the fraction of negative examples misclassified as positive. This is considered a cost, since any further action taken on a false positive is wasted effort. The true positive rate, defined as the fraction of correctly predicted positive samples, can be considered a benefit, because correctly predicted positive samples help the classifier resolve the examined problem more effectively.
The AUC values of the five proposed models are presented in the legend of Fig. 7. It is clear from Fig. 7 that the DANN-based Adamax optimizer model outperforms the rest of the methods in detecting anomalies in a high-dimensional real-life dataset, with an AUC value of 0.981. The DANN-based Adam optimizer model obtained the second-best result, with an AUC value of 0.951. The AUC results validate the earlier evaluation indicated by the F1-score metric.
When optimizing classification models, cross-entropy is often utilized as a loss function. Cross-entropy as a loss function is extremely useful in binary classification problems that involve predicting a class label from one or more input variables. Our model attempts to estimate the target probability distribution Q as closely as possible. Thus, we can estimate the cross-entropy for an anomaly prediction in high-dimensional data using the binary cross-entropy calculation:

H = −(y log(ŷ) + (1 − y) log(1 − ŷ))

with predicted P(class 1) = ŷ and predicted P(class 0) = 1 − ŷ. That is, the model explicitly predicts the probability of class 1, while the probability of class 0 is given as one minus the predicted probability. Fig. 8 shows the average cross-entropy across all training data for the DANN-based Adamax optimizer method, where the model has a minimal loss. Hence, this confirms that the proposed model is efficient and effective in predicting anomalies in high-dimensional data.
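The average cross-entropy over a batch can be computed as follows (a generic sketch with made-up labels and predictions, not the paper's training data):

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Average cross-entropy between true labels and predicted P(class 1);
    P(class 0) is taken as 1 - yhat, as described above."""
    total = 0.0
    for y, yhat in zip(y_true, y_pred):
        yhat = min(max(yhat, eps), 1 - eps)  # clamp to avoid log(0)
        total += -(y * math.log(yhat) + (1 - y) * math.log(1 - yhat))
    return total / len(y_true)

loss = binary_cross_entropy([1, 0, 1, 0], [0.9, 0.1, 0.8, 0.2])
print(round(loss, 4))  # → 0.1643
```

Confident, correct predictions (ŷ near the true label) drive this loss toward zero, which is the behavior reported for the Adamax-trained model in Fig. 8.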

Comparison with Literature
For anomaly detection in the high-dimensional industrial gas turbine dataset, we were unable to find any prior research contribution evaluated on the same data, so we compared our results with two recently proposed approaches for anomaly detection in high-dimensional datasets [70,71], as shown in Tab. 2. The comparison covers only the metrics available, but it demonstrates to the reader the promising results of the proposed DANN-based Adamax optimizer during the training process. The results show that the proposed method surpasses the two previous methods for detecting anomalies in high-dimensional data.
As presented in Tab. 2, the proposed detection model obtained better results in detecting anomalies and overcoming the curse of dimensionality without needing any complex and labor-intensive feature extraction. This is possible due to the inherent capability of DANNs to learn task-specific feature representations automatically. Thus, the proposed DANN outperforms the anomaly detection approach based on an Autoregressive Flow-based (ADAF) model [70] and the hybrid semi-supervised anomaly detection model suggested in [71], for which only a single metric value (0.95) was reported in Tab. 2.

Conclusion
This study proposed an efficient and improved deep autoencoder-based anomaly detection approach for a real industrial gas turbine dataset. The proposed approach aims at improving the accuracy of anomaly detection by reducing the dimensionality of the large gas turbine data. The proposed deep autoencoder neural network (DANN) was integrated and tested with several well-known optimization methods for the training process. The proposed DANN approach was able to overcome the curse of dimensionality effectively. It was evaluated using commonly used evaluation measures to validate the DANN models' performance. The DANN-based Adamax optimization method achieved the best performance, with an accuracy of 99.40%, an F1-score of 0.9649, and an AUC rate of 0.9649. In contrast, the DANN-based SGD optimization method obtained the worst anomaly detection performance on the high-dimensional dataset.

Conflicts of Interest:
The authors declare that they have no conflicts of interest to report regarding the present study.