This investigative study focuses on the impact of the wavelet on traditional time-series forecasting models and demonstrates the value of wavelet algorithms. The Wavelet Decomposition (WD) algorithm has been combined with various traditional time-series forecasting models, such as the Least Square Support Vector Machine (LSSVM), Artificial Neural Network (ANN) and Multivariate Adaptive Regression Splines (MARS), and their effects are examined in terms of statistical estimations. The WD has been used as a mathematical application in traditional forecast modelling of periodically measured parameters, which has yielded tremendously constructive outcomes. Further, it is observed that the wavelet-combined models are superior to the various time-series models on a performance basis. Therefore, combining wavelets with forecasting models has yielded much better results.
Due to the importance of prediction, researchers have developed various forecasting models. As better environmental forecasting methods can be used to make appropriate management decisions, researchers are continually striving to improve the effectiveness and efficiency of the models. For decades the term wavelet has been used in the exploration of signal processing and geophysics. Therefore, this article looks at the WD combined with various traditional forecast time-series models. The last decade has shown vast interest in wavelets; it is a subject area that can be appropriately applied and coalesced in various fields such as applied mathematics, physics, electrical engineering, etc.
Consequently, the WD has significantly impacted various fields, such as image processing, differential equations, statistics, and chemical signal processing [
Wavelet-based models provide a noteworthy edge in de-noising datasets to develop an efficient model. They make it easy to analyse streamflow processes across different parameters without the loss of time-frequency information that accompanies conventional bandpass filters. The WD tool can reveal information within the signal in the frequency, time and scale domains [
The performance and accuracy of traditional time-series forecasting models can be continuously improved. This has inspired researchers to design improved versions of the models [
To validate the discussed TS forecast models and forecast the rivers' streamflow, monthly streamflow-rate records of 484 and 550 months have been collected from the renowned Indus and Chenab Rivers of Pakistan, respectively (
The use of the wavelet application with the various traditional forecasting models such as LSSVM, ANN and MARS has improved the efficiency of the models and produced excellent outcomes. These tractable combined models have been implemented as efficient tools on streamflow datasets to forecast phenomena, providing comprehensive signal information. The developed combined wavelet-AI models implement the following two-step protocol for forecasting activities.
The WD methodology has been used as a preprocessor of the input datasets. As a result, it provides a time-frequency signal analysis at distinct intervals in the time domain and considerable detail about the input datasets. After the input signal is obtained by WD, it is used in further processing as the AI input to the various traditional forecasting models.
Initially, the forecasting time-series datasets are decomposed into sub-time-series.
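As a concrete illustration of this preprocessing step, the following is a minimal Python sketch using the PyWavelets library (the article's own code is in MATLAB); the wavelet name `db4` and the three decomposition levels are illustrative assumptions, not the article's stated configuration.

```python
# Minimal WD preprocessing sketch (the article used MATLAB; this is
# an equivalent outline in Python with PyWavelets). The wavelet
# ('db4') and level (3) are illustrative assumptions.
import numpy as np
import pywt

def decompose(series, wavelet="db4", level=3):
    """Split a 1-D series into one approximation and `level` detail
    sub-series, each reconstructed on the original time axis."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    parts = []
    for i in range(len(coeffs)):
        # Zero out all coefficient bands except one, then invert, so
        # the sub-series are time-aligned and sum back to the signal.
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        parts.append(pywt.waverec(kept, wavelet)[: len(series)])
    return np.array(parts)   # shape: (level + 1, len(series))

# Synthetic stand-in for a 484-month streamflow record.
flow = np.sin(np.linspace(0, 20, 484)) + 0.1 * np.random.randn(484)
components = decompose(flow)
assert np.allclose(components.sum(axis=0), flow, atol=1e-6)
```

Each row of `components` is then available as a separate input to the downstream forecasting model.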
The ability of the WD algorithm to de-noise non-stationary signals into sub-signals at different levels is a suitable resource for improved streamflow elucidation [
Therefore, the continuous wavelet transform (CWT) can be defined as follows [
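In the standard notation of the wavelet literature, the CWT of a signal $x(t)$ with mother wavelet $\psi$, scale $a > 0$ and translation $b$ is

$$W_{\psi}(a,b) \;=\; \frac{1}{\sqrt{|a|}}\int_{-\infty}^{\infty} x(t)\,\psi^{*}\!\left(\frac{t-b}{a}\right)dt,$$

where $\psi^{*}$ denotes the complex conjugate of the mother wavelet.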
Decomposition levels
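A commonly cited heuristic in the wavelet-forecasting literature relates the number of decomposition levels $L$ to the record length $N$:

$$L = \operatorname{int}\bigl[\log_{10}(N)\bigr],$$

which would give $L = 2$ for the 484- and 550-month records considered here; this is a general guideline rather than a choice stated by the article.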
The functioning principle of the human brain inspired the artificial neural network (ANN) forecasting model. Several architectures are available in the literature to forecast streamflow and many other applications, of which the ANN algorithm is the most widely used. It comprises a network of many interconnected nodes called neurons. An ANN is classified by its number of layers, with one or more layers between the input and the output layer. A single-layer feed-forward (SLFF) neural network is an architecture with just one layer establishing connections among the nodes of the input, middle and output layers; a system built with more than one middle layer is characterised as a multi-layer feed-forward (MLFF) neural network [
Layer-i. The input layer, which is introduced to the network and takes one or more inputs.
Layer-ii. The hidden layer, where the data are processed in a feed-forward manner using activation functions developed on an orthonormal WD basis.
Layer-iii. The output layer, which contains one or more linear combiners whose estimations are consistent with the given inputs.
The weights of the network connections are acquired through the training process. The WANN model can use various transfer functions (TF) at different nodes in identical or different layers. TFs such as the sigmoid and hyperbolic tangent functions are used for the hidden layers, while no transfer function is applied to the output layer. The WANN model has been successfully used for forecasting estimations. The two key approaches to developing the WANN model technique are described as follows:
In the first approach, the WD technique and the ANN model processing are used separately: the input signal is first decomposed employing various WD basis functions ( In the second approach, the two structures, the WD mathematical algorithm and the ANN artificial-intelligence algorithm, are combined and performed as one; the translation and dilation parameters of the WD serve as weights that are adjusted according to a certain learning algorithm.
The wavelet basis function is developed from only dyadic dilations and translations of the WD whenever the first approach is used. Therefore, this approach to the WANN is often known as a wavenet.
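A minimal sketch of the first approach (decompose first, then train a feed-forward ANN on the lagged sub-series) is given below. It continues the decomposition sketch above, reusing its `decompose` helper and `flow` series; `MLPRegressor` and all hyperparameters stand in for the article's MATLAB network and are assumptions.

```python
# Sketch of the first WANN approach: the WD step and the ANN are
# kept separate. `decompose` and `flow` come from the WD sketch
# above; the network settings here are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(components, target, lags=3):
    """Stack the last `lags` values of every WD sub-series as inputs
    for forecasting the next value of the original series."""
    X, y = [], []
    for t in range(lags, len(target)):
        X.append(components[:, t - lags:t].ravel())
        y.append(target[t])
    return np.array(X), np.array(y)

components = decompose(flow)              # (levels + 1, n) sub-series
X, y = make_lagged(components, flow)
split = int(0.8 * len(y))                 # simple train/test split
ann = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   max_iter=2000, random_state=0)
ann.fit(X[:split], y[:split])
cc = np.corrcoef(y[split:], ann.predict(X[split:]))[0, 1]
print(f"testing-phase CC: {cc:.4f}")
```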
The LSSVM is obtained from the following reformulated support vector machine (SVM) minimisation problem [

$$\min_{w,\,b,\,e}\; J(w,e) \;=\; \frac{1}{2}\,w^{T}w \;+\; \frac{\gamma}{2}\sum_{i=1}^{N} e_i^{2}.$$

The usage of the LSSVM classifier is implicitly compatible with the definition of regression using the equality constraints

$$y_i \;=\; w^{T}\varphi(x_i) + b + e_i, \qquad i = 1,\dots,N,$$

where $\varphi(\cdot)$ is the feature map, $b$ the bias, $e_i$ the error variables and $\gamma > 0$ the regularisation parameter. Since the problem is convex, its solution follows from the Lagrangian

$$\mathcal{L}(w,b,e;\alpha) \;=\; J(w,e) \;-\; \sum_{i=1}^{N}\alpha_i\,\bigl(w^{T}\varphi(x_i) + b + e_i - y_i\bigr),$$

whose optimality conditions give $w = \sum_{i}\alpha_i\,\varphi(x_i)$ and $\alpha_i = \gamma\,e_i$. After eliminating $w$ and $e$, the dual solution is obtained from the linear system

$$\begin{bmatrix} 0 & \mathbf{1}^{T} \\ \mathbf{1} & \Omega + \gamma^{-1} I \end{bmatrix}\begin{bmatrix} b \\ \alpha \end{bmatrix} \;=\; \begin{bmatrix} 0 \\ y \end{bmatrix}, \qquad \Omega_{ij} = K(x_i, x_j) = \varphi(x_i)^{T}\varphi(x_j).$$

Here, picking a kernel function $K$ (commonly the radial basis function $K(x_i,x_j) = \exp(-\lVert x_i - x_j\rVert^{2}/\sigma^{2})$) yields the LSSVM regression model $y(x) = \sum_{i=1}^{N}\alpha_i\,K(x, x_i) + b$.
The wavelet least-squares support vector machine (WLSSVM) model combines the strengths of the WD algorithm and LSSVM processing to obtain strong nonlinear approximation ability. The WLSSVM model consists of an input layer, a hidden layer and an output layer, and it has been used successfully for forecasting approximations [
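As a numerical illustration, the sketch below assembles the kernel matrix and solves the dual linear system given above; `gamma` and `sigma` are illustrative values rather than the article's settings. Replacing the raw inputs with the WD sub-series from the earlier sketch turns this LSSVM into the WLSSVM variant.

```python
# Minimal LSSVM regression sketch, solving the dual linear system
# from the derivation above with an RBF kernel. gamma and sigma are
# illustrative assumptions, not values reported in the article.
import numpy as np

def rbf_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma**2)

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0              # bias constraint row/column
    A[1:, 1:] = K + np.eye(n) / gamma      # Omega + I / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                 # b, alpha

def lssvm_predict(Xnew, Xtrain, b, alpha, sigma=1.0):
    return rbf_kernel(Xnew, Xtrain, sigma) @ alpha + b

# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
X = rng.random((60, 3)); y = np.sin(X.sum(axis=1))
b, alpha = lssvm_fit(X, y)
print(np.abs(lssvm_predict(X, X, b, alpha) - y).max())  # small training residual
```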
The MARS model is a scheme for forecasting continuous numeric outcomes. It is implemented in two stages comprising forward and backward stepwise techniques. The forward stepwise technique examines a large set of candidate input variables (basis functions) with different knots; however, this step tends to produce a complex, multi-layered model, which the backward (pruning) step then simplifies [
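In Friedman's standard notation, the MARS approximation is a weighted sum of hinge basis functions; this is the general form of the method rather than the article's specific fitted model:

$$\hat{f}(x) \;=\; \beta_0 \;+\; \sum_{m=1}^{M}\beta_m\,B_m(x), \qquad B_m(x) \;=\; \prod_{k=1}^{K_m}\bigl[\,s_{k,m}\,\bigl(x_{v(k,m)} - t_{k,m}\bigr)\bigr]_{+},$$

where $[\,u\,]_{+} = \max(0, u)$, the $t_{k,m}$ are the knots selected in the forward pass, $s_{k,m} \in \{-1, +1\}$, and $v(k,m)$ indexes the input variable entering the $m$-th basis function; the backward pass prunes terms to balance fit against complexity.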
Statistical parameters are used to demonstrate the forecastability of the models, assessed by comparing the actual and forecasted values. Usually, the Mean Absolute Error (MAE), the Root Mean Square Error (RMSE) and the Correlation Coefficient (CC) are used to determine the efficiency of the models and how closely the outcomes fit the best-fit line [
In their standard form, these statistical parameters are defined as follows:
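$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\bigl|y_i - \hat{y}_i\bigr|, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - \hat{y}_i\bigr)^{2}},$$

$$\mathrm{CC} = \frac{\sum_{i=1}^{n}\bigl(y_i - \bar{y}\bigr)\bigl(\hat{y}_i - \bar{\hat{y}}\bigr)}{\sqrt{\sum_{i=1}^{n}\bigl(y_i - \bar{y}\bigr)^{2}}\;\sqrt{\sum_{i=1}^{n}\bigl(\hat{y}_i - \bar{\hat{y}}\bigr)^{2}}},$$

where $y_i$ and $\hat{y}_i$ are the observed and forecasted values, $\bar{y}$ and $\bar{\hat{y}}$ their means, and $n$ the number of observations.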
This article examines the wavelet impact on traditional forecasting models by fitting input hydrological time-series datasets collected from the Indus and Chenab Rivers. The computational code of the discussed forecasting models has been written in MATLAB, including the wavelet toolbox.
The six
Model | Original datasets | Model | WD datasets |
---|---|---|---|
M1 | | WM1 | |
M2 | | WM2 | |
M3 | | WM3 | |
M4 | | WM4 | |
M5 | | WM5 | |
M6 | | WM6 | |
The training dataset of the models is used to estimate the approximated parameters, and the testing dataset is used to select the best combination model among all the numbers of hidden layers considered. A trial-and-error technique has been used to estimate the optimum complexity of the discussed models. The outcomes have been evaluated with the statistical approaches, namely the Correlation Coefficient (CC), the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE). The estimates for both streamflow datasets are described in
In
INDUS RIVER | CHENAB RIVER | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
Training phase | Testing phase | Training phase | Testing phase | |||||||||
CC | MAE | RMSE | CC | MAE | RMSE | CC | MAE | RMSE | CC | MAE | RMSE | |
M1 | 0.9026 | 0.0086 | 0.00025 | 0.868 | 0.0318 | 0.0109 | 0.9172 | 0.0421 | 0.0027 | 0.9041 | 0.0437 | 0.0042 |
M2 | 0.9032 | 0.0086 | 0.00026 | 0.7357 | 0.0279 | 0.0097 | 0.9266 | 0.0401 | 0.004 | 0.9126 | 0.0445 | 0.004 |
M3 | 0.9049 | 0.0086 | 0.00025 | 0.9161 | 0.0313 | 0.0109 | 0.9323 | 0.0393 | 0.0039 | 0.9196 | 0.043 | 0.0036 |
M4 | 0.907 | 0.0084 | 0.00027 | 0.893 | 0.0323 | 0.0115 | 0.9342 | 0.036 | 0.0036 | 0.9296 | 0.0414 | 0.0034 |
M5 | 0.9057 | 0.0085 | 0.00024 | 0.9217 | 0.0336 | 0.0102 | 0.9432 | 0.0345 | 0.0031 | 0.9408 | 0.0382 | 0.0027 |
M6 | 0.9145 | 0.0078 | 0.00019 | 0.9418 | 0.036 | 0.0032 |
INDUS RIVER | CHENAB RIVER | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
Training phase | Testing phase | Training phase | Testing phase | |||||||||
CC | MAE | RMSE | CC | MAE | RMSE | CC | MAE | RMSE | CC | MAE | RMSE | |
WM1 | 0.8994 | 0.0098 | 0.0014 | 0.9137 | 0.0188 | 0.0026 | 0.8689 | 0.015 | 0.0019 | 0.9071 | 0.0279 | 0.0203 |
WM2 | 0.9224 | 0.0065 | 0.0012 | 0.8728 | 0.0159 | 0.0015 | 0.9183 | 0.0255 | 0.0019 | |||
WM3 | 0.9063 | 0.0055 | 0.0003 | 0.905 | 0.023 | 0.0053 | 0.9006 | 0.0157 | 0.0013 | 0.9299 | 0.0258 | 0.0018 |
WM4 | 0.816 | 0.0041 | 0.0015 | 0.7872 | 0.0251 | 0.0068 | 0.927 | 0.015 | 0.001 | 0.9467 | 0.0247 | 0.0017 |
WM5 | 0.8836 | 0.0047 | 0.0021 | 0.832 | 0.0246 | 0.0076 | 0.9245 | 0.0141 | 0.0007 | 0.9502 | 0.0239 | 0.0012 |
WM6 | 0.8926 | 0.0032 | 0.0023 | 0.8052 | 0.0227 | 0.0067 | 0.943 | 0.0135 | 0.0005 |
Additionally, in
INDUS RIVER | CHENAB RIVER | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
Training phase | Testing phase | Training phase | Testing phase | |||||||||
CC | MAE | RMSE | CC | MAE | RMSE | CC | MAE | RMSE | CC | MAE | RMSE | |
M1 | 0.8678 | 0.0114 | 0.022 | 0.8193 | 0.0301 | 0.1035 | 0.9147 | 0.0173 | 0.0367 | 0.0783 | ||
M2 | 0.8769 | 0.0124 | 0.0213 | 0.0922 | 0.9278 | 0.0219 | 0.0301 | 0.7956 | 0.0889 | 0.2934 | ||
M3 | 0.8536 | 0.0125 | 0.0231 | 0.7094 | 0.0487 | 0.165 | 0.9026 | 0.0261 | 0.0346 | 0.7842 | 0.372 | |
M4 | 0.7428 | 0.019 | 0.0297 | 0.6683 | 0.04 | 0.1059 | 0.8713 | 0.0271 | 0.0374 | 0.6945 | 0.2685 | 0.5161 |
M5 | 0.7216 | 0.0207 | 0.0307 | 0.8643 | 0.0318 | 0.8102 | 0.0199 | 0.0307 | 0.6141 | 0.1515 | 0.1973 | |
M6 | 0.8454 | 0.014 | 0.0237 | 0.8572 | 0.0306 | 0.0892 | 0.7009 | 0.0263 | 0.0332 | 0.5314 | 0.1259 | 0.2392 |
INDUS RIVER | CHENAB RIVER | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
Training phase | Testing phase | Training phase | Testing phase |
CC | MAE | RMSE | CC | MAE | RMSE | CC | MAE | RMSE | CC | MAE | RMSE | |
WM1 | 0.855 | 0.0357 | 0.0027 | 0.8293 | 0.0419 | 0.0029 | 0.8649 | 0.0132 | 0.0275 | |||
WM2 | 0.8678 | 0.029 | 0.0019 | 0.8995 | 0.036 | 0.0025 | 0.8287 | 0.0155 | 0.0254 | 0.8763 | 0.0719 | 0.2728 |
WM3 | 0.8732 | 0.0263 | 0.0016 | 0.9015 | 0.0335 | 0.002 | 0.8007 | 0.0231 | 0.0353 | 0.8124 | 0.0991 | 0.3215 |
WM4 | 0.8796 | 0.0223 | 0.0012 | 0.9398 | 0.0317 | 0.0019 | 0.7732 | 0.0195 | 0.0295 | 0.8013 | 0.1978 | 0.5081 |
WM5 | 0.8871 | 0.0182 | 0.0007 | 0.7411 | 0.0176 | 0.0263 | 0.7646 | 0.207 | 0.1908 | |||
WM6 | 0.8799 | 0.0253 | 0.0015 | 0.9475 | 0.0324 | 0.0017 | 0.729 | 0.0154 | 0.0259 | 0.7116 | 0.1079 | 0.2271 |
Furthermore, in
INDUS RIVER | CHENAB RIVER | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
Training phase | Testing phase | Training phase | Testing phase | |||||||||
CC | MAE | RMSE | CC | MAE | RMSE | CC | MAE | RMSE | CC | MAE | RMSE | |
M1 | 0.8715 | 0.0112 | 0.0006 | 0.7991 | 0.025 | 0.008 | 0.884 | 0.0863 | 0.0129 | 0.7869 | 0.118 | 0.0271 |
M2 | 0.9061 | 0.0094 | 0.0003 | 0.8561 | 0.0266 | 0.0087 | 0.8976 | 0.0354 | 0.0029 | 0.8174 | 0.1152 | 0.0297 |
M3 | 0.9034 | 0.0097 | 0.0003 | 0.9069 | 0.0313 | 0.0025 | 0.7869 | 0.1213 | 0.0352 | |||
M4 | 0.9054 | 0.0096 | 0.0003 | 0.8958 | 0.0265 | 0.0083 | 0.8895 | 0.03 | 0.0023 | 0.7684 | 0.1243 | 0.0396 |
M5 | 0.9042 | 0.0098 | 0.0003 | 0.8825 | 0.0275 | 0.0089 | 0.9004 | 0.0281 | 0.0021 | 0.7336 | 0.1352 | 0.0458 |
M6 | 0.9056 | 0.0096 | 0.0003 | 0.8774 | 0.0274 | 0.0089 | 0.9397 | 0.0344 | 0.0031 |
INDUS RIVER | CHENAB RIVER | |||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
Training phase | Testing phase | Training phase | Testing phase | |||||||||
CC | MAE | RMSE | CC | MAE | RMSE | CC | MAE | RMSE | CC | MAE | RMSE | |
WM1 | 0.9001 | 0.0112 | 0.0006 | 0.8089 | 0.0315 | 0.0079 | 0.8899 | 0.0693 | 0.0049 | 0.8169 | 0.1075 | 0.0279 |
WM2 | 0.9018 | 0.0094 | 0.0003 | 0.8479 | 0.0271 | 0.0081 | 0.8808 | 0.0545 | 0.0037 | 0.8074 | 0.1059 | 0.0265 |
WM3 | 0.9048 | 0.0097 | 0.0003 | 0.8491 | 0.0268 | 0.0078 | 0.8809 | 0.0419 | 0.0032 | 0.8091 | 0.1108 | 0.0271 |
WM4 | 0.9067 | 0.0095 | 0.0003 | 0.8895 | 0.0379 | 0.0028 | 0.8561 | 0.1047 | 0.0277 | |||
WM5 | 0.9059 | 0.0098 | 0.0003 | 0.8355 | 0.0278 | 0.0084 | 0.9014 | 0.0301 | 0.0027 | |||
WM6 | 0.9064 | 0.0096 | 0.0003 | 0.8532 | 0.0269 | 0.0082 | 0.9009 | 0.0316 | 0.0031 | 0.8591 | 0.1019 | 0.0269 |
Clearly,
Similarly,
The approximations of each model are shown in
The CC-values of the WD-combined models are closer to 100% for the Indus and Chenab datasets than those of the traditional models. Therefore, the WD algorithm combined with the traditional models has made a tremendous impact, acting as a tool that delivers improved estimations of both rivers' streamflow. Consequently, the combined WLSSVM, WANN and WMARS forecasting methodologies, used as the second type of models, provide excellent results compared with the first type of traditional models, LSSVM, ANN and MARS.
Dataset | Model | CC | MAE | RMSE |
---|---|---|---|---|
Indus River | LSSVM | 0.9335 | 0.0247 | 0.0019 |
WLSSVM | 0.9398 | 0.0178 | 0.002 | |
ANN | 0.9125 | 0.0288 | 0.0824 | |
WANN | 0.9597 | 0.029 | 0.0016 | |
MARS | 0.8968 | 0.0267 | 0.0085 | |
WMARS | 0.9012 | 0.0259 | 0.0077 | |
Chenab River | LSSVM | 0.9467 | 0.0327 | 0.0021 |
WLSSVM | 0.9511 | 0.0233 | 0.0016 | |
ANN | 0.8144 | 0.0298 | 0.1008 | |
WANN | 0.9286 | 0.0271 | 0.0539 | |
MARS | 0.8471 | 0.1112 | 0.0278 | |
WMARS | 0.8599 | 0.0993 | 0.026 |
It is concluded that using the wavelet application with the addressed forecast time-series models has improved their efficiency and yielded tremendous results. The traditional forecasting time-series models have been enhanced by utilising the wavelet algorithm. The significance of the wavelet information is that it improves the efficiency of the models, which determines appropriate outcomes for time-series models. The performance of the wavelet-combined models is consistent with their associated resampling outcomes. Filtration of the streamflow time-series datasets has been achieved through the WD application, a feature not available in the traditional models. Nonlinear input combinations constructed with the WD application serve as input estimators for the traditional models, improving the forecast efficiency of the combined models. Therefore, the WD application has become an efficient and valuable tool for analysing simulations of time-series datasets in various domains.
Thus, researchers have a good case in the future for the extensive use of the wavelet algorithm to build novelty into models, or to improve existing models with a suitable transform other than the wavelet. The WD algorithm provides the optimum ability to pick appropriate inputs and logically improves the output of traditional forecasting models.
This study was supported by the Department of Basic Sciences & Related Studies, Mehran University of Engineering & Technology, Jamshoro, Sindh, Pakistan. The authors gratefully acknowledge the institute for its support and cooperation in the research activity and for providing a healthy research environment and facilities.