The COVID-19 pandemic has disrupted people's daily lives, damaged economies around the world, and killed millions of people thus far. To fight this disease, it is essential to screen affected patients in a timely and cost-effective manner. This paper presents the prediction of COVID-19 from chest X-ray images and the implementation of an image processing system built on deep learning and neural networks. A Deep Learning, Machine Learning, and Convolutional Neural Network-based approach is proposed for classifying Covid-19-positive and normal patients from chest X-ray images. TensorFlow was used for building and training the neural networks, and scikit-learn was used for the end-to-end machine learning workflow. Various deep learning components, such as Conv2D, DenseNet, Dropout, and MaxPooling2D, are used to create the model. After training and testing on the X-ray images, the proposed approach achieved a classification accuracy of 96.43 percent and a validation accuracy of 98.33 percent. Finally, an application with a graphical user interface has been developed for general users, which classifies a chest X-ray image as either Covid or normal: medical personnel or the general public can browse for a chest X-ray image and feed it into the program.
In recent times, coronavirus disease has become the biggest health hazard worldwide. Because of the widespread nature of the disease and the initial unavailability of medicine or vaccines, every country has faced turbulent times in ensuring the wellbeing of its inhabitants. The COVID-19 pandemic has prompted extensive research across the world. Its impact on ongoing research, and the criticality and difficulty of conducting research during a pandemic, fully emphasize the importance of the pediatrician-researcher workforce. As the community works through and beyond this pandemic, which may have a long-term impact on our world, on research, and on the biomedical inquiry enterprise, it is vital to recognize and address opportunities and strategies, as well as difficulties, in sustaining the pediatrician-researcher workforce [
The impact on research in the era of COVID-19 was immediate, dramatic, and, without a doubt, long-lasting. Most academic, commercial, and government basic research and clinical investigations were scaled back, or investigations were redirected to COVID-19. The majority of ongoing clinical trials, including those researching life-saving therapies, were postponed, and the majority of those still open to new recruitment were closed. Continuing clinical trials were altered to allow home-based organizations to give care and virtual monitoring, reduce the risk of COVID-19 contamination, and avoid diverting healthcare resources from the pandemic response [
In a medical environment, obtaining a test result takes about 6-7 days, and testing is also expensive for the general population. Due to these limitations, radiography checks can be used as a stand-in for diagnosing the disease. Chest radiography images can be evaluated to determine the presence of the novel coronavirus or its effects: studies show that infections from this virus family manifest as characteristic signs in radiographic images. Furthermore, Polymerase Chain Reaction (PCR) test results are not always accurate. Chest X-rays are also more accessible than other radiological examinations, such as Computed Tomography (CT) scans, and are available in almost every clinic. Nevertheless, the difficulty of identifying Covid-19 patients from chest X-rays (CXR) has been demonstrated, with trained specialists not always available, especially in remote regions [
The COVID-19 pandemic has caused a dramatic loss of human life worldwide and is an immense test of our entire society. Public awareness and "dos and don'ts" programs for COVID-19 are being run in public areas, and environmental factors may also aid the spread of the coronavirus. However, the loss and recovery rates show that the pandemic is not being handled well. In many cases, patients could not even get their results on time; as a result, their condition declined, sometimes fatally.
Many corona detection systems work on CXR and CT images using Transfer Learning and Haralick features, use the Internet of Things to send alerts, or apply COVID-19 deep learning prediction models built on publicly available radiologist-annotated data. There are also medical tests, such as the molecular point-of-care test and quantitative reverse-transcription polymerase chain reaction (qRT-PCR). Due to a shortage of diagnostic kits and the unreliable predictions of RT-PCR in Algeria, public and private hospitals employed CT scans as an alternative diagnostic method to detect COVID-19 in patients [
The study's main goal is to produce the most accurate result possible while saving time and money on coronavirus tests. It presents a fully automated framework for differentiating coronavirus-infected lungs from other lung disorders in chest scan images. For the frontend, a Graphical User Interface (GUI) system was used instead of Flask or Django. The novelty of this paper is the comparison of three types of models (CNN, Inception, and DenseNet). The accuracy obtained from the CNN model is 98.3 percent, and the GUI software tool detects a patient's result within seconds from a given X-ray image. In this way, it can be claimed that classification using radiographic images, such as a chest X-ray (CXR), can be precise while also being significantly faster and less expensive than a PCR test.
Section one provides an introduction. In section two, methods and methodology are described. In section three, mathematical equations and expressions are presented. The results are provided in section four. A performance comparison is shown in section five. Finally, section six discusses the conclusion.
The proposed system aims to predict Covid-19 from chest X-ray images using Deep Learning and Convolutional Neural Networks. The available dataset [
As a result, the article recommends a simple and effective Convolutional Neural Network (CNN) and Deep Learning-based technique for classifying Covid-19-positive and negative cases from CXR images. This method can screen a suspected Covid-19-positive patient in a matter of seconds. As part of this study, we supply a tool that can be deployed to recognize Covid-19-positive patients. Indeed, in the absence of a radiologist, or when specialists' opinions conflict, this deep learning-based tool can still offer an interpretation without requiring human participation. We used data from free sources to demonstrate the applicability of the proposed tool in terms of classification accuracy and sensitivity.
The system is made up of a dataset that includes normal and Covid patients’ chest X-ray images [
To identify objects in chest X-ray images for our Covid-19 detection system, we used OpenCV. This image processing technique helps with object detection based on the color, size, and shape of images. Several benchmark CNN models have been adopted in our proposed work. They have been trained individually to make independent predictions, and the models are then combined, using a new weighted-average ensembling technique, to predict a class value. This proposed ensembling technique is expected to produce a more robust prediction. Our proposed study uses three pre-trained CNN models: DenseNet, the custom CNN (saved as an .h5 file), and the Inception model. To train the models, we applied Keras and TensorFlow with the supplied parameters. We then run the trained models on the test images and, using the weighted average of the three models, choose class label 0 or 1 based on the results. Partitioning the images into training and testing sets guarantees that there is no patient overlap, i.e., different images of the same patient do not appear in both the training and testing datasets.
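The weighted-average ensembling step described above can be sketched as follows. The probabilities and weights here are illustrative stand-ins, not the paper's actual model outputs; the weights are assumed to be proportional to each model's validation accuracy.

```python
import numpy as np

# Hypothetical per-model probabilities of class 1 (Normal) for 4 test images.
# These values are made up for illustration only.
p_cnn       = np.array([0.10, 0.92, 0.85, 0.05])
p_densenet  = np.array([0.20, 0.88, 0.90, 0.15])
p_inception = np.array([0.05, 0.95, 0.70, 0.10])

# Weights proportional to each model's (assumed) validation accuracy.
weights = np.array([0.983, 0.968, 0.950])
weights = weights / weights.sum()

# Weighted-average ensemble: combine the probabilities, then threshold
# at 0.5 to obtain the class label (0 = Covid positive, 1 = Normal).
stacked = np.vstack([p_cnn, p_densenet, p_inception])
p_ensemble = weights @ stacked
labels = (p_ensemble >= 0.5).astype(int)
```

With these stand-in values, the first and last images fall below 0.5 and are labeled Covid positive, while the middle two are labeled Normal.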
Firstly, DenseNet's convolutions create a large number of feature maps. The number of output feature maps of a layer is defined as the growth rate. DenseNet has lower requirements for wide layers, since layers are densely connected and there is little redundancy in the learned features.
Densely connected convolutional networks, also known as DenseNets, are the next step in the evolution of deep convolutional networks. Traditional feed-forward neural networks connect each layer to the next after executing a composite of operations; as we have seen, this composite typically includes a convolution or pooling layer, batch normalization, and an activation function. DenseNets are divided into dense blocks, within which the feature-map dimensions remain constant while the number of channels varies [
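The dense connectivity and growth rate described above can be illustrated with a shape-only sketch: each layer's output is concatenated with all previous feature maps, so the channel count grows linearly. The growth rate, input channel count, and block depth below are assumed values, and zero arrays stand in for the real BN-ReLU-convolution computation.

```python
import numpy as np

growth_rate = 32   # new feature maps produced per layer (assumed)
channels = 64      # channels entering the dense block (assumed)
h, w = 28, 28      # spatial size stays constant inside a dense block

features = np.zeros((h, w, channels))
for _ in range(4):  # a dense block with 4 layers (assumed depth)
    # stand-in for BN -> ReLU -> 3x3 conv producing growth_rate maps
    new_maps = np.zeros((h, w, growth_rate))
    # dense connectivity: concatenate new maps with ALL previous ones
    features = np.concatenate([features, new_maps], axis=-1)

# after 4 layers: 64 + 4 * 32 = 192 channels, same 28x28 spatial size
```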
The paper proposes a modern kind of architecture, GoogLeNet or Inception v1. It is essentially a convolutional neural network (CNN) that is 27 layers deep. A 1 × 1 convolutional layer is sometimes applied before another layer, primarily for dimensionality reduction.
Inception convolutional neural networks (CNNs) reduce processing costs by combining modules. Because a neural network must deal with an enormous variety of images, each with a wide range of salient features, the network must be designed appropriately. In the first simplified version of an Inception module, convolution is performed on the input with not one but three distinct filter sizes (1 × 1, 3 × 3, 5 × 5), and max pooling is also employed. The resulting outputs are then concatenated and passed to the next layer. By organizing the CNN to perform its convolutions at the same level, the architecture becomes progressively wider, not deeper [
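A simplified Inception module of the kind described above can be sketched in Keras as parallel branches whose outputs are concatenated channel-wise. The filter counts here are illustrative, not GoogLeNet's actual configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Simplified ("naive") Inception module: parallel 1x1, 3x3, and 5x5
# convolutions plus max pooling, concatenated along the channel axis.
inputs = layers.Input(shape=(224, 224, 3))
b1 = layers.Conv2D(16, (1, 1), padding="same", activation="relu")(inputs)
b2 = layers.Conv2D(24, (3, 3), padding="same", activation="relu")(inputs)
b3 = layers.Conv2D(8, (5, 5), padding="same", activation="relu")(inputs)
b4 = layers.MaxPooling2D((3, 3), strides=1, padding="same")(inputs)

# "same" padding keeps all branches at 224x224, so they can be concatenated:
# 16 + 24 + 8 + 3 = 51 output channels.
outputs = layers.Concatenate(axis=-1)([b1, b2, b3, b4])
module = tf.keras.Model(inputs, outputs)
```

The module widens the network (more channels at the same spatial level) rather than deepening it, which is exactly the design choice the text describes.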
Convolutional neural networks were used for detection, and two chest X-ray datasets were analyzed, consisting of chest X-ray samples from normal, COVID-19, and pneumonia patients [
The dataset images are divided into two parts: Covid-positive and normal CXR images. The images are resized to (224, 224) and normalized, then shuffled and split into training and testing data. The training part has 60 images and 2 classes, and the testing part likewise has 60 images and 2 classes. It is possible that CXR images of the same patient appear in both the training and testing parts; such overlap is a limitation, but the model's training, examined through testing and validation checking, still indicates the ability of the trained model. Chest X-ray images of Covid-19-positive and negative patients are shown below in
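The resize-normalize-shuffle-split pipeline described above can be sketched on synthetic stand-in data. The 120 random arrays below replace the real CXR dataset, and the 50/50 split size is chosen to reproduce the 60-image training and testing parts; the real pipeline would load and resize actual images first.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for 120 CXR images already resized to 224x224x3.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(120, 224, 224, 3)).astype("float32")
labels = np.array([0, 1] * 60)   # 0 = Covid positive, 1 = Normal

images /= 255.0                  # normalize pixel values to [0, 1]

# Shuffle and split into training and testing parts (60 images each).
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.5, shuffle=True,
    stratify=labels, random_state=42,
)
```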
Here are four images, Covid and Normal, taken from the Kaggle dataset. The first dataset was published in 2018, and since then around 100 research articles drawing on it have been published. The benchmark paper by Kermany et al. reports a classification accuracy of 92.8 percent using the Inception V3 architecture (pre-trained on the ImageNet dataset) to distinguish Normal and COVID samples. For distinguishing normal, bacterial, and viral pneumonia, they achieved a 90.9 percent accuracy rate. However, subsequent studies have reported better classification results (for binary classification) on these datasets. For instance, in 2020, Chouhan et al. described a Transfer Learning-based methodology for Covid identification that achieved a 96.39 percent classification accuracy. Nahid et al. suggested a two-channel CNN-based pneumonia detection technique that achieved a classification accuracy of 97.92 percent [
This proposed work's most significant benefit is that it is very user-friendly, and the system benefits all sections of society: anyone with minimal knowledge of browsing and selecting images can use it. The whole system is written in Python, and for the overall design we use the Tkinter library, Python's default graphical user interface toolkit, to generate a standard user interface. The dataset was assembled in a Jupyter notebook, and we train and validate our data in Google Colab. We use Flask to serve the model, Keras for the CNN base model, and OpenCV and TensorFlow for image processing such as resizing and zooming; the standard image size is 224 × 224. We use scikit-learn to manage the training workflow, and Matplotlib to plot training loss, training accuracy, validation accuracy, and validation loss. The sklearn.metrics module lets us report true positives, true negatives, false positives, and false negatives, which show how successfully the system performs. Image detection is driven by artificial intelligence, with deep learning used in the backend.
The performance metrics used to measure the success of the proposed system are classification accuracy, which we may obtain using some variant of cross-validation, and sensitivity, the ratio of positives correctly classified by our program to all actual positives. Precision is the ratio of positives correctly classified by our software to all samples classified as positive. From these, the overall score was calculated.
Accuracy is measured as follows:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
The confusion matrix is a classification metric that, in summary, counts the correct and incorrect predictions made by a classifier.
For these confusion metrics, we use the Python machine learning library scikit-learn. The Positive/Negative part of the name refers to the predicted result of a test, whereas True/False refers to whether that prediction matched the real result. So if the program predicted that somebody was Covid-19 positive but they were not, that would be a False Positive, since the prediction was positive but the real result contradicted it.
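The scikit-learn functions mentioned above can be used as in this sketch. The label vectors are made-up illustrations, not the paper's actual test outputs; here 1 plays the role of the positive class.

```python
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_score, recall_score)

# Illustrative ground-truth and predicted labels (1 = positive class).
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

# sklearn returns the 2x2 matrix as [[TN, FP], [FN, TP]].
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

acc  = accuracy_score(y_true, y_pred)   # (TP + TN) / all
prec = precision_score(y_true, y_pred)  # TP / (TP + FP)
rec  = recall_score(y_true, y_pred)     # TP / (TP + FN), i.e. sensitivity
```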
This section summarizes our proposed solutions and results. Training is repeated until a specified convergence criterion is satisfied, a predetermined number of epochs has elapsed, or performance has remained unchanged for several successive epochs.
Layer (type) | Output Shape | Parameters |
---|---|---|
conv2d (Conv2D) | (None, 222, 222, 32) | 896 |
conv2d_1 (Conv2D) | (None, 220, 220, 64) | 18496 |
max_pooling2d (MaxPooling2D) | (None, 110, 110, 64) | 0 |
dropout (Dropout) | (None, 110, 110, 64) | 0 |
conv2d_2 (Conv2D) | (None, 108, 108, 64) | 36928 |
max_pooling2d_1 (MaxPooling2D) | (None, 54, 54, 64) | 0 |
dropout_1 (Dropout) | (None, 54, 54, 64) | 0 |
conv2d_3 (Conv2D) | (None, 52, 52, 128) | 73856 |
max_pooling2d_2 (MaxPooling2D) | (None, 26, 26, 128) | 0 |
dropout_2 (Dropout) | (None, 26, 26, 128) | 0 |
flatten (Flatten) | (None, 86528) | 0 |
dense (Dense) | (None, 64) | 5537856 |
dropout_3 (Dropout) | (None, 64) | 0 |
dense_1 (Dense) | (None, 1) | 65 |

Total params: 5,668,097
Trainable params: 5,668,097
Non-trainable params: 0
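The summary above fully determines the architecture, so it can be reconstructed in Keras. The 224 × 224 × 3 input size, 3 × 3 kernels, activations, and dropout rates are inferred from the listed output shapes and parameter counts; treat this as a sketch consistent with the table, not the authors' exact code.

```python
from tensorflow.keras import layers, models

# CNN reconstructed from the model summary; every layer's output shape and
# parameter count matches the table (total params: 5,668,097).
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(224, 224, 3)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),          # dropout rates are assumed values
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Flatten(),              # 26 * 26 * 128 = 86528 features
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # binary: Covid vs Normal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```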
The model is a sequential CNN built in Keras. From the summary we can see the layer types and output shapes. Of the total parameters, all 5,668,097 are trainable and 0 are non-trainable. The table below summarizes the training and validation loss and accuracy, where class 0 is Covid positive and class 1 is normal:
COVID Positive | Normal | Training Accuracy | Training Loss | Validation Accuracy | Validation Loss |
---|---|---|---|---|---|
0 | 1 | 0.9643 | 0.0904 | 0.9833 | 0.0363 |
From
Covid-19 class indices | Training generator | Validation generator |
---|---|---|
0 | 0.06231 | 0.03630 |
1 | 0.97321 | 0.98333 |
Evaluating the model on the training and validation generators gives a training loss of 0.06231 with an accuracy of 0.97321, and a validation loss of 0.03630 with an accuracy of 0.98333.
All models are trained for a total of 10 epochs with 7 steps per epoch. The model is fit and then saved as an .h5 file. Training takes about 9 s per epoch. The training accuracy of this model is almost 96 percent, and the validation accuracy is almost 98 percent.
This section describes the experimental evaluation of the proposed method. The goal of the offline training and testing experiments is to find the machine learning model with the best output for real-time Covid-19 prediction. We analyzed performance using two machine learning models, training on a total of five thousand CXR images, with 60 images each for testing and validation.
We start by describing the confusion matrix. In these confusion metrics, the Positive/Negative part refers to the predicted result of a test, whereas True/False refers to whether the prediction matched the real result. In
Here, TP is 30, TN is 29, FP is 1, and FN is 0. The train generator class index for Covid is 0 and for normal is 1.
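From the reported counts (TP = 30, TN = 29, FP = 1, FN = 0), the standard metrics follow by simple arithmetic; note that the resulting accuracy of 59/60 ≈ 0.9833 matches the reported validation accuracy.

```python
# Confusion-matrix counts reported in the text; "positive" here denotes
# a correctly flagged Covid case.
tp, tn, fp, fn = 30, 29, 1, 0

accuracy  = (tp + tn) / (tp + tn + fp + fn)   # 59/60 ≈ 0.9833
precision = tp / (tp + fp)                     # 30/31 ≈ 0.9677
recall    = tp / (tp + fn)                     # sensitivity: 30/30 = 1.0
f1        = 2 * precision * recall / (precision + recall)
```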
The graphs show that training and validation accuracy both increase. The training accuracy dips slightly at epoch 4 but increases afterwards, reaching its highest value of 0.95 after epoch 8. The validation accuracy starts at 0.92 at epoch 0, is highest around epoch 1, decreases slightly after epoch 7, and increases again after epoch 8. The final training loss is 0.09, and the final validation loss is 0.03. Training loss is highest at epoch 0 and decreases through epochs 2, 4, 6, and 8, with slight increases around epochs 4 and 7; after epoch 8 the training loss is very low. The validation loss is highest, close to 0.58, at epoch 0; it stays roughly constant from epoch 1 to 2 and then decreases. Between epochs 3 and 7 the change in validation loss is small, with a slight increase at epoch 7 followed by a further decrease. Overall, training loss remains higher than validation loss, as both graphs below show.
Overall, validation accuracy is higher than training accuracy, as both graphs below show. The epoch curves for training and validation were plotted using Matplotlib and the sklearn library.
Based on the proposed design, a GUI application for the Covid prediction framework was built: a simple desktop program intended to help identify Covid-19-positive and negative cases, as shown in
A chest X-ray image can be browsed and fed into the program by medical personnel or the general public. The application then applies the method described in this paper and assigns a label to the given chest X-ray image, either Covid or Normal. As a result, users see the Covid-positive or Covid-negative outcome along with its probability, as illustrated in
Firstly, to launch the desktop tool, run the script gui_covid.py from the Anaconda Prompt. The platform then opens as shown below. In
After selecting an image, press the detect button to predict the result, then press OK. After that, press the "detect the coronavirus" button. Within a few seconds, the result appears on the screen, either Covid or Normal. The result can be seen in
Finally, after image analysis, the user gets the result, and the application shows a message.
When the user submits the X-ray image, a text message is shown. If the patient's result is Covid-19 positive, it shows "Take care, be alert and stay safe." If the result is Covid-19 negative, it shows "Don't worry, you are safe." After that, the user can end the session or repeat the same procedure with a different chest X-ray image.
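The message logic just described can be sketched as a small helper. The function name is hypothetical, and the label convention (0 = Covid positive, 1 = Normal) follows the class indices reported earlier; the message wording paraphrases the app's text.

```python
def result_message(label: int) -> str:
    """Map the model's predicted class label to the message shown to the
    user (0 = Covid positive, 1 = Normal); helper name is illustrative."""
    if label == 0:
        return "Covid-19 positive. Take care, be alert and stay safe."
    return "Covid-19 negative. Don't worry, you are safe."
```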
The GUI application was built with the Python library Tkinter, which we imported for the interface and its templates. To predict results, we load our trained model.h5 file. This is how users can get their Covid-19 results.
We compared the model's performance to comparable approaches in
Models | Parameters | Validation Accuracy (%) | Precision (%) | Sensitivity (%) | F1-Score (%) |
---|---|---|---|---|---|
Individual Networks | | | | | |
DenseNet201 | 5,668,097 | 96.8 | 96 | 98 | 97.4 |
Inception v3 | 5,634,029 | 95 | 91 | 92 | 95.8 |
Ensembled networks | | | | | |
Unweighted average | -- | 94.5 | 93 | 95 | 95.1 |
Weighted average (accuracy) [ | -- | 94.5 | 94 | 95 | 95.1 |
Weighted average (rank) [ | -- | 95.3 | 95 | 97 | 95.8 |
Proposed Approach | -- | 97.6 | 96 | 98 | 98.3 |
In summary, our proposed approach performs better than the individual models; the closest performance is that of DenseNet201. In
Reference | Identified Classes | Number of Samples | Classification Model | Accuracy (%) |
---|---|---|---|---|
[ | COVID-CXR | 21456 | CovXNet | 95.00 |
[ | COVID-CXR | 5349 | CNN-AD | 85.00 |
[ | NORMAL-CXR | 5349 | CNN-AD | 85.00 |
[ | COVID-CXR | 5349 | CNN-SA | 95.00 |
[ | NORMAL-CXR | 5349 | CNN-SA | 95.00 |
[ | COVID-CXR | 6432 | VGG16 | 91.69 |
[ | COVID-CXR | 3000 | RF | 89.41 |
[ | COVID-CXR | 3000 | LR | 88.36 |
[ | COVID-CXR | 3000 | KNN | 69.25 |
[ | COVID-CXR | 3000 | DenseNet | 85.73 |
Proposed | COVID-CXR | 5668 | CNN | 98.3 |
“--” denotes that the information is not mentioned in the associated paper.
Covid patients' most important duty is not to spread the virus to healthy people. If someone does not know their Covid result, they cannot help control its spread, so each person should learn their positive or negative status as early as possible. This paper is about how the system can deliver Covid results as early as possible at low cost. The main motive is to decrease the cost of the corona test and obtain results quickly using neural networks and artificial intelligence. The research work detects Covid-19 from chest X-ray images; the detection model gives the true result with 98 percent accuracy, meaning that in a hundred results there might be two false ones. This will help patients get a Covid result from home at no cost and take their treatment from home, which benefits both Covid patients and healthy people who are not affected. In this way we can protect people from Covid-19: when affected people quarantine at home, they do not spread the disease to others, and with proper treatment they protect their own health. Lastly, this paper presents the design and implementation of a Graphical User Interface (GUI) application for the Covid prediction framework. We believe the proposal is very helpful to all people and can greatly decrease the Covid rate if implemented, adding a milestone to the Covid-19 testing effort. In the future, our goal is to improve the CNN architecture to obtain better accuracy and to implement different models for comparison with deep learning models. We also plan to scale up the web application by letting users save their login information and data and by opening a comment section for further improvement.
The authors would like to thank the Department of Electrical and Computer Engineering of North South University and Taif University Researchers Supporting Project number (TURSP-2020/73), Taif University, Taif, Saudi Arabia.