Computers, Materials & Continua
DOI:10.32604/cmc.2022.020238
Article

IoT & AI Enabled Three-Phase Secure and Non-Invasive COVID 19 Diagnosis System

Anurag Jain1, Kusum Yadav2, Hadeel Fahad Alharbi2 and Shamik Tiwari1,*

1Virtualization Department, School of Computer Science, University of Petroleum and Energy Studies, Dehradun, 248007, Uttarakhand, India
2College of Computer Science and Engineering, University of Ha’il, Ha’il, Kingdom of Saudi Arabia
*Corresponding Author: Shamik Tiwari. Email: shamik.tiwari@ddn.upes.ac.in
Received: 16 May 2021; Accepted: 22 July 2021

Abstract: Corona is a viral disease that has taken the form of a pandemic and is causing havoc worldwide after its first appearance in Wuhan, China, in December 2019. Due to the similarity of its initial symptoms to those of viral fever, it is challenging to identify this virus early, and late detection can result in the death of the patient. Developing and densely populated countries face a scarcity of resources such as hospitals, ventilators, oxygen, and healthcare workers. Technologies like the Internet of Things (IoT) and artificial intelligence can play a vital role in diagnosing the COVID-19 virus at an early stage. To minimize the spread of the pandemic, IoT-enabled devices can be used to collect patients' data remotely in a secure manner, and the collected data can be analyzed through a deep learning model to detect the presence of the COVID-19 virus. In this work, the authors propose a three-phase model to diagnose COVID-19 by incorporating a chatbot, IoT, and deep learning technology. In phase one, an AI-assisted chatbot guides an individual by asking about some common symptoms. If even a single symptom is detected, the second phase of diagnosis is considered, consisting of screening with a thermal scanner and a pulse oximeter. In case of high temperature and low oxygen saturation level, the third phase of diagnosis is recommended, in which chest radiography images are analyzed through an AI-based model to diagnose the presence of the COVID-19 virus in the human body. The proposed model reduces human intervention through chatbot-based initial screening, sensor-based IoT devices, and deep learning-based X-ray analysis. It also helps in reducing the mortality rate by detecting the presence of the COVID-19 virus at an early stage.

Keywords: COVID-19; internet of things; deep learning; chatbot; X-ray; capsule network; convolutional neural network

Abbreviations
AUC: Area Under Curve
CapsNet: Capsule Network
CNN: Convolutional Neural Network
HSV: Hue, Saturation and Value
IoT: Internet of things
ReLU: Rectified Linear Unit
ROC: Receiver Operating Characteristic Curve
RT-PCR: Reverse Transcription Polymerase Chain Reaction

1  Introduction

Corona is a viral disease that has taken the form of a pandemic and is causing havoc worldwide. It is a hazardous infection that cannot be seen with the naked eye and spreads from one person to another. The disease starts with a cold and cough, which gradually take a severe form and badly affect the patient's breathing, sometimes resulting in the patient's death [1]. The virus is believed to have first entered the human body from bats in November 2019; by the end of 2019 it had appeared in a horrible form in China, from where it has slowly spread all over the world [2]. Dry cough and shortness of breath are its significant initial symptoms. Initially, it looks like a common cold, and only an examination can establish whether it is corona or not. When an infected person sneezes, the virus spreads through the air in the expelled droplets, and a person coming in contact with them can quickly catch the infection. Victims of this virus have been found mainly in the 55–60 years age group, and persons suffering from a chronic disease such as diabetes, kidney disease, or heart disease are more prone to this infection [3]. COVID-19 infection has affected more than 180 countries in the world. It has affected about 160 million people so far, and about 3.3 million people have died due to this virus. At the time of writing, vaccines are still in the trial phase, so all countries are still struggling with this virus [4]. Developing and densely populated countries face a scarcity of resources like hospitals, ventilators, oxygen, and healthcare workers. Therefore, researchers and scientists are looking for non-conventional methods to detect this virus and control its spread, and technology can play a vital role in this. Sensor-based hand-held devices can be used to collect various body parameters in a non-invasive manner. Through IoT-enabled devices, the patient's data can be managed remotely and stored on the cloud. Later, the data can be analyzed through machine and deep learning models, and useful information can be easily extracted. This will minimize the spread of the virus, as the patient will get access to healthcare facilities at home. Moreover, this will also assist healthcare workers in making decisions [5,6].

In this manuscript, a three-phase technology-enabled model is proposed to diagnose COVID-19. While designing the model, the authors have incorporated chatbot, IoT, and deep learning technologies to diagnose the presence of the COVID-19 virus safely. The structure of the paper is as follows: technology-enabled solutions proposed by different researchers to detect COVID-19 are reviewed in Section 2. Details of the proposed three-phase technology-enabled model to diagnose COVID-19 are given in Section 3. Details of the simulation environment, results, and their detailed analysis are described in Section 4. Finally, conclusive remarks and the scope for further extension are discussed in Section 5.

2  Literature Review

This section contains the details of different IoT and artificial intelligence-based solutions proposed by eminent researchers to fight against the COVID-19 virus.

Alam [7] has proposed a four-layer architecture based on blockchain and IoT to detect and prevent the spread of COVID-19. The author used IoT technology to detect COVID-19 symptoms at home and blockchain technology for secure transmission over an insecure network, and suggested using the Aarogya Setu and Tawakkalna mobile apps for analysis of the received data. Singh et al. [8] have proposed an IoT-based wearable band (IoT-Q-Band) to track quarantined patients. The proposed band is bundled with a mobile application and a cloud-based monitoring system. Quarantined patients are supposed to wear this band until the completion of the quarantine period. The band has a battery that lasts about 20 days and is connected to a mobile device through a Bluetooth link. Once worn, it cannot be taken off; it can only be cut. If a quarantined patient tries to tamper with it or remove it before completing the quarantine period, it immediately signals the local authority. The band also sends the person's location to the local authorities at regular intervals; if the person is more than 50 m away from the registered location, it indicates a violation of the quarantine rules. Kolhar et al. [9] have proposed an IoT-based three-layered biometric architecture for face detection to restrict the movement of people during lockdown. The proposed model was designed to ensure that, during a relaxed lockdown period, only one person from a family goes out to purchase essential items. At the first layer, called the device layer, sensors deployed at the entrance of a residential society and connected to a Raspberry Pi capture face images of people. Captured images are transmitted to the second layer, the cloud, where an image database is maintained for storage purposes. At the third layer, a CNN architecture matches the captured face against the family database; if another member of the same family is already outside, that person is not allowed to go out. Karmore et al. [10] have proposed a sensor- and artificial intelligence-based, cost-effective, and safe medical diagnosis humanoid system to diagnose COVID-19. The authors divided their model into three parts: (i) localization and autonomous navigation, (ii) identification and diagnosis, and (iii) report generation. They used IR sensors, e-health sensors, cameras, blood samples, and chest CT images to detect COVID-19 at various parts of the model. Mohammed et al. [11] have proposed the concept of an intelligent helmet to diagnose COVID-19. The smart helmet is equipped with thermal and optical imaging systems. The thermal imaging system captures body temperature through infrared rays; if the temperature is above normal, an image of that person is captured through another camera along with the person's GPS coordinates, and the collected data is passed to the local administration for necessary action. Apostolopoulos et al. [12] have proposed a transfer learning-based model for the diagnosis of COVID-19. The authors used chest X-ray images of COVID-19 patients, pneumonia patients, and normal persons to train and test the proposed model and achieved an accuracy of 96% in classifying the different images.

Though different researchers have proposed various technology-enabled solutions to detect COVID-19, none has combined multiple technologies in a single pipeline. This has motivated us to propose a framework that integrates multiple technologies to diagnose COVID-19. Details of the proposed framework are given in the next section.

3  Material and Method

Details of the proposed framework, methodology, models, and dataset are described in this section.

3.1 Design of Framework

The design of the proposed framework is shown in Fig. 1. The whole framework is divided into three phases. Their details are as follows:


Figure 1: Architecture of three-phase COVID-19 detection model

3.1.1 Phase 1

A chatbot model can be used to interact with a person experiencing some unusual symptoms. A chatbot powered by artificial intelligence can guide individuals and patients during and beyond COVID-19. By answering a set of questions through hand-held electronic devices like mobile phones, tablets, etc., the risk posed by the disease can be mitigated. In the absence of a doctor or at remote places, AI-enabled chatbot systems designed in regional languages can be used to fetch and record the patient's details in a more convenient manner. They can also be used for booking appointments with doctors and reordering medicines on time. By comparing a patient's present symptoms with already-fed data, the chatbot can act as an adviser for the next stage of treatment.
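For illustration, a minimal rule-based sketch of such a screening dialogue is given below (in Python, the language also used for the simulations in Section 4). The symptom list, the single-symptom decision rule, and the function name phase1_screening are illustrative assumptions, not the chatbot actually deployed.

# Minimal sketch of a Phase 1 symptom-screening dialogue (assumed symptom list
# and decision rule; a deployed chatbot would use NLP and a richer rule base).
COMMON_SYMPTOMS = [
    "fever", "dry cough", "fatigue", "shortness of breath",
    "loss of taste or smell", "sore throat",
]

def phase1_screening() -> bool:
    """Ask about common symptoms; return True if Phase 2 screening is advised."""
    positives = 0
    for symptom in COMMON_SYMPTOMS:
        answer = input(f"Have you experienced {symptom}? (y/n) ").strip().lower()
        if answer.startswith("y"):
            positives += 1
    # Even a single reported symptom triggers the next phase (Section 3.1.1).
    return positives >= 1

if __name__ == "__main__":
    if phase1_screening():
        print("Please proceed to Phase 2: thermal scan and pulse oximetry.")
    else:
        print("No common symptoms reported; continue self-monitoring.")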

3.1.2 Phase 2

A thermal scanner and pulse oximeter can be used to screen the patient. A person having a body temperature in the range of 36.5°C to 37°C is considered normal. Fever (83%–99% of cases) is the first symptom that a COVID-19 infected person experiences; cough (59%–82%) and fatigue (44%–70%) are other common symptoms. Fever can be measured with an analog or digital thermometer, but such a thermometer may need to be placed in the mouth. A thermal imaging scanner can check body temperature without making contact with the patient's body and thus prevents easy dispersal of the virus. Due to its lightweight, hand-held design, corona warriors can utilize it as a front-line screening tool. It can also be mounted on a tripod so that its sensor records the temperature of every person who passes in front of it. To record body temperature, it senses the infrared radiation emitted by the person standing in front of it. A commonly used sensor for this purpose is the AMG8833 IR array [13].
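As a simple illustration of how a thermal frame could be screened in software, the sketch below flags a possible fever from an 8 × 8 temperature grid such as the one produced by an AMG8833 array. The fever threshold and the use of the hottest pixel as a proxy for skin temperature are assumptions made only for this example; reading the frame from the sensor is hardware-specific and omitted.

# Sketch: fever flagging from an 8x8 thermal frame (e.g., AMG8833 output).
from typing import List

FEVER_THRESHOLD_C = 37.5  # assumed screening cut-off, above the 36.5-37.0 C normal range

def max_skin_temperature(frame: List[List[float]]) -> float:
    """Return the hottest pixel in the 8x8 frame (a crude proxy for skin temperature)."""
    return max(max(row) for row in frame)

def is_feverish(frame: List[List[float]]) -> bool:
    return max_skin_temperature(frame) >= FEVER_THRESHOLD_C

if __name__ == "__main__":
    frame = [[36.4] * 8 for _ in range(8)]  # synthetic frame
    frame[3][4] = 38.1                      # simulated hot spot on the forehead
    print(is_feverish(frame))               # True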

A pulse oximeter is used to measure the oxygen saturation (SpO2) level in the human body. It measures the level of oxygen in the blood through non-invasive means. If a person is infected with COVID-19 or pneumonia, the level of oxygen in the blood goes down, which is hazardous to health. The pulse oximeter emits red and infrared light; this light passes through the tissue bed and the blood and is received by the device's detector. As the light passes through the blood, hemoglobin absorbs different amounts of each wavelength: fully saturated hemoglobin absorbs more infrared light, while desaturated hemoglobin absorbs more red light. Based on the proportion of light absorbed, the device estimates the SpO2 level in the body. An SpO2 level above 95% is considered normal, while a level below 90% needs attention; in medical terminology, SpO2 below 90% is called hypoxemia. Asymptomatic COVID-19 patients may not feel uncomfortable despite depleted SpO2. A gradual decrease in SpO2 has been noticed at the beginning of a COVID-19 infection, and the level declines rapidly a few days after infection, which underlines the importance of the pulse oximeter. The MAX30100 sensor [14] can be used for this purpose.
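The ratio-of-ratios calculation underlying this estimate can be sketched as follows. The linear calibration SpO2 ≈ 110 − 25R is a commonly quoted textbook approximation and is assumed here only for illustration; commercial devices such as the MAX30100 apply device-specific calibration curves.

# Sketch of the ratio-of-ratios SpO2 estimate from red and infrared photodiode signals.
# The calibration constants are an assumed textbook approximation, not the MAX30100's.

def spo2_estimate(red_ac: float, red_dc: float, ir_ac: float, ir_dc: float) -> float:
    """Estimate SpO2 (%) from the pulsatile (AC) and baseline (DC) light components."""
    r = (red_ac / red_dc) / (ir_ac / ir_dc)   # ratio of ratios
    spo2 = 110.0 - 25.0 * r                   # assumed empirical calibration
    return max(0.0, min(100.0, spo2))

if __name__ == "__main__":
    # Example values chosen only to illustrate the arithmetic.
    print(round(spo2_estimate(red_ac=0.02, red_dc=1.0, ir_ac=0.04, ir_dc=1.0), 1))  # ~97.5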

3.1.3 Phase 3

Radiography images of the chest, analyzed with a deep learning model, can be used to confirm COVID-19 infection. Medical practitioners can use the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test to detect the disease; however, this test can fail to detect the COVID-19 virus at an early stage, and the doctor needs to be cautious while taking the sample. Chest X-rays can also be used to detect COVID-19. By analyzing the chest X-rays of COVID-19 infected persons, it has been found that at the beginning of the infection, nodules and ground-glass opacities cover a small portion of the lungs; without proper treatment, the symptoms increase and spread across the entire lungs. In extreme cases, lesions diffuse in both lungs, which turn white and stop functioning. AI-based deep learning models combined with radiographic images can be helpful in the accurate detection of COVID-19 infection. In remote locations and developing countries, this can also assist doctors in the absence of specialized physicians.

3.2 Algorithm

The algorithm of the proposed Three-Phase COVID-19 Detection Model is as follows:

[Algorithm: Three-Phase COVID-19 Detection Model]
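A compact Python sketch of this three-phase decision flow is given below. The thresholds and the helper names (phase1_screening, read_temperature, read_spo2, classify_xray) are illustrative assumptions standing in for the components described in Section 3.1, not the exact algorithm of the published model.

# Sketch of the three-phase detection flow (Section 3.1). Thresholds follow the
# values discussed in Phase 2; all helper functions are assumed stand-ins.
FEVER_THRESHOLD_C = 37.5
SPO2_THRESHOLD = 94.0  # assumed cut-off between the "above 95%" normal and "below 90%" alarm bands

def three_phase_diagnosis(phase1_screening, read_temperature, read_spo2, classify_xray) -> str:
    # Phase 1: chatbot-based symptom screening.
    if not phase1_screening():
        return "No symptoms reported; continue self-monitoring."

    # Phase 2: contactless thermal scan and pulse oximetry.
    temperature = read_temperature()
    spo2 = read_spo2()
    if temperature < FEVER_THRESHOLD_C and spo2 > SPO2_THRESHOLD:
        return "Vitals normal; re-screen if symptoms persist."

    # Phase 3: chest X-ray analyzed by the deep learning model.
    label = classify_xray()  # expected to return 'covid', 'pneumonia' or 'normal'
    if label == "covid":
        return "COVID-19 infection indicated; refer for RT-PCR confirmation and treatment."
    return f"X-ray classified as {label}; follow clinical advice."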

In phase 3, a deep learning model can detect the COVID-19 infection from a chest X-ray image. A detailed methodology for the use of Convolution Neural Network (ConvNet) and Capsule Network (CapsNet) is given in the following Subsection 3.3.

3.3 Deep Learning Models for COVID-19 Detection Through Chest X-Ray Images

In phase three of the proposed model, the authors have used two deep learning architectures for COVID-19 diagnosis through chest X-rays. The details of the deep learning models are as follows:

3.3.1 COVID-ConvNet

The convolutional neural network is a key factor behind the success of deep learning. For the extraction of features from an image, it uses convolution and pooling operations; these features are then used for the training and classification of images. The convolution operation is applied through convolution kernels. In the continuous domain, the convolution of two functions f and h is defined as (Eq. (1)) [15,16]:

$(f \ast h)(t) = \int_{-\infty}^{\infty} f(\tau)\,h(t-\tau)\,d\tau = \int_{-\infty}^{\infty} f(t-\tau)\,h(\tau)\,d\tau$ (1)

For discrete signals, the corresponding convolution operation is defined as (Eq. (2)):

$(f \ast h)(n) = \sum_{m=-\infty}^{\infty} f(m)\,h(n-m) = \sum_{m=-\infty}^{\infty} f(n-m)\,h(m)$ (2)

The above 1-D convolution extends to the 2-D case as (Eq. (3)):

$(f \ast h)(x,y) = \sum_{m=-M}^{M}\sum_{n=-N}^{N} f(x-n,\,y-m)\,h(n,m)$ (3)
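As an illustration, Eq. (3) can be implemented directly in NumPy as the naive sketch below (no padding or stride handling; "valid" output only).

# Naive implementation of the 2-D convolution in Eq. (3): the kernel h is flipped
# relative to cross-correlation because of the f(x-n, y-m) indexing.
import numpy as np

def conv2d(f: np.ndarray, h: np.ndarray) -> np.ndarray:
    """'Valid' 2-D convolution of image f with kernel h (both 2-D arrays)."""
    kh, kw = h.shape
    out_h = f.shape[0] - kh + 1
    out_w = f.shape[1] - kw + 1
    h_flipped = h[::-1, ::-1]  # convolution = correlation with a flipped kernel
    out = np.zeros((out_h, out_w), dtype=float)
    for y in range(out_h):
        for x in range(out_w):
            out[y, x] = np.sum(f[y:y + kh, x:x + kw] * h_flipped)
    return out

if __name__ == "__main__":
    image = np.arange(16, dtype=float).reshape(4, 4)
    kernel = np.array([[0.0, 1.0], [1.0, 0.0]])
    print(conv2d(image, kernel))  # 3x3 result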

In Eq. (3), the function h is termed a filter (kernel) that is convolved over the image f. Between two consecutive convolution layers there are pooling layers, whose function is to reduce the spatial dimensions and thus minimize the chances of overfitting. In addition, every convolution layer is followed by a ReLU activation function for faster training. In this model, each neuron is not connected to all inputs; it is connected to only a subset of the inputs. The CNN algorithm used in this work is presented below:

[Algorithm: CNN procedure used in the COVID-ConvNet model]

3.3.2 COVID-CapsNet

The Capsule Network, proposed by Hinton, is an improved version of the traditional CNN. A conventional convolutional neural network architecture has a pooling layer for downsampling, applied on the feature map to reduce memory requirements and to ensure that the ConvNet identifies similar objects in images at different scales. This causes spatial invariance in ConvNets, which is one of their significant flaws [17]. Furthermore, ConvNets are more susceptible to adversarial attacks, such as pixel perturbations, which result in incorrect classification. Due to max-pooling in a ConvNet, reconstruction of the image is more challenging than in a CapsNet [18].

A CapsNet has many layers of capsules. Fig. 2 depicts the architecture of the proposed CapsNet model, which has one ReLU convolution layer followed by a single primary capsule layer and an X-rayCaps layer. Here the capsules in the X-rayCaps layer correspond to the three classes of X-ray images. The convolutional layer converts pixel intensities into the activities of local feature detectors, which become the input of the primary capsules. Convolutional layers with strides greater than one are used for dimension reduction. The output of the X-rayCaps layer is used to decide the class of the input image. Every capsule in the X-rayCaps layer is connected to each capsule in the primary capsule layer. In the proposed architecture, the routing-by-agreement method is used in place of max-pooling-based routing. Due to the feedback procedure in the dynamic routing method, support is raised for those capsules that agree with the parent output, which enhances the learning capability.

images

Figure 2: COVID-CapsNet architecture to diagnose COVID-19 (redesigned from [19])

A nonlinear function, called the squashing function, is used to compute the capsule activations during training; it acts as the activation at the capsule level [19,20]. It is defined in (Eq. (8)):

$v_j = \dfrac{\|s_j\|^2}{1+\|s_j\|^2}\,\dfrac{s_j}{\|s_j\|}$ (8)

where $s_j$ denotes the weighted sum of the capsule prediction vectors, defined in (Eq. (9)):

$s_j = \sum_i c_{ij}\,\hat{u}_{j|i}$ (9)

and $\hat{u}_{j|i}$ represents the affine transformation of the lower-level capsule output, defined in (Eq. (10)):

$\hat{u}_{j|i} = W_{ij}\,u_i$ (10)
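A NumPy sketch of Eqs. (8)–(10) for a single pair of capsule layers is given below; the capsule dimensions are illustrative, and the iterative routing-by-agreement update of the coupling coefficients $c_{ij}$ is omitted for brevity.

# Sketch of Eqs. (8)-(10): affine prediction, coupling-weighted sum, and squashing.
import numpy as np

def squash(s: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Eq. (8): shrink the vector length into [0, 1) while preserving direction."""
    norm_sq = np.sum(s ** 2)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

def capsule_output(u: np.ndarray, W: np.ndarray, c: np.ndarray) -> np.ndarray:
    """
    u: (num_lower, d_in)        outputs of lower-level capsules
    W: (num_lower, d_out, d_in) affine transforms (Eq. (10): u_hat_{j|i} = W_ij u_i)
    c: (num_lower,)             coupling coefficients from routing by agreement
    """
    u_hat = np.einsum('ioj,ij->io', W, u)      # predictions u_hat_{j|i}
    s_j = np.sum(c[:, None] * u_hat, axis=0)   # Eq. (9): weighted sum
    return squash(s_j)                         # Eq. (8): capsule activation v_j

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u = rng.normal(size=(6, 8))                # 6 primary capsules, 8-D each
    W = rng.normal(size=(6, 16, 8))            # map each to a 16-D higher capsule
    c = np.full(6, 1.0 / 6.0)                  # uniform couplings before routing
    v = capsule_output(u, W, c)
    print(v.shape, np.linalg.norm(v) < 1.0)    # (16,) True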



Figure 3: Sample images corresponding to each category (a) COVID-19, (b) Normal, and (c) Viral pneumonia [21]

3.4 Image Dataset

The COVID-19 X-ray image dataset utilized in this paper was compiled by Dadario from images made available by the Italian Society of Medical and Interventional Radiology (SIRM), the Radiological Society of North America (RSNA), and Radiopaedia [21]. The data includes 1341, 1345, and 219 images of normal, viral pneumonia, and confirmed COVID-19 cases, respectively. The dataset is divided into train, validation, and test subsets containing 2324, 406, and 175 images, respectively. Sample images corresponding to each category are shown in Fig. 3. All images are resized to 128 × 128 pixels.
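A sketch of how such a dataset can be loaded and resized to 128 × 128 with Keras generators is shown below; the folder layout, class names, and generator settings are assumptions for illustration, not the authors' exact preprocessing pipeline.

# Sketch: loading the chest X-ray dataset with Keras generators, resized to 128x128.
# Assumes the images are arranged as <split>/<class>/<image>.png on disk.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (128, 128)
CLASSES = ["covid", "normal", "viral_pneumonia"]  # assumed folder names

datagen = ImageDataGenerator(rescale=1.0 / 255)

train_gen = datagen.flow_from_directory(
    "data/train", target_size=IMG_SIZE, classes=CLASSES,
    class_mode="categorical", batch_size=32, shuffle=True)
val_gen = datagen.flow_from_directory(
    "data/validation", target_size=IMG_SIZE, classes=CLASSES,
    class_mode="categorical", batch_size=32, shuffle=False)
test_gen = datagen.flow_from_directory(
    "data/test", target_size=IMG_SIZE, classes=CLASSES,
    class_mode="categorical", batch_size=32, shuffle=False)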

4  Results & Discussion

A simulation environment for testing both models has been designed using the Python language. The objective is to analyze the performance of the COVID-ConvNet and COVID-CapsNet models using metrics such as precision, recall, accuracy, F1-score, and area under the ROC curve.

$\text{Precision} = \dfrac{\text{TruePositive}}{\text{TruePositive} + \text{FalsePositive}}$

$\text{Recall} = \dfrac{\text{TruePositive}}{\text{TruePositive} + \text{FalseNegative}}$

$\text{F1-score} = \dfrac{2 \times \text{Recall} \times \text{Precision}}{\text{Recall} + \text{Precision}}$
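In practice, these metrics can be computed with scikit-learn as sketched below for the three-class case; the label and probability arrays shown are synthetic placeholders.

# Sketch: computing precision, recall, F1-score, accuracy and AUC with scikit-learn.
import numpy as np
from sklearn.metrics import accuracy_score, classification_report, roc_auc_score

# y_true: integer class labels; y_prob: model output probabilities, shape (n_samples, 3)
y_true = np.array([0, 1, 2, 2, 1, 0])
y_prob = np.array([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7],
                   [0.1, 0.1, 0.8], [0.3, 0.6, 0.1], [0.9, 0.05, 0.05]])
y_pred = y_prob.argmax(axis=1)

print(classification_report(y_true, y_pred,
                            target_names=["COVID-19", "Normal", "Viral pneumonia"]))
print("Accuracy:", accuracy_score(y_true, y_pred))
print("Macro AUC:", roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro"))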

4.1 COVID-ConvNet Model for COVID-19 Detection

The ConvNet model used in this experiment has three convolutional layers with 32, 64, and 64 kernels, respectively, each using filters of size 3 × 3. ReLU is used as the activation function on all layers, and each convolution layer is followed by a max-pooling layer. Five dense layers are then used with 512, 256, 128, 64, and 3 neurons, in that order. A dropout layer, which regularizes the network and reduces overfitting, follows each of the first four dense layers, with dropout rates of 0.25, 0.4, 0.3, and 0.5, respectively. One batch normalization layer is also added after the third dense layer to increase the model's speed, performance, and stability. The categorical cross-entropy loss function is given by (Eq. (11)):

$L_{CE} = -\dfrac{1}{N}\sum_{i=1}^{N}\log\dfrac{e^{W_{y_i}^{T}x_i + b_{y_i}}}{\sum_{j=1}^{n} e^{W_{j}^{T}x_i + b_j}}$ (11)

where W is the weight matrix, b the bias term, $x_i$ the i-th training sample, $y_i$ the class label of the i-th training sample, N the number of samples, and $W_j$ and $W_{y_i}$ the j-th and $y_i$-th columns of W.

An initial learning rate of 0.0125 with a momentum of 0.75 is used, and the model is trained for 25 epochs with a batch size of 32. The model is compiled with the Adam optimizer and categorical cross-entropy as the loss function. Fig. 4 presents the classification loss on the training and validation sets during COVID-ConvNet training. It can be observed that the loss decreases quickly in the first five epochs and stabilizes after 20 epochs. The confusion matrix of the COVID-ConvNet model for the different classes is shown in Fig. 6a. Results of the experiment are shown in Tab. 1. The average accuracy achieved using this model is 0.86.
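The layer configuration and training settings described above can be reconstructed in Keras roughly as follows. This is a sketch based on the textual description, not the authors' released code; padding choices are assumptions, and the momentum value is not reproduced because the Adam optimizer's default moment parameters are used here.

# Sketch of COVID-ConvNet as described in Section 4.1 (reconstruction, not the authors' code).
from tensorflow.keras import layers, models, optimizers

def build_covid_convnet(input_shape=(128, 128, 3), num_classes=3):
    model = models.Sequential()
    # Three conv blocks with 32, 64 and 64 3x3 kernels, each followed by max pooling.
    model.add(layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation="relu"))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation="relu"))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Flatten())
    # Five dense layers (512, 256, 128, 64, 3) with dropout after the first four
    # and batch normalization after the third.
    model.add(layers.Dense(512, activation="relu"))
    model.add(layers.Dropout(0.25))
    model.add(layers.Dense(256, activation="relu"))
    model.add(layers.Dropout(0.4))
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.BatchNormalization())
    model.add(layers.Dropout(0.3))
    model.add(layers.Dense(64, activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer=optimizers.Adam(learning_rate=0.0125),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Example usage with the generators from Section 3.4:
# model = build_covid_convnet()
# model.fit(train_gen, validation_data=val_gen, epochs=25)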


Figure 4: Classification loss evolution on the training and validation sets during COVID-ConvNet training. The loss decreases abruptly in the first five epochs and stabilizes after 20 epochs


Figure 5: Classification loss evolution on the training and validation sets during COVID-CapsNet training. The loss decreases abruptly in the first ten epochs and stabilizes after 20 epochs

4.2 COVID-CapsNet Model for COVID-19 Detection

As discussed in Section 3.3.2, the CapsNet model architecture is used for COVID-19 detection in this experiment. The model is compiled with the Adam optimizer with an initial learning rate of 0.0012, and the margin loss given in (Eq. (12)) is used as the loss function:

$L_k = T_k\,\max(0,\,m^{+} - \|v_k\|)^2 + \alpha\,(1 - T_k)\,\max(0,\,\|v_k\| - m^{-})^2$ (12)

$T_k = 1$ only when the feature corresponding to class k is present; $m^{+} = 0.9$ and $m^{-} = 0.1$ ensure that the vector length remains within the specified practical limits. The down-weighting factor $\alpha$, used to ensure numerical stability, is set to 0.5. The model is trained for 25 epochs with a training batch size of 16 and a validation batch size of 1.
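Eq. (12) can be written as a TensorFlow loss roughly as follows; the capsule class probabilities are assumed to be supplied as the output vector lengths ||v_k||, one per class, and the tensor shapes are illustrative.

# Sketch of the margin loss in Eq. (12) for one-hot labels y_true and
# capsule lengths ||v_k|| (shape: batch x num_classes).
import tensorflow as tf

M_PLUS, M_MINUS, ALPHA = 0.9, 0.1, 0.5

def margin_loss(y_true, v_lengths):
    present = y_true * tf.square(tf.maximum(0.0, M_PLUS - v_lengths))
    absent = ALPHA * (1.0 - y_true) * tf.square(tf.maximum(0.0, v_lengths - M_MINUS))
    # Sum over classes, average over the batch.
    return tf.reduce_mean(tf.reduce_sum(present + absent, axis=-1))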

Fig. 5 presents the classification loss on the training and validation sets during COVID-CapsNet training. It can be observed that there is a sharp decrease in loss during the first ten epochs, and the loss stabilizes after 20 epochs. The confusion matrix of the COVID-CapsNet model for the different classes is shown in Fig. 6b. Results of the experiment are shown in Tab. 1. The average accuracy achieved using this model is 0.97.


Figure 6: Confusion matrix for COVID-19 diagnosis for three classes (a) By using COVID-ConvNet, and (b) By using COVID-CapsNet

[Table 1: Class-wise precision, recall, F1-score and overall accuracy of the COVID-ConvNet and COVID-CapsNet models]

4.3 Discussions

The analysis of results shows that the COVID-CapsNet model attains higher accuracy than the COVID-ConvNet model. All three performance metrics, i.e., recall, F1-score, and precision, are higher for the COVID-CapsNet model than for the COVID-ConvNet model for each class. The test accuracy improved from 0.86 to 0.97, as shown in Tab. 1. The ROC plots in Figs. 7 and 8 also confirm the dominance of the COVID-CapsNet model over the COVID-ConvNet model. ROC curves are a valuable tool for evaluating a diagnostic test's performance over a wide range of possible values of a predictor variable, and the area under a ROC curve is a measure of discrimination that researchers can use to compare the results of two or more diagnostic tests [22]. The AUC for each class is 1 for the COVID-CapsNet model, which is much better than for the COVID-ConvNet model. Moreover, the micro-average and macro-average areas are higher for the COVID-CapsNet model. ConvNets are weak at encoding object orientation and therefore require precisely oriented images; capsule networks do not have this problem. Because a CNN prioritizes the presence of certain features over their location, it becomes invariant to the spatial relationships between these features, whereas a CapsNet preserves this information. In a CapsNet, the feature information of lower-level capsules is routed mainly to those higher-level capsules that require it; since the information is not sent to every higher-level capsule, a significant amount of processing is saved. This dynamic routing also improves the prediction accuracy of the CapsNet compared with the ConvNet, as demonstrated by the results.
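For reference, the per-class and micro-average ROC curves and AUCs referred to in Figs. 7 and 8 can be produced with scikit-learn as sketched below; the label and probability arrays shown are synthetic placeholders.

# Sketch: one-vs-rest ROC curves and AUCs for the three classes.
import numpy as np
from sklearn.metrics import auc, roc_curve
from sklearn.preprocessing import label_binarize

# y_true: integer labels; y_prob: predicted probabilities of shape (n_samples, 3)
y_true = np.array([0, 1, 2, 2, 1, 0])
y_prob = np.array([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7],
                   [0.1, 0.1, 0.8], [0.3, 0.6, 0.1], [0.9, 0.05, 0.05]])

y_true_bin = label_binarize(y_true, classes=[0, 1, 2])

for k, name in enumerate(["COVID-19", "Normal", "Viral pneumonia"]):
    fpr, tpr, _ = roc_curve(y_true_bin[:, k], y_prob[:, k])
    print(f"AUC[{name}] = {auc(fpr, tpr):.3f}")

# Micro-average: pool all class decisions before computing the curve.
fpr_micro, tpr_micro, _ = roc_curve(y_true_bin.ravel(), y_prob.ravel())
print(f"Micro-average AUC = {auc(fpr_micro, tpr_micro):.3f}")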


Figure 7: ROC curve obtained using COVID-ConvNet


Figure 8: ROC curve obtained using COVID-CapsNet

5  Conclusion

Detection of COVID-19 at an initial stage helps prevent the widespread transmission of the disease. In this work, an integrated model working at different stages to diagnose COVID-19 has been proposed. The framework suggests that initial symptoms can be assessed at home through a chatbot without the intervention of medical practitioners. Patients having some initial symptoms should then be screened with a thermal scanner and a pulse oximeter. In case of high temperature and low oxygen saturation level, the person needs to visit a clinic for an X-ray test. X-ray images can be examined through a deep learning-based model to confirm COVID-19 infection. Two deep learning models, namely COVID-ConvNet and COVID-CapsNet, have been designed to detect the presence of the COVID-19 virus in chest X-rays. COVID-ConvNet is based on the traditional CNN, while COVID-CapsNet is an improved version of the conventional CNN in which the limitations of CNNs are addressed by replacing the usual scalar activations with vectors and routing between neurons in a different style. Both models have been tested in a simulation environment designed in Python, and COVID-CapsNet has shown better performance than COVID-ConvNet, achieving 97% accuracy while classifying the X-rays of COVID-19 patients, pneumonia patients, and normal persons. The proposed framework may reduce the spread of the COVID-19 pandemic by minimizing face-to-face interaction between patient and doctor, and it may reduce the mortality rate by detecting the disease at an early stage.

In the future, to improve the accuracy of the proposed framework, CT scan images can be combined with X-ray images to design a dual-input model.

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

  1. R. Keni, A. Alexander, P. G. Nayak, J. Mudgal and K. Nandakumar, “COVID-19: Emergence, spread, possible treatments, and global burden,” Frontiers in Public Health, vol. 8, pp. 1–13, 2020.
  2. A. Biswas, U. Bhattacharjee, A. K. Chakrabarti, D. N. Tewari, H. Banu et al., “Emergence of novel coronavirus and COVID-19: Whether to stay or die out,” Critical Reviews in Microbiology, vol. 46, no. 2, pp. 182–193, 2020.
  3. S. Tiwari and A. Jain, “Convolutional capsule network for covid-19 detection using radiography images,” International Journal of Imaging Systems and Technology, vol. 31, no. 2, pp. 525–539, 2021.
  4. “WHO Coronavirus (COVID-19) Dashboard,” visited on 14th May, 2021. [Online]. Available: https://covid19.who.int/.
  5. R. P. Singh, M. Javaid, A. Haleem and R. Suman, “Internet of things (IoT) applications to fight against covid-19 pandemic,” Diabetes & Metabolic Syndrome: Clinical Research & Reviews, vol. 14, no. 4, pp. 521–524, 2020.
  6. T. Alafif, A. M. Tehame, S. Bajaba, A. Barnawi and S. Zia, “Machine and deep learning towards covid-19 diagnosis and treatment: Survey, challenges, and future directions,” International Journal of Environmental Research and Public Health, vol. 18, no. 3, pp. 1–24, 2021.
  7. T. Alam, “Internet of things and blockchain-based framework for coronavirus (covid-19) disease,” SSRN Electronic Journal, Preprints, 2020. https://doi.org/10.20944/preprints202000642.v1.
  8. V. Singh, H. Chandna, A. Kumar, S. Kumar, N. Upadhyay et al., “IoT-Q-Band: A low cost internet of things based wearable band to detect and track absconding covid-19 quarantine subjects,” EAI Endorsed Transactions on Internet of Things, vol. 6, no. 21, pp. 1–9, 2020.
  9. M. Kolhar, F. Al-Turjman, A. Alameen and M. M. Abualhaj, “A three layered decentralized IoT biometric architecture for city lockdown during covid-19 outbreak,” IEEE Access, vol. 8, pp. 163608–163617, 2020.
  10. S. Karmore, R. Bodhe, F. Al-Turjman, R. L. Kumar and S. Pillai, “IoT based humanoid software for identification and diagnosis of covid-19 suspects,” IEEE Sensors Journal, vol. 20, no. 20, pp. 1–8, 2020.
  11. M. N. Mohammed, H. Syamsudin, S. Al-Zubaidi, R. R. Aks and E. Yusuf, “Novel covid-19 detection and diagnosis system using IoT based smart helmet,” International Journal of Psychosocial Rehabilitation, vol. 24, no. 7, pp. 2296–2303, 2020.
  12. I. D. Apostolopoulos and T. A. Mpesiana, “COVID-19: Automatic detection from x-ray images utilizing transfer learning with convolutional neural networks,” Physical and Engineering Sciences in Medicine, vol. 43, no. 2, pp. 635–640, 2020.
  13. “Adafruit AMG8833 IR Thermal Camera Breakout - STEMMA QT,” visited on 14th May, 2021. [Online]. Available: https://www.adafruit.com/product/3538.
  14. “Mouser Electronics,” visited on 14th May, 2021. [Online]. Available: https://www.mouser.in/newsroom/publicrelations_maxim_max30100_2015final#:~%20:text=The%20MAX30100%20is%20a%2014,monitors%20and%20fitness%20wearable%2-0devices.
  15. S. Tiwari, “A comparative study of deep learning models with handcraft features and non-handcraft features for automatic plant species identification,” International Journal of Agricultural and Environmental Information Systems, vol. 11, no. 2, pp. 44–57, 2020.
  16. S. Tiwari, “An analysis in tissue classification for colorectal cancer histology using convolution neural network and colour models,” International Journal of Information System Modeling and Design, vol. 9, no. 4, pp. 1–19, 2018.
  17. E. Xi, S. Bing and Y. Jin, “Capsule network performance on complex data,” arXiv preprint arXiv:1712.03480v1, 10 December, 2017.
  18. M. Yang, W. Zhao, J. Ye, Z. Lei, Z. Zhao et al., “Investigating capsule networks with dynamic routing for text classification,” in Proc. of the 2018 Conf. on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 3110–3119, 2018.
  19. T. Vijayakumar, “Comparative study of capsule neural network in various applications,” Journal of Artificial Intelligence, vol. 1, no. 1, pp. 19–27, 20
  20. S. Sabour, F. Nicholas and G. E. Hinton, “Dynamic routing between capsules,” in 31st Conf. on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, vol. 30, pp. 3856–3866, 2017.
  21. A. M. Dadario, “COVID-19 X rays,” Kaggle, visited on 14th May, 2021. [Online]. Available: https://doi.org/10.34740/KAGGLE/DSV/1019469.
  22. S. Tiwari, “Dermatoscopy using multi-layer perceptron, convolution neural network, and capsule network to differentiate malignant melanoma from benign nevus,” International Journal of Healthcare Information Systems and Informatics, vol. 16, no. 3, pp. 58–73, 2021.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.