Computers, Materials & Continua
DOI:10.32604/cmc.2021.015501
Article

Design of Intelligent Mosquito Nets Based on Deep Learning Algorithms

Yuzhen Liu1,3, Xiaoliang Wang1,*, Xinghui She1, Ming Yi1, Yuelong Li1 and Frank Jiang2

1Hunan University of Science and Technology, Xiangtan, 411201, China
2School of Info Technology, Deakin University, Geelong, 3215, Australia
3Key Laboratory of Knowledge Processing and Networked Manufacturing, College of Hunan Province, 411201, China
*Corresponding Author: Xiaoliang Wang. Email: fengwxl@163.com
Received: 24 November 2020; Accepted: 28 April 2021

Abstract: An intelligent mosquito net employing deep learning has been one of the hotspots in the field of the Internet of Things, as it can significantly reduce the spread of pathogens carried by mosquitoes and help people live well in mosquito-infested areas. In this study, we propose an intelligent mosquito net that can produce and transmit data through the Internet of Medical Things. In our method, decision-making is controlled by a deep learning model, and the proposed method uses infrared sensors and an array of pressure sensors to collect data. Moreover, the ZigBee protocol is used to transmit the pressure map formed by the pressure sensors to the deep learning perception model, which automatically determines the intention of the user to open or close the mosquito net. We used optical flow to extract pressure map features, which were subsequently fed to a three-dimensional convolutional neural network (3D-CNN) classification model. We achieved the expected results using a nested cross-validation method to evaluate our model. Deep learning has better adaptability than traditional methods and is also more robust to interference from the different bodies of users. This research has the potential to be used in intelligent medical protection and large-scale sensor-array perception of the environment.

Keywords: Internet of things; smart home; ZigBee protocol; internet of medical things; deep learning

1  Introduction

Disease transmission caused by mosquito bites has been a severe problem. The World Health Organization reports that mosquitoes kill 725,000 people a year, which makes them the deadliest insect in the world. Today, people in many areas of the world are still suffering from mosquito bites and mosquito-borne diseases. It is essential for people in mosquito-infested areas to reduce the risk of mosquito-borne infections by using mosquito nets. However, there are some inconveniences in using a conventional fabric mosquito net. When people rest in bed, the mosquito-blocking effect is reduced significantly if the users leave the net open to avoid the trouble of getting into or out of bed, and mosquitoes may hide inside the net while it is open.

Deep learning, a branch of machine learning, provides an excellent decision-model tool for the control of Internet of Medical Things applications. Deep learning is a technique based on artificial neural networks for the representation of data in machine learning [1–3]. In the field of the Internet of Medical Things, the classic methods of deep learning have achieved considerable success in many areas, including long short-term memory (LSTM) networks [4] to predict air quality [5], deep Q-learning for split edge computing [6], and neural networks to solve the trust problem of edge devices [7]. Traditional algorithms have achieved excellent results in image acquisition and early-warning systems for vehicles; these areas of research have also turned to deep learning to cope with the increasing complexity of the road [8], which shows the enormous prospects for combining smart sensor networks with deep learning.

We propose an intelligent mosquito net system that uses ZigBee and deep learning to solve the problems of perceiving the users' intention and controlling the mosquito net, using an infrared sensor and pressure sensor arrays to detect the status of the users. Next, we use ZigBee technology to process the data from the sensors and transfer the data to the deep learning model for classifying the intentional status of the users. The model infers the intention behind the current state of the users and subsequently controls the automatic opening and closing of the mosquito net. The infrared sensor detects the arrival of the users when they approach the mosquito net, and the net opens automatically. When the users need to leave the closed mosquito net, the model judges the intention of the users after the pressure sensor array detects their status; recognizing the intention of the users to leave the net is the key to the system.

Due to the changes of body state during the regular rest of the users, the sensor will also detect changes similar to the state of leaving the mosquito net. In these situations, the users may be exposed to mosquito bites. In this case, we use the Dense Inverse Search (DIS) optical flow method [9] to extract the pressure array features and a three-dimensional convolutional neural network to capture the body features of the users.

The main contributions of this research are as follows:

1.    A pressure sensor array is introduced to detect the status of the users in the process of entering or leaving the mosquito net;

2.    ZigBee is used in a mosquito net to network and control sensors with deep learning methods;

3.    The real-time feature extraction method and the DIS optical flow algorithm are introduced to extract the features of the pressure array;

4.    Both the three-dimensional CNN method and the information of the time dimension are introduced into the posture classification of the pressure array.

The article is structured as follows. Section 2 presents related work, and Section 3 introduces the workflow of the design. The detailed design is proposed, and we also present our experimental analysis in Section 4. Section 5 concludes the article.

2  Related Work

Many researchers have made use of sensor devices on the bed for scientific research and practical applications. Gaddam et al. [10] proposed an intelligent bed sensor system that includes four force sensors at different positions for smart home monitoring, and their system can detect the presence of a person on the bed and identify precisely their position.

Su et al. [11] proposed a hydraulic sensor system underneath the mattress that can estimate the relative systolic blood pressure of a person, and they used this sensor system to monitor blood pressure based on two features, ballistocardiogram pulse strength (BPS) and ballistocardiogram pulse deviation (BPD).

Some researchers used bed sensors to detect falls [12–14]. Mineharu et al. [14] used pressure sensors to detect sleep-position information. This approach automatically classifies the sleep position of people and detects the danger of falling in advance, with nine types of sleep postures and the possibilities of falling. Enokibori et al. [13] also considered care for the elderly living alone and presented a bed monitoring system that included a fall detection function. In their study, infrared and pressure sensors were used to monitor the bed-going, out-of-bed, and lying-down states of elderly people, whose existing state or fall events were detected by applying the finite state machine (FSM) method.

Some researchers used bed sensors to analyze the stages or quality of human sleep [15–18]. Kortelainen et al. [15] used Emfit sensor foils to obtain information about the heartbeat intervals and movement activities of the users. They adopted a time-variant autoregressive model as the feature extractor and a hidden Markov model as the classification algorithm to detect sleeping and waking periods. Migliorini et al. [16] used ballistocardiogram (BCG) signals recorded through bed sensors to analyze sleep stages, with a time-variant autoregressive model and the discrete wavelet transform for feature extraction and quadratic and linear discriminants as feature classifiers. Walsh et al. [17] utilized data collected from an under-mattress bed sensor (UMBS) to identify the nocturnal movements of two healthy adults. Based on these data, the pressure center and the spread of the pressure were found; correspondingly, a pressure-sensing grid was used to describe the spatial positions of the two adults.

Bed sensors for heartbeat detection have also been used by researchers [19–21]. Rosales et al. [19] presented a hydraulic transducer configuration to improve the accuracy of the heartbeat signal, and a k-means clustering algorithm was used to process the heartbeat data of the person in bed to obtain the characteristics of their pulses. Lydon et al. [20] used the BCG signal collected from hydraulic bed sensors placed under a mattress to detect the heartbeats of older adults, and utilized a moving average of the pulse rate estimates to remove noise caused by distortion of the BCG signal. Jeong et al. [21] presented a study to confirm the feasibility of a cordless monitoring system for preventing sudden infant death syndrome (SIDS) by using the vital signals of infants collected by a large-scale pressure sensor sheet.

Other researchers have focused on the detection of human pulses or respiration [22–27], as it is inconvenient to check patients' characteristics in sleep breathing and pulse measurement experiments by using electrodes and straps. Mora et al. [25] proposed an unobtrusive sleep monitoring system for the detection of sleep apnea-hypopnea syndrome (SAHS). The authors adopted pressure bed sensors (PBSs) in the bed mattress to measure physiological signals such as heartbeat, respiration, and body movements, and an automatic algorithm was used to calculate a respiratory event index (REI) for detecting the sleep problems of patients. Sivanantham [26] used load sensors installed on the four legs of a bed to detect the heartbeats, breathing patterns, and body movements of participants. Their method utilized the cardioballistic force near the thorax region, namely ballistocardiography, to measure heartbeats, and variations in breathing frequency in the same thorax region were used to determine the respiration signal. All the parameters collected by the sensors were used to detect body movement patterns. Moreover, the respiratory rates of preterm infants were also studied: Joshi et al. [27] used an ultrathin film pressure sensor embedded between the mattress and the bedding to obtain the chest impedance (CI) and the ballistographic signal (BSG) of preterm infants.

More and more researchers are paying attention to the pattern recognition of the state of people in bed. In our system, the core problem of perceptual judgment is related to this type of approach: the behavior of the users on the sensor array is finally transmitted into the decision model. Lu et al. [12] used an infrared sensor, a pressure sensor, and ZigBee technology to transmit the data generated by the sensors to monitor the status of the users. Multiple sensors were used in the network system since there was no pressure sensor array in their research, and the researchers used only the finite state machine as a pattern recognition method. However, the user-state pattern recognition method in Lu's approach is not sufficiently robust. Mineharu et al. [14] used a support vector machine (SVM) to classify the patterns on the pressure sensor array. The study was based on a 1768-site distributed pressure array sensor, and the researchers tried to classify nine patterns with an accuracy of 77.14%. Enokibori et al. [13] used a deep neural network to classify three modes on 3200 pressure array sensors. They did not use any feature engineering, claiming that the model was 99.7% accurate. Matar et al. [28] extracted Histogram of Oriented Gradients (HoG) and Local Binary Pattern (LBP) features and input the two features into a Fast Artificial Neural Network (FANN). Their recognition of four patterns reached 97.9% with only 1728 pressure array monitoring sites. Pattern recognition was improved dramatically, which shows that, compared with previous research, the combination of neural networks and efficient feature engineering on features such as HoG and LBP is worthy of attention.

In the studies described in this section, no time-series features have been used for pattern recognition on pressure arrays. However, time-dimension information is worth considering for detection in these scenarios. We propose that the use of a three-dimensional CNN in pressure map pattern detection improves the accuracy, and that the accuracy can be improved further by taking the time dimension into account.

3  The Workflow of Our Design

Our goal is to have intelligent control of the mosquito net. The mosquito net is supposed to recognize the intention of the users entering or leaving the net and, accordingly, lift itself automatically. To simplify the situation and exclude other possible interference, we limit the system to the standard bed-and-desk beds used in college dormitories in China. In this case, both the bed area and the moving range of the users on the bed are limited. Since the mosquito net still needs to be adjusted manually when entering and exiting, regardless of the behavior of the users on the bed, our efforts are focused on avoiding this manual adjustment and realizing intelligent control. Finally, we need to analyze the behavior of the users when they go to bed, and based on their behavior the mosquito net lifts automatically.

The main changes that take place in bed are due to differences in the position of the users and in the pressure distribution on the bed, as mentioned earlier [11]. When users first go to bed, their position is close to the head of the bed, and the pressure distribution becomes more uniform once they lie flat. When a user gets up, his or her position moves close to the bedside; compared with the lying-down state during sleep, the user sits close to the bedside and the pressure on the lower part of the bedplate partly increases. Therefore, the device needs to capture and analyze these changes accurately. It needs to judge whether the users are going to bed or leaving the bed to control the rise and fall of the mosquito net. The workflow of the whole scheme is shown in Fig. 1.


Figure 1: The implementation process of this scheme

First, the behavior of the user in bed changes. Second, the sensor perceives the changes. Next, the coordinator collects and analyzes the information captured by the sensor and determines the behavior pattern of the user; features of the information are extracted and fed to the model for classification subsequently. Finally, the mechanical equipment starts the corresponding action according to the recognized behavior of the user.

4  The Detailed Design

We use sensor technology, ZigBee wireless networking technology, embedded system development, data communication technology, and mechanical design in the device. The device includes the following design parts.

4.1 Mechanical Part

4.1.1 Design of Sensor Data Receiving Device

The data receiving device judges the behavior of the users based on the received data, including both going to bed and getting out of bed. It needs to be able to receive data from various sensors quickly and also to analyze and process the data, and it needs to control the working state of the lifting device for the mosquito net. Because the process of getting up or going to bed is typically short, the analysis of the data needs to be accurate and fast to control the lifting device in time. In addition, a user exhibits diverse behavior patterns in bed beyond his or her sleeping habits. The data receiving device therefore needs to be able to recognize the behavior pattern of the users, and then analyze and judge their behavior. The data receiving device controls the automatic lifting device to raise the mosquito net when the users go to bed and to let the mosquito net fall when the users get out of bed.

4.1.2 Design of Automatic Lifting Device for Mosquito Nets

The automatic lifting device for the mosquito net needs to accurately control the rise or fall of the net according to the data receiving device. Moreover, it needs a quick response because it typically takes the users only a short time to go to or get out of bed. Most mosquito nets on the market are very light, so little effort is required to raise them, and typical drive motors on the market can be selected as the power device. In addition, a wide range of choices is available when choosing the configuration of the motors. Thus, the lifting of the mosquito net can be refined to match the motion of users getting into or out of bed.

4.2 Sensors and Wireless Sensor Network

We do not need to collect data over a large area because the movement of the users on the bed is limited, and the most obvious features are the changes in the position of the person on the bed and in the pressure on the bed board when he or she moves. Therefore, we need an infrared sensor to collect data on the changes in position and a pressure sensor to collect data on the changes in pressure.

The sensor data need to be collected in a centralized way for the classification of the intention from the users, which represents a small-scale data acquisition on a large number of sensor nodes. In this context, we selected ZigBee networking to realize data transmission and low-power and high-sensitivity sensors for detection.

The specific models are as follows: (1) The interface conversion chip FT232RL is selected, which realizes the conversion from a Universal Serial Bus (USB) interface to a serial universal asynchronous receiver-transmitter (UART) interface, as well as synchronous and asynchronous bit-bang interface modes. (2) The core chip CC2530 is chosen for its low power consumption and multi-mode adaptation, so it can be applied in different environments. (3) The infrared sensor HC-SR501 is selected for its low power consumption and high sensitivity. (4) The L298N motor drive module is selected. (5) The HX711 is selected as the pressure sensor module of the product.

4.3 The Software Development Work

The ZigBee terminal and the coordinator automatically form a network, and the system is initialized when their power modules are turned on. First, the ZigBee terminal collects data from the infrared sensor and the pressure sensor in turn, and sends the data back to the ZigBee coordinator. Next, the data are transmitted through the ZigBee wireless communication module, and the coordinator analyzes the received data and judges the current behavior state of the users. The coordinator subsequently sends serial data to the host computer for display.
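The host-side handling of the serial data forwarded by the coordinator can be sketched as follows. The 5-byte frame layout (header, sensor type, 2-byte big-endian value, additive checksum) is hypothetical; the paper does not specify the actual on-air format.

```python
# Sketch of host-side parsing of sensor frames forwarded by the ZigBee
# coordinator over the serial port. The frame layout used here is an
# assumption for illustration, not the authors' protocol.
HEADER = 0xAA
TYPE_INFRARED = 0x01
TYPE_PRESSURE = 0x02

def parse_frame(frame: bytes):
    """Return (sensor_type, value) for a valid frame, or None otherwise."""
    if len(frame) != 5 or frame[0] != HEADER:
        return None
    sensor_type, hi, lo, checksum = frame[1], frame[2], frame[3], frame[4]
    if (HEADER + sensor_type + hi + lo) & 0xFF != checksum:
        return None  # corrupted in transit
    return sensor_type, (hi << 8) | lo

# Example frame: a pressure reading of 350 N
payload = [HEADER, TYPE_PRESSURE, 350 >> 8, 350 & 0xFF]
raw = bytes(payload + [sum(payload) & 0xFF])
```

A checksum of this kind lets the host silently drop frames corrupted by wireless interference instead of acting on bad sensor values.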

4.4 Experimental Analysis

The pressure on the bed board changes significantly depending on the behavior of the users, which comprises not only sleeping but also turning over and sitting on the bed. Therefore, we must study the placement position and sensitivity threshold of the sensors in the equipment. These studies can avoid the influence of operation errors in judging behavior such as the users getting up, getting out of bed, or other unrelated behavior.

We conducted the following experiments and analysis on the placement position and sensitivity threshold of the infrared and pressure sensors in the equipment to improve the accuracy of data acquisition. These experiments and analysis allow us to determine the best placement of the sensors.

4.4.1 Infrared Sensor Configuration

The infrared sensor is well suited to judging whether the user goes to bed or gets out of bed, as the user typically moves from the end of the bed to the head of the bed when going to bed. Therefore, the specific detection range of the infrared sensor and its sensing effect are tested and analyzed in the experiment. The number of experiments is 20 per round. The user moves within and outside the detection range, and the sensor captures whether the user is within the detection range and collects the relevant data. The analysis of the detection range and effective data rate of the infrared sensor is shown in Tab. 1. Data are valid when the sensor correctly reports the position of the user: the data captured by the sensor are expressed as 1 within the detection range and 0 outside it.

Table 1: Detection range and effective data rate of the infrared sensor

The number of experiments is 20 per round. Valid data are defined as the data correctly captured by the sensor when the user moves within and outside the detection range.

The rate of effective data decreases gradually as the infrared detection range expands, as shown in Tab. 1 and Fig. 2.


Figure 2: Relationship between infrared detection range and effective data rate

When going to bed, the behavior of a user normally ends with his or her head resting on the pillow.

If the detection range is extended to 0.50 m or more, non-bedtime behavior of the user, such as sitting up or turning over on the bed, may also easily trigger the infrared sensor. This causes the sensor to transmit information to the data receiving device that is consistent with the user leaving the bed, and therefore interferes with the judgment of the data receiving device.

It can be concluded from the graph that the sensitivity of the infrared sensor is very high when its detection range is between 0.20 m and 0.40 m. The distance between the head and the side of the bed is precisely 0.05–0.40 m when the user is lying down. Therefore, it is reasonable to limit the detection range of the infrared sensor to 0.20–0.40 m to ensure that it can capture the data of the user accurately.

4.4.2 Pressure Sensor Configuration

The size of beds in college dormitories is typically 190 cm × 90 cm. The local pressure under the bed changes as a user gets out of bed.

Sleeping is not the only typical behavior of the user in bed; turning over and changing position also have a certain impact on the value detected by the pressure sensor. In addition, the trigger threshold of the pressure sensor has a certain influence on the success rate of judging the out-of-bed behavior of the user.

Therefore, we studied the preset threshold and false trigger rate of the pressure sensor. In this experiment, the pressure sensor is 20 cm away from the bed and 45 cm away from the tail of the bed. The number of experiments was 20 per round, with a different trigger threshold of the pressure sensor set each time. Tab. 2 shows the experimental data for different trigger thresholds, false trigger rates, and successful trigger rates.

Table 2: Trigger thresholds, false trigger rates, and successful trigger rates of the pressure sensor

From Tab. 2 and Fig. 3, it is obvious that the false trigger rate decreases as the trigger threshold of the pressure sensor increases. The trigger threshold can be set a little higher so that users can turn over or sit up on the bed without triggering the sensor. However, if the trigger threshold is set too high, the sensor may fail to trigger when the user performs a get-out action and may subsequently transmit errors to the data receiving device. In this case, the system cannot raise and lower the mosquito net in a timely and accurate manner; in the worst scenario, the users are trapped in the mosquito net. Therefore, we need to choose a relatively reasonable trigger threshold to avoid unnecessary errors. When the threshold is set between 343 N and 392 N, we can see from the chart (Fig. 3) that the false trigger rate is the lowest and the successful trigger rate is the highest.
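The trigger decision implied by this threshold window can be sketched as follows. The requirement that the pressure stay above the threshold for several consecutive frames is our own debouncing assumption, added so that a brief turn-over does not raise the net; the function and parameter names are illustrative.

```python
# Minimal sketch of the out-of-bed trigger decision, assuming the
# 343-392 N window found in the experiment. The consecutive-frame
# debounce is an illustrative assumption, not the authors' firmware.
TRIGGER_THRESHOLD_N = 343.0  # lower edge of the recommended 343-392 N window

def should_trigger(readings, threshold=TRIGGER_THRESHOLD_N, min_frames=3):
    """Trigger only when the pressure stays at or above the threshold
    for min_frames consecutive samples."""
    consecutive = 0
    for value in readings:
        consecutive = consecutive + 1 if value >= threshold else 0
        if consecutive >= min_frames:
            return True
    return False
```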


Figure 3: The relation between trigger threshold, false trigger rate, and success trigger rate

The specific placement of the sensor is an additional factor influencing the effective data collected by the pressure sensor. Because the width of the bed is limited, when the students simulated the behavior of getting into and out of bed many times, the landing points of knees and legs were concentrated 20 cm away from the bed. Therefore, the pressure sensor is placed 20 cm away from the bed in the experiment. The distance between the sensor and the end of the bed determines whether the user triggers the lifting device by mistake, because the pressure changes from the head of the bed to the tail of the bed when the user gets into or out of bed. Therefore, this distance needs further testing.

The following series of experiments probes the distance between the pressure sensor and the tail of the bed. In the experiment, the trigger threshold of the pressure sensor is 343 N and the number of experiments is set to 20 per round.

In practical experiments, the successful triggering and false triggering of pressure sensors have different results with the different placement of pressure sensors. Tab. 3 shows the relationship between the distance from the sensor to the end of the bed, the successful triggering rate and the false triggering rate of the sensors.

Table 3: Distance from the sensor to the end of the bed versus the successful and false trigger rates

It can be concluded that the distance of the pressure sensor from the end of the bed has a significant impact on the successful trigger rate and the false trigger rate (Tab. 3, Fig. 4). The false trigger rate increases after the pressure sensor is placed more than 50 cm from the end of the bed (Fig. 4), because at such positions the sensor is closer to the buttocks of a person lying on the bed. The buttocks are the primary support of the body when people are lying in bed or sitting up, so the pressure on the bedplate there is higher than in other places, and users are more likely to trigger the pressure sensor by mistake when they turn over or sit up. The successful trigger rate increases gradually from 25 to 45 cm, and the triggering effect of the pressure sensor is stable at 45 to 50 cm. Therefore, we place the pressure sensor 45 to 50 cm from the end of the bed to detect the in-and-out-of-bed behavior of the user.


Figure 4: The relationship between placement position, successful trigger rate, and false trigger rate

4.4.3 Analysis of Pressure Sensors with Deep Learning Perception Model

(1) Feature Extraction

Optical flow is a concept of object-motion detection in the computer vision field. It describes the movement of an observed target, surface, or edge relative to the observer. The optical flow method infers the moving speed and direction of objects by detecting the change of image pixel intensities over time. The optical flow constraint can be written as follows:

I_x V_x + I_y V_y + I_t = 0, (1)

where V_x and V_y are the velocities in the x and y directions, respectively, and I_x = ∂I/∂x, I_y = ∂I/∂y, and I_t = ∂I/∂t are the partial derivatives of the image intensity I(x, y, t) in the corresponding directions.
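Eq. (1) can be checked numerically. The sketch below, which is only a demonstration of the constraint itself and not of the DIS estimator used in the paper, translates a synthetic periodic image by one pixel and verifies that the residual of Eq. (1) is small for the true velocity and large for a wrong one.

```python
# Numerical sanity check of the optical-flow constraint in Eq. (1) on a
# synthetic, periodic image translated by one pixel along x.
import numpy as np

def constraint_residual(frame0, frame1, vx, vy):
    """Mean |Ix*Vx + Iy*Vy + It| over the image (dt = 1 between frames)."""
    Iy, Ix = np.gradient(frame0)   # spatial partial derivatives (rows = y)
    It = frame1 - frame0           # temporal partial derivative
    return np.abs(Ix * vx + Iy * vy + It).mean()

n = 64
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
frame0 = np.sin(2 * np.pi * xx / n) + np.cos(2 * np.pi * yy / n)
frame1 = np.roll(frame0, 1, axis=1)   # image contents move +1 pixel along x

res_true = constraint_residual(frame0, frame1, vx=1.0, vy=0.0)  # near zero
res_zero = constraint_residual(frame0, frame1, vx=0.0, vy=0.0)  # clearly larger
```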

We use the DIS algorithm [9] to extract the optical flow. The three parts of the DIS algorithm are as follows: (1) inverse search for patch correspondences; (2) a dense displacement field generated by patch aggregation along multiple scales; and (3) variational refinement. DIS is competitive on standard optical flow benchmarks while running on a single CPU core at 1024×436 resolution, including pre-processing (disk access, image rescaling, and gradient computation). The system runs at frequencies from 300 Hz up to 600 Hz, which achieves the time resolution of the human biological vision system.

Within the same precision range, DIS is several orders of magnitude faster than the latest methods, which makes it very suitable for real-time applications and enables the feature extraction stage of our model to be completed more quickly.

(2) Classifiers

A common approach to detecting the intention of the users is the two-dimensional static method often used by researchers, which takes some of the frames from a continuous sequence of postures as the input of the classification model. These methods use many techniques to extract more features for better classification. However, they ignore the fact that there is a large amount of information between consecutive frames in the time dimension. Here, we use a three-dimensional convolutional neural network [29] to solve this classification problem. The details of the network design are described in the following section and shown in Fig. 5.

We use a three-dimensional convolution kernel to extract the plane information within consecutive frames and the time-dimension information between adjacent frames. We can define a convolution kernel with length, width, and thickness, which represent the length and width of the frame plane and the time dimension, respectively. Three-dimensional convolution performs convolution operations not only on the objects in the plane but also across adjacent frames in the time series, so it can sense changes over time through convolution between adjacent frames. The thicker the three-dimensional convolution kernel is, the larger its temporal receptive field will be. The thickness of the 3D convolution kernel must be greater than 1 because the three-dimensional convolution needs to detect the information between adjacent frames.


Figure 5: The convolution example of a 3 × 3 × 3 three-dimensional convolution kernel in a 5 × 5 × 5 cube

The convolution kernel first convolves over three adjacent frames and subsequently moves along the time axis.
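The sliding operation in Fig. 5 can be sketched directly in NumPy. The minimal implementation below applies a 3 × 3 × 3 kernel to a 5 × 5 × 5 cube with "valid" padding; as in most CNN libraries, it is technically cross-correlation rather than flipped convolution.

```python
# A minimal 'valid' three-dimensional convolution, illustrating how a
# 3x3x3 kernel slides through a 5x5x5 cube of frames (as in Fig. 5).
import numpy as np

def conv3d_valid(volume, kernel):
    """Convolve a (T, H, W) volume with a (t, h, w) kernel, 'valid' padding."""
    T, H, W = volume.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):          # slide along time
        for j in range(out.shape[1]):      # slide along height
            for k in range(out.shape[2]):  # slide along width
                out[i, j, k] = np.sum(volume[i:i+t, j:j+h, k:k+w] * kernel)
    return out

cube = np.ones((5, 5, 5))
kernel = np.ones((3, 3, 3))
out = conv3d_valid(cube, kernel)   # shape (3, 3, 3); each entry sums 27 ones
```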

The basic structure of the three-dimensional CNN and the essential operation or convolution kernel size of each layer is shown in Fig. 6.


Figure 6: The basic structure of the three-dimensional CNN and the essential operation or convolution kernel size of each layer

The first layer of the network is the input layer, which is fed by the ZigBee wireless sensor network. It takes the pressure distribution maps of the pressure sensor array as input, with 16 consecutive frames forming an action sequence that corresponds to a sensor change process with an intention tag.

The second layer of the network is the hardwired layer, which uses the 16 consecutive input frames to calculate the optical flow between them, yielding 15 flow frames. The optical flow is calculated with the DIS algorithm.

The third layer is the convolution layer, which uses two different 5 × 5 × 5 convolution kernels to generate two sets of 46 × 46 frame sequences. The activation function that we choose is Rectified Linear Unit (ReLU).

The fourth and sixth layers of the network are the same as the third layer.

The fifth layer of the network is the lower sampling layer. The max pool method is used for 2 × 2 down-samplings.

The seventh to tenth layers of the network are convolution layers, in which the seventh layer is convoluted by a 3 × 3 × 3 cubic convolution kernel, and the last three layers are continuous 3 × 3 convolutions. The activation method is also ReLU.

In the eleventh layer of the network, the extracted features are fed into two fully connected layers with 240 and 128 neurons, respectively.

The penultimate layer of the network is the dropout layer and this layer is used to prevent overfitting.

The last layer of the network is the output layer, which controls whether the motor is on or off according to the intention of the users and the input pressure distribution diagram.
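The layer sizes above fit together exactly. Since a "valid" 5 × 5 × 5 convolution producing 46 × 46 maps implies 50 × 50 input frames, we assume that frame size in the trace below; the per-layer bookkeeping is our own reconstruction, not stated in the paper.

```python
# Trace of the frame count (time) and frame size (space) through the
# network described above, assuming 50x50 input frames (implied by the
# stated 46x46 maps after a 'valid' 5x5x5 convolution).
def conv(size, k):
    """Output size of a 'valid' convolution with kernel width k."""
    return size - k + 1

t, s = 16, 50                    # input layer: 16 frames of 50x50
t -= 1                           # hardwired layer: 15 optical-flow frames
t, s = conv(t, 5), conv(s, 5)    # layer 3: 5x5x5 conv -> 11 frames of 46x46
t, s = conv(t, 5), conv(s, 5)    # layer 4: 5x5x5 conv -> 7 frames of 42x42
s //= 2                          # layer 5: 2x2 spatial max pooling -> 21x21
t, s = conv(t, 5), conv(s, 5)    # layer 6: 5x5x5 conv -> 3 frames of 17x17
t, s = conv(t, 3), conv(s, 3)    # layer 7: 3x3x3 conv -> 1 frame of 15x15
for _ in range(3):               # layers 8-10: spatial 3x3 convs
    s = conv(s, 3)               # -> 13x13, 11x11, 9x9
```

Under this assumption the time dimension collapses to exactly one 9 × 9 map per channel before the fully connected layers, which is what the flattening into the 240-neuron layer requires.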

We use Adam as the gradient-descent optimization method. Adam extends the RMSProp optimizer by keeping running averages of both the gradient and the second moment of the gradient. Given the parameter $w^{(t)}$ and loss function $L^{(t)}$ at training iteration $t$ (starting from index 0), Adam's parameters are updated as follows:

$m_w^{(t+1)} \leftarrow \beta_1 m_w^{(t)} + (1-\beta_1)\,\nabla_w L^{(t)},$ (2)

$v_w^{(t+1)} \leftarrow \beta_2 v_w^{(t)} + (1-\beta_2)\left(\nabla_w L^{(t)}\right)^2,$ (3)

$\hat{m}_w = \dfrac{m_w^{(t+1)}}{1-\beta_1^{t+1}},$ (4)

$\hat{v}_w = \dfrac{v_w^{(t+1)}}{1-\beta_2^{t+1}},$ (5)

$w^{(t+1)} \leftarrow w^{(t)} - \eta\,\dfrac{\hat{m}_w}{\sqrt{\hat{v}_w}+\epsilon},$ (6)

where $\eta$ is the learning rate; in this way each element of the parameter vector effectively receives its own learning rate. $\epsilon$ is a small scalar used to prevent division by zero, and $\beta_1$ and $\beta_2$ are the forgetting factors for the gradient and the second moment of the gradient, respectively. The squaring and the square root are applied element-wise.
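Eqs. (2)–(6) translate directly into NumPy. The `adam_step` helper below is a minimal sketch of one update, demonstrated on a toy quadratic rather than the actual 3D-CNN loss:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update following Eqs. (2)-(6); t starts at 0."""
    m = beta1 * m + (1 - beta1) * grad            # Eq. (2): first moment
    v = beta2 * v + (1 - beta2) * grad ** 2       # Eq. (3): second moment
    m_hat = m / (1 - beta1 ** (t + 1))            # Eq. (4): bias correction
    v_hat = v / (1 - beta2 ** (t + 1))            # Eq. (5): bias correction
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)   # Eq. (6): parameter update
    return w, m, v

# toy example: minimize (w - 3)^2, whose gradient is 2(w - 3)
w, m, v = 0.0, 0.0, 0.0
for t in range(2000):
    grad = 2 * (w - 3)
    w, m, v = adam_step(w, grad, m, v, t, lr=0.01)
print(w)  # converges close to the minimizer w = 3
```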

We use k-fold cross-validation to verify our results. k-fold cross-validation divides the training set into k subsamples: a single subsample is held out as validation data, and the remaining k-1 subsamples are used for training. The procedure is repeated k times (in this study, k = 10) so that each subsample is used for validation exactly once, and the k results are averaged (or otherwise combined) to obtain a single estimate. In this way the randomly generated subsamples are reused for both training and validation, with each subsample validated exactly once.
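The splitting procedure can be sketched in a few lines of Python; `kfold_splits` is an illustrative helper, not the authors' code:

```python
import random

def kfold_splits(n_samples, k=10, seed=42):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)          # randomly generated subsamples
    folds = [idx[i::k] for i in range(k)]     # k disjoint subsamples
    for i, val in enumerate(folds):
        # the other k-1 subsamples form the training set
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

splits = list(kfold_splits(100, k=10))
print(len(splits))        # 10 folds
print(len(splits[0][1]))  # 10 validation samples per fold
```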

4.4.4 Comprehensive Experiment

We set effective thresholds for the sensors based on the test results described previously, and placed the sensors in suitable positions to build the complete device. The experimental results are described in the following section.

When a user goes to bed, the head and body enter the detection range of the infrared sensor while moving from the end of the bed toward the head of the bed, so the infrared sensor detects the user within its valid range. The ZigBee communication module then sends this information to the data receiving device, which concludes that the user is in bed and that a user is present within the infrared detection range. It activates the motor drive module, and the mosquito net drops automatically.

The comprehensive experimental results are shown in Tab. 4. When a user gets out of bed, his or her body leaves the detection range of the infrared sensor while moving from the head of the bed toward the end of the bed, so the infrared sensor detects that there is no user in the valid range and sends the corresponding information to the data receiving device through the ZigBee communication module. At the same time, as the user moves toward the end of the bed, the local pressure on the lower part of the bed increases compared with the user lying flat. When the pressure at a sensor position exceeds that sensor's trigger threshold, the pressure sensor module transmits information to the data receiving device through the ZigBee communication module. On receiving both pieces of information (no user within the infrared detection range, and a triggered pressure sensor in the lower part of the bed), the data receiving device determines that the user is out of bed. The motor drive module then starts to work, and the mosquito net rises automatically.
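The getting-in and getting-out-of-bed logic described above can be distilled into a small rule-based sketch. The function name and the three-state return value are our own illustration; the deployed system feeds the sensor data to the learned model rather than to hard-coded rules.

```python
def net_action(infrared_detected, pressure_triggered, net_is_down):
    """Hypothetical control rule distilled from the described behavior.
    Returns 'drop', 'raise', or 'hold' for the motor drive module."""
    if infrared_detected and not net_is_down:
        return "drop"   # user detected entering bed -> lower the net
    if not infrared_detected and pressure_triggered and net_is_down:
        return "raise"  # both exit conditions met -> raise the net
    return "hold"       # no state change

print(net_action(True, False, False))   # user going to bed
print(net_action(False, True, True))    # user leaving bed
```

Note that raising the net requires both conditions simultaneously, which prevents a brief loss of infrared detection alone from falsely triggering the motor.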

Tab. 4: Comprehensive experimental results

The device accurately captures the behavioral changes of users weighing 52, 56, and 63 kg as they repeatedly use the device to simulate going to bed and getting out of bed. The motor drive module controls the rise and fall of the mosquito net and achieves the desired results.

The output data of the UART debugging for the data receiving device are shown in Fig. 7. When the mosquito net is raised and the sensors detect that no user has entered, the whole system is in a user-free state.


Figure 7: Serial debugging output data

When a user goes to bed and enters the detection range of the infrared sensor, the infrared sensor transmits data to the data receiving device. After analysis, the data receiving device outputs "user is coming, mosquito net drops" at the UART and starts the motor to drop the mosquito net. When the user gets out of bed, he or she triggers the pressure sensor in the lower part of the bed and leaves the detection range of the infrared sensor. When these two conditions are met at the same time, the data receiving device collects and analyzes the relevant information, outputs "user is leaving, mosquito net rises" at the UART, and starts the motor to raise the mosquito net. The experiments show that the device can intelligently judge the users' getting-in and getting-out-of-bed behavior and automatically lift or drop the mosquito net.

5  Conclusion

In this study, we propose an intelligent mosquito net system using ZigBee technology and a deep learning method. We show that, by combining ZigBee with deep learning, the pressure sensor array and infrared sensor can accurately judge the users' intention to open or close the mosquito net. We find that using 3D-CNN to add time-dimension information significantly improves the accuracy of pose recognition and classification. This shows that exploiting the time dimension, rather than single frames alone, effectively increases the learning ability of the deep learning model. It also means that we can achieve the same level of classification accuracy with fewer sensors than previous research. Experiments show that 3D-CNN can provide strong support for the practical application of our system design. We are also considering more diverse real-time feature extraction methods to improve the robustness of the system; such methods could also reduce the inference burden of the deep learning model and accelerate the system's deployment in practice.

Acknowledgement: We thank LetPub (https://www.letpub.com) for its linguistic assistance during the preparation of this manuscript.

Funding Statement: The financial support provided by the Cooperative Education Fund of China Ministry of Education (201702113002, 201801193119), the Scientific Research Fund of Hunan Provincial Education Department (20A191), and the National Natural Science Foundation of China under Grant (61702180) are greatly appreciated by the authors.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

1.  T. Nakaura, T. Higaki, K. Awai, O. Ikeda and Y. Yamashita, “A primer for understanding radiology articles about machine learning and deep learning,” Diagnostic and Interventional Imaging, vol. 101, no. 12, pp. 765–770, 2020.
2.  Z. Yang, S. Zhang, Y. Hu, Z. Hu and Y. Huang, “VAE-Stega: Linguistic steganography based on variational auto-encoder,” IEEE Transactions on Information Forensics and Security, vol. 16, pp. 880–895, 2021.
3.  R. T. Martinez, G. A. Bestard and A. M. A. Silva, “Analysis of GMAW process with deep learning and machine learning techniques,” Journal of Manufacturing Processes, vol. 62, no. 1, pp. 695–703, 2021.
4.  L. Y. Xiang, S. H. Yang, Y. H. Liu, Q. Li and C. Z. Zhu, “Novel linguistic steganography based on character-level text generation,” Mathematics, vol. 8, no. 9, p. 1558, 2020.
5.  B. Wang, W. Kong, H. Guan and N. Xiong, “Air quality forecasting based on gated recurrent long short term memory model in Internet of Things,” IEEE Access, vol. 7, pp. 69524–69534, 2019.
6.  Y. Wei and Z. Wang, “Deep q-learning based computation offloading strategy for mobile edge computing,” Computers, Materials and Continua, vol. 59, no. 1, pp. 89–104, 2019.
7.  A. Alanoud, K. Heba and A. Linal, “A neural network-based trust management system for edge devices in peer-to-peer networks,” Computers, Materials and Continua, vol. 59, no. 3, pp. 805–815, 2019.
8.  X. Wang, W. Song and B. Zhang, “An early warning system for curved road based on ov7670 image acquisition and stm32,” Computers, Materials and Continua, vol. 59, no. 1, pp. 135–147, 2019.
9.  T. Kroeger, R. Timofte and D. Dai, “Fast optical flow using dense inverse search,” in European Conf. on Computer Vision, Amsterdam, Netherlands, pp. 471–488, 2016.
10. A. Gaddam, S. Mukhopadhyay and G. S. Gupta, “Intelligent bed sensor system: Design, experimentation and results,” in 2010 IEEE Sensors Applications Symp., Limerick, Ireland, pp. 220–225, 2010.
11. B. Y. Su and M. Enayati, “Monitoring the relative blood pressure using a hydraulic bed sensor system,” IEEE Transactions on Biomedical Engineering, vol. 66, no. 3, pp. 740–748, 2018.
12. C. C. Lu, J. Huang, Z. Lan and Q. Wang, “Bed exiting monitoring system with fall detection for the elderly living alone,” in Int. Conf. on Advanced Robotics & Mechatronics, Macao, China, 2016.
13. Y. Enokibori and K. Mase, “Data augmentation to build high performance DNN for in-bed posture classification,” Journal of Information Processing, vol. 26, pp. 718–727, 2018.
14. A. Mineharu, N. Kuwahara and K. Morimoto, “A study of automatic classification of sleeping position by a pressure-sensitive sensor,” in 2015 Int. Conf. on Informatics, Electronics & Vision, Fukuoka, Japan, pp. 1–5, 2015.
15. J. M. Kortelainen, M. O. Mendez and A. M. Bianchi, “Sleep staging based on signals acquired through bed sensor,” IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 3, pp. 776–785, 2010.
16. M. Migliorini and A. M. Bianchi, “Automatic sleep staging based on ballistocardiographic signals recorded through bed sensors,” in 2010 Annual Int. Conf. of the IEEE Engineering in Medicine and Biology, Boston, Massachusetts, USA, pp. 3273–3276, 2010.
17. L. Walsh, E. Moloney and S. McLoone, “Identification of nocturnal movements during sleep using the non-contact under mattress bed sensor,” in 2011 Annual Int. Conf. of the IEEE Engineering in Medicine and Biology Society, Boston, Massachusetts, USA, pp. 1660–1663, 2011.
18. G. Guerreromora and E. Palacios, “Sleep-wake detection based on respiratory signal acquired through a pressure bed sensor,” in 2012 Annual Int. Conf. of the IEEE Engineering in Medicine and Biology Society, California, USA, pp. 3452–3455, 2012.
19. L. Rosales, M. Skubic and D. Heise, “Heartbeat detection from a hydraulic bed sensor using a clustering approach,” in 2012 Annual Int. Conf. of the IEEE Engineering in Medicine and Biology Society, California, USA, pp. 2383–2387, 2012.
20. K. Lydon and B. Y. Su, “Robust heartbeat detection from in-home ballistocardiogram signals of older adults using a bed sensor,” in 2015 37th Annual Int. Conf. of the IEEE Engineering in Medicine and Biology Society, Milan, Italy, pp. 7175–7179, 2015.
21. H. Jeong and Y. Ohnob, “Cordless monitoring system for respiratory and heart rates in bed by using large-scale pressure sensor sheet,” Smart Health, vol. 13, no. 9770, p. 100057, 2019.
22. S. S. Gilakjani, H. Azimi and M. Bouchard, “Improved sensor selection method during movement for breathing rate estimation with unobtrusive pressure sensor arrays,” in 2018 IEEE Sensors Applications Symp., Glassboro, New Jersey, USA, 2018.
23. D. Heise and M. Skubic, “Monitoring pulse and respiration with a non-invasive hydraulic bed sensor,” in 2010 Annual Int. Conf. of the IEEE Engineering in Medicine and Biology, Boston, Massachusetts, USA, pp. 2119–2123, 2010.
24. B. Y. Su, K. C. Ho, M. Skubic and L. Rosales, “Pulse rate estimation using hydraulic bed sensor,” in 2012 Annual Int. Conf. of the IEEE Engineering in Medicine and Biology Society, California, USA, pp. 2587–2590, 2012.
25. G. G. Mora, J. M. Kortelainen and E. R. P. Hernández, “Evaluation of pressure bed sensor for automatic SAHS screening,” IEEE Transactions on Instrumentation and Measurement, vol. 64, no. 7, pp. 1935–1943, 2014.
26. A. Sivanantham, “Measurement of heartbeat, respiration and movements detection using Smart Bed,” in 2015 IEEE Recent Advances in Intelligent Computational Systems, Trivandrum, India, pp. 105–109, 2015.
27. R. Joshi and B. Bierling, “Monitoring the respiratory rate of preterm infants using an ultrathin film sensor embedded in the bedding: A comparative feasibility study,” Physiological Measurement, vol. 40, no. 4, p. 045003, 2019.
28. G. Matar, J. M. Lina and G. Kaddoum, “Artificial neural network for in-bed posture classification using bed-sheet pressure sensors,” IEEE Journal of Biomedical and Health Informatics, vol. 24, no. 1, pp. 101–110, 2019.
29. S. Ji, M. Yang and K. Yu, “3D convolutional neural networks for human action recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 221–231, 2012.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.