As the number of fires worldwide rises rapidly, automatic fire detection is attracting growing interest in the computer vision community. Instead of relying on conventional, often inefficient sensors, video captured by surveillance cameras can be analyzed to detect fires quickly and prevent damage. This paper presents an early fire-alarm raising method based on image processing. The developed method is able to discriminate fire from non-fire pixels. Fire pixels are identified by a rule-based color model built in the PJF color space, a newly designed color space that better reflects the structure of colors. The rules of the model are established by examining the color nature of fire. The proposed fire color model is assessed over the largest dataset in the literature, collected by the authors and composed of diverse fire images and videos. Considering color information only, the experimental findings on detecting candidate flame pixels are promising: the proposed method achieves up to a 99.8% fire detection rate with an 8.59% error rate. A comparison with state-of-the-art color models in different color spaces is also carried out to demonstrate the performance of the model. Based on the color descriptor, the developed approach can accurately detect fire areas in scenes and achieves the best compromise between true and false detection rates.
Throughout history, fire has always played an essential but conflicting role: on the one hand, it has improved the conditions of everyday life, protected humans and supported industry; on the other hand, it has represented a danger to be defended against. Indeed, fire is one of the gravest calamities in the world, often leading to economic, ecological and social damage by endangering people's lives [
Video fire detection (VFD) is attracting increasing interest from researchers. However, building a robust fire detection system capable of working efficiently in all possible real-world scenarios remains a challenge, both because of the complex and non-static structure of the fire flame and because of the real-time constraint. Indeed, a flame is characterized by its dynamic shape and changing intensity. Besides, although flame color always lies in the red-yellow range, it can be a misleading feature, since many objects, such as fireworks, moving red objects or the sun, have a similar appearance and may trigger false alarms. Undoubtedly, overcoming these challenges depends considerably on the reliability of the fire detection method. Generally, VFD techniques exploit the color, shape, texture and motion signatures of the fire area [
We should mention that this work is dedicated solely to a fire color model for extracting flame-colored pixels. Color models alone cannot distinguish fire from fire-like pixels; further analysis should be applied to the detected regions, but that is a problem in itself and is not considered in this paper. An extension of this work was addressed in [
This paper is organized as follows: Section 2 outlines literature works using color-based models for fire detection. Section 3 presents the proposed model. Experimental findings and their analysis are provided in Section 4. Finally, Section 5 concludes the work and outlines its perspectives.
The number of documents dealing with VFD in the literature has increased exponentially [
We would like to mention that we only deal with color models in this work.
In these methods, the colors of fire pixels are assumed to be concentrated in particular areas of the color space, so probability distribution functions are built by training the model with a set of fire images. A pixel is considered fire if it belongs to the predetermined color probability distribution. Existing works differ in the color space used, the component planes, the distribution model, etc. Reference [
In this type of approach, the color intensities of a pixel are used as features to categorize pixels as fire or non-fire. In Reference [
The majority of works on flame color pixel detection are rule-based: they combine rules or thresholds defined in a color space. Rule-based fire detection methods are the fastest and simplest. RGB is among the oldest and most used color spaces for fire detection, since nearly all cameras capture video in RGB. In Reference [
To sum up, after comparing the related works cited above, we observe that no single color system is defined as a standard for the detection of fire flames. We therefore aim to exploit newly emerging color spaces that have already shown promising results, in order to provide a color-based model for fire detection.
RGB is the most common output format of video surveillance cameras; however, other color spaces provide better representations of the data. The first step of our method is the conversion from RGB to the PJF color space. PJF imitates the concept of L*a*b* by converting the R, G and B channels into new ones that better describe the organization of the colors, with a low color calibration error. Brightness is expressed as a single variable, and color is expressed with two variables: one ranging from blue to yellow and the other from green to red, noting that the vectors remain inside the RGB color cube [
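To make this luminance-chrominance idea concrete, the sketch below implements a generic opponent-color decomposition in the same spirit: one brightness axis, a green-to-red axis and a blue-to-yellow axis. The coefficients are illustrative assumptions only; they are not the published PJF transform.

```python
import numpy as np

def rgb_to_opponent(rgb):
    """Illustrative luminance-chrominance decomposition in the spirit of
    PJF / L*a*b*: one brightness axis plus two opponent chroma axes.
    NOTE: the coefficients below are assumptions for illustration,
    not the actual PJF conversion matrix."""
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    brightness = (r + g + b) / 3.0      # single brightness variable
    green_red = r - g                   # opponent axis: green -> red
    blue_yellow = (r + g) / 2.0 - b     # opponent axis: blue -> yellow
    return np.stack([brightness, green_red, blue_yellow], axis=-1)

# A pure red pixel responds strongly on both chroma axes:
print(rgb_to_opponent([255, 0, 0]))  # [ 85.  255.  127.5]
```

The point of such a decomposition is that illumination changes mostly affect the brightness axis, leaving the two chroma axes, where the fire colors live, comparatively stable.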
The color components generated by
In this work, we propose a simple and efficient rule-based color model to locate possible fire areas in images (namely fire regions and fire-like regions). It is intended to be used in a VFD system followed by a more refined analysis separating fire from fire-like regions [
This choice de-correlates luminance and chrominance, reducing the effect of illumination changes while gathering the pertinent chrominance information (fire pixel colors) in a small number of components. Defining a color model that exploits the chrominance channels is therefore more meaningful with respect to the structure of flame colors. As explained previously, the chroma components J and F measure the relative amounts of red and yellow respectively, which are the two bounds of the fire color range. Based on the color nature of fire, two observations can be made:
1/ A fire pixel consistently exhibits high saturation in the red channel (R), which means,
2/ For every fire pixel, the red channel value (R) is higher than the green one (G), and the green channel value is higher than the blue one (B), which can be translated to,
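These two observations can be turned directly into a vectorized pixel test. The sketch below only illustrates the RGB conditions stated above: the red-saturation threshold `r_threshold` is an assumed value for demonstration, since the paper's actual rule is defined by the equations referenced in the text.

```python
import numpy as np

def candidate_fire_mask(img_rgb, r_threshold=190):
    """Flag candidate fire pixels using the two observations:
    (1) high red saturation: R > r_threshold (the threshold value here
        is an assumption for illustration, not the paper's rule),
    (2) channel ordering: R > G > B.
    img_rgb: H x W x 3 uint8 array in RGB order."""
    img = img_rgb.astype(np.int32)  # avoid uint8 comparison pitfalls
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    high_red = r > r_threshold
    ordering = (r > g) & (g > b)
    return high_red & ordering

# Example: one flame-like pixel and one sky-like pixel.
pixels = np.array([[[255, 160, 40], [90, 120, 200]]], dtype=np.uint8)
print(candidate_fire_mask(pixels))  # [[ True False]]
```

Such a mask marks both fire and fire-like pixels; as stated earlier, separating the two is left to a later analysis stage.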
From
Depending on
The examination of
Based on these equations, and verified by an analysis of many fire images, the following rule is built to detect a candidate pixel Pix:
To statistically validate the developed model, the following procedure was applied: a set of RGB images, collected from the public fire datasets, is manually segmented to identify fire regions; the segmented areas are then converted to the PJF color space.
In this section, experiments are conducted to assess the performance of the proposed method. A comparison with color models in different color spaces cited in the literature is also carried out.
The color model should be validated on video sequences and images captured in diverse environmental conditions: indoor, outdoor, daytime and nighttime. The dataset used in the experiments should span different resolutions and frame rates, and may be shot by a non-stationary camera. No standard benchmark image/video fire dataset is openly available in the state of the art; for this reason, we have collected samples from public image and video fire datasets. We tried to include most real-world scenarios, with stationary and dynamic backgrounds, such as forest fires, indoor fires and outdoor open-space fires. Examples of potential false alarms containing no flame but flame-like colors, such as sunsets, car lights and fire-colored objects, are also included. The total number of tested images is 251 996. The assembled dataset is much larger than those used by other researchers. It is composed of 3 fire image datasets: Bow Fire [
All the experiments are carried out on an Intel i7-2670QM CPU @ 2.20 GHz with 4.0 GB of RAM, under 64-bit Windows. The collected dataset is used to benchmark the performance of our method. Ground-truth images are generated manually. The implementation uses the OpenCV/C++ library.
Video sequence | Total frames | Fire frames (ground truth) | Frames detected as fire | False alarms | Missed frames | Detection rate | Error rate
---|---|---|---|---|---|---|---
forest4.avi | 219 | 219 | 219 | 0 | 0 | 1 | 0 | |
controlled2.avi | 246 | 246 | 246 | 0 | 0 | 1 | 0 | |
fireVid_004.mp4 | 4788 | 4081 | 4780 | 699 | 0 | 0.998 | 0.14 | |
fireVid_010.mp4 | 4950 | 4948 | 4950 | 2 | 0 | 1 | 0 | |
fire9.avi | 255 | 255 | 249 | 0 | 6 | 0.976 | 0 | |
fire15.avi | 244 | 244 | 244 | 0 | 0 | 1 | 0 | |
flame3.avi | 613 | 613 | 613 | 0 | 0 | 1 | 0 | |
outdoor_night_10m_gasoline_CCD_002.avi | 1298 | 1298 | 1298 | 0 | 0 | 1 | 0 | |
rescuer_016.mp4 | 170 | 145 | 167 | 22 | 0 | 0.982 | 0.12 | |
rescuer_028.mp4 | 364 | 364 | 364 | 0 | 0 | 1 | 0 |
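The per-sequence figures in the table can be reproduced from the raw counts, assuming (as the rows suggest) that the detection rate is the number of frames detected as fire over the total number of frames, and the error rate is the number of false alarms over the total, truncated to the table's precision. The helper `frame_rates` below is written for this illustration only.

```python
import math

def frame_rates(total_frames, detected_frames, false_positives):
    """Per-sequence rates, assuming (as the table rows suggest):
    detection rate = detected frames / total frames (3 decimals),
    error rate    = false alarms    / total frames (2 decimals),
    both truncated toward zero to match the table's precision."""
    dr = math.floor(detected_frames / total_frames * 1000) / 1000
    er = math.floor(false_positives / total_frames * 100) / 100
    return dr, er

# fireVid_004.mp4: 4788 frames, 4780 detected as fire, 699 false alarms.
print(frame_rates(4788, 4780, 699))  # (0.998, 0.14)
```

The same computation reproduces, for instance, the rescuer_016.mp4 row: 167 detected frames out of 170 with 22 false alarms gives (0.982, 0.12).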
The average detection rate achieved is 0.995 with the test sequences shown in
For a better assessment, we conduct experiments to compare our method with some state-of-the-art methods. These related works are the main color-based models for fire detection proposed in different color spaces, as summarized in
Method | Color detection
---|---
[ | RGB, HSI
[ | HSI
[ | RGB
[ | YCbCr
[ | L*a*b*
[ | YUV
[ | RGB, YUV, HSI
This performance enhancement is expected, since the PJF color space does not suffer from pixel-value correlation or luminance-chrominance dependence. It also performs better than [
To conclude, using this model in a hybrid system that combines other information, such as motion, geometry and texture, with color should clearly improve fire detection results by accurately differentiating fire regions from fire-like areas in the scene [
In this work, we present a new color-based model for fire detection in the PJF color space. The developed method is able to discriminate fire from non-fire pixels. The performance of this model is mainly due to the capacity of the PJF color space to de-correlate luminance and chrominance and to concentrate the fire color range in its J and F components. The largest dataset in the literature was collected to assess the color model: tested on 251 996 image samples, it achieves up to a 99.8% detection rate with an 8.59% error rate. Experimental outcomes show that the PJF model outperforms related models; its main contribution is its ability to achieve the best compromise between true and false detection rates. The performance of the method could be further improved by considering deep learning methods to build a robust fire detection system.
This study was funded by the Deanship of Scientific Research, Taif University Researchers Supporting Project Number (TURSP-2020/161), Taif University, Taif, Saudi Arabia.