Interpretable Machine Learning for Early Smoke Wildfire Detection

Machine learning has achieved great success in computer vision, natural language processing, and other fields, especially in prediction accuracy, where it may even exceed human capabilities on some tasks. Nevertheless, in application scenarios users still need to understand the reasons behind a model's conclusions in a detailed and tangible way. Strong explainability also helps ensure the robustness and usability of a model. This proposal focuses on developing interpretable models for early smoke wildfire detection from satellite images.

Detecting fires at their early stages is essential to prevent fire-caused disasters. Research has been conducted on detecting smoke in satellite imagery for fire detection. Unfortunately, the imagery data used in previous research have low spatial resolution and contain only the RGB bands, which are ineffective for early fire detection. Our team (Data Analytics Group at UniSA) has been working on early fire smoke detection with multispectral, multi-sensor satellite imagery for one year, and our wildfire detection accuracy exceeds 90%. An AI framework of deep learning neural networks that identifies wildfires has been developed. It is necessary to present users with understandable reasons for a detection so they can validate it and assess its severity. Current detection models are deep learning based and have a black-box detection kernel. This project aims to make the detection transparent so that users can use and interact with the model easily. In the following, we introduce the main techniques for explaining predictions made by deep learning / black-box machine learning models.
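
Among such techniques, gradient-based saliency is one of the simplest: the gradient of the class score with respect to the input indicates which pixels (and spectral bands) drive the prediction. The minimal PyTorch sketch below illustrates the idea; the toy CNN, band count, and patch size are illustrative assumptions, not the project's actual detection model.

# Minimal sketch (illustrative, not the project's pipeline): input-gradient
# saliency for a CNN smoke classifier on a multispectral image patch.
import torch
import torch.nn as nn

N_BANDS = 6  # hypothetical number of spectral bands

# Toy CNN standing in for the black-box detection kernel (assumption)
model = nn.Sequential(
    nn.Conv2d(N_BANDS, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),  # two classes: smoke / no smoke
)
model.eval()

# A single multispectral patch; gradients w.r.t. the input give a saliency map
x = torch.randn(1, N_BANDS, 64, 64, requires_grad=True)
smoke_score = model(x)[0, 1]  # logit for the "smoke" class
smoke_score.backward()        # backpropagate the score to the input pixels

# Per-pixel importance: maximum absolute gradient across spectral bands
saliency = x.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # torch.Size([64, 64]) -- a map to show the user

Overlaying such a saliency map on the original image lets a user see which regions the model treated as smoke, which supports the validation and severity assessment described above.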


Project Leader:
Professor Jiuyong Li, The University of South Australia

PhD Student:
Xiongren Chen, The University of South Australia

Participants: