Onboard Hyperspectral AI: Calibration, Panoptic Segmentation, Fine-grained Analysis, and Joint Space-Ground Inference

This project will develop new capabilities for onboard AI processing and analysis of hyperspectral imagery on smart satellite platforms. In particular, it will tackle the key modules of an onboard AI pipeline for hyperspectral data: calibration, coarse- and fine-grained segmentation, and joint space-ground inference. New capabilities in these areas will transform a satellite's ability to automatically make sense of rich, multidimensional spectral data in an end-to-end manner onboard the satellite itself. This will create new opportunities for accurate, efficient, and reliable automated detection and classification of natural phenomena and human activities over wide areas of the Earth.

At the heart of the project, the research team will develop a novel multi-task learning framework for hyperspectral data. This framework will be used to create a panoptic segmentation network: an approach that unifies object detection, semantic segmentation, and instance segmentation in a single network to predict a dense pixel-level segmentation across multiple spectral channels from space. In addition, the project will develop a lightweight deep-learning-based atmospheric correction network that can also be deployed onboard, and will explore how joint learning between satellite and ground-based sensors can support the inference of detailed information in areas not covered by ground sensors.
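To make the panoptic idea concrete: a panoptic segmentation network assigns every pixel both a semantic class and, for countable "thing" classes, an instance identity, fused into a single per-pixel label. The toy sketch below illustrates that fusion step only (not the project's network), using the common `class * OFFSET + instance` encoding popularised by the COCO panoptic format; the scene, class names, and array values are invented for illustration.

```python
import numpy as np

# Encoding offset: panoptic id = semantic class * OFFSET + instance id.
OFFSET = 1000

# Hypothetical 4x4 per-pixel outputs from a segmentation network over a
# hyperspectral scene: semantic class 0 = background "stuff",
# class 1 = an example "thing" class (e.g. a vessel).
semantic = np.array([[0, 0, 1, 1],
                     [0, 1, 1, 0],
                     [0, 0, 0, 0],
                     [1, 1, 0, 0]])

# Instance ids for "thing" pixels (two separate objects of class 1),
# and 0 for background pixels.
instance = np.array([[0, 0, 1, 1],
                     [0, 1, 1, 0],
                     [0, 0, 0, 0],
                     [2, 2, 0, 0]])

# Fuse semantic and instance predictions into one panoptic map:
# background stays 0; object 1 becomes 1001, object 2 becomes 1002.
panoptic = semantic * OFFSET + instance
print(np.unique(panoptic).tolist())  # [0, 1001, 1002]
```

Decoding is the inverse: `panoptic // OFFSET` recovers the class map and `panoptic % OFFSET` the instance map, which is why a single dense prediction can serve both detection-style and segmentation-style downstream tasks.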

This Phase 1 project will build a proof-of-concept demonstrator system and establish the key techniques for later optimisation and integration to run onboard the Kanyini satellite (SASAT1).


Project Leader:
Professor Clinton Fookes, Queensland University of Technology