Research Programs

Assessing and Enhancing Multi-Spacecraft Mission Simulation and Visualisation

Problem Centric Operations

Astrodynamical simulations provide a crucial input to space mission planning and operations. Interactive visualisation of mission configurations, particularly for multi-spacecraft constellations or formation-flying scenarios, plays an important role both in understanding mission options and in communicating outcomes to a variety of end users and audiences.

This project will investigate current state-of-the-art software for mission simulation and visualisation. This includes an evaluation of a suite of commercial and open-source options, considering both quantitative metrics (performance benchmarks, accuracy of orbital calculations) and qualitative factors (usability, flexibility, licensing costs and platforms for delivery).
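As an illustration of the kind of quantitative benchmarking involved, the sketch below times a simple two-body propagation of a circular low Earth orbit and checks how closely the trajectory closes on itself after one period. The two-body model, orbit parameters, and tolerances are assumptions chosen for illustration; a real evaluation would run equivalent test cases through each candidate commercial or open-source tool.

```python
"""Minimal sketch of an accuracy/performance check for an orbit propagator,
assuming a simple two-body model. Illustrative only: the candidate tools under
evaluation would be exercised with equivalent reference cases."""
import time
import numpy as np
from scipy.integrate import solve_ivp

MU = 398600.4418  # Earth gravitational parameter [km^3 / s^2]

def two_body(t, y):
    # State y = [rx, ry, rz, vx, vy, vz]; acceleration from point-mass gravity.
    r = y[:3]
    a = -MU * r / np.linalg.norm(r) ** 3
    return np.concatenate([y[3:], a])

# Circular ~700 km altitude orbit: after one period it should return to the start.
radius = 7078.0                                   # km
r0 = np.array([radius, 0.0, 0.0])
v0 = np.array([0.0, np.sqrt(MU / radius), 0.0])   # circular orbital speed [km/s]
period = 2 * np.pi * np.sqrt(radius ** 3 / MU)

t_start = time.perf_counter()
sol = solve_ivp(two_body, (0.0, period), np.concatenate([r0, v0]),
                rtol=1e-9, atol=1e-12)
elapsed = time.perf_counter() - t_start

closure_error_km = np.linalg.norm(sol.y[:3, -1] - r0)
print(f"runtime: {elapsed * 1e3:.1f} ms, closure error: {closure_error_km:.3e} km")
```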

The outcomes will comprise an improved understanding of the suitability of astrodynamics simulation and visualisation software (shared as a white paper and/or research publication), identification of new opportunities for research and development, and a clearer understanding of the needs of end users across SmartSat partners, with the goal of advancing Australia’s capabilities in mission simulation, operations, and space situational awareness activities.

P2.39

Project Leader:
Professor Christopher Fluke, Swinburne University of Technology

Participants:

Using Satellite Data to Locate and Phenotype Plants from Space

EO Analytics

Satellite imagery provides immense potential for Earth observation applications. By training AI to analyse these images, we can monitor huge areas with minimal effort. However, many technical hurdles must be overcome to use these images successfully in application areas such as agriculture and ecology. Satellite imagery has much lower spatial resolution than ground-based images, and higher-resolution satellite images (around 30 cm) must be captured selectively, as it is infeasible to image the whole planet at that resolution. As a result, only sporadic images are available for any given site: perhaps one image every few months, or several taken on the same day. To train an AI model on these images, we need to construct training sets of ground-based observations associated with positions in satellite images. The ground-truth observations are also sporadic, which makes training an AI algorithm on satellite images challenging due to a lack of observations that are temporally aligned with images.
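As a minimal sketch of the pairing step described above, the snippet below matches each ground observation to the nearest-in-time satellite image of the same site within an assumed ±14-day window. The column names, tolerance, and toy records are illustrative assumptions, not the project's actual dataset.

```python
"""Sketch of aligning sporadic ground-truth observations with sporadic imagery."""
import pandas as pd

obs = pd.DataFrame({
    "site": ["A", "A", "B"],
    "obs_date": pd.to_datetime(["2022-01-10", "2022-03-02", "2022-02-20"]),
    "label": ["flowering", "senescent", "flowering"],
}).sort_values("obs_date")

images = pd.DataFrame({
    "site": ["A", "A", "B"],
    "image_date": pd.to_datetime(["2022-01-04", "2022-04-01", "2022-02-18"]),
    "image_path": ["a_001.tif", "a_002.tif", "b_001.tif"],
}).sort_values("image_date")

# Keep only pairs where an image exists within +/- 14 days of the observation;
# unmatched observations (no temporally aligned image) are dropped.
pairs = pd.merge_asof(
    obs, images,
    left_on="obs_date", right_on="image_date",
    by="site", tolerance=pd.Timedelta("14D"), direction="nearest",
).dropna(subset=["image_path"])

print(pairs[["site", "obs_date", "image_date", "label", "image_path"]])
```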

This project aims to develop sample-efficient AI algorithms that require fewer ground-truth observations to train accurate models. Both semi-supervised and unsupervised training methods will be adopted to train highly effective feature extractors using a minimum of labelled data. Highly successful semi-supervised learning algorithms, which have achieved very high accuracy in generic image classification with just a few labelled examples, and unsupervised/self-supervised methods, which require no labels at all, will be customised and extended for satellite images.
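A minimal sketch of the self-supervised ingredient is given below: a SimCLR-style contrastive (NT-Xent) loss that pulls two augmented views of the same unlabelled patch together in feature space, attached to a small placeholder encoder. The encoder architecture, augmentations, and training loop are illustrative assumptions standing in for the customised methods the project will develop for satellite imagery.

```python
"""Sketch of contrastive self-supervised pretraining of a feature extractor."""
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss between two augmented views of a batch."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)      # (2N, d) unit-norm features
    sim = z @ z.t() / temperature                    # pairwise cosine similarities
    n = z1.shape[0]
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self
    # Positive for row i is its other view: i+N for the first half, i-N for the second.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Placeholder patch encoder; a real model would be a stronger backbone.
encoder = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 128),
)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for _ in range(5):                                   # placeholder unlabelled batches
    batch = torch.rand(16, 3, 64, 64)                # stand-in for image patches
    view1 = torch.flip(batch, dims=[3])              # toy augmentations; real ones
    view2 = batch + 0.05 * torch.randn_like(batch)   # would use crops, jitter, etc.
    loss = nt_xent(encoder(view1), encoder(view2))
    opt.zero_grad(); loss.backward(); opt.step()
```

Once pretrained on unlabelled imagery in this way, the frozen feature extractor can be paired with a small classifier trained on the limited set of labelled ground-truth observations.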

P2.18s

Project Leader:
Associate Professor Zhen He, La Trobe University

PhD Student:
Brandon Victor, La Trobe University

Participants: