P2.25

Technical Report 6: Machine Learning Onboard Satellites

SmartSat CRC; The University of Queensland; Swinburne University of Technology; Queensland University of Technology.

30/11/2021

Over the last decade, machine learning (ML) has been a key driver of many data-driven applications. As such, the rapidly growing space industry is poised to take advantage of recent ML advancements to automate much of its data processing. This includes satellite-based applications such as Earth observation, communications and navigation, as well as autonomous failure detection and recovery of spacecraft. Key ML algorithms such as object detection, semantic segmentation, pose estimation and anomaly detection help enable these space applications. However, many of these algorithms, i.e., the trained models, impose large computational workloads requiring large, power-hungry GPUs to execute, which is at odds with operating in a space environment. On the other hand, downlinking data for processing on Earth is likewise not an option for the many satellite applications that require low-latency solutions. Edge computing, the efficient processing of data at its source, could be the key to enabling widespread adoption of ML for satellite applications. Moreover, by reducing the need to offload sensitive data, onboard processing can alleviate privacy-related barriers to ML adoption in space.

While promising, onboard processing presents its own set of challenges, primarily the limited computing resources imposed by the size, weight, volume and power constraints of satellite platforms. To deploy high-quality ML models on computing devices onboard satellite platforms, the models must therefore be designed for efficiency without compromising accuracy, a compromise that would otherwise arise from the well-known accuracy-speed trade-off in the ML literature. However, most state-of-the-art models have been found to be quite "wasteful", in that they often do not make optimal use of their parameter space. This provides an opportunity to either apply model compression techniques that remove redundant parameters from larger models, or to design compact models with high accuracy from the outset using Neural Architecture Search (NAS). Hardware acceleration is a complementary technique that speeds up computations at the hardware level, either by parallelising data processing digitally or by employing analog computing to physically speed up signal propagation. This report discusses different compression and acceleration techniques in detail and how they can be co-designed to increase efficiency for space applications; a minimal sketch of one compression technique, magnitude pruning, follows below.
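As an illustration only (this example is not taken from the report), the sketch below uses PyTorch's built-in pruning utilities to apply global magnitude pruning, one common compression technique: the smallest-magnitude weights, which typically contribute least to the output, are zeroed out across the whole network. The toy model and the 50% sparsity target are assumptions chosen for this example.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small convolutional model standing in for an onboard vision network
# (hypothetical architecture, chosen only for illustration).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),
)

# Globally prune the 50% of weights with the smallest L1 magnitude across
# all conv and linear layers; these contribute least to the model's output.
parameters_to_prune = [
    (module, "weight")
    for module in model.modules()
    if isinstance(module, (nn.Conv2d, nn.Linear))
]
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.5,
)

# Make the pruning permanent (drops the masks, rewrites the weights).
for module, name in parameters_to_prune:
    prune.remove(module, name)

# Report the resulting sparsity.
zeros = sum((m.weight == 0).sum().item() for m, _ in parameters_to_prune)
total = sum(m.weight.numel() for m, _ in parameters_to_prune)
print(f"Global sparsity: {zeros / total:.1%}")
```

In practice, pruned models are usually fine-tuned afterwards to recover any lost accuracy, and structured pruning variants or sparse-aware runtimes are needed to turn the zeroed weights into actual speed-ups on edge hardware.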

With the growing demand for edge computing, there has been a surge in the availability of off-the-shelf hardware platforms and frameworks that support model compression and hardware acceleration. This report discusses such platforms and frameworks in detail and outlines the pros and cons of the different options. The key metrics guiding the choice of hardware platform for a given application are the model's floating-point operations (FLOPs) and memory requirements, together with the platform's performance per watt. The choice of ML development framework follows from the choice of hardware, which in turn determines the operating system that can be flashed onto the device. Additionally, one of the biggest technical barriers to ML deployment in space is that the space environment subjects flight computers to extreme radiation and temperatures that can interfere with their operation. Extreme radiation can cause bit flips, effectively corrupting computations. Radiation hardening and other shielding techniques are often necessary to mitigate these effects. However, shielding can be quite expensive and adds significant size and weight to volume-constrained satellite platforms. Radiation-tolerant designs built from off-the-shelf computers may therefore be preferred over radiation-hardened hardware for their lower cost, smaller form factors and greater software support. One example of a radiation-tolerant design is redundant computing, in which the same computation is executed multiple times and the results are compared or majority-voted to detect and mask radiation-induced faults, as sketched below.
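To make the self-checking idea concrete, here is a minimal, purely illustrative sketch of software-level triple modular redundancy (TMR); the function name tmr and the majority-voting strategy are assumptions for this example, not a design from the report. The computation runs three times and the results are majority-voted, so a single fault in one run is masked.

```python
from collections import Counter
from typing import Callable, TypeVar

T = TypeVar("T")

def tmr(compute: Callable[..., T], *args) -> T:
    """Run `compute` three times and majority-vote the results.

    A single corrupted run (e.g. from a radiation-induced bit flip)
    is outvoted by the two correct runs. Results must be hashable so
    they can be counted; for large outputs, vote on a checksum instead.
    """
    results = [compute(*args) for _ in range(3)]
    winner, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        # All three runs disagree: the fault cannot be masked.
        raise RuntimeError("TMR voting failed: no majority result")
    return winner

# Example: a stand-in for an onboard computation.
print(tmr(lambda x, y: x + y, 2, 3))  # -> 5
```

Real radiation-tolerant designs apply the same principle in hardware or across independent redundant processors, where the three executions can actually fail independently, often combined with watchdog timers and error-correcting memory.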

In summary, with a rapidly growing space industry, ML has a huge role to play in automating the processing of the enormous volumes of data collected from space every day. Although the adoption of ML algorithms for space applications has lagged, significant strides are now being taken by key organisations to encourage the deployment of ML in space environments. It is therefore envisaged that the power of ML can be leveraged for space applications to deliver value to society.
