Responsible AI in Space

In the 2010s, Global Space Governance (GSG) became an urgent issue as the commercialisation of outer space accelerated. Given the technical and operational complexities of such enterprises, new thinking was required to address the challenges and opportunities created by this commercialisation, rather than relying on the traditional model of treaty-making. In 2014, the Montreal Declaration on Global Space Governance established a Working Group to make recommendations on the peaceful and sustainable use of outer space. In 2017, its recommendations were published by the Institute and Centre of Air and Space Law at McGill University. This international study identified safety and technical gaps in the existing governance regime.

While the McGill study identifies gaps in existing space governance, it does not provide specific recommendations on how particular technologies should be regulated within the space sector. One such technology is Artificial Intelligence (AI). The use of AI in the space sector presents both challenges and opportunities. Challenges include protecting the rights of all stakeholders when data sourced from outer space operations is harvested. Opportunities include control systems that enhance traffic safety in outer space.

There is therefore a need to extend existing GSG frameworks to AI applications.

This project aims to create a field-validated AI governance framework for the Australian space sector.

P2.05s

Project Leader:
Professor Mirko Bagaric, Swinburne University of Technology

PhD Student:
Thomas Graham, Swinburne University of Technology

Participants: