Advances in artificial intelligence (AI) and robotics will have a revolutionary impact on space operations. Utilising machine learning and deep learning techniques, AI-enabled systems are capable of both performing tasks and improving their own performance. These capabilities are powerful in the often-remote environments of outer space and will become increasingly valuable as automated space operations spread. As AI proliferates throughout the space domain, algorithms will assume many of the responsibilities traditionally overseen by humans, and by exposing new satellites and autonomous orbital vehicles to new data, AI is moving from theory to application in the space environment. However, even when all of a system's initial algorithmic parameters are specified, its outputs can still be highly unpredictable, risking harm to people, property, and the environment. Applying the Outer Space Treaty, the Liability Convention, and the Registration Convention to AI systems raises ambiguities that must be clarified so that liability can be properly attributed when damage involving a space-based AI system occurs. This paper examines how the United Nations space treaties, select transnational and domestic AI regulations, and various ‘soft-law’ instruments focused on the responsible development of AI apply to space-based AI systems. It then proposes reforms to clarify, in a practical manner, the relationship between AI systems and the international legal regime governing space, as well as a ‘bottom-up’ regulatory approach to better facilitate the future development of regulation governing the use of AI by the global space sector.