This paper considers some of the issues surrounding autonomous systems and the different types of risk involved in their implementation: both risks that act as barriers to the successful implementation of an autonomous system, and risks that arise as consequences of its use. The different levels of automation, and the different approaches to categorizing these levels presented in a variety of frameworks, are summarized and discussed. The paper presents an initial generic assessment structure, with the aim of providing a useful construct for the design and development of acceptable autonomous systems intended to replace elements of the human cognitive process, specifically in situations involving decision-making. It introduces the concept of the “logos chasm”: the gap between achievable autonomous systems and those that currently exist only in the realm of science fiction, and discusses possible reasons for its existence.