Explainable AI

Explainable AI, Machine Ethics & Software Analysis

Keywords: Explainable AI, Ethical AI, Machine Ethics, Ethical Computing, Artificial Intelligence, Technological Singularity, Software Engineering, Formal Methods, Software Verification, Program Correctness, Software Development, Hardware Development, Embedded-Computing, Microprocessors, Software, Programming, Debugging, Computer Systems, Hardware, Logic Analysis, Emulation.

Currently, topics such as ‘Explainable AI’ and ‘Machine Ethics’ are growing in importance as Artificial Intelligence (AI) continues to spread its tentacles into every corner of human life, making decisions that would once have been made by people. AI technologies are diverse in nature, ranging from visible AIs, such as robots, driverless cars, smart-home appliances, avatars and chatbots, to invisible AIs, such as lines of program code inside some nebulous computing cluster providing reasoning, planning and learning services for data-analytics and decision-support applications used in business, government or military planning. Given that AI is frequently used to augment or replace people in intellectually demanding applications, some authors have painted sinister visions of the future in which AIs (typically robots) have displaced humans or made people subservient to them. Whether or not such dystopian visions could ever become reality, the fact is that current AI systems are already making decisions which affect people’s lives in significant ways. For example, chatbots and recommender engines are becoming commonplace in providing support to users of online systems, gently guiding and nudging people’s behaviour and opinions towards decisions that benefit their owners (eg nudging someone to make additional purchases: “customers who bought this also bought that”). Of course, there are more controversial ways of using AI to nudge opinions.

The programming of AI software is more complicated than that of other applications since, on the one hand, human developers code the application but, on the other hand, AIs effectively code themselves through a process of learning (eg by altering rules, or the weights of connections). This process of learning (self-programming) is not without risk.
For example, in 2016 a Microsoft Twitter-based chatbot called Tay, designed to learn by conversing like a teenage girl, had to be taken offline after making racist comments which it had learnt from interactions with (mischievous) online users! Beyond deliberate (mis)education of the learning process, AI systems remain susceptible to problems arising from human design and coding errors. For all these reasons, it is both technically challenging and immensely important to be able to verify that AI software is functioning in ethically acceptable ways, according to the designer’s specifications and whatever regulatory regimes are in place. Thus, it is important that AI code is understandable (explainable) and functions as expected (correctly). As a result, the question arises: what is the best way of creating explainable AI? Historically, researchers have argued that some approaches to AI are more explainable than others; for instance, fuzzy-logic rule-based systems are fairly easy to read (and thus understand) in comparison to neural networks. However, even before the advent of AI, software practitioners had recognised the need for tools that can ensure traditional software is understandable (explainable) and functions correctly. For example, in the 1960s it was feared that the escalating complexity of software would be a limiting factor on the growth of the then fledgling computer industry, as it was discovered that maintaining software could cost as much as three times as much as developing it. Methodologies to control the complexity of software, to make it explainable and to prove its correctness gave rise to several important sub-areas of computer science, called variously ‘software engineering’, ‘formal methods’, ‘software verification’, ‘program correctness’, ‘software validation’ and, most recently, ‘explainable AI’.
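To illustrate why rule-based approaches are often held up as more explainable than neural networks, consider a minimal fuzzy-logic sketch (the rules, membership functions and thresholds below are hypothetical, chosen purely for illustration): each rule can be read aloud as a sentence, whereas the equivalent behaviour in a trained neural network would be distributed across opaque numeric weights.

```python
# Minimal fuzzy-logic controller sketch (hypothetical rules, illustrative only).
# Each rule is human-readable, unlike the weights of a neural network.

def mu_cold(temp_c):
    """Membership of 'cold': 1 at or below 10 C, falling linearly to 0 at 20 C."""
    return max(0.0, min(1.0, (20.0 - temp_c) / 10.0))

def mu_hot(temp_c):
    """Membership of 'hot': 0 at or below 20 C, rising linearly to 1 at 30 C."""
    return max(0.0, min(1.0, (temp_c - 20.0) / 10.0))

def heater_power(temp_c):
    """Rule 1: IF temperature is cold THEN heater is high (100%).
    Rule 2: IF temperature is hot  THEN heater is off  (0%).
    Defuzzify by taking the weighted average of the rule outputs."""
    w_cold, w_hot = mu_cold(temp_c), mu_hot(temp_c)
    if w_cold + w_hot == 0:
        return 50.0  # neither rule fires; fall back to a neutral output
    return (w_cold * 100.0 + w_hot * 0.0) / (w_cold + w_hot)

print(heater_power(5))   # cold day: heater fully on
print(heater_power(25))  # hot day: heater off
```

The point is not the arithmetic but the traceability: when asked why the heater is on, the system can answer "because the 'cold' rule fired with strength 1.0", a form of explanation a neural network cannot directly offer.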
Mostly these methods depend on mathematical theorems that either prove program correctness directly, or offer frameworks that constrain software development complexity (eg by restricting the allowable algorithmic flow structures and computational operators). Such methods enabled some types of software to be proven to function correctly, free from unexpected or unforeseen side-effects. Of course, achieving these goals is not easy, since computer software and hardware do not, in general, operate as isolated units but mostly interact with other programs and with an ever-changing external world. While it is possible to verify that a given piece of code is functioning correctly, from a system perspective it can be just one small part of a much larger computational eco-system. As with people and human society, AIs are potentially part of large, diverse and complex eco-systems, operating both within an entity (such as a robot or software agent) and between entities (communities of robots or software agents residing in dispersed internet-connected computers) created by different designers (including the self-programming capabilities intrinsic to AI learning!). Thus AI has the potential to spawn highly complex systems. Nature is full of examples of complex systems (eg weather systems, the brain) whose sheer complexity challenges the limits of mathematical prediction. Moreover, some processes are fundamentally non-deterministic (mathematically speaking), and thus beyond precise mathematical determination while, of course, still remaining within the scope of probabilistic methods (which are undeniably useful). Thus, while formal methods can be used to build provably correct sub-components of large real-time embedded complex systems, the nature of AI (its size, its structure and how it is embedded within non-deterministic worlds) poses huge challenges to ensuring it works correctly in all circumstances. As the human brain (and its 10^13+ neurons!)
has acted as the main inspiration for creating AI, some researchers sum up such arguments by posing the question “is it possible to prove the human brain correct?” and, if not, does it matter? At a simpler level, many researchers argue that even simple AI architectures, such as Brooks’ subsumption architecture (nearly always implemented in software), are inherently non-deterministic. In terms of the latent non-determinism in the world, Brooks (speaking about his subsumption architecture) defended his approach to AI with the statement “the world is its own best model”. Even if the nature of some AI systems encapsulates non-deterministic aspects, clearly they will also contain deterministic software elements that are amenable to verification and correctness proofs. While the arguments about the importance (and even the possibility) of AI always being deterministic will no doubt rage on for some time, it is clear that, wherever possible, it is desirable to ensure that software, especially AI, operates as per the designer’s intentions (and society’s expectations).
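One lightweight flavour of verifying that deterministic software elements behave as specified, run-time checking of Hoare-style preconditions, postconditions and loop invariants, can be sketched as follows (this is an illustrative stand-in for full formal proof, not a method from the original paper):

```python
# Hoare-style specification checked at run time: a lightweight stand-in
# for formal correctness proof (illustrative sketch only).

def integer_sqrt(n):
    """Precondition:  n >= 0.
    Postcondition: r*r <= n < (r+1)*(r+1), i.e. r is the integer square root."""
    assert n >= 0, "precondition violated: n must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        assert r * r <= n  # loop invariant, maintained on every iteration
        r += 1
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

print(integer_sqrt(10))  # 3, since 3*3 <= 10 < 4*4
```

Tools in the formal-methods tradition discharge such assertions statically, proving them for all inputs rather than checking them per execution; the run-time version shown here conveys the same contract in executable form.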

Significance of the 1982 research paper “SAS – an experimental tool for dynamic program structure acquisition and analysis”

It was against the historical background described above (the need to be able to verify whether a program is functioning as expected) that the 1982 research paper “SAS – an experimental tool for dynamic program structure acquisition and analysis” was written. Given the potential difficulties that complex-systems theory and non-determinism posed to pre-emptive theoretical approaches, this research chose to explore the possibility of creating a tool that inspected computer software in situ, in real time, as a means of post-development verification. The work incorporated some theoretical foundations, such as McCabe’s Cyclomatic Complexity measure and the notion of logical decision constructs forming a fundamental, level-independent feature which can be extracted in real time to validate a computer’s operation. At the time the paper was published, there were instruments called ‘Logic Analysers’ which were used to capture the activity of computer hardware, but there was nothing similar for software. Thus, “SAS – an experimental tool for dynamic program structure acquisition and analysis” describes such a tool, one which went beyond simply capturing data, venturing into analytics that cast light on the operation of the software (ie a type of reverse engineering), measured run-time algorithmic complexity (Cyclomatic Complexity) and provided post-development verification of the final product.
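For readers unfamiliar with the measure, McCabe’s Cyclomatic Complexity is computed from a program’s control-flow graph as M = E − N + 2P, where E is the number of edges, N the number of nodes and P the number of connected components. A minimal sketch (the example graph is hypothetical, not taken from the paper):

```python
# McCabe's Cyclomatic Complexity: M = E - N + 2P, computed from a
# control-flow graph. The example graph below is hypothetical.

def cyclomatic_complexity(edges, num_nodes, num_components=1):
    """edges: list of (src, dst) pairs in the control-flow graph."""
    return len(edges) - num_nodes + 2 * num_components

# Control-flow graph of a function containing one if/else decision:
# entry(0) -> cond(1); cond -> then(2); cond -> else(3); both -> exit(4)
edges = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 4)]
print(cyclomatic_complexity(edges, num_nodes=5))  # 5 - 5 + 2 = 2
```

A straight-line function scores 1, and each additional decision point adds 1, which is why the measure could serve SAS as a run-time indicator of algorithmic complexity: counting decision constructs as they execute is equivalent to traversing branches of this graph.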

Victor Callaghan – 9th March 2021