Is it a bird? Is it a plane? No, it’s a bid!
Computer algorithms impact every aspect of our lives. We live in a computer-algorithmic age, an age of artificial intelligence (AI). AI in fiction and in real life has now surpassed that of Isaac Asimov’s robots and the three laws of robotics, namely that a robot:
1. May not injure a human being or, through inaction, allow a human being to come to harm.
2. Must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. Must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Possibly the most iconic artificial intelligence of the twentieth century was HAL, the algorithmically programmed controller of the spacecraft’s system functions in the novel 2001: A Space Odyssey. In the film version of 2001, HAL’s artificially intelligent presence is made visible by a glowing red light, itself symbolic of a device that is active and quite often representative of a hazard. Human-to-AI relationships in fiction are made real by the interactions created, usually where the human is able to talk to the AI and receive a response; a relationship that is sometimes benign but more often malevolent.
In the film version, the crew come to believe that HAL’s cognitive circuits have mistakenly reported a malfunction, and conclude that the solution is to disconnect HAL’s control functions, a solution, they realise, that HAL may resist. They are unaware that a flaw in HAL’s AI cognizance has created a conflict between a requirement to relay information accurately and a requirement to withhold specific information. Aware of the crew’s intention, HAL resolves the conflict by deciding that, in order to protect and continue its programmed artificial intelligence requirements, the crew must die.
Less well known is the artificial intelligence in the film Dark Star, which is integrated into each “Thermostellar Triggering Device” (TTD), a bomb used by the crew to blow up “unstable planets”. When Dark Star’s release mechanism for one TTD (Bomb-20) malfunctions, an attempted repair causes Bomb-20 to stop responding to verbal commands. Bomb-20 remains in the bomb bay with its countdown sequence to detonation active. Its failure to disarm itself or to abort the countdown leads to Bomb-20 being engaged in a discussion that explores phenomenology, which results in Bomb-20 aborting its countdown in order to cognize the newly received data. Introducing Bomb-20 to phenomenology made the bomb’s AI cognizant of Cartesian doubt – with catastrophic results.
In the last five years, autonomous artificial intelligences have come to dominate the stock exchanges. In doing so, not only do they fail to address what constitutes ‘harm’ in human terms, there is also no ‘time’ for human intervention before the artificial intelligence autonomously executes its programmed function. The artificial intelligences of HAL and of a TTD are allegories for the consequences of autonomous AI interventions.
A rogue-algorithm day on the markets, along with the ‘flash crash’ of 2010, when the market lost trillions of dollars of value in minutes, prompted a Time Business article, High Frequency Trading: Wall Street’s Doomsday Machine? High-frequency trading (HFT) is a term for the use of computer algorithms, in effect AI, to execute trades at very high speeds – sometimes thousands or millions of trades per second. Developed over the past ten years, these systems have come to dominate Wall Street over the last five.
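To illustrate the speed asymmetry behind such trading, a minimal sketch of an algorithmic decision rule follows. All prices, thresholds, and the rule itself are hypothetical, invented for illustration; real HFT systems are vastly more complex. The point is structural: the loop decides on every incoming price tick, leaving no room for human-speed intervention.

```python
# Toy sketch of an algorithmic trading rule (all prices, symbols and
# thresholds hypothetical): buy when the latest price dips a fixed
# fraction below a short moving average, sell when it rises above it.
from collections import deque

def trading_signals(prices, window=3, band=0.01):
    """Return a list of (tick_index, action) decisions for a price stream."""
    recent = deque(maxlen=window)  # rolling window of the last `window` prices
    signals = []
    for i, price in enumerate(prices):
        if len(recent) == window:
            avg = sum(recent) / window
            if price < avg * (1 - band):
                signals.append((i, "BUY"))   # price dipped below the band
            elif price > avg * (1 + band):
                signals.append((i, "SELL"))  # price rose above the band
        recent.append(price)
    return signals

# A human needs seconds to react; this loop decides on every tick.
ticks = [100.0, 100.1, 100.0, 98.5, 100.0, 102.0]
print(trading_signals(ticks))  # → [(3, 'BUY'), (5, 'SELL')]
```

Even this toy rule acts the instant a threshold is crossed; scaled to thousands of decisions per second, there is simply no window in which a human could review, let alone veto, any individual trade.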
High-frequency trading requires Large-Scale Complex IT Systems (LSCITS), where IT (information technology) is in effect synonymous with AI (artificial intelligence). Artificial intelligence applications now pervade every aspect of our lives, and as a paper on LSCITS stated:
These (LSCITS) are being built without an understanding of how to analyse their behaviour and without appropriate engineering. There are fundamental reasons why existing approaches cannot be ‘scaled-up’ to create coalitions of systems and that incremental improvements to today’s methods are not enough to cope with complexity. We need to think differently to address the urgent and growing need for new engineering approaches that can help us construct complex systems that we can trust.
The fact is that a lot of the stock-trading world at this point – especially when it comes to high-frequency algobots (algorithmic robots) – operates on a level which is simply beyond intuition. Pattern-detecting algos (algorithms) detect patterns that the human mind can’t see, and they learn from them, and they trade on them, and some of them work, and some of them don’t, and no one really has a clue why. (Felix Salmon – Reuters financial blogger)
A review commissioned as part of the UK Government’s Foresight Project, The Future of Computer Trading in Financial Markets stated:
The concerns expressed here about modern computer-based trading in the global financial markets are really just a detailed instance of a more general story: it seems likely, or at least plausible, that major advanced economies are becoming increasingly reliant on large-scale complex IT systems (LSCITS): the complexity of these LSCITS is increasing rapidly; their socio-economic criticality is also increasing rapidly; our ability to manage them, and to predict their failures before it is too late, may not be keeping up. That is, we may be becoming critically dependent on LSCITS that we simply do not understand and hence are simply not capable of managing.
Has artificial intelligence finally opened Pandora’s Box leaving hope locked inside?