As Sentient as a Box of Frogs: the need for careful communication of developments in Artificial Intelligence

 

Astounding achievements of AI are coming through thick and fast and will continue to do so. These tremendous accomplishments are unfortunately being received by the public at large as prognostications of doom. Playing fast and loose with language is a big part of the problem.

Every academic in the field of machine learning would laugh in the face of anyone who suggested that we were any closer than a squillion miles from replicating human-level cognitive abilities, let alone human sentience. The public at large think otherwise. Ill-informed journalism (with more than an occasional dollop of journalistic licence) has not helped. The ill-informed pronouncements of prominent businessmen and renowned scientists have arguably been even more deleterious.

For better or worse, we’re now at or very near a point where both individuals and institutions are making important decisions (parents guiding the future studies of their children, governments and think-tanks contemplating the need for a universal income, etc.) in part based on anticipated “man vs. machine” futures. The choice of language of the research community and other pundits has to become more judicious. Even the simple term “machine learning” is a tremendously loaded (and slightly sinister) way of saying not much more than “fitting a model to data”. As for the term “artificial intelligence”… please don’t get me started. I suspect even the acronym AI is stored in my brain somewhere between “dishonest” and “downright fraudulent”.
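To make that point concrete, here is a minimal sketch, in Python and with entirely made-up numbers, of what a great deal of “machine learning” amounts to in practice: fitting a model to data and using the fitted model to make predictions. It describes no particular company’s system; it is simply the textbook operation hiding behind the loaded vocabulary.

```python
# A deliberately mundane illustration: "machine learning" as fitting a model to data.
# The numbers are invented for the example; only the idea matters.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

X = np.arange(10, dtype=float).reshape(-1, 1)         # ten observations of an input
y = 2.0 * X[:, 0] + 1.0 + rng.normal(0.0, 0.5, 10)    # a noisy output we wish to predict

model = LinearRegression().fit(X, y)   # the "learning": estimating a slope and an intercept
print(model.predict([[12.0]]))         # a prediction for an input the model has not seen
```

Dressed down like this, the activity sounds considerably less sinister – which is rather the point.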

Professor David Silver comes across as someone who is extraordinarily careful – meticulously so – with his choice of language in describing the achievements of AlphaGo and, more recently, of AlphaGo Zero (see https://www.youtube.com/watch?v=WXHFqTvfFSw). Nevertheless, many of my friends and family have interpreted the announcements as – no exaggeration – a strong signal of our imminent demise as a species. Although it goes against the grain of celebrating success, it would be tremendously worthwhile to accompany such announcements with some explanation of how the conditions in which they have succeeded (static rules, fully transparent information, symmetric player objectives, the ability to train models through realistic simulations, etc.) differ markedly from the vast majority of messy real-world environments.

Compelling analogies that everyone can understand can also help to portray where we are and where we might be headed. Below are three that have worked for me:

1. Evolutionary learning – bounding machine learning’s strengths and limitations

Instincts are selected for (learnt) over evolutionary timescales and drive the behaviours of the vast majority of living creatures. They are the product of generations and generations of experience of generally quite stable environmental conditions. Causal, episodic learning – mostly the preserve of mammals – can happen on the basis of very few instances of experience. It permits its possessors to adapt quickly to significant changes in circumstances and to rapidly learn new tasks.

Machine learnt algorithms are like instincts. They require huge amounts of experience (training data) and – although often outstanding in performance on the trained-for task – will fall over, just as a species will become extinct, if required to perform the same task (let alone a different task) in an environment not adequately represented during training.
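For readers who would rather see the brittleness than take it on trust, here is a toy sketch in Python (the data and model are invented purely for illustration; no real system is being described). A model is given plentiful experience of one narrow environment, performs admirably there, and falls over the moment the same task is posed in conditions it never saw during training.

```python
# Toy illustration of "instinct-like" learning: superb in the trained-for environment,
# hopeless outside it. Data and model are purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def task(x):
    # The underlying task: predict sin(2*pi*x) from x.
    return np.sin(2.0 * np.pi * x)

# Plentiful experience, but only of a narrow, stable environment: x between 0 and 1.
X_train = rng.uniform(0.0, 1.0, size=(5000, 1))
y_train = task(X_train[:, 0]) + rng.normal(0.0, 0.05, size=5000)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Same task, familiar conditions: near-perfect performance.
X_in = rng.uniform(0.0, 1.0, size=(500, 1))
print("R^2 in the trained-for environment:", round(model.score(X_in, task(X_in[:, 0])), 3))

# Same task, conditions never represented in training (x between 2 and 3): it falls over.
X_out = rng.uniform(2.0, 3.0, size=(500, 1))
print("R^2 in an unfamiliar environment: ", round(model.score(X_out, task(X_out[:, 0])), 3))
```

The second number is the analogue of the extinct species: same task, different environment, no graceful adaptation.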

To be sure, research is now focusing on how to reduce the extent of training required and on how to reproduce episodic learning, but these efforts are still very much in their infancy (see, for example, Raia Hadsell of DeepMind at https://www.youtube.com/watch?v=0e_uGa7ic74, and Yann LeCun of Facebook at https://www.youtube.com/watch?v=bkpAT4zx8QU). We will remain that squillion miles from human-level cognitive ability for, I suspect, a very long time.

2. Electrification of manufacturing – gauging the nature and extent of economic impacts

Tim Harford wrote (see http://www.bbc.co.uk/news/business-40673694) of how it took over forty years from the invention of the first usable light-bulb in the 1870s before substantial productivity gains were achieved from the introduction of electricity into manufacturing. Realising these gains required overcoming the capital costs needed to re-architect steam-powered factories arranged on the logic of the driveshaft into ones organised on the logic of the production line. He similarly describes how gains from the introduction of computers took time because “You couldn’t just take your old systems and add computers. You needed to do things differently” (through decentralisation, outsourcing, streamlining supply chains, etc.).

The same will be true of machine learning. To take a granular example: beginning a new banking relationship can be painful for businesses large and small. Banks are obliged, in order to comply with stringent anti-money laundering (AML) regulation, to perform numerous identity checks and risk assessments. Much of this work – which involves verifying identities against trusted public sources, screening global media for adverse events, performing litigation and bankruptcy checks, etc. – is today performed manually. The very first machine learning “point” solutions are just starting to appear, each automating one isolated manual step. To be sure, the short-term impact is fewer required man-hours of labour. It is only once the entire process is automated end-to-end that we can contemplate gains so great that banks might conceivably be able to risk-score all target corporate customers, at least partially, in advance of doing business with them – even potentially adjusting their sales efforts and pricing to reflect better vs. worse AML risks. If they were able to do so, we would see more lending decisions made more efficiently and, arguably, more businesses financed. Technology vendors would have to engineer, sell and support these end-to-end solutions. In the medium term, the employment gain would surely be positive.

I can think of many other examples. Economic growth is not a “man vs. machine” zero-sum game. If my children are going to find gainful employment in the coming one to two decades, I am hoping for swifter, more pervasive implementation of machine learning, not the opposite.

3. Science fiction futures – hypothesising over societal change

Most comparisons with science fiction dwell on whether we are headed for a benign, human-centric future along the lines of Star Trek: Discovery (in which the onboard AI carries out every request with exquisite, almost condescending, politeness – especially in the face of imminent disaster) or the somewhat less agreeable “human as biofuel” future of The Matrix. Who knows?

What I do know is that the future will be much more William Gibson than Arthur C. Clarke.

Golden Age Science Fiction is now so readable in part because of the charming way in which the imaginativeness of its space visions is matched only by the conventionalism of its social visions. Women are the maternal hostesses of spaceships captained by strong-but-flawed men in need of a bit of mothering. The social conventions of the 1940s and 50s now come across as more alien than the alien landscapes into which they were lifted and dropped. Technological change in this genre has no impact on societal norms.

William Gibson’s “Neuromancer” (published, remember, in 1984, when the ZX Spectrum was awe-inspiringly moving coloured blobs around on TV screens) is known for having first envisioned something akin to the internet. It is no less astounding for the way in which Gibson portrays a world where societal norms have changed markedly because of technology (widespread performance-enhancing cyborgian surgery, new skills – languages and the like – acquired through plug-and-play brain chips, dynastic self-preservation through cloning and cryogenics, etc.) and yet seem entirely believable because they are set against a backdrop of unchanging human drives and needs. Gibson doesn’t signpost these new norms in any way, and his world is all the more realistic as a consequence.

Google has just announced the release of its Pixel Buds, permitting real-time translation between 40 languages. Will real-time, in situ translation result in greater understanding between cultures or – if fewer people learn languages and fewer gain the resulting appreciation of the associated national mindsets – less? I have no idea. What I do know is that societal norms compared across decades will change radically but, paradoxically, not in ways particularly noticeable on the day-to-day timescales in which we live our lives. Last weekend I inexcusably interrupted a tennis match to point out that a drone was hovering above us, the first I had seen. The other players looked at me as if I were some kind of dark-age simpleton. Drones are, within the space of so little time since their introduction, fast becoming unremarkable. We should be reassured by this.

Remember – no academic in the field would suggest that we are any closer than that squillion miles from replicating human sentience. If this is to be more widely understood, the choice of message and language with respect to “AI” needs to be tightened up. Analogies can be helpful in this respect.

Speaking of language, and on a more light-hearted note: for those other parents of teenage children who feel as beleaguered by their children’s torpid use of language as the last, embattled Romano-British in the face of the relentless advance of the uncouth Anglo-Saxon horde (or something like that), one final analogy to leave you with (gratefully purloined from https://writingenglish.wordpress.com):

“Her vocabulary was as bad as, like, whatever.”

Sometimes I wonder whether sentience is all it’s cracked up to be.