Artificial Intelligentsia
We can't stop talking about Artificial Intelligence in foggy, unsophisticated ways - and it's leading us down a path we may never be able to turn back from
Bloodletting the machine
I was inspired to write after sitting in on a House of Lords Select Committee session online yesterday about Artificial Intelligence and Ethics, which turned a mixture of slow-cooking ideas into something that I felt I could finally share.
(For context - the House of Lords is the second chamber of the British Parliament - an institution designed to examine bills, question government action and investigate public policy. Among other things, it’s supposed to be one of a number of checks and balances that keep democracy functional and lawful, and preserve any number of civil and social safeties that might be threatened by the act of lawmaking. In some of these things, it succeeds extremely well. In others, it fails catastrophically.)
Anyway - a Select Committee is a dedicated group of influencers, thinkers, business leads and policymakers that is mobilised to discuss, investigate and help inform the direction of travel for compelling issues of policy and public interest. In this instance, they’ve been rolled out to discuss what may well be the most significant existential technology issue in history - how to model the fairest, safest version of human ethics systems in a world of artificially intelligent, self-learning machines.
Distilled:
How do we model human-centric ethics ideas sufficiently, accurately and safely within the world’s most powerful computing systems - and further, how do we construct new ways of imagining, testing and deploying these ‘sentient’ technologies in ways that do not threaten human safety, social justice and civil liberty?
Alternatively:
How do we prevent Boston Dynamics robots from becoming racist robocops enforcing the policies of a technocratic, corporate super-state, deployed by banks whose software automatically decides never again to give mortgages to poor people?
The extremely unfortunate problem is that, broadly speaking, the people currently tasked with facilitating this discussion don’t appear to fully understand the nature of the problem they’re evaluating - and considering they’re supposed to be formulating the UK’s international AI position, that is fairly bad news.
Forgive them, for they know not what they do
There are a number of things I think are causing sustained noise and distortion in the spaces of Artificial Intelligence (AI), Machine Learning (ML), data privacy and the future of work, and that are setting us on a slow but definite collision with brand-new problems for social cohesion.
1. True AI doesn’t even exist
Neither ‘AI’ nor ‘ML’ (as a function of AI) *really* exists today.
Almost invariably, when mainstream voices discuss applied AI, they’re actually talking about some form of reasonably well-documented decision-tree logic - and when we talk about ML, we’re talking about deploying statistical inference across large datasets. Neither is a new idea.
The modern difference, however, is the sheer computational power, research resources and software sophistication we’re now able to throw at the two disciplines - which creates frankly staggering analytical power: hundreds of millions of computations at a throughput we’ve just never seen before, on datasets larger than anyone can imagine. But the notion that these computing systems can ‘think’ or deploy genuine, adaptable intelligence is wildly overstated - they’re exactly as intelligent (and as limited!) as the humans who design and build them.
That, in fact, is almost the entire problem with algorithmic bias. Creating machine ‘brains’ through the prism of their engineers’ biases often results in systems that accidentally act in ways that mirror the traits, biases and privileges of their creators. Make no mistake - these are powerful, ground-breaking technological strides, but we need to be very specific about how we refer to them to avoid misunderstanding the problems they’re creating.
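To make the distinction concrete, here is a minimal sketch - every name, threshold and data point is hypothetical, not drawn from any real system. The first function is ‘AI’ as it is usually deployed: hand-written, documented decision-tree logic. The second is ‘ML’ at its barest: statistical inference (a least-squares fit) over a dataset. Scale these up by a few hundred million computations and you have the modern industry - but neither function ‘thinks’.

```python
# Hypothetical sketch: 'AI' and 'ML' as they are most often deployed.

def loan_decision(income, existing_debt):
    """'AI' in practice: a small, hand-written decision tree.
    The thresholds are invented for illustration."""
    if income < 20_000:
        return "reject"
    if existing_debt > income * 0.5:
        return "refer"
    return "approve"

def fit_line(xs, ys):
    """'ML' at its simplest: least-squares statistical inference
    over a dataset, returning (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept
```

Both are transparent, decades-old techniques; the only thing ‘modern’ about them is the scale at which they now run.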
2. What does ‘Intelligence’ actually mean?
There is no current, shared public definition of ‘intelligence’ in the context of modern systems engineering.
And there is no mechanism or incentive to distinguish AI from other regular, ultra-modern automation technologies - which is creating an enormous category error. ‘Intelligence’ is, of course, a profoundly difficult thing to pin down, and a concept in a perpetual state of philosophical discussion. But now, it’s work that we must do.
The fact that we can’t discuss machine intelligence clearly and objectively means that we cannot distinguish between the genuinely difficult problems of rapid technical advancement in business, the fundamental, existential problems of future human/machine cohabitation, and the places where the two overlap. As is always the case, the current money is in building the stuff and deploying it early in the interest of huge corporate gain - and so politicos and policymakers are left chasing the tail-end of hyper-modern technologists already deploying ‘AI’ in extremely problematic formats that are currently impossible to safeguard against, regulate or legislate for.
This is precisely what happened with data privacy, and with the Internet, in general. There was nobody smart or powerful enough in public policy, regulation or social theory to understand the deep ramifications of an always-on, permanently connected, culturally fractured public world - and as such, in 15 short years, Facebook ended up mutating from a naive, university social network into an obscene money-printing factory and hotbed of global political propaganda, culture war and weaponised corporate advertising. A hyper-curated, pseudo-reality of imagined hostilities and extremisms - forming a non-trivial part of behavioural change initiatives by every major government in the world. A team too busy asking ‘can we build it’ rather than ‘should we build it’ - to extraordinary effect.
And our decisions on data are now made. The work we need to do now, amazingly, is backwards. We gave away everything and are now in a constant, sustained fight to take back the rights to our own privacy and digital volition. We face the same problem with a number of emerging technologies - AI, ML and robotics, included.
The people who understood the Internet early and deeply were paid astonishing amounts of money to deploy new web technologies, regardless of their ethical or privacy consequences. Those that did not were left attempting to legislate around the titanic power and agility of a newly technology-heavy capital market, now awash with more cash and operating power than any other businesses in human history. We face the same problem with AI and ML technologies because we’re not specific enough about the problems that they’re presenting us with, and we’re not motivated enough to get ahead of an exponential innovation curve.
3. The market is irrational
The hype is real.
Marketing hype around both AI and ML as emerging fields reduces the majority of ‘AI’ and ‘ML’ companies and their projects into the same category as the majority of ‘Dotcom’, ‘Blockchain’ and ‘Crypto’ companies - in that they are radically over-financed traditional software houses, packed wall-to-wall with smoke and mirrors that deliver a tiny portion of what they advertise, while wrangling the current zeitgeist to secure massive financing rounds from institutional investors who neither understand, nor really even care what they’re investing in.
These businesses are experts at hoovering up VC money and manipulating technology narratives, but are not always product or domain experts. There are of course exceptions to this rule - but in a venture environment that is basically just a huge spread of bets of varying risk, that doesn’t matter enormously as long as one of the gambles pays off eventually.
4. We usually mean something else
When we talk about AI and ML, we’re actually currently almost always talking about regular machine automation - repurposed to look like the magic of intelligence.
But they’re not magic, and they can’t think. They’re old ideas on monstrously powerful new platforms, creating outcomes that few can predict and leaving the legislation and safeguards of the past in their dust - to the horror of civil rights thinkers, public policy experts and ethical technologists all over the world.
Machine-led automation is with us already - it has been for the best part of a century - and advances in long-established computing fields have now thrown old automation’s already massive potential into overdrive. Tens of billions of rows of data are being analysed under dramatically increased computational power, through networks of highly sophisticated logic pathways. Decisions stored and adapted. Their successes, carefully measured. This means one thing - machines are orders of magnitude better than humans at work that requires precision, consistency and unbroken focus - and machines are being drafted in to replace us at exactly that work. Make no mistake - this is a labour crisis present in every ‘non-technical’ industry from retail to haulage.
And there’s a cruelty within it, underpinned by a major failing - that technology is supposed to help us. It is supposed to create solutions to problems that we aren’t motivated or well-designed to solve. On the back of our machine outsourcing, we’re supposed to have more time for novel, human interests - creativity, community, safety - deploying the wealth that machines now create for us in ways that give us back our humanity and freedoms. What has happened is the opposite. The created wealth has been hoovered upwards by a tiny collection of extremely powerful technology proprietors, and the regular labour force - massive numbers of human communities, formerly self-sustaining and capable - is left without work, perhaps permanently. At scale, this is a human work and productivity problem that we’ve never faced before. Manual industries have always risen and fallen - but never all of them, and never all at the same time.
The consequence of a highly-automated world that uses machines to vacuum up all of the wealth it creates is poverty. Instability. Devastating social displacement and disillusionment. This is why the applied mechanics of intelligent automation needs to be delicately and strategically managed to avoid massive labour events that decimate the proud core of traditional communities.
All the monsters looked like me
There’s an in-joke in software that says when you can’t be bothered to explain what you’ve built to management, just say ‘algorithm’ over and over until they’re backing out of the room. As with all good jokes, there’s some truth in it. ‘Algorithmic everything’ is a chapter of software development that seems to have resulted in calling anything more sophisticated than a 4-case if-else statement ‘The Algorithm’, giving developers and product managers carte blanche to do whatever the fuck they want with the three months they just made out of thin air and total bikeshedding bullshit.
Anyway, algorithms are a ‘finite sequence of well-defined, computer-implementable instructions, typically used to solve a class of problems or perform a computation’. The rules that make up the basis of algorithms, and all patterns of applied logic, come from somewhere in order to model that ‘thing’ correctly. Rules, in the real world, are borne of a series of material and environmental limitations and consequences, which you can see clearly in applied examples, like the law.
This tends to mean that no matter our efforts, we keep writing computer instructions that model the rules, biases, privileges and understandings presented to us in the real world. Given that the ‘real world’ is such a dramatically varied place to live depending on a person’s socio-economic standpoint, demographic or background, we’re inevitably making computers model the rules and institutional patterns that currently exist in service of the most fortunate in society. This is a massive, massive problem for humans and machines. It’s not difficult to imagine a world that leans on cutting-edge decision-making technology forever - on everything from insurance, to healthcare, mortgages and finance. And in that equation, there are sustained winners and sustained losers - and nobody left to talk it out with. The worst parts of social inequality become mirrored in a dispassionate network of computing nodes, where every human decision is, in effect, a 1 or a 0.
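The bias-mirroring problem can be sketched in the same toy terms - a hypothetical example, with invented data, not any real lender’s system. A system ‘trained’ on historically prejudiced decisions learns nothing about the applicants themselves; it learns only to repeat the prejudice:

```python
# Hypothetical: historical mortgage decisions as (postcode_group, approved)
# pairs. Group "B" was systematically rejected by past human decision-makers.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

def learn_approval_rates(records):
    """'Learn' a per-group approval rate from past decisions."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [approved for g, approved in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def automated_decision(group, rates, threshold=0.5):
    """The 'intelligent' system: approve only the groups the past approved.
    It has modelled the old prejudice, not anyone's creditworthiness."""
    return rates[group] >= threshold
```

Nothing in this pipeline is malicious, and every line is defensible in isolation - which is precisely why the resulting 1-or-0 decisions are so hard to challenge.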
Of course, completely free technology markets manifest exactly the same structural problems as completely free financial markets. Inevitably, monopolies and patterns of structural corruption form, because capital is a magnet for other capital, and the people least able to protect themselves end up becoming casualties of a system that exploits them mercilessly in the name of quasi-religious ideas of productivity and profit. This is the exact reason that we have financial regulation - and it’s precisely why we need to be forthcoming with regulation in AI and ML technologies, as well as more generalised automation technologies. The critical issue, however, is how to allow the brightest minds in the world the full ability to innovate, without wrapping them up in hundreds of different innovation-stifling regulatory bottlenecks that prevent genuine breakthroughs for human good.
Regulation isn’t a silver bullet though, and it’s not even close. It patches problems in the interim, but can often end up perpetuating other, inherently harmful hierarchies anyway. Management of emerging technologies needs to be sensitive to that for the same reasons that we need to be pro-active in eliminating algorithmic bias in machines that can never understand the deepest parts of being human.
Better living through chemistry
I think there are obvious steps that we need to take to improve discourse around AI and ML, which will go some small way to helping us better formulate human-centric responses to profound and brilliant technology advances.
Some of these are:
We need to work together to define and widely communicate clear, shared definitions of AI, ML and Automation - not just in the context of their technologies, but also through the prism of public policy and labour markets. These definitions need to be under constant review, and they need to be informed by a trinity of engineering input, philosophical input and oversight from the appropriate applied public policy bodies.
In light of that, we need to be extremely precise - surgical, in fact - with our use of language around these topics into the future. This is the only way to un-muddy waters in which, currently, ‘everything is a robot that might wake up and become evil’, when there are far greater short-term risks borne of labour, inequality and social cohesion disparities to contend with first.
We need much more diverse panels to discuss and hash out solutions in this field. The House of Lords select committee was overwhelmingly old, white and affluent. We need meaningful diversity of background - age, race, gender, professional skills - a melting pot of humanity at this table. The problem requires a matrix of different viewpoints in order to get to meaningful, applicable solutions that serve the broadest possible audiences.
We need targeted, dedicated education initiatives to better understand ethics in technology. University courses need to blend Philosophy, Ethics and Critical Theory courses with Computer Science and Software Engineering courses. It is not enough to ask ‘can we build it’ anymore - because the answer, generally speaking, is ‘yes’. We must normalise asking ‘should we build it’ - because the answer, in the interests of justice and fairness, is actually often ‘no’.
We need a much better deployment of groups with a sole mandate to support deep technology innovation, while simultaneously ensuring that it’s not used in ways that create dangerous, unfair human consequences. These safeties need to be put front and centre - but without wrapping the entire emerging sector in a thick layer of regulatory bottlenecks. In the context of international development, this is difficult. Some countries will have no commitment to this, and others will have a strong commitment to it. This creates imbalances in progress that present military and structural threats to the world’s political order.
We need to create a full, coherent understanding of automation as not just a technology and business problem but one of major radical social consequence. This is not mines closing, or factories replacing manual labour - this is deep, machine computation applied to almost every sector and industry. We must look at the endgame here. What does a world without work look like? How are resources shared, and why? What are our new rights? What are our new relationships with labour, productivity, recreation, leisure and idleness? What kind of a society is left when machines are deployed at the expense of the least privileged in society?
We need measurable, structural objectives that allow us to understand and influence our position in a complex matrix of humans, technologies and trade patterns for the safety and liberty of the least fortunate in our societies.
As it stands, the UK has declared an ambition to become the global leader in ethical AI - but what evidence exists that there is the motivation, interest and reward available to be able to do that? Further - if our elected representatives can’t organise on this without heavy input from hands-on technologists, how are we going to influence those discussions for the greater good?
Language is important. There’s a case to say that using language properly on issues of great structural and social importance may be the most important part of getting to the best idea in the room - and not just the ideas of those being paid the most. And the good news is that, although many things aren’t, our language is still within our control - and using precise, thoughtful definition to navigate our most complicated, multi-armed topics means we might just have everything we need to turn the boat around.
Have a day.
T