Becoming Sentient: Controlling the Machines

Alex Dixie in Advertising


On 11 May 1997, the final move of an epic chess match took place. After several days of intense competition, IBM’s ‘Deep Blue’ computer beat reigning world chess champion Garry Kasparov over six games. Huge media attention followed this tale of man vs. machine, and the ultimate victory for the machine.

Although far from the first example of artificial intelligence – or of raw computing power harnessed to ‘learn’ how to outperform humans – the story ignited the public’s interest in the power of these new, artificially intelligent machines.

Twenty years later, we are moving into a brave new world where artificial intelligence is no longer a spectacle pitted against the best humans can offer, but an omnipresence, whirring in the background and making vital decisions that affect our everyday lives. These algorithms have been in control for some time, albeit hidden behind the scenes. ‘Big Data’, ‘machine learning’, ‘artificial intelligence’: we have many ways of describing the technologies powering this quiet data revolution. Although some of these terms have specific meanings, often they are simply a marketing ploy to add spin to the latest algorithm created by TheNextBigThing, Inc. and to differentiate a slightly novel approach from the thousands of similar technologies already in existence.

Occasionally, however, something groundbreaking is developed: a new system that will actively change the way we perceive, interact with or experience the world. Think Google, online dating sites, high-frequency stock trading, handwriting recognition and new smart-home devices such as Amazon’s Echo. Not all of these launch to fanfare, nor are they all marketed as artificial intelligence, but all are having a significant impact on the way the connected world works.

 

Smart by default

“By 2018, six billion connected devices will proactively ask for support.”
Gartner

We are moving into a world where the concept of a ‘connected’ device is fast becoming obsolete – all devices will soon be ‘smart’ by default. As this technology moves into our homes and becomes an increasing part of our lives, the machine-human interaction is becoming more personal and less clearly defined. The current widely-adopted pinnacle of technology is the smartphone. With a few exceptions, however, the smartphone remains an ‘active’ piece of technology: in order to interact, people need to actively decide to do so – removing the phone from a pocket, unlocking the screen, telling the phone what to do next. Importantly, an ‘active’ piece of technology also allows the user to decide actively not to use it – you can turn off your smartphone, for example, or simply leave it elsewhere.

‘Passive’ technologies increasingly form the new wave of intelligent devices. These devices do not require a human interface to operate – instead they work silently in the background, able to communicate with each other, to be controlled from a central smart hub (such as an Amazon Echo or Google Home) and to proactively make decisions rather than simply reacting to a human instruction. The key differentiator is that passive technologies are always on. The first reaction to an Amazon Echo is almost without exception: “Is it always listening?”.

What this means is a network of new data flows, accessible to device manufacturers across the globe. Some of this data will be used to benefit the end-user: can’t remember whether you have milk at home? Let your fridge check and alert you when you are running low; proactively receive a notification from your washing machine that it has an issue and needs an engineer visit; or simply switch on heating and lights as you come to the end of your road following a tiring commute home.
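
To make ‘proactive’ concrete, here is a minimal sketch in Python of the kind of rule a smart fridge might run on a timer. The item names, thresholds and notify callback are all invented for illustration – this is not any real product’s API, just the shape of the logic.

```python
from dataclasses import dataclass

@dataclass
class InventoryItem:
    name: str
    quantity: float       # current stock, e.g. litres of milk
    reorder_level: float  # threshold below which the owner is alerted

def check_inventory(items, notify):
    """Proactive rule: the fridge decides, unprompted, to alert its owner.

    `notify` is a hypothetical callback; a real device would push to a
    phone app or a central hub such as an Amazon Echo.
    """
    for item in items:
        if item.quantity <= item.reorder_level:
            notify(f"Running low on {item.name} ({item.quantity} left) - "
                   f"add it to the shopping list?")

# The device runs this periodically, with no human instruction at all.
fridge_contents = [InventoryItem("milk", 0.2, 0.5),
                   InventoryItem("butter", 1.0, 0.25)]
check_inventory(fridge_contents, notify=print)
```

The point is less the code than the trigger: nothing here waits for a user to ask.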

Other data streams could have more questionable applications – your same smart fridge reporting to your health insurance provider that you consume high levels of unhealthy foods, resulting in your annual premium increasing.

The next phase (which we are already beginning to experience) is the device that not only records and reports but actually makes its own decisions on behalf of its owner.

The question then becomes, when making these decisions, in whose best interests will (and should) these machines operate?

 

An Ethical Dilemma

As key decision making starts to move from human to machine, a natural fear begins to grow: how do we ensure that ‘humanity’ remains within the decision-making process as the humans are removed? By this we mean the capacity for empathy, for ‘bending’ the rules because it is the right thing to do, for understanding, for equality and fairness. A faceless algorithm may well be able to make a statistically perfect decision every time, but that does not mean the outcome will necessarily be seen as the correct one.

Again, consider the insurance market. Imagine that at the point you complete your application for medical insurance, a machine scans the profile it has created of you (established by reading your emails, reviewing your social media pages and matching with similar data it holds on your family members) and quotes you a materially increased premium, having decided that you have a statistically higher-risk lifestyle and chance of illness. This may well be entirely correct from an actuarial perspective, but most people would question whether it is appropriate.
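
To see how mechanical that judgement is, consider a deliberately simplified sketch of the scoring logic involved. The risk signals, weights and premium loading below are entirely invented and bear no relation to any real underwriting model; the point is the shape of the calculation – a mechanical mapping from scraped profile data to a price.

```python
BASE_PREMIUM = 600.0  # hypothetical annual premium

# Invented weights: each profile signal contributes to a risk score.
RISK_WEIGHTS = {
    "unhealthy_food_purchases": 0.30,  # e.g. inferred from emails/receipts
    "sedentary_lifestyle":      0.15,  # e.g. inferred from social media
    "family_history_flag":      0.40,  # matched from relatives' data
}

def quote_premium(profile):
    """Map a (hypothetical) behavioural profile to a loaded premium.

    Each signal is a 0-1 intensity; the weighted sum becomes a
    multiplicative loading on the base premium. The arithmetic is
    trivially 'correct' - it is the inputs that most people would
    consider inappropriate.
    """
    score = sum(RISK_WEIGHTS[k] * v for k, v in profile.items())
    return round(BASE_PREMIUM * (1 + score), 2)

applicant = {"unhealthy_food_purchases": 0.8,
             "sedentary_lifestyle":      0.6,
             "family_history_flag":      1.0}
print(quote_premium(applicant))  # 600 x (1 + 0.73) = 1038.0
```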

Although for now we can contain these decisions within a well-defined set of rules (for example, by introducing a legal restriction on profiling for the purposes of establishing insurance risk), a two-fold issue remains: firstly, the pace of technological development in these fields is increasing exponentially, meaning that any associated rules framework struggles to keep pace; and secondly, as the machines learn and become increasingly intelligent, new and unexpected outcomes will emerge.

Within the setting of his futuristic robotic fiction, Isaac Asimov famously proposed his ‘Three Laws of Robotics’:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Although these laws relate specifically to ‘harm’ or ‘injury’ to a human or the robot, you can see how a similar set of fundamental principles may need to be developed and enshrined in law to ensure that the technology being developed continues to work to the overall benefit of humankind.
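
As a toy illustration of what ‘enshrining principles’ might mean in practice, Asimov’s hierarchy can be encoded as an ordered preference over candidate actions. In the sketch below the three predicates are placeholders – deciding whether an action really ‘harms a human’ is, of course, the genuinely hard part that the sketch glosses over.

```python
# Asimov's laws as a lexicographic preference: Python compares tuples
# element by element, so avoiding human harm dominates obedience, which
# in turn dominates self-preservation.

def law_priority(action):
    return (
        action["harms_human"],     # First Law: weighted most heavily
        action["disobeys_order"],  # Second Law
        action["destroys_self"],   # Third Law: weighted least
    )

def choose_action(candidates):
    """Pick the candidate that violates only the lowest-priority laws."""
    return min(candidates, key=law_priority)

# Example: self-sacrifice is preferred to letting a human come to harm,
# even though it breaks both the Second and Third Laws.
candidates = [
    {"name": "stand aside",  "harms_human": True,
     "disobeys_order": False, "destroys_self": False},
    {"name": "shield human", "harms_human": False,
     "disobeys_order": True,  "destroys_self": True},
]
print(choose_action(candidates)["name"])  # -> shield human
```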

These types of challenges can be difficult to visualise and comprehend in the abstract. The need for an ethical framework is easier to grasp with a concrete example: the self-driving car.

 

Driven to distraction?

Led by Google and Uber (and, reportedly, a secretive Apple), the technology giants are attempting to revolutionise something fundamental to modern human existence – the car.

Taking software and techniques developed in other fields, developers from these companies are looking to use artificially intelligent, self-learning ‘brains’ to transform the way we drive (or rather, to remove humans from the picture entirely). Exciting developments have already seen experimental vehicles on public roads, driven entirely without human interaction. Decision making, obstacle avoidance and optimal driving strategy are all outsourced to the computer.

On the surface, this seems like a great development. The proponents of these new technologies extol the potential benefits to the world – brandishing astonishing statistics of lives saved, jobs created, costs cut. And it is true, these technologies are exciting and revolutionary.

The impact of self-driving cars will be tremendous, saving an estimated 300,000 lives per decade by reducing fatal traffic accidents. This is expected to save $190 billion in annual critical care and triage costs.
(McKinsey)

However, a slightly murkier world comes into view when the ethics of that decision making are considered in more detail.

For example, imagine you are alone in your car, travelling at 70mph in the inside lane of the motorway. Suddenly a truck pulls across in front of you and there is not enough time to avoid a collision by braking. The only avoidance tactic available is to swerve onto the verge. However, a car has broken down on the verge and immediately in front of you a family (mother, father, daughter) is sitting awaiting rescue. If the car swerves onto the verge, it will hit the family. What would you do?

Most people would answer that they would apply the brakes and accept that they are likely to hit the truck, risking their own lives but avoiding risk to the family.

However, whether this is the optimal solution for a self-driving car depends very much on the prioritisation it applies when assessing the situation. One outcome may be that the car decides its most important, overarching role is to protect its own occupant; if so, the logical choice is to swerve onto the verge and save the driver. Another may be to minimise the overall injury to people, in which case – in common with the human driver’s reaction – the better outcome is to accept the collision with the truck.

But then imagine that you are not alone in your car but that you also have your family with you. How does this affect the decision?
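
The uncomfortable truth is how little code separates these philosophies. The sketch below uses invented casualty estimates, but it shows how a single hard-coded objective – protect the occupants, or minimise total expected harm – determines the car’s choice, and how adding passengers can flip even the harm-minimising answer.

```python
# Invented estimates for the motorway dilemma: braking harms whoever is
# in the car; swerving harms the family of three on the verge.
ACTIONS = {
    "brake_into_truck":  {"occupants_harmed": 1, "bystanders_harmed": 0},
    "swerve_onto_verge": {"occupants_harmed": 0, "bystanders_harmed": 3},
}

def decide(policy, occupants_on_board):
    """Return the lowest-cost action under the chosen objective."""
    def cost(action):
        occupant_harm = action["occupants_harmed"] * occupants_on_board
        if policy == "protect_occupants":
            return occupant_harm  # bystanders simply do not count
        return occupant_harm + action["bystanders_harmed"]  # everyone counts
    return min(ACTIONS, key=lambda name: cost(ACTIONS[name]))

print(decide("protect_occupants", 1))    # swerve_onto_verge
print(decide("minimise_total_harm", 1))  # brake_into_truck: 1 harmed, not 3
print(decide("minimise_total_harm", 4))  # swerve_onto_verge: 3 harmed, not 4
```

One constant – the number of people on board – changes the ‘right’ answer under the second policy, and nothing in the code says who gets to pick the policy.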

The point here is that without rules, these outcomes are left to be established by reference to an ethical framework developed by the technology providers themselves. Is it right that these corporations – ultimately commercial enterprises aiming to maximise revenues and profits – are responsible for establishing ethical frameworks with such hugely significant implications?

The self-driving car is just a single example, but with decisions that significantly affect individuals increasingly being made by machines, an extremely important question remains unanswered: how do you establish an ethical framework that ensures everyone, man and machine, acts in a manner considered ‘appropriate’ by the majority of the population, without over-regulating the sector and stifling innovation?

 

Thoughts on an answer

The answer may well not be a legal one. Conceiving, drafting and passing legislation is a slow process. Developing new technologies is the opposite – a fast-moving, dynamic environment forever pushing boundaries and redefining expectations. Existing legal concepts are nonetheless being extended into these new worlds: the UK’s Information Commissioner’s Office recently published a paper on the application of data protection legislation to big data, machine learning and artificial intelligence, highlighting several of the issues raised in this article.

Data protection challenges arise not only from the volume of the data but from the ways in which it is generated, the propensity to find new uses for it, the complexity of the processing and the possibility of unexpected consequences for individuals
(ICO: Big data, artificial intelligence, machine learning and data protection)

What is certain is that the answer is neither straightforward nor obvious. What is equally clear is that the discussion needs to happen sooner rather than later.

Perhaps a better approach than a rigid legal framework would be the establishment of an industry regulator to govern this nascent but fast-growing sector. This body could be responsible for creating and enforcing the rules framework by which these new technologies operate. A number of practical issues would need to be considered before this could be implemented, however. For example, would a single industry body covering ‘artificial intelligence’ even be possible, given the hugely diverse applications of the technology? It seems unlikely. Equally, as the world becomes increasingly connected and decreasingly segregated by national borders, any agreed approach would need to operate consistently across the globe – another challenge to overcome, since nations are historically protective of their ability to set their own rules.

Most importantly, any approach aimed at creating an appropriate framework needs to be developed with a view to maintaining and facilitating the pace of development. Artificial intelligence is one of the most exciting, intriguing and important areas of human endeavour to date, and this venture should continue to be encouraged – an over-regulated, restrictive legal framework would simply stifle innovation.

After all, as much as we need to make sure we retain control, the machines are very much our future.

 
