A cross-roads for developers in the AI journey?

Posted by Ben

19 March 2018

5 minute read

Autonomous driving technology gets closer by the day. With a large and tragic degree of inevitability, a car in autonomous drive mode seems to have claimed its first life. The article in The Guardian reporting the news reminded me instantly of a debate about the morality of AI on one of my regular podcasts: Radio 4’s Moral Maze.

It prompted me to ask the team, particularly our developers, for their thoughts on the matter. How would they feel about being involved in a project with such life-and-death consequences? Where do they feel responsibility, and therefore culpability, should be apportioned when flaws in software lead to such real-world, emotionally charged tragedies? Deaths occur for a wide range of reasons, and often in far greater numbers, even at the hand of a ‘single ill’. So why is it (the reasons are clearly many) that the circumstances of this tragedy play on our conscience in a way that others don’t always?

These are not new considerations – flights abroad, now commonplace, propel us through the air at 35,000ft at terrifying speeds, governed largely by algorithms that we’ve adapted to ‘trust’. Yet the advent and almost certain ubiquity of autonomous vehicles within the next decade will be a very different and harder adaptation for us to make as a society. Why is that?

There is an acceptance and an intuitive sense that flying a plane is difficult – more so than driving. There’s a much higher barrier to ferrying people around the skies than down the road. Pilots are revered, and the profession is looked up to in a way that haulage trucking does not enjoy. Accepting and allowing computers to do our jobs when those jobs are seen as complex is easier for us than when we feel a sense of ownership of the task.

The roll-out of driverless technology (whenever that happens) across domestic, commercial and logistics journeys is likely to have the most profound impact of any single technology since the advent of the internet. The livelihoods of so many depend on driving.

The transition period in which humans are forced to share their highways with AI drivers means the proximity of the technology will be something we all experience and see working in the flesh. As we get closer to that reality, I’m surprised by the lack of conversation and debate I hear about how societies want to organise and utilise this new technology for the benefit of mankind. The corporations leading the way in this exciting new field are not likely to slow their efforts much for such considerations in the race to be first to market. Some would argue that the ethical questions I raised at the beginning of this article are not the concern of, or are indeed best kept separate from, those who develop innovative technology. But as the pace of new technology and the resulting innovation accelerates, some sort of framework is needed for deciding who is ultimately responsible for potentially harmful flaws. Beyond that, there’s a nagging question about whether the market alone can ensure that the technology we build truly serves people.

It’s an important question that goes wider than driverless cars, since subsequent technologies may well carry possibilities that are more obviously negative than those of autonomous vehicles. Checks and balances may prove ineffective if the horse has already bolted. No doubt this article will date very quickly, but if you have strong views or interesting sources we can add to it, we’d love to hear from you.
