Andrew Cornell: But as you say, the traditional way of thinking is very much ‘versus’ so how do you lead organisations to have a different mindset?
BvdW: Well we are seeing a lot of fear around artificial intelligence, robotics and automation in the workplace, for example - a lot of media coverage about how it's going to remove jobs.
When you change the framing from ‘it's a job for either a machine or a human’ to a conversation about how humans can be enhanced by collaboration with machines - and machines can be augmented by humans - it starts to open up new opportunities for businesses, and we see the results can actually be a lot better.
For example in pathology there's a Harvard-led piece of research describing a method which uses artificial intelligence to look for cancerous cells. The accuracy of artificial intelligence is around 92 per cent.
They then looked at how accurate human pathologists were: around 96 per cent. When those pathologists worked with the artificial intelligence, the accuracy went to near perfect - 99.9 per cent.
So you see how when the machines and the humans collaborate you can actually get fantastic results.
Mark Curtis: What we're saying from a design perspective is that the solution to these tensions is to move away from thinking ‘versus’ and towards thinking ‘and’. Philosophically, that seems a more productive way to approach them.
AC: Mark Carney, the Bank of England governor, gave a speech in which he spoke of ‘the massacre of the Dilberts’ - its thesis was that these bureaucratic jobs are going to disappear and that this is a huge risk to the financial system. So presumably he needs to think more ‘and’ than ‘versus’?
BvdW: I was just reading an article in New Scientist about the impact of artificial intelligence on jobs in Germany from 2011 to 2016. It showed artificial intelligence actually grew jobs by 1.8 per cent - and the jobs which were created are higher paid.
There is no doubt some jobs will disappear. Reports talk about up to 40 per cent of the jobs people are paid to do today disappearing. But just because a job can be automated doesn't necessarily mean it will be.
In fact a lot of the work AI can do is work for which humans don't have the patience - machines have infinite patience.
Our advice for organisations is to think about how to design collaboration so you take humans along on the journey and they can do meaningful work - how do you design a workplace for automation which still has humans at its heart?
MC: We've been interacting with machines since the beginning of the industrial revolution 250 years ago.
Now the big change here is that the machines talk back to us. That's actually very new. When the machines talk to us we subconsciously anthropomorphise - we ascribe human attributes to them - and, not so subtly, that shifts our expectations of the relationship we have with the machine.
We could rename this: it is man's search for meaning in machines rather than the machine itself searching for meaning. And we have to design those interactions with extreme care.
Obviously if you were interacting with a factory conveyor belt or a steam engine or a loom at the beginning of the industrial revolution, what you had to think about was safety and efficiency - and that was about it. Now it's not just about safety and efficiency.
We also have to think about how the machine and the human relate to each other. That's a significant shift and we're just now on the edge of beginning to design those relationships.
For example, we have to think about how machines understand humans - you may need to be able to detect the difference between ‘pick up that thing on the left there’ and ‘pick up the yellow one’.
Linguistically those are two very different statements; the machine has to be able to tell the difference between them and realise they may refer to the same object.
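The disambiguation problem Mark describes can be sketched in code. This is a toy illustration only - the scene, the `resolve` function and its keyword-matching rules are invented here; a real system would use vision and language models rather than string matching:

```python
# Toy sketch: two linguistically different commands resolving to the
# same physical object. All names and the scene are hypothetical.
from dataclasses import dataclass

@dataclass
class Item:
    colour: str
    x: float  # position on a bench, metres from the robot's right

# A toy scene: three objects in view. Largest x = furthest left.
scene = [Item("red", 0.2), Item("yellow", 0.9), Item("blue", 0.5)]

def resolve(command: str, items: list[Item]) -> Item:
    """Map a referring expression to an object in the scene."""
    if "left" in command:
        # Spatial reference: pick the leftmost object.
        return max(items, key=lambda i: i.x)
    for item in items:
        if item.colour in command:
            # Attribute reference: match by colour.
            return item
    raise ValueError("cannot resolve reference")

a = resolve("pick up that thing on the left there", scene)
b = resolve("pick up the yellow one", scene)
assert a is b  # two different phrasings, one object
```

The point of the sketch is that the two commands take entirely different resolution paths - spatial versus attribute-based - yet must land on the same object for the interaction to feel natural.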
We've identified this as being an important new area of design. In other words you're not just designing how to control the machine; you're designing the relationship humans will have with the machine.
AC: What does that mean for the workforce then, for the skills which will be needed?
BvdW: Accenture talks about three new types of jobs which will exist: trainers, sustainers and explainers. It's a whole new raft of jobs.
The job of trainers is to train the AI in the first place - to get the right data sets going and to give it the instructions it needs to do the task.
Sustainers optimise the AI once it's running - giving it feedback and classifications and ensuring it is doing the job as intended.
Explainers are the people who actually explain how the AI is working to others and how it is making decisions.
There was a really lovely cartoon in The New Yorker which had an artificial intelligence police car pulling over a self-driving car and the policeman was leaning into the window and he says "does your car have any idea why my car pulled it over?"
I think that explains the explainers one beautifully. So you know there are many new roles which will also be created by the emergence of AI and robotics in the workplace.
AC: And I suppose the other question was in Mark Carney's speech, but I've heard it in dozens of speeches: even if you're an optimist and say it's going to create jobs as it removes jobs, those new jobs will require different training and skill sets, different mindsets, different workplaces - like agile, for example. If we take your frame from ‘versus’ to ‘and’, what does it mean for a workforce?
BvdW: We do need the ability to learn to learn - it's going to become more and more critical. We need to upskill and cross-skill and teach creative problem solving; that's critical.
MC: If you look at the big picture there are massive implications for governments on how education should be organised and what governments might want to mandate to educational authorities in order to train the workforce for the future.
I think Bron is right, there are mid- to long-term implications. Our education system was really designed for, and is appropriate to, the industrial age; it is no longer appropriate for the trends into which we are moving.
AC: You’ve talked about the machines talking back but there’s another trend here which sounds a bit like a horror movie: the computer has eyes.
BvdW: It’s the trend from reading to seeing. Computers have always been able to read, comprehend and react to text - think of the command line. But now computers are developing eyes.
These eyes are now connected to a brain - secure, high-bandwidth connections to the cloud, with artificial intelligence and machine learning. Computers can now look at something, deduce meaning from it and react to it.
This is quite a big evolutionary leap if you imagine an organism coming out of the primordial sludge that can sort of just understand text and all of a sudden it develops eyes and a capacity to actually understand what it is seeing.
Computers are everywhere in the environment around us. I think this is something you will see and notice more over the next six, 12, 18 months. Cameras are absolutely ubiquitous - we all walk around with one or two in our pockets.
They are in our cars, leading the way for autonomous vehicles; they are in our environments, enabling retail trends like Amazon Go. Coupled with the fact they're connected to AI and machine learning, this opens up all sorts of new opportunities for organisations.
MC: Crucially there was a major consumer tipping point when the iPhone X launched with facial recognition.
If it had happened three or four years ago, people probably would have thought it downright creepy. But today people just go ‘yeah, I'm cool with that. That's fine’.
This consumer tipping point is something organisations can take advantage of: people are okay with a device scanning their face and using it to unlock everything on their phone - all that secure data and personal information.
An example is cameras behind mirrors. This was quite a hot item at the Consumer Electronics Show, and it's interesting because the mirror represents a moment - maybe multiple moments - of the day, whether it's brushing your teeth, putting on makeup or checking you're not having a bad hair day.
That's really significant because those moments of consumer habit are actually few and far between. Here we have one which is fixed in most people's lives from the early teens onwards.
If we're going to have cameras in the mirrors which are able to look at us and tell us things about ourselves it opens up the opportunity - spooky or otherwise - for a lot of interesting new services to begin to emerge.
You can imagine, for example, the mirror might look at you and say ‘you know what, you're not looking so well today - actually, you haven't been looking well for about five days; there's a trend line here’.
It might show you the trend line and also tell you what these symptoms might mean and the fact they're serious enough for you to see a doctor.
It’s a living service: it changes in real time around the customer, is dynamically responsive to their needs and does things for them - actually, it should probably have booked you a doctor's appointment as well. Now that last bit is probably 10 years down the track, but the first bit is doable now.
AC: Yet all this comes at a time, after Facebook and Cambridge Analytica in particular, when people are more nervous about their personal data, more aware of its misuse and there is more regulation around privacy?
BvdW: Absolutely and it does take us to the ethics economy. Without a doubt expectations - from both employees and consumers - have changed.
We expect organisations not only to react when there is an issue - a problem with the customer experience, a data breach or something like that - but to act quickly and decisively.
But we actually expect organisations now to be proactive in taking a stance on issues around customer experience and privacy and data - but also beyond that now around issues to do with society, with the environment, with social impact.
We're seeing CEOs step up and make statements about things which have nothing to do with their business. Tim Cook wrote an open letter saying people should have values and companies are nothing more than a collection of people, so by extension all companies should have values - and as a CEO, one of your responsibilities is to decide what the values of your company are and lead accordingly.
There have been many examples of this trend to higher expectations - to purpose - of people holding organisations to account, but also of organisations taking a stance themselves.
For example, Jigsaw in the UK put up banners on buses saying ‘we love immigration’ - and from the Brexit result it's clear some 51 per cent of the UK population may not love immigration.
They are taking a stance on something which could hurt their bottom line. And clearly as a set of leaders they have got together to make a decision on the purpose of their organisation, their values and what they believe in and have gone out and very publicly stated that. It’s a really interesting trend we're seeing.
MC: I think the last piece of evidence which really nails this is the letter from Larry Fink, the CEO of BlackRock - a huge investor at the pinnacle of global capitalism.
He wrote to all CEOs basically saying it's no longer enough just to focus on shareholder value; the future for companies is to consider their social purpose.
He went on essentially to say that if you do not have social purpose he will not be investing in you in the future. And that really is the ethics economy.
Andrew Cornell is managing editor at bluenotes