Artificial Intelligence: Boom or Doom?
On May 4th, ClickSoftware organized a panel debate in London on the subject of “Assisted Intelligence vs. Artificial Intelligence”. The aim was to explore the potential consequences – both positive and negative – that A.I. will have for our lives, with a specific focus on the workplace. Will it in fact assist us, or, at the other end of the scale, are we all doomed?

Discussion of A.I. abounds nowadays, but the theme of boom or doom is nothing new. More than 40 years ago, Stanley Kubrick’s classic “2001: A Space Odyssey” introduced HAL, the eerily soft-spoken computer that couldn’t make a mistake – and then did – knocking off most of the crew of the spaceship he was installed on. That was 2001 – actually, 1968. It’s getting a bit weird, isn’t it, how we have to go back past the future to talk about today – past the future of 2001… past the future of 1984… Oh, boy… HAL has since gone through countless reincarnations, and unquestionably so have A.I. conspiracy theories. Arguably the most popular, brought to us by Arnold, is all in the name… Terminator.

What all these stories represent, of course, is technology – the lodgings of A.I. – run amok, with us humans the inevitable victims. You say, “Self-flattery! Why would anyone be concerned with the likes of us, human beings, loaded with faults and paranoia?” Point taken. But indulge us for a moment, while we return to London, where the subject of humans’ relationship to A.I. was taken up in earnest and with a little less melodrama. Topics covered at the debate included:
- Should we fear or embrace A.I.? What are the threats?
- What are we seeing today, where is this going?
- How is this impacting productivity in the workplace and employment?
- What will it mean for jobs in the immediate future? Should we be worried?
- What are the benefits and opportunities we should be looking at, such as big data and the IoT?
- How is the issue being dealt with?
- Who is leading in this area?
Here we introduce the discussion as a whole, and the standing of A.I. in today’s world; upcoming posts will take up some of the topics in more depth. Moderating the debate was senior journalist and commentator Roger Trapp, formerly of the Financial Times and the Independent on Sunday. The panelists were:
- Mark Bishop, Professor of Cognitive Computing at Goldsmiths, University of London
- Dan O’Hara, Senior Lecturer in English, New College of the Humanities
- Steve Mason, VP of Channels at ClickSoftware
- George Zarkadakis, Novelist, Science Writer, and Digital Transformation Consultant
The audience included a range of guests, among them journalists from the BBC, The Press Association, and Computing. Kicking off the discussion, moderator Roger Trapp observed that “A.I. is all around us. It’s in things we’re becoming quite used to, like Siri, Cortana, driverless cars, flight technology, the healthcare industry, and many others. Computers are good at things that humans are not … and there are certainly jobs that are considered susceptible to replacement with A.I. … There is a theory that using computers to take the drudgery out of tasks will free us up to do more interesting things, but that is by no means clear.” He then turned to the panelists. As might be expected, initial comments were tentative. When HAL was introduced, A.I. was “futuristic”. Today, it is very now and very real, and, as noted by Steve Mason,
Steve Mason states that A.I. is already in everything we do.
“[it] has been making strong inroads into everything that we do within industry over the past ten years. It’s behind the scenes on the Internet, and buried within applications. It allows people to take control of huge amounts of data and drives efficiencies within business. When organizations implement projects, change management needs to be considered. If they don’t manage people correctly, that can cause serious problems. People need to focus on that, in order to ensure that the A.I. project they’re undertaking will be successful.” Mark Bishop pointed out that there are good reasons to embrace A.I., but there may also be reasons to fear it, and that making that distinction is often highly subjective:
Mark Bishop points out the reasons we should embrace artificial intelligence.
Bishop didn’t shy away from the fact that A.I. does include things that can be alarming, but was careful to elaborate that it’s not so simple: “A.I. can be used by the intelligence agencies to spy on us. People like Stephen Hawking have flagged up the singularity, the point at which machines will be smarter than humans, making humans redundant. I am skeptical about this. I don’t think any computer system can genuinely understand meaning in the same way as humans can. I don’t think computer systems can be creative. I also don’t think they can be conscious of the world in the way that we are. If I’m correct, there’s always going to be a gap, which I call the humanity gap. I think there are very good grounds for thinking that we don’t have to fear that.” Dan O’Hara responded that he thinks Hawking shouldn’t be interpreted too literally, but rather that Hawking sees his own technology-assisted condition as a prototype for the future of humanity: “Going back a couple of centuries, there was the idea that if you could copy something, you could understand it. Nowadays, the emphasis is on understanding: how can we make artificial intelligence when we don’t understand it? I don’t think, though, that either of those approaches is correct. It’s somewhere in the middle.”
Dan O’Hara giving his viewpoint at the conference.
At ClickSoftware, we believe that the true perspective is somewhere in the middle, and that it’s here that the two perspectives of Artificial Intelligence and Assisted Intelligence overlap. We believe in a realm of Assisted Intelligence in which people can be helped to be more effective at their work without a professional conflict of interest.

In any genuine discussion of A.I., the subject of data will inevitably come up. At the debate, it was George Zarkadakis who introduced it, relating, “I was reading some research the other day, which was basically all about data. Imagine the data we’re going to have when fridges, pots and pans are speaking with each other. I read that something like 85% of Fortune 500 companies will gain no advantage in 2015 from this huge increase in data coming in. The data obviously has enormous value, but we can’t do anything with it. We’re at something of a bottleneck with it.”
George Zarkadakis introduces his theory of the bottleneck with big data and artificial intelligence.
In response, Steve Mason put forward that “The focus from an industrial perspective is how you replicate the way that people make decisions. We have been looking at how you draw in huge amounts of data, and distinguish between rules and decisions, such that organisations can perform more efficiently. The reality is that customers are demanding faster and better service at lower prices, so organisations have to improve their efficiency. Algorithms allow for that to take place.” A good example of how ClickSoftware succeeded at this was our experience with Fleetmatics. Steve went on to talk about how the cloud is enabling this transformation: “The great thing about the Cloud, going forward, is that applications are now accessible by smaller and smaller organisations. Oxford Economics recently released a report, which showed that the Cloud is being used by smaller companies to compete both against large enterprises and internationally. At ClickSoftware, we’re committed to being at the forefront of developing the Cloud in the area of field service management.” You can learn more here.