Guest Blog: RJ Bardsley of Racepoint Global discusses Artificial Intelligence
The Responsibility of AI
By RJ Bardsley, Chief Strategist, Global Technology Practice, Racepoint Global
Cutting through the myth and marketing behind AI is difficult because it also means cutting through the optimism inherent in who we are as humans. From Star Wars and Star Trek to the pages of the technology media to the halls of MIT, it is one of the few topics that has everyone talking, and with good reason. Perhaps not since the heady days of the moon-shot space program has such a broad swath of the population been so enthusiastic about an engineering project.
To a large extent, the AI revolution is already happening. While we’re still waiting for robots that will wake us up and serve us breakfast, AI is manifesting itself in everything from the way our phones manage battery power to that self-driving vehicle you saw on the streets of San Francisco on your last trip there. From the chip level up, engineers have been baking self-learning capabilities into the technology we use every day. Assistants like Amazon’s Alexa and Google Home are leveraging machine learning to help answer our questions and organize our days.
The thing about AI is that it’s a vanity project in the purest sense: we are developing technologies that mimic how we think, building things in our own likeness, albeit a mental rather than a physical likeness. This requires an incredible combination of disciplines, and what we end up seeing as AI probably won’t look, or think, much like us at all. AI also requires us to think differently about social responsibility and technology.
For several decades, the technology industry has seen itself as largely agnostic when it comes to ethical use. Technologists have concerned themselves with developing a better algorithm or a faster chip, not with whether that algorithm or chip is used to improve healthcare or to spy on social network participants. In recent weeks, we’ve seen the danger of this firsthand with the Facebook/Cambridge Analytica debacle, but honestly, this has been an issue for a long time. Shouldn’t smaller-scale incidents like cyberbullying and geo-tracking have been more of a concern for the people developing technologies that have become so embedded in our lives?
The philosophical debate over technology’s role in good versus evil is not a new one; the difference is that today the pace of innovation and the pervasiveness of technology make the conversation more crucial than ever before. We are in an era that requires a new way of thinking: a socially responsible approach to the world, where we pay more attention to the balance of agnosticism and altruism in the innovation process.
AI will be pervasive in our lives. We are starting to see it with autonomous vehicles and digital assistants like Alexa, but this is only the beginning. As programs like Google’s DeepMind and IBM’s Watson get to work on solving big issues like energy consumption and healthcare challenges, we need to remain focused on the potential for negative uses.
In late March at the EmTech Digital Conference in San Francisco, the Partnership on AI, a group formed in 2016 by Amazon, Apple, IBM, Facebook, Google, and Microsoft, unveiled its four-point mission statement and eight tenets on AI. Essentially, the tenets call for privacy protection, security, openness, collaboration, and the socially responsible development of AI technology. Whether this creed becomes the bedrock of the AI movement has yet to be determined. Perhaps we are at a point where the technology industry has realized the importance of a socially minded north star, but that is still up for debate.