Fear of technology erodes trust in business by Shel Holtz
Not everybody succumbs to the allure of every new technology.
In fact, the rapid introduction of new technology can leave people feeling overwhelmed and left behind. Believing that business is more interested in the revenue it can earn from these technologies than in the impact they have on people creates suspicion and mistrust.
“The implications of the global trust crisis are deep and wide-ranging,” Richard Edelman, president and CEO of Edelman, said when introducing the results of his company’s 2017 Trust Barometer. “It began with the Great Recession of 2008, but like the second and third waves of a tsunami, globalization and technological change have further weakened people’s trust in global institutions.” Twenty-two percent of survey respondents said the pace of innovation is fueling their lack of trust in the system.
Yet companies continue to introduce these new technologies at a breakneck pace. New research suggests fear of some of these technologies is bubbling to the surface and could threaten the infrastructures and ecosystems they need to thrive. A survey from IT training platform developer CBT Nuggets found little excitement and plenty of fear over the prospect of swarms of drones in the sky. Yet Amazon and other companies are hard at work perfecting drones' ability to deliver packages and handle other chores, heralding an era that will turn people's nightmarish concerns into reality.
Edelman concludes that the consequence of these fears (along with corruption, immigration, globalization, and eroding social values) is "virulent populism and nationalism."
It’s not just drones. Artificial Intelligence (AI) is also stoking fears (probably exacerbated by Hollywood’s apocalyptic fantasies about AI gone rogue), with robots that teach each other and self-learning computers making the list of impending technologies that scare people. Then there are technologies that generate both excitement and fear, including lab-grown organs, autonomous vehicles, and flying cars.
To avoid potentially disruptive backlash (such as the election of officials who promise to cleanse the skies of swarms of quadcopter invaders and halt the outrage of lab-grown organs, despite the benefits these technologies may deliver), the companies behind them and those funding their development need to factor fear into their plans. They need to begin laying the groundwork now to build comfort, allay concerns, and articulate the advantages.
The biggest companies working in AI are well aware of the worries their work is producing, amplified by authoritative voices like Stephen Hawking and Elon Musk, who have been public in their warnings about the headlong rush to introduce AI into our day-to-day lives. To address those concerns, IBM, Facebook, Google, Microsoft, and Apple have formed the Partnership on Artificial Intelligence to Benefit People and Society to establish a set of ethics standards and best practices to guide the industry's development. The group, which will also include academic and non-profit researchers, promises yearly reports on its efforts, including research into specific issues.
Most people haven’t heard of the effort, which is a problem. Awareness that the industry has taken significant steps to police itself would help, along with efforts to get the word out about its initiatives and results.
If there’s a similar consortium addressing drone swarms, I haven’t heard of it.
There is work here for communicators, and it needs to begin with convincing businesses that being proactive about soothing people's jangled nerves is worth the time and money. Pointing to the consequences we're already seeing from the headlong rush into the future, and the fears people are expressing, would be a good place to start.