What, Me Worry?
Call me shortsighted, but I am not losing sleep over the prospect of a supercharged AI gaining consciousness and waging war on humans. What does keep me up at night is that humans are already wielding the power of artificial intelligence to control, exploit, discriminate against, misinform, and manipulate other humans.
Tools that can help us solve complex and vexing problems can also be put to work by cybercriminals or give authoritarian governments unprecedented power to spy on and direct the lives of their citizens. We can build models that lead to the development of new, more sustainable materials or important new drugs — and we can build models that embed biased decision-making into systems and processes and then grind individuals up in their gears.
In other words, AI already gives us plenty to worry about. We shouldn’t be distracted by dystopian fever dreams that misdirect our attention from present-day risk.
The recent emergence of ChatGPT and similar large language models into broader public awareness has heightened general interest in both the potential benefits and the dangers of AI. A few days before I wrote this, Geoffrey Hinton, a giant of the AI field whose work underlies much of the current technology, resigned from his position at Google, telling The New York Times that he wanted to speak freely about the risks of generative AI. “It is hard to see how you can prevent the bad actors from using it for bad things,” he told the newspaper.
And there, indeed, is where we should put our attention. Throughout human history, new technologies have been put to work to advance civilization, but they have also been weaponized by bad actors. The difference this time is that not only is the technology extremely complex and difficult for most people to understand, but so too are the potential outcomes.
There’s a steep learning curve that all of us must be prepared to climb if we want AI to do more good than harm. The excitement about possible uses for these tools must be tempered by good questions about how automated decisions are made, how AI systems are trained, and what assumptions and biases are thus baked in. Business leaders, especially, need to be as aware of reputational and material risks, and the potential for value destruction, as they are of opportunities to leverage AI.
And while AI development is likely to continue to proceed at a breakneck pace that no open letter can arrest, the rest of us ought to proceed deliberately and with caution. In “Don’t Get Distracted by the Hype Around Generative AI,” Lee Vinsel reminds us that tech bubbles are accompanied by a lot of noise. We need to let go of the fear of missing out (FOMO) and take a measured, rational approach to evaluating emerging technologies.
Here at MIT Sloan Management Review, we will continue to support business leaders with the research and intelligence required for clear-headed decision-making and judicious experimentation. Our ongoing focus on responsible AI practices, led by guest editor Elizabeth Renieris of Notre Dame, is a critical part of that commitment.
We’re all stakeholders in the AI revolution. Let’s embrace that responsibility and strive to ensure that the good actors outweigh the bad.