Generative AI risk management: Scenario planning starts now
There are two schools of thought emerging in the public discourse around generative AI (genAI): either genAI is a game-changing opportunity that will help us work smarter and grow faster, or it poses an existential threat to our business, our clients’ businesses and, left unchecked, society in general.
The reality? Both are probably right. In posts published earlier this week, my colleagues have already shared some thoughts around how we’re viewing the opportunity for genAI tech companies and how we’re currently using genAI in our work.
This post isn’t intended for marketers and communicators; we tend to be glass half-full types. Rather, it is aimed at those who sit in C-Suites and on Boards of Directors, with a simple call to action on AI risk management: don’t sit this one out. “Let’s wait and see” is a death sentence when it comes to understanding the potential risks genAI can pose for you, and the downsides to corporate reputations and valuations will be significant and swift.
GenAI has the potential to make even the most realistic-looking email phishing scams look like child’s play. Indeed, it’s a bad actor’s dream come true. And your worst nightmare come to life.
- Characters will be assailed
- Deals will be announced
- Lawsuits will be threatened
- Systems will be breached
- IP will be hijacked
- Proxy fights will be announced
- Jobs will be offered… or eliminated
- Terms will be promised
- Stock prices will be manipulated
- And so on.
None of these will be real, but every single one will look, read and feel entirely real and, on the surface at least, entirely credible. And, absent the swift arrival of enforceable regulations that provide comprehensive oversight of AI development, this wave of risk is just beginning. Deepfakes will only look, read, feel and sound more credible over time.
It’s not only a security issue, but also a brand issue, as the potential for misinformation increases. And these examples don’t even begin to touch on the potential for enterprise software that has raced to embed genAI in its features to go astray or serve up problematic outcomes for users.
A call for generative AI risk management
As with any potential crisis situation, planning and preparation will always be your best defense. Sit with your leadership teams and scenario plan for the unthinkable. What could happen and how should you respond when it does?
Keep in mind, genAI is not actually intelligent. It creates text, images, videos and more by predicting patterns from the vast amounts of data it was trained on, in response to whatever prompt it’s given. A lot of what gets generated is just plain wrong. And it can be very wrong, and potentially very damaging as a result.
Business leaders must act now to ensure their IT and Communications teams are adapting: stepping up reputation monitoring, content moderation and community management, and correcting misinformation more aggressively.
Ronald Reagan famously said, “Trust, but verify.” (He actually borrowed it from an old Russian proverb, and reportedly used it so often that Mikhail Gorbachev teased him for repeating it at every one of their meetings.) Assuming trust in a genAI world is setting yourself up for a fall. When it comes to any future threat and AI risk management, I would argue that “distrust, but verify” will serve you better.
This article was initially published by our sister company SHIFT Communications on SHIFT Insights.
——— Rick Murray, former Managing Partner and Chief Digital Strategist at NATIONAL Public Relations, and now Managing Partner at SHIFT Communications, sister company of NATIONAL