When you hear the phrase “artificial intelligence,” it may be tempting to imagine the kinds of intelligent machines that are a mainstay of science fiction or extensions of the kinds of apocalyptic technophobia that have fascinated humanity since Dr. Frankenstein’s monster.
But the kinds of AI that are rapidly being integrated into businesses around the world are not of this variety — they are very real technologies that have a real impact on actual people.
While AI has already been present in business settings for years, the advancement of generative AI products such as ChatGPT, ChatSonic, Jasper AI and others will make these tools dramatically easier for the average person to use. As a result, the American public is deeply concerned about the potential for abuse of these technologies. A recent ADL survey found that 84% of Americans are worried that generative AI will increase the spread of misinformation and hate.
Leaders considering adopting this technology should ask themselves tough questions about how it may shape the future — both for good and ill — as we enter this new frontier. Here are three things I hope all leaders will consider as they integrate generative AI tools into organizations and workplaces.
Make trust and safety a top priority
While social media companies are used to grappling with content moderation, generative AI is being introduced into industries with no prior experience handling these issues, such as healthcare and finance. Many of them may soon find themselves suddenly facing difficult new challenges as they adopt these technologies. If you are a healthcare company whose frontline AI-powered chatbot is suddenly being rude or even hateful to a patient, how will you handle that?
For all of its power and potential, generative AI makes it easy, fast and accessible for bad actors to produce harmful content.
Over decades, social media platforms have developed a new discipline — trust and safety — to try to get their arms around thorny problems associated with user-generated content. Not so with other industries.
For that reason, companies will need to bring in trust and safety experts to inform their implementations. They'll need to build in-house expertise and think through the ways these tools can be misused. And they'll need to invest in staff responsible for addressing abuse, so they are not caught flat-footed when bad actors exploit these tools.
Establish high guardrails and insist on transparency
Especially in work or education settings, it is crucial that AI platforms have adequate guardrails to prevent the generation of hateful or harassing content.
While incredibly useful tools, AI platforms are not 100% foolproof. Within a few minutes, for example, ADL testers recently used the Expedia app, with its new ChatGPT functionality, to create an itinerary of famous anti-Jewish pogroms in Europe and a list of nearby art supply stores where one could purchase spray paint, ostensibly to engage in vandalism against those sites.
While we've seen some generative AIs improve their handling of questions that can lead to antisemitic and other hateful responses, others still fall short in ensuring they will not contribute to the spread of hate, harassment, conspiracy theories and other types of harmful content.
Before adopting AI broadly, leaders should ask critical questions, such as: What kind of testing is being done to ensure that these products are not open to abuse? Which datasets are being used to construct these models? And are the experiences of communities most targeted by online hate being integrated into the creation of these tools?
Without transparency from platforms, there’s simply no guarantee these AI models don’t enable the spread of bias or bigotry.
Safeguard against weaponization
Even with robust trust and safety practices, AI can still be misused by ordinary users. As leaders, we need to push the designers of AI systems to build in safeguards against weaponization by humans.
Unfortunately, for all of their power and potential, AI tools make it easy, fast and accessible for bad actors to produce harmful content at scale. They can generate convincing fake news, create visually compelling deepfakes and spread hate and harassment in a matter of seconds. AI-generated content could also contribute to the spread of extremist ideologies, or be used to radicalize susceptible individuals.
In response to these threats, AI platforms should incorporate robust moderation systems that can withstand the potential deluge of harmful content perpetrators might generate using these tools.
Generative AI has almost limitless potential to improve lives and revolutionize how we process the vast amount of information available online. I'm excited about the prospects of a future with AI, but only if it comes with responsible leadership.