Companies Are Aware of the Potential for AI Crises: Survey

Thom Weidlich 08.10.23


Generative artificial intelligence, or GenAI, promises many things but, alas, that includes crises. The technology, which produces text, images or other media in response to prompts, is ripe for misunderstanding and misuse. A new survey explores how prepared big companies are to deal with the challenge and what more may need to be done.

While many cheerleaders promote the promise of pressing a button to whip up a press release or op-ed article, we already have enough empirical “data” to show us the pitfalls. Think of the lawyers who used AI platform ChatGPT to submit a court filing riddled with fake cases, and were fined $5,000 each. Think of your own experiments, in which the AI spat out a document that looked perfectly reasonable until you realized it was rife with fiction.

The survey of 900 executives at large companies around the world was sponsored by software company Teradata Corp. and conducted by market-intelligence outfit International Data Corp. (Hat tip to Richard Carufel at Agility PR.)

‘Red Flags’

“GenAI and its potential for driving innovation and disruption has simultaneously captured attention and raised several red flags among business leaders across the world,” IDC’s Chandana Gopal and Dan Vesset write in an “executive preview” of the survey entitled “The Possibilities and Realities of Generative AI.”

“Due to its propensity to generate hallucinations, such as fake citations or content that is so close to mimicking the truth as to be believable, GenAI comes with a host of challenges that organizations must address,” they write.


Eighty-six percent of respondents said their organizations need “more governance to ensure data quality and integrity.” “Users who do not have the data literacy to vet the output of LLMs [large language models, a type of GenAI] could potentially expose organizations to new types of risks and negative consequences,” Gopal and Vesset write.

Needed Guardrails

The risk runs the other way, too: when you put proprietary data into prompts, you are essentially making it publicly available. The authors refer to the “guardrails” needed before putting AI into place.

Fortunately, almost all respondents said “they were very familiar with data ethics and the responsible use of data.” But, the authors write, “the fact that only 29 percent of CIOs [chief information officers] and CDOs [chief data officers] felt that data management and governance was completely standardized in their organization is a major risk factor.”

Interestingly, 57 percent of respondents said they think interest in generative AI will fade over time.

Image Credit: Phonlamai Photo/Shutterstock

Sign up for our free weekly newsletter on crisis communications. Each week we highlight a crisis story in the news or a survey or study with an eye toward the type of best practices and strategies you can put to work each day. Click here to subscribe. 

Related: LSU Confronts NIL and AI