On the SI AI Debacle
Sports Illustrated is experiencing an odd crisis this week that (apparently) involves artificial intelligence, or AI. The story adds a new wrinkle to AI crises: potential problems arising from vendors. The affair is confusing enough that, although SI's publisher responded fairly quickly, it's hard to say whether it responded appropriately.
On Monday, science and technology website Futurism ran a piece contending that Sports Illustrated had posted AI-generated articles under fake author bylines, complete with fabricated bios and headshots (found on AI-image sites). Futurism staff writer Maggie Harrison reported that, even more bizarrely, the fake personas were sometimes replaced by new ones, with the same articles appearing under the new names.
Accusing a publication of making up not only articles but their authors is serious. Harrison wrote that after she reached out to SI's publisher, the Arena Group, for comment, the allegedly AI-concocted articles were taken down.
Monday night, Sports Illustrated posted Arena Group's statement on X (formerly Twitter), and it leaves a lot of questions open. The company said its initial investigation found that it had not, in fact, published AI-generated stories. Instead, it ran reviews (of volleyballs, for example) by third-party content creator AdVon Commerce, which assured Arena Group that the articles were composed by flesh-and-blood scribblers.
But, Arena Group continued, “we have learned that AdVon had writers use a pen or pseudo name in certain articles to protect author privacy — actions we don’t condone — and we are removing the content while our internal investigation continues and have since ended the partnership.”
An odd situation. But serious enough that SI nixed the vendor relationship. Harrison cited two unnamed sources involved in the content creation who insist the articles are AI-generated. Arena Group's statement also doesn't address the allegation about the fake headshots. And why would a volleyball review need a pseudonym?
On one hand, we want to say Arena Group and SI responded quickly. On the other, the response leaves so many open questions it’s hard to say how forthright they’re being. We await further results of the internal probe.
It's also obviously a problem for AdVon. Through Arena Group, it denies the stories were generated by AI, but that's hard to believe. "Is AdVon producing mountains of AI-generated content for publishers and passing it off as human work?" The Verge's Mia Sato asks in her piece on the mess. "Or do publications just not care? Either way, it's not very hard to spot — and pretty embarrassing when it's called out."
"Embarrassing," as in reputational damage. Apparently, just as we have to worry about vendors in the data-breach crisis space, we now have to do the same in the AI crisis space.
Photo Credit: Praneat/Shutterstock