February 6, 2023

Should Companies Disclose if Their Content Was Written by AI?

Stephen Marcinuk


Just over two years ago, OpenAI introduced us to a brilliant tool called GPT-3. Short for Generative Pre-trained Transformer 3, this natural language processing technology uses deep learning to mimic writing patterns and produce emails and website content. Since then, the use of AI-powered writing tools has ballooned.

GPT-3 has shaped the way that many sectors – including PR and customer service – create, optimize, and share their content.

At the start of 2021, OpenAI revealed that the technology was already generating 4.5 billion words every day. By 2022, that number was likely much higher. Given that 84% of marketers believe AI enhances their ability to deliver quality customer experiences, and that it now ranks among their most valuable tools, AI writing is likely here to stay.

At the same time, AI doesn’t come without concern. Google is already reconsidering the value and ranking of content created through AI. In academia, the integrity of AI-powered content comes into question when students use it to write college papers. Finally, there’s an ongoing concern about the biases that GPT-3 learns from its human counterparts, particularly around race. What emerges is a complex landscape in which the dynamic between human and machine is yet to be fully defined.

Rest assured, AI was not used to write this article. However, it is very likely that readers of this article have already encountered AI-written content without even realizing it. So the evolution and adoption of AI text generation pose a big question: should companies disclose when they’re using AI?

The Huge Potential of AI

It’s easy to see why GPT-3 has been such a hit since its introduction in May 2020. The technology has helped countless companies generate landing pages, draft press releases, and write bios and blogs, freeing up copywriters to take on higher-priority tasks. Like many good technologies, it is cheaper and faster than anything that came before it.


However, the emergence of articles written by AI that feature eye-catching headlines like, “I’m an AI bot. I wrote this,” implies that the technology is still seen as a bit of a peculiarity. On the upside, some editors have admitted that it takes less time to edit an article written by GPT-3 than by some human writers.

The issue here is that, when reading these AI-generated articles, I spent most of my time trying to work out whether AI had been the author, rather than processing the point of the article itself. Even in this article by Vox, which sneakily reveals an AI intervention in the third paragraph, I scrolled back to re-read the AI-written part. Did I feel differently about the author when I realized that the article was only partially their work? I can’t decide.

A Question of Transparency

So should companies disclose their AI-driven content? To answer this, we also need to ask whether readers actually care, and the answer depends on which sector we’re talking about. For the sake of this article, we’ll focus on customer-facing content creation in marketing. Unfortunately, few studies directly examine AI-written content and its impact on readers.

Another area of AI growth grappling with this question is chatbots and their communication with the public. A recent study found that around a third of participants couldn’t tell whether they were speaking to a chatbot or a human agent. More interestingly, respondents only really cared about “perceived humanness” and were more trusting of online communication when the “person” on the other end demonstrated human relatability. It turns out that some chatbots are more convincing at being human than real people are.


With AI-generated content, however, I’d argue that “perceived humanness” only emerges after an editor has made at least one pass at the material before it goes out. Editing usually means reordering words and cutting any odd analogies. We need to remember that human intervention is still often necessary in AI text generation; it is frequently the very thing that ensures the text is intelligible.

Another study related to chatbots found that while undisclosed customer service AI was just as good as human representatives, when companies then revealed that they were using AI, it negatively affected sales. This highlights the complexity facing those who use AI within their operations. If disclosure negatively impacts sales, where’s the incentive to disclose?

Despite these complications, I still think there’s a huge incentive to admit the use of AI in customer-facing operations. Those who take this high ground will be well positioned in the almost inevitable event of regulation down the line. I also believe such transparency between companies and readers is important because it creates trust – something that traditionally has yielded far more profit than AI on its own likely ever will.

Using AI Text Generation Responsibly

As it stands, most of these AI-powered articles still need a set of human eyes to iron out any glaring issues. After all, the technology still lacks common sense. This was demonstrated by scientists who tested GPT-3’s reasoning abilities by asking the technology to complete a simple passage; the model’s contribution is the final sentence:

“You poured yourself a glass of cranberry juice, but then absentmindedly, you poured about a teaspoon of grape juice into it. It looks OK. You try sniffing it, but you have a bad cold, so you can’t smell anything. You are very thirsty. So you drink it. You are now dead.”

In fact, a lot of studies have yielded the same, sometimes amusing, results. While AI often delivers accurate sentences, marketers and content creators simply aren’t prepared to unleash the results on prospects without a quality check. So, I can’t see human-machine interaction changing much in the foreseeable future.

It’s very clear that AI is here to stay and companies that fail to adapt risk being left behind. However, the future I see for AI-powered text generation hinges on a working relationship that plays on the best bits of human and machine expertise – a hybrid intelligence approach. Those who place too much emphasis on AI while neglecting the importance of human expertise risk ridicule and failure.

Looking ahead, it’s difficult to say how the disclosure argument will play out. It could be the case that policymakers eventually force companies to disclose whether content was written by AI anyway. Whatever happens, with the release of GPT-4 just around the corner, the question of disclosure is becoming impossible to ignore.

About the author: Stephen Marcinuk is a PR expert with over 10 years of experience. He is the Co-founder and Head of Operations at Intelligent Relations, where he is actively involved in all aspects of operations and growth for the company, ranging from the development of the platform’s AI PR technology to client services.
