I received my first AI-generated voicemail this weekend. And I don’t mean the usual robocall. I mean a voice that sounded awfully human — complete with pauses for “breath” — offering me a name (Gail), claiming to be from a bank, requesting a call back ASAP, and letting me know she’d be available until 2:30pm CST.
The main giveaway that Gail is not, in fact, a real person was the unusual cadence of her speech: odd pauses in the middle of sentences that a real human, even one nervously reading a script, would not take.
I’m not surprised to receive a spam call with an AI-generated voice. Even so, it is still disconcerting to realize that AI-generated voices have reached this point in development, breaths and all. Even more disconcerting? Soon, I won’t be able to tell the difference between a human voice and an artificial one.
So yes, I have thoughts about generative AI, some of ‘em spicy.
To be clear, I’m not completely against generative AI. It can be great for handling busy work — like organizing large data sets, for instance. That, to me, is a great use of the tool. You go, Glen Coco!
I’m not here to pooh-pooh progress altogether. But I am here to pose questions about the ethical implications of generative AI that we — as marketers, business owners, and humans — should be asking. I know that ethics isn’t quite as thrilling as watching generative AI spit out a whole medley of marketing material in under 30 minutes, but alas.
We need to talk about it.
So in the coming weeks, I’ll be writing a series of articles on generative AI, mainly as it relates to the marketing world.
Today, I’m specifically addressing the free version of OpenAI’s ChatGPT — and two things to understand before using it.
Your data isn’t private
To be fair, is any data private these days? But it’s important to know how OpenAI uses consumer data. All data you input is used to train the models unless you opt out.
And yeah, it’s not only the bots crawling your data. Humans may actually take a look at your prompts, too. As OpenAI explains: “A limited number of authorized OpenAI personnel, as well as specialized third-party contractors that are subject to confidentiality and security obligations, may view and access user content” in some circumstances.
So if you’re going to use ChatGPT in any of your marketing activities, leave out personal and proprietary information.
In fact, a friend at a major corporation recently shared with me that the company banned all use of third-party generative AI tools for this very reason.
Here’s the full explanation of how OpenAI uses consumer data.
(Note: GPT-4 for API developers has different data privacy policies. You can read about them here.)
ChatGPT is an impressive liar
Large language models (LLMs) like ChatGPT can provide confident — and yet completely wrong — answers to queries, a phenomenon called hallucination. OpenAI, creator of ChatGPT, states as a limitation: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”
Subbarao Kambhampati, a professor of computer science at Arizona State University and one of my go-to experts on AI, explains:
“It is important to understand that ChatGPT has no concept of truth or falsity. Unlike search engines which give users pointers to documents authored by humans (who know the difference between truth and falsity), LLMs do not index the documents they are trained on. Instead, they learn the patterns and correlations between words and phrases — the information that is stored in the billions of weights defining their trained networks. What this means is that when ChatGPT is ‘completing’ a prompt, it is not doing this by copy-pasting documents but, rather, by constructing a plausible completion on the fly, using its network.”
I’ve seen several folks recommend using ChatGPT as a research tool. A blog post from an agency made me a li’l grumpy when it recommended using ChatGPT to research competitors, suggesting a prompt like “What are the top trends in this particular industry?”
But… that’s just not how ChatGPT works.
The free version of ChatGPT is not connected to the Internet.* OpenAI says so itself:
“ChatGPT is not connected to the internet, and it can occasionally produce incorrect answers. It has limited knowledge of world and events after 2021 and may also occasionally produce harmful instructions or biased content. We’d recommend checking whether responses from the model are accurate or not.”
ChatGPT is not a search engine. It can’t be used like Google. Sure, it can help you with ideation in your research process, but any answers it produces should be treated with caution.
The bottom line: ChatGPT can write grammatically correct prose. But it’s a large language model meant to predict patterns, with no clear source of truth. If you use ChatGPT to produce anything, you still need to do the work of fact-checking it for accuracy.
*While the free version of ChatGPT is not connected to the Internet, the paid version, ChatGPT Plus, does have web browsing capabilities.
>> I’m chatting about generative AI on my LinkedIn — come join the convo!