The Perfect Ad: It Won’t Come From AI, It’s In the Name

The perfect ad is when your best friend shares a brand with you. It is decidedly not artificial. Even a hint of artificiality can impact the impression we form, and the last thing we want when considering a purchase is to feel duped. While the amalgamated wisdom of the Internet, packaged into something that interacts like a human, is a breathtaking tool, it’s not your friend.

If a chatbot tells me it likes Duff beer, I instantly know the emotion behind that statement isn’t genuine. It can’t have first-hand experience with beer. Sure, it might be funny to press the question further and ask if Duff is better than Bud. A quick search might tell me that AI guesses the imaginary brand is modeled after a heavily hyped one, though I’d argue it’s more like PBR, projecting a few layers of depth onto The Simpsons’ humor. At its core, though, AI’s emotions are not real; they are simulated.

AI has the prowess to plumb my inferences, but by the very definition of the word, it simply can’t be genuine. We can’t even pin down what “genuine” really is; it’s a feeling built from layers of trust, and that trust is what makes genuine advice so valuable when it comes to a product pitch.

The Quest for Genuine Advice: A Paradox in Marketing

What’s curious is that, while genuine pitches exist, we obsess over the perfect approximation of them. The typical TV ad stages a neighbor sharing sage advice to buy this or that product. It’s what marketers aspire to achieve: the organic recommendation. But here lies the problem, akin to Schrödinger’s cat: the act of inserting a pitch into a genuine conversation collapses its genuineness. Usually, the shift from “genuine” to “abhorrent” happens quickly. It’s a tension that’s been around so long that it’s rarely questioned. AI represents the most extreme attempt yet at simulating genuineness, and while it has cost nearly as much as devising nuclear energy, it is sweeping through ad tech like a new messiah. But what if, instead of trying to simulate something real, we could actually be genuine?

Influencers: A Step Toward Genuine Transparency in Advertising

The rise of influencers challenges the stigma that pitches are inherently deceptive. We trust the presenter. We know they are being paid, and that transparency relieves us of the fear of being duped. An influencer is unlikely to promote a bad product to their hard-earned audience. This kind of transparency makes the conversation feel real, and in turn, it blesses the brand being promoted. Trust between the influencer and the audience is the key here.

The Creepy Side of Data-Driven Ads

So why do I see ads for things that have nothing to do with me when I read something posted online? Some ads might even be things I would abhor. The ad placement is based on data about the viewer that can be unsettling to think about, and it creates an unwelcome presence in an otherwise genuine exchange between people. Now, consider a different scenario: What if the author of that post had selected the brands that pay for the hosting service? In this case, the marketer would have made a partner out of the customer, and the transparency of that relationship would make the presence of ads feel like a genuine exchange. If it’s free, then I am the product, but that’s okay if I can choose the product I’m promoting.

The Market Dynamic: Reclaiming Control

Moreover, we could take this a step further. If advertisers paid the author directly instead of relying on algorithms to guess what scenario might unfold, we would shift away from data-mining tactics. Perhaps, if no one is willing to promote a product, that’s a good thing for the market. Pricing harmful products out of circulation signifies that the market is truly working for the benefit of the genuine buyers and sellers.

Maybe we shouldn’t underestimate the intelligence of the customer. One genuine ad is worth a thousand unwelcome pop-ups that attempt to guess my desires with data that should never have been gathered in the first place—unless you’re my friend.

GeistM: Where Transparency and Trust Lead the Way

What happens when AI gets a wallet? Selling to bots is not the same as selling to people. There’s a fundamental difference in how AI interacts with products versus humans, and we will need to reckon with the implications of AI-driven purchasing behavior as it becomes more common.

At GeistM, we believe in being genuine and transparent in everything we do. We don’t let AI speak for us, and we always ensure transparency and trust in our ads, campaigns, and content. 

We understand that consumers value authenticity, and we’re committed to providing it, without relying on algorithms or artificial communication. Our approach is rooted in building genuine connections with audiences, creating campaigns that feel human and trustworthy, and ensuring that our clients’ brands are represented with the integrity they deserve.

7 Dangers You Face When Using AI In Marketing 

In today’s ever-changing tech world, artificial intelligence (AI) is a huge deal across many industries. People love the idea of quicker, more efficient processes and automated tasks, so businesses and marketers are jumping on the AI bandwagon.

However, as with any new and evolving technology, there are risks to consider. 

Although everyone’s excited about AI, it’s important to consider the pros and cons of incorporating AI tools in your marketing. You don’t want to take an action that you might later regret.

In this post, we’ll discuss the seven biggest AI risks in digital marketing and how to avoid them. 

1. AI Can Have Legal and Ethical Concerns 

The integration of AI technology in digital marketing presents legal and ethical concerns that echo the growing pains of earlier emerging technologies like digital advertising and social media. The problem is that it can take years for laws to catch up with the rapidly advancing tech world.

So, while the widespread use of AI in digital marketing might seem amazing now, it may become tricky in the future. Such a change in the status quo could disrupt entire digital marketing firms. Just think about AI’s collection of sensitive information and the risk of unauthorized dissemination of personal data.

AI is not programmed to ask for permission before collecting data. It ignores the privacy preferences of most users, and that’s just one example.

2. AI Can Negatively Impact SEO 

A major downside to AI-generated content is weak SEO. Many businesses think they can replace writers and let tools like ChatGPT draft successful copy for their websites.

However, when it comes to rankings, Google and other search engines prioritize excellent content written for humans, by humans. While AI-generated articles may appear high-quality at first glance, search algorithms often judge them otherwise, which can lead to lower rankings over time.

SEO requires a lot of time, effort, and expertise to implement, and you definitely can’t rely on AI tools like ChatGPT to optimize your article for you. 

3. AI Produces A Robotic Tone That Fails To Connect With Readers 

Using AI for content creation is more complex than simply requesting it to write an article on a specific topic. Businesses must put in additional effort to organize and structure the content effectively. AI-generated content may fail to connect with your audience. 

The main reason is that it sounds robotic and lacks a distinctive tone or personality. This impersonal nature makes it difficult for the content to resonate on an emotional level with readers. 

Effective content marketing requires understanding the audience’s emotions and tailoring the message accordingly, which AI currently can’t achieve.

To do that, you’ll need a human writer who has the expertise to write engaging content that resonates with the right audience.

4. AI-generated Content Is Often Inaccurate

Another significant danger of AI is that it’s only as reliable as the data it’s been trained on. Since algorithms ravenously scrape and analyze existing content online, there’s a risk of producing inaccurate or outdated material.

ChatGPT has recently shown signs of “laziness,” attributed by some to the model learning little from the people using it and to degrading prompt quality. This alone shows that you cannot blindly rely on the results that artificial intelligence provides.

In terms of “facts,” AI-generated content often lacks proper citations, which can undermine a company’s credibility. Despite efforts to ensure accuracy, AI still requires human oversight for this critical task. As marketers, it’s imperative that the information we put out is accurate, whether it’s for clients, target audiences, or the broader public.

5. AI Can Display Negative Stereotypes

Researchers at USC examined two large AI training databases and discovered that more than 38% of the data contained biases.

Bloomberg’s analysis of more than 5,000 images created with Stable Diffusion is just as shocking: 

“The analysis found that image sets generated for every high-paying job were dominated by subjects with lighter skin tones, while subjects with darker skin tones were more commonly generated by prompts like ‘fast-food worker’ and ‘social worker.’”*

The prevalence of bias in AI has significant implications for marketers. Targeting ads with platforms that exclude substantial portions of the population undermines efforts to reach the broadest possible audience.

Beyond performance issues, there are more significant consequences if ads unfairly target or exclude specific groups. For instance, a real estate ad that discriminates against protected minorities could result in legal repercussions under the Fair Housing Act and scrutiny from the Federal Trade Commission. This is a big red flag to watch out for! 

6. AI Content Is Highly Repetitive

When exploring AI technologies like ChatGPT, you may find the responses it generates quite remarkable. They can be diverse and captivating, and they can even exhibit a semblance of human thought and behavior.

However, you’ll notice that after a while the content grows repetitive and even tired. Essentially, the same content gets reiterated with a similar tone. 

This is a primary concern for digital marketers and a significant drawback of adopting AI tools.

7. Editing AI Content Is A Challenge 

Editing requires time and significant attention to detail. It can take strong content and make it exceptional. But your content won’t get there if you rely solely on AI writing tools. ChatGPT’s output requires revision and edits, preferably by content experts. For example, at GeistM, each draft goes through thorough editing by experienced writers to ensure it lands and resonates with its target audience.

High-quality content requires thorough planning, research, understanding of the topic, clarifying the key message, adapting the tone to suit the brand and audience, and multiple rounds of editing and proofreading.

Although AI can assist in planning and structuring content, human input remains essential in crafting meaningful and engaging content. If you’re not a seasoned editor or copywriter, hastily reviewing and publishing AI-generated content can actually do your business more harm than good. 

Final Thoughts 

AI tools are smart pieces of technology. They can grab an astonishing amount of information from various sources and provide lightning-quick answers within moments. But human creativity and strategic thinking remain key to crafting effective marketing campaigns, and that’s something AI simply can’t provide yet.

So, rather than risking your website authority with Google or burning money on weak marketing copy, let our human writers bring out the best in your marketing campaigns! 

If you’re truly serious about growing your business, get in touch with GeistM today.

Written by: Julia Steiner

*Source: https://www.bloomberg.com/graphics/2023-generative-ai-bias/