Over the last couple of years, I’ve attended a lot of conferences, summits, seminars, discussions, panels and webinars focused on AI for communicators and marketers.
And I’m always surprised by the lack of critically important, foundational knowledge about what generative AI (Gen AI) and large language models (LLMs) actually are.
Instead, many of these discussions center on outputs, applications and broad use cases like content creation or personalization, and on the tools that make these tasks faster and more efficient.
And while all this information is certainly useful, we often stop short of addressing the deeper questions: how, when or even why to use AI.
By focusing primarily on tools and surface-level outcomes, these conversations risk reducing AI to a set of flashy applications rather than empowering communicators to explore its true creative and strategic potential.
There are some very notable exceptions, of course.
I was particularly inspired by a presentation by Helen Todd, Founder of Creativity Squared, on the #ImaginationAge. Todd painted a future of collaboration and coexistence with AI, where the limits of creation are defined only by what we can imagine. OpenAI CEO Sam Altman reflected a similar sentiment in his blog post, The Intelligence Age:
“… humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is.”
And yet for all of the truly spectacular things that these algorithms can do, the less we understand AI, the more likely we are to embrace it.
A recent study published in the Journal of Marketing found that people with lower AI literacy were more receptive to AI, not because they perceived it as more capable or ethical, but because they perceived it as more magical and felt a greater sense of awe when imagining AI completing tasks.
Conversely, the researchers predicted that once someone understands how the “magic trick” works, they are less likely to feel that same sense of wonder, demonstrating how expertise can dull our emotional responses.
But the “magic” doesn’t have to disappear when we understand that LLMs are pattern recognition systems trained on vast human knowledge. Instead, this understanding can transform our awe into an appreciation for how these patterns can be leveraged creatively.
To bridge that gap, we need to find ways to maintain our sense of awe and wonder while building practical understanding.
This is particularly crucial in fields like marketing and communications, where the difference between mediocre and exceptional AI implementation often comes down to the sophistication of the application. Someone who understands prompt engineering can achieve far more impressive results than someone treating AI as a magical, all-knowing entity.
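To make that difference concrete, here is a minimal sketch using the OpenAI Python client. The model name, prompts and product details are illustrative assumptions, not recommendations. The vague prompt treats the model as an all-knowing oracle; the engineered prompt supplies the audience, facts, constraints and format the model needs to pattern-match against.

```python
# Illustrative sketch: the same request, with and without prompt engineering.
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name is an assumption.
from openai import OpenAI

client = OpenAI()

# Treating AI as a magical, all-knowing entity: vague, context-free request.
vague_prompt = "Write a press release about our product."

# Treating AI as a tool: supply audience, facts, constraints and format.
engineered_prompt = (
    "You are a PR writer for a B2B software company. "
    "Write a 300-word press release announcing version 2.0 of our scheduling "
    "tool, aimed at trade-press editors. "
    "Key facts: 40% faster setup, new calendar integrations. "
    "Tone: factual, no superlatives. End with a one-sentence CEO quote "
    "placeholder marked [QUOTE]."
)

for label, prompt in [("vague", vague_prompt), ("engineered", engineered_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; swap in whatever you use
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---\n{response.choices[0].message.content}\n")
```

The second prompt reliably produces something usable because it narrows the space of patterns the model can draw on; the first leaves the model to guess at everything.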
AI is a tool – a very powerful and impressive tool – but a tool nonetheless. It requires human expertise to unlock its full potential, and its greatest strength is ultimately its biggest limitation: AI’s power lies in its ability to generate outputs based on patterns in its training data, which also means it can only produce what those patterns support.
When we treat AI as magical, we risk overlooking these critical limitations. Hallucinations aren’t system bugs but rather a natural consequence of how LLMs work. They’re pattern prediction engines, generating outputs based on probabilities, not comprehension.
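A toy sketch makes that mechanism visible. The words and probabilities below are invented for illustration, not taken from any real model: at each step, the system samples the next word from a learned probability distribution, and no step anywhere checks whether the result is true.

```python
# Toy illustration of next-token prediction. The probabilities are invented,
# but the mechanism mirrors how an LLM extends text: one probable word at a
# time, with no notion of factual truth.
import random

# Hypothetical learned probabilities for the word that follows
# "The capital of Australia is".
next_word_probs = {
    "Canberra": 0.55,   # correct, and most probable
    "Sydney": 0.40,     # fluent and plausible, but wrong
    "Melbourne": 0.05,  # also wrong
}

prompt = "The capital of Australia is"
for _ in range(5):
    word = random.choices(
        list(next_word_probs), weights=list(next_word_probs.values())
    )[0]
    print(f"{prompt} {word}.")
```

Run this a few times and the toy model will confidently name Sydney as the capital nearly half the time. That is a hallucination in miniature: a high-probability continuation, not a verified fact.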
Without understanding that, we risk using AI superficially.
I’m not asking us to choose between awe and understanding – just the opposite. By understanding how these tools work, their strengths and limitations, we can better recognize their true potential and guide more inspired usage.
Lexi Trimpe is a Director of Digital + AI at Franco. Connect with her on LinkedIn.