A few months ago, I read an article about a group of academic researchers who published a report with erroneous information. It turns out the group had used generative AI, and the platform manufactured false information about specific companies. The most surprising part of the story, at least for me as I undertake my own doctoral research journey, was that, despite the rigorous research principles governing academic work, no one at any stage of the process appears to have followed standard verification protocols to validate the output before it was finalized for submission.

In an age where AI touches almost every aspect of our lives, the old adage “trust but verify” has never been more relevant. This principle emphasizes the importance of placing confidence in concepts, systems, and other people while rigorously checking the validity of what we are told or believe to be true. As communication professionals, I believe we can play an essential role in this verification process by providing human oversight to AI-generated output and ensuring its appropriateness to human-to-human communications from the perspectives of trust, transparency, and truth.


How Technology Ruled The Day

There’s no denying the massive impact of two paradigm shifts that occurred in the last four years. First, the world as we knew it was forever changed by the pandemic, bringing us ever closer to technology as a way to stay connected to each other. Then, OpenAI made ChatGPT available to the public, and AI became a household topic of discussion, further transforming everything we knew to be true. As a self-avowed techie, I didn’t find AI, machine learning, and natural language processing to be new concepts. They have existed in some shape or form for almost 50 years, occupying a necessary place in the background of everyday technologies. However, the sudden democratization of this technology changed our collective perception of what was possible and laid bare our complicated relationship with innovation.

The 2024 Edelman Trust Barometer reveals an interesting paradox about the acceleration of innovation and concerns about societal instability as a result. While businesses remain more trusted than other institutions, the same doesn’t hold true for trusting companies to introduce and integrate innovative technologies in a manner that is safe, accessible, and beneficial to society. In the 2015 edition of the report, a high degree of trust was linked to the successful launch and acceptance of innovation. Nine years later, respondents are no longer as accepting or optimistic that certain technological innovations, notably AI and gene-based medicine, will benefit them or society.



The Conundrum Called AI

While artificial intelligence systems come in many types and serve many uses, when people speak about distrust in AI today, they usually mean generative AI. Most of us are not concerned about using AI-powered navigation apps on our phones, asking Siri or Alexa for help, or letting Spotify’s AI algorithm optimize our listening experience. But allowing generative AI systems to create—a distinctly human trait until now—without guardrails is beyond our comfort zone. In my professional capacity, I can’t help but wonder if AI has a brand problem compounded by a lack of communication. The value proposition can be tweaked endlessly, but the opaqueness exhibited by those responsible for the development and deployment of AI is the real culprit. And when we can’t reasonably understand what’s happening (asking ‘why’ until we’re satisfied is another distinctly human characteristic), our Spidey senses start tingling, and our inner skeptic arches one well-shaped eyebrow as if to say, “Oh, really?” (Cue the wide-eyed Nicolas Cage meme).

At its core, “trust but verify” represents the duality of wanting to have confidence in AI systems’ abilities to execute complex and time-intensive tasks quickly while requiring rigorous scrutiny of their operations and outcomes. By balancing these complementary needs, we can have confidence that we understand the steps that led to the results and ensure transparency—which is a cornerstone of trust—when communicating how we know the outcome to be true. It’s similar to showing your math professor how you arrived at the answer. Transparency requires that AI processes and decision-making be open to examination and that there is clear evidence of robust protection measures, reinforcing trust through confidence in security practices. 

Likewise, these systems must demonstrate reliability across various contexts and over time. Even among people, trust takes time to build. It’s done through practicing consistency and authenticity in words and actions. In any technology context, reliability also means that the system performs as intended, without unexpected failures. AI hallucinations and fabricated data are unacceptable failures, which is why I believe it’s too early for generative AI to be fully trusted. Companies and developers working on next-generation AI will need to prove that their creations are safe, accessible, inclusive, and beneficial to society.


Navigating a New World For Comms Pros

False positives exist in medicine, too; that’s why doctors use other forms of diagnostics to verify results. In my opinion, communication professionals need to play a comparable role in probing the mechanisms in place for monitoring and maintaining AI system performance and independently validating the findings. To do so, we must first understand the inner workings of these technologies so we can intelligently and confidently convey to our audiences the ‘what,’ ‘how,’ and ‘why’ behind AI-generated results and add a human perspective. 

Despite its capacity to make sense of high volumes of unstructured data or create myriad content types, generative AI cannot comprehend why the information could be relevant to humans and what will make us react. We’ve all heard the saying, “People will surprise you.” And we’ve all seen it in action in our personal and professional lives. Unpredictability is yet another human attribute. So, while it’s acceptable for communicators to use generative AI in content creation and free up time for more strategic endeavors, it’s not appropriate to publish AI-generated responses without first considering how these will be received by your audience and adapting the message for human consumption.

Another area where the communication function will need to take the lead is in educating their organizations and stakeholders on the ethical implications of AI. Whether it’s Congressional hearings in the United States or legislative acts in other parts of the world, lawmakers are grappling with rules to control a technological innovation that’s constantly evolving. Talks of banning generative AI can be likened to the Luddites protesting against the inevitability of machinery and the Industrial Revolution. However, concerns about privacy, bias (conscious or unconscious), and accountability are valid and must be addressed in the public discourse on AI. In fact, a report on the state of AI and the C-suite found that 54% of organizational leaders express worry over the regulation of AI.

In the absence of legislative guidance, communication professionals can work with their peers inside the organization to develop frameworks for the ethical use of AI, communicate these practices to all stakeholders, and perform regular audits to ensure these are upheld and also updated as the technology evolves. Recognizing the need, IABC has developed a set of guiding principles on the ethical use of generative AI technologies by communication professionals. These principles complement the IABC Code of Ethics, which guides ethical and responsible communication practices and can be adopted by communicators for use in their organizations.



Communicating About AI

As AI systems become increasingly integral to our work, a critical question arises: How do we build and maintain trust in the age of AI? For communication professionals, understanding the nuances and complexities of AI becomes paramount when communicating its potential use cases and pitfalls to a diverse audience. I have always believed that while technologies and trends change, effective communication rests on tried-and-tested principles, and communication strategies for AI are no exception.

Here are five proven practices that communication professionals can use to communicate about AI.

Clarify Complex Concepts

Use clear, concise language and practical examples to break down AI’s complexity and explain how it works, including its limitations and the processes in place to independently verify what’s produced.

Share Proof Points

Highlight use cases, reliability measures, and audit results to demonstrate the ethical use of AI and the diligent checks in place to ensure its integrity.

Foster Open Dialogue

Encourage and facilitate open discussions about AI, inviting questions, concerns, and feedback. This helps demystify AI and reaffirms a commitment to transparency and accountability.


Emphasize Continuous Verification

Communicate the ongoing efforts to monitor and evaluate the accuracy of AI-generated content and systems. Make sure people understand it’s a continuous process and that feedback helps improve it. 

Advocate for Standards

Underscore the importance of standards, guidelines, and ethical frameworks to govern the proper use of AI in your organization, and take part in developing them.

By adopting these practices, communication professionals become critical players in guiding their organizations through the adoption of AI and implementing a structured communication approach that continuously ensures transparency, trust, and independent verification of AI-generated results. Adopting a “trust but verify” approach also positions us as mediators between AI technologies and their audiences, taking on multiple roles as conveyors of information, educators, ethical guides, and facilitators of trust. Now, that’s being a strategic communication professional.


About the Author

Maliha Aqeel, PMP, SCMP, MC, is an award-winning brand marketing and strategic communication professional with a proven track record of delivering results for global brands in the financial and professional services industry, such as APCO Worldwide, EY, and Fix Network World. She is the Founder & CEO of The Ideas Collective Inc., an independent strategy consulting firm in Toronto, Canada.

She has worked in corporate and agency roles for over 20 years, connecting the dots between strategy and marketing to drive business objectives. She has extensive experience advising C-suite and senior executives and co-creating solutions with her clients to bring their brand purpose to life through impactful integrated marketing and communication programs.

A well-connected global professional and thought leader, Maliha is Chair of IABC’s international board of directors. She has won several international awards for brand development, content marketing, publications, and internal communications, including two IABC Gold Quill “Best of the Best” awards for employee engagement and COVID-19 crisis response management and communications. In 2021, she was recognized as IABC Canada’s 61st Master Communicator, a lifetime achievement award for contributions to organizational communication in Canada.

She holds a Master of Business in Strategic Marketing from the University of Wollongong, Australia, and is pursuing a Doctorate in Business Administration at Royal Roads University, Canada. Her research explores the link between social purpose and customer engagement strategies in the hotel industry.

