In the dynamic landscape of communications, the integration of artificial intelligence (AI) opens new possibilities while also presenting challenges that demand thoughtful consideration. As businesses harness AI to boost efficiency, it's imperative to address not only its proven positive impact on productivity, but also lingering concerns about job displacement, ethical implications, and biases ingrained in AI algorithms. Misinformation deserves particular attention: as we head into an election year, an influx of AI-generated misinformation threatens to shake audiences' confidence.
AI saw widespread adoption in 2023, with businesses across a wide range of industries embracing the technology. Once exclusively the domain of tech-centric industries, generative AI (Gen AI) products are now in rising demand and producing impressive growth, with Bloomberg anticipating that Gen AI will become a $1.3 trillion industry by 2032. A major factor has been the increased availability of Gen AI engines, which have made it easier for companies to experiment with the technology and identify its potential uses.
One industry that has found considerable value in Gen AI is communications. Managing strategic communications has grown increasingly complex, with new challenges constantly emerging. With the help of AI technology, communications teams can act with greater efficiency, supported by a varied selection of digital tools. From streamlining content ideation to collecting social insights, AI can offer meaningful support while enabling communications professionals to focus on creative and tactical efforts.
While there have been concerns about AI replacing human creativity in communications and public relations, AI cannot be evocative and creative in the way people are, meaning human communications professionals are as important as ever. Even as the technology improves, AI will serve best as a unique and useful tool rather than as a replacement for human creativity and knowledge.
Despite its many benefits, AI comes with several shortcomings. It remains prone to generating false or misleading information, whether through technical failures or biases baked into its algorithms. Equally problematic, bad actors can easily use AI to purposefully generate and spread misinformation for political ends. As this election year gains momentum, the political ramifications of AI implementation have become increasingly pertinent. Governments have been quick to roll out AI technology to expand and improve public services, but few have fully reckoned with how those platforms might fail certain groups. AI platforms can have flaws with major implications for class, gender, and race, and without active precautions against those flaws, AI ends up doing more harm than good, undermining voting access, information, and trust.
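What those active precautions can look like in practice is often straightforward: before an AI system touches a public-facing service, audit whether its error rates differ across the groups it serves. The sketch below is a minimal illustration of that idea; the records and group labels are hypothetical placeholders, not data from any real deployment.

```python
# Minimal sketch of a pre-deployment bias audit: compare a system's error
# rates across demographic groups. All records below are hypothetical.
from collections import defaultdict

# Each record: (group label, whether the model's decision was correct)
audit_log = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in audit_log:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%}")
# A large gap between groups (here roughly 33% vs. 67%) is a signal to
# investigate before the system affects access to public services.
```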
There are over 160 million registered voters in the U.S., and given that the 2020 presidential election saw record voter turnout, this election cycle will be a fierce one. As such, AI-driven misinformation efforts will continue to ramp up, and we've already seen hints of what this could look like: in 2023, a fake story about a bombing at the Pentagon, accompanied by an AI-generated image, went viral, causing a public uproar and a dip in the stock market. Governor Ron DeSantis also recently aired an attack ad featuring AI-generated audio that replicated Donald Trump's voice, showing that reality-blurring content is reaching the political mainstream.
Misuse of AI only fuels public mistrust, not just in the political sphere but in broader communications, feeding negative perceptions of the technology. Tech companies must reckon with AI's political impact; a failure to do so will hurt them in the future. Overall, businesses must navigate this landscape with a dual focus: leveraging the benefits of AI while ensuring transparent practices that alleviate fears and counter misinformation.
To balance AI's opportunities and challenges, companies must adopt proactive measures, such as establishing clear policies and ethical guidelines governing AI usage in their communication strategies. Businesses should also actively work toward inclusivity and diversity in AI development to mitigate algorithmic biases and foster responsible, equitable deployment. Embracing open dialogue on issues such as privacy, data security, and algorithmic accountability is likewise vital to building and maintaining trust with stakeholders. Transparency is key: people are more willing to trust a piece of AI-generated content when it is clear where it was generated and what happened to it along the way.
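To make that kind of transparency concrete, consider a minimal provenance record attached to a piece of content: it captures which system generated the text and a hash of the content at each step, so a reader can verify "what happened to it along the way." The schema below is a hypothetical sketch, loosely inspired by emerging provenance standards such as C2PA, not any specific product's format.

```python
# Hypothetical content-provenance manifest: records origin and edit history
# of AI-generated text so its journey can be audited later.
import hashlib
import json
from datetime import datetime, timezone

def content_hash(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def new_manifest(text: str, generator: str) -> dict:
    """Start a provenance record at generation time."""
    return {
        "generator": generator,  # e.g., a model name and version
        "created_at": datetime.now(timezone.utc).isoformat(),
        "history": [{"action": "generated", "hash": content_hash(text)}],
    }

def record_step(manifest: dict, text: str, action: str) -> dict:
    """Append an entry each time the content is edited or republished."""
    manifest["history"].append({"action": action, "hash": content_hash(text)})
    return manifest

draft = "Our Q3 update, drafted with AI assistance."
manifest = new_manifest(draft, generator="example-llm-v1")  # hypothetical model id
final = draft + " Reviewed and approved by the communications team."
manifest = record_step(manifest, final, "human_review")
print(json.dumps(manifest, indent=2))
```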
Ironically, the most valuable tool for combating AI-generated misinformation might be AI itself. AI-based semantic analysis tools can examine textual content for cues such as word patterns, syntactic construction, and readability to differentiate computer-generated content from human-produced text. Spread analysis, meanwhile, can identify differences in how fake news travels across social networks compared with genuine news stories. AI can also support human fact-checkers in curtailing the spread of fake news, for example by assessing the credibility of sources in real time.
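As a rough illustration of how such semantic analysis works, the sketch below extracts a few stylometric cues (sentence length, word length, vocabulary variety) and fits a simple classifier on a tiny labeled corpus. Real detectors rely on far richer features and training data; the feature set, sample texts, and model here are illustrative assumptions only.

```python
# Illustrative sketch: stylometric cues plus a simple classifier to flag
# likely machine-generated text. Not a production detector.
import re
from sklearn.linear_model import LogisticRegression

def stylometric_features(text: str) -> list[float]:
    """Compute simple cues: sentence length, word length, vocabulary variety."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    if not words or not sentences:
        return [0.0, 0.0, 0.0]
    avg_sentence_len = len(words) / len(sentences)    # words per sentence
    avg_word_len = sum(len(w) for w in words) / len(words)
    type_token_ratio = len(set(words)) / len(words)   # vocabulary variety
    return [avg_sentence_len, avg_word_len, type_token_ratio]

# Toy labeled corpus: 0 = human-written, 1 = machine-generated (hypothetical).
human_texts = [
    "Honestly? The rally was chaos. Folks shouting, signs everywhere.",
    "I read the brief twice and still disagree. Call me when you land.",
]
machine_texts = [
    "The event proceeded according to schedule and attendees expressed a variety of perspectives on the proceedings.",
    "It is important to note that the document contains several sections that address the relevant considerations in detail.",
]
X = [stylometric_features(t) for t in human_texts + machine_texts]
y = [0] * len(human_texts) + [1] * len(machine_texts)

clf = LogisticRegression().fit(X, y)
sample = "It is important to note that the proposal addresses several key considerations."
print(clf.predict_proba([stylometric_features(sample)])[0][1])  # P(machine-generated)
```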
AI will only see wider implementation in the future, so we must tackle the many risk factors that come with it. Its potential for bias could undermine people's voting rights, while bad actors could use the technology to spread misinformation and manipulate political narratives. To maintain stakeholder trust, businesses need a nuanced approach to how they use the technology, acknowledging both its promise and its pitfalls in the communications sphere. By fostering innovation while responsibly addressing potential challenges, organizations can maximize AI's benefits while mitigating its misuse.