It’s fair to say that many people in many industries – comms professionals included – feel overwhelmed by the speed at which AI is evolving. The enormous attention ChatGPT generated on its release in late 2022 – it likely reached 100 million users by January 2023, the fastest adoption of any app ever, according to UBS – underscores both the public’s fascination with the technology’s possibilities and its apprehension about the risks.
It is also true that there’s a lot of misinformation about AI, some of which was addressed in a recent webinar, Uncovering the Black Box of AI: How communications leaders can take advantage of AI and avoid pitfalls. The more people understand exactly what AI is, the more they can maximize the benefits and minimize the risks. For comms leaders, this moment is an opportunity to pioneer the use of AI in business processes.
So what, exactly, is AI? To answer that, we first need to ask what human intelligence is. Longstanding philosophical debates aside, it is generally understood as the ability to learn, reason, and solve problems. It exists on a spectrum, of course; it is not a binary, all-or-nothing quality.
So, what does that make AI? A fitting analogy is that of a “cognitive forklift.” It’s a tool that augments human creativity and ingenuity by helping with cognitively heavy tasks. To understand what this means for comms work, it’s important to first recognize the two basic kinds of AI: generative and discriminative. Both have different properties and serve different ends.
Generative AI (which for now gets most of the attention) learns patterns across large bodies of data and produces new content based on what it has learned.
The second type, discriminative AI, distinguishes between categories of data – classifying, scoring, and labeling what already exists – which makes it well suited to decision-making.
For comms, both types of AI are useful in different ways. Generative AI is good at proofreading, writing first drafts, and summarizing data. Discriminative AI is good at analyzing sentiment in stories, making distinctions between superficially similar things, and accelerating strategic decisions. Because it classifies existing content rather than generating new text, discriminative AI cannot “hallucinate” or make things up, making it a trustworthy aid for important work.
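To make the discriminative side of that distinction concrete, here is a deliberately tiny sketch of a sentiment classifier: it only assigns a label to existing text, so there is nothing for it to invent. The word lists are illustrative assumptions, not Signal AI’s actual model, which would be trained on real data rather than hand-written keywords.

```python
# Hypothetical sketch of a discriminative task: labeling a story's
# sentiment. A production system would use a trained model; these
# keyword sets are invented for illustration only.

POSITIVE = {"growth", "award", "praised", "record", "strong"}
NEGATIVE = {"lawsuit", "decline", "criticized", "breach", "weak"}

def sentiment(story: str) -> str:
    """Classify a story as positive/negative/neutral by keyword counts."""
    words = set(story.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

For example, `sentiment("Company praised for record growth")` returns `"positive"`. Whatever the input, the output is always one of three fixed labels – the classifier discriminates between categories; it never generates new claims.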
The opportunity for scaling creative work through AI is obviously huge, but needs to be weighed against the risks. The possibility of generative AI applications hallucinating and producing falsehoods is real and has already led to real-world lawsuits. Signal AI’s legacy in discriminative AI offers a unique advantage as we venture into the generative space, with trusted offerings built atop our premium, diverse and licensed content. By relying exclusively on trusted data sources, we can guarantee far more reliable results.
Getting the most out of any new technology requires understanding its strengths and weaknesses, and for AI, establishing best practices early on is critical to success. Here are some we recommend:
A brief Q&A concluded the webinar. The following is an edited summary:
Will AI, generative or discriminative, replace aspects of comms jobs, if not whole categories of jobs?
We believe AI will augment human skills and intelligence. Properly used, it will help scale creativity and help people become more impactful in their organizations, not replace them.
How should comm skills change to make the most of AI?
Creativity, intuition, and storytelling have always been critical in this business, and will remain so. But going forward, people need to immerse themselves in data, using it both to explain their thinking and to foresee potential problems. C-suite executives have tended to delegate this kind of research to junior staff; in the future, they’ll need to be more data-literate themselves.
How are you thinking about disinformation risk?
This is a big issue that will only become more pronounced and significant. It will be everybody’s problem. The careless use of generative AI, just like the careless use of any technology, carries risks. Currently, Signal AI uses generative AI only as a summarization mechanism to produce concise descriptions and answers based solely on facts extracted from our high-quality and high-trust content. Signal AI does not use generative AI to generate information from scratch, which is exactly where the risk of hallucination comes from. While there’s no silver bullet, trusted content sources will become increasingly critical to constraining the output of generative AI.
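The idea of constraining output to trusted content can be illustrated with a purely extractive approach: every sentence the summarizer returns appears verbatim in the source documents, so nothing can be fabricated. This is a simplified sketch under my own assumptions, not Signal AI’s actual pipeline, which the answer above describes only at a high level.

```python
# Hypothetical sketch: an extractive summary that can only return
# sentences verbatim from trusted documents, so it cannot invent facts.
# (A real grounded system would be far more sophisticated.)

def extractive_summary(question: str, trusted_docs: list[str], k: int = 2) -> list[str]:
    """Rank source sentences by word overlap with the question; return top k verbatim."""
    terms = set(question.lower().split())
    sentences = [s.strip() for doc in trusted_docs for s in doc.split(".") if s.strip()]
    ranked = sorted(
        sentences,
        key=lambda s: len(terms & set(s.lower().split())),
        reverse=True,
    )
    return ranked[:k]
```

Because the output is drawn only from the input sentences, the worst failure mode is an irrelevant answer, not an invented one – which is the property the answer above is describing.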
What’s the biggest misperception about AI in comms?
The biggest misconception is about what AI actually is. In comms, many people think AI and ChatGPT are one and the same. ChatGPT, and large language models (LLMs) in general, are not “know-it-all oracles.” LLMs are language models, which means they’ve learned how to put words together in a sequence that is syntactically, grammatically, and semantically plausible. They are not knowledge models, and have no way to associate a word with its actual meaning. For comparison, when children learn the word “dog” for the first time, they associate it with the image of a dog, the sound of barking, and other contextual information. An LLM, by contrast, is only exposed to the statistical usage of “dog” across many sentences. For an LLM, the “meaning” of a word depends on the “meaning” of other words – a circular dependency that never bottoms out in real-world experience. That’s why human judgment will remain critical, and why generative AI outputs should be treated as first drafts for us to adapt, correct, and adjust.
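The “statistical usage” point above can be made concrete with a toy sketch of the distributional view of meaning: all a model can observe about “dog” is which words appear near it. The tiny corpus below is invented for illustration; real LLMs learn dense vectors from billions of sentences, but the underlying limitation is the same.

```python
# Toy illustration of the distributional view: the only "meaning"
# available for "dog" is the count of words that appear near it.
from collections import Counter

corpus = [
    "the dog barked at the mailman",
    "a dog chased the cat",
    "the cat ignored the dog",
]

def context_counts(word: str, sentences: list[str], window: int = 2) -> Counter:
    """Count words appearing within `window` tokens of `word`."""
    counts = Counter()
    for s in sentences:
        tokens = s.split()
        for i, t in enumerate(tokens):
            if t == word:
                lo = max(0, i - window)
                hi = min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[tokens[j]] += 1
    return counts
```

Here `context_counts("dog", corpus)` yields neighbors like “barked” and “chased” – words defined only in terms of other words, with no link to any actual dog. That is the circularity the answer above describes.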
What are some specific metrics or indicators that AI can help track and analyze to measure the success of a PR campaign?
Where AI can add the greatest value is not necessarily in tracking a PR campaign but in recommending the next best action to optimize the outcome. AI can be used to determine which channels work best for different audiences, what time of day (or day of the week, month, etc.) to push an announcement to generate the most attention and engagement, what tone of voice to use, and what kind of creative assets to embed.
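A minimal sketch of that channel-recommendation idea: rank channels for a given audience by historical engagement rate and recommend the best one. The figures below are fabricated for illustration; a real next-best-action system would learn from actual campaign history and weigh many more signals than a single rate.

```python
# Hypothetical sketch: recommend the channel with the best historical
# engagement rate for an audience. All numbers are invented examples.

history = {
    # (audience, channel): (impressions, engagements)
    ("executives", "linkedin"): (10_000, 600),
    ("executives", "twitter"):  (12_000, 240),
    ("consumers",  "twitter"):  (50_000, 2_500),
    ("consumers",  "linkedin"): (8_000, 160),
}

def best_channel(audience: str) -> str:
    """Return the channel with the highest engagement rate for an audience."""
    rates = {
        channel: engagements / impressions
        for (aud, channel), (impressions, engagements) in history.items()
        if aud == audience
    }
    return max(rates, key=rates.get)
```

With this toy data, `best_channel("executives")` recommends `"linkedin"` (6% engagement vs. 2% on Twitter) – the same “next best action” logic, reduced to a single metric.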