Note: the following article recently appeared in Catalyst, the official publication of the International Association of Business Communicators (IABC). I volunteer for both
IABC DC Metro and IABCLA, and I wanted to share the content.
By: Adam Fuss, SCMP, MITI
On 25 January, Shel Holtz’s Catalyst article, “Generative Artificial Intelligence for Communicators,” served as a wake-up call of sorts. Widely respected in the IABC community and broader communication profession, Shel outlined compelling and ethically sound use cases for how generative artificial intelligence (AI) technology can fit within the professional communicator’s toolkit.
Generative AI is all the rage now, and for good reason given its potential to transform content creation and other areas of life. Although not perfect by any stretch — no technology is — generative AI is here to stay and is almost certain to improve rapidly. Rather than debate whether it should play a role in our work, as professional communicators we would be far better served by debating how we should use it.
Not Entirely New
Generative AI tools like ChatGPT are relatively new, but the idea behind them is not. The underlying technology has been used in content creation for longer than people realize.
Machine translation, for example, has been particularly successful. Like generative AI, it uses neural network models to do the labor-intensive work of putting metaphorical pen to paper. It also comes with worries of its own regarding things like data security, privacy, accuracy, copyright, intellectual property and professional displacement, to name just a few.
Less frequently discussed are the potential impacts of these technologies on human creativity, critical thinking and professional development. With generative AI, we’re still very much in the realm of hypotheticals, but my own experience with machine translation over the years suggests that the path from possible to probable to definite may be very short indeed.
The time to broaden and deepen the ethical discussion of these technologies is now.
"What’s Taking So Long?"
I first encountered Google Translate in 2007, shortly after its launch. At the time I was working as a full-time translator, editor and copywriter, playing my small part in helping Russian and western companies — or, rather, the people who worked for them — communicate better and engage more deeply with one another and their respective markets.
Curious about Google’s new technology, a colleague and I gave it a try using a few texts we were writing for an oil industry client. To our surprise, the results were quite good for both the English and Russian samples. Sure, Google mistranslated some numbers and didn’t do so well with the formidable double negation in Russian, but the output was something that could be cleaned up by professionals like us.
For the next few days we used Google Translate for several full texts. While initially thrilled at having our jobs seemingly done for us, we quickly realized we were spending more time fixing Google’s translations than we would have spent doing the job properly straight away. When our boss began asking about the slowdown in our work, we decided to end our experiment.
The Power to Connect and Inform
Fast forward to recent years, and the world is a very different place. Google Translate has improved by leaps and bounds. Machine translation technology more broadly is used everywhere from social media to travel sites, often with people completely unaware. It’s used by translation agencies to speed up the translation of standardized texts (such as contracts and technical documentation), has been greenlit for use in medical settings and has mobilized European support for Ukraine to astonishing effect. It has, in no uncertain terms, changed the world.
"Just Clean Up the Google"
Social media posts and travel reviews are one thing, but what about murkier use cases?
Many years after that first professional encounter with Google Translate, I found myself giving it another go when a client requested I turn around a 2,500-word translation in less than two hours — a tall order, to put it mildly. The document in question was a set of talking points to prep a Russian executive for an upcoming television interview. Under ideal circumstances, at least twice as much time would have been required to translate it properly.
“Just clean up the Google” was the response I received when protesting the lack of time.
I hesitated for a number of reasons. I feared turning in substandard quality that would reflect poorly on me as a professional. I felt like I was cheating. I harbored concerns about potentially giving sensitive intellectual property to Google in the process. This felt like far too serious a project to let a machine take the lead.
But since there was no time for debate, I proceeded, careful to remove words and names I knew were sensitive before entering the text in Google. To say that I was blown away by the results would be an understatement.
Technically accurate and grammatically correct, the English version — available instantaneously — gave me an excellent starting point. Some sentences required restructuring, a few minor translation errors had to be corrected, but a number of sentences needed nothing at all. Several times I had to remind myself that I wasn’t working with human-generated output; it was that good.
Turning in a cleaned-up Google translation felt a bit dirty at first, but I rationalized it by telling myself that the document was never going to be published and that my client’s best interests demanded moving as quickly as possible.
But How Good Is Machine Translation at Connecting, Really?
Going back to common use cases — social media, travel reviews, medicine — those surely mean greater access to content that simply wouldn't be possible without machine translation. Exactly no one, after all, is going to invest in translating comments left by the masses on Trip Advisor or tweets about European security policy. The talking points, though slightly more problematic in the sense that this executive wasn’t handed the very best English translation possible, ultimately represented another use case where the good outweighed the bad. He ended up giving a really good interview, after all.
Had I been using Google Translate to translate a published work, however, I might have crossed a line. The less-than-perfect English in many Google translations, though technically accurate and grammatically sound, is often stylistically off even when cleaned up. But it wasn’t until well after I had submitted the job that I began to fully appreciate this.
Consider the following two sentence formulations, which mimic some of the language I was working with:
1) According to the results of the latest survey of customers that was conducted by our company….
2) A recent customer survey run by our company showed that…
There’s nothing wrong with the first sentence. It’s accurate and faithfully reflects what would appear in numerous languages where passive voice is acceptable. It’s how a native English speaker who’s not used to writing professionally might formulate it in a hurry. In all likelihood, something similar made it to my client even after my cleanup. But it’s clearly not as good as the second version, which is closer to what I would have written myself.
And therein lies the problem.
Machine translation is capable of generating such accurate and grammatically sound copy that one can easily get lazy and simply accept sub-optimal output. This state of affairs can just as easily hinder communication as aid it, especially when people are made to sound like the bureaucrats they may well be or, in the worst-case scenario, like the robots they most certainly are not.
I can only wonder how this experience will soon be mirrored with content generated by ChatGPT and similar tools, if it’s not already.
The Tough Questions We Must Ask
Shel Holtz rightly points to the IABC Code of Ethics as a starting point for how professional communicators should approach the use of generative AI tools. But as these tools find ever more real-world use cases, IABC’s code (and others like it) is not well equipped to answer the bigger questions that we would do well to discuss sooner rather than later. Here are just a few, in no particular order of importance:
1) How can people be expected to devote time to their craft (writing, translation, photography, graphic design, etc.) when “good enough” output can be generated instantaneously and free of charge by AI? Or do we expect them to?
2) How can we prevent or mitigate talent displacement resulting from AI use in our profession? Or do we?
3) Will people develop a preference for the output of machines? Is that a problem? How do we guard against it?
4) What does increasingly good machine output mean for training future professionals? Do machines become gatekeepers? Do machines train people? Does machine output provide a standard against which students and early career professionals are measured?
5) Is there a point to funding the study of languages, translation, creative writing, journalism, or other creative skills like videography, graphic design and photography?
6) Will people forget how to research topics and lose their ability to think critically about sources? What does fact-checking look like?
7) What will we do when AI tools introduce bias, even though their output might be factually correct? Will we be skilled enough to spot it? Will we care enough?
8) What does attribution look like? Should communicators or publications be required to disclose the use of generative AI? Should there be a threshold or percentage of how much AI-generated content is acceptable in a given communication?
9) Should there be an expectation that humans always remain involved when generative AI is used?
These are not easy questions, but organizations like IABC are well positioned to address them. Several years ago, the Chartered Institute of Public Relations (CIPR) in the UK and the Canadian Public Relations Society jointly published the “Ethics Guide to Artificial Intelligence in PR.” Now is the perfect time for IABC to go a step further and create a robust position statement and/or expand our Code of Ethics to specifically cover issues related to generative AI use in the same way that the American Translators Association did several years ago with machine translation.
Business communicators are often criticized for reacting to events as they happen; such is often the nature of our work. But with generative AI, we would do well to be strategic and forward-looking in addressing both the potential promise and peril that these tools hold. It’s an effort we owe our organizations, our profession and ourselves.
Photo: Alex Knight