Andy Shepherd, Envision Pharma Group; Matt Lewis, MPA, Inizio Medical; and Chirag Jay Patel, Cactus Communications

All views and opinions expressed in this article are those of the authors and individuals quoted and do not necessarily reflect the opinions of their respective employers/organizations or of ISMPP.

LinkedIn: Andy Shepherd, Matt Lewis, Chirag Jay Patel

Email your questions and comments on this article to TheMAP@ismpp.org.


With the initial curiosity around ChatGPT and generative artificial intelligence (AI) still running high at the time of press for this article, we set out to explore this technology further and its implications for SciComms and publishing. The ISMPP Artificial Intelligence working group has gathered insights from relevant experts in technology and publishing to present more nuanced perspectives on the strengths, weaknesses, challenges, and opportunities for generative AI in the medical publications and communications industry, now and over the next 5 years. By providing insights on where generative AI may be headed, we hope to inform important decisions about how best to use or invest in AI technology in the coming years.

A Brief History of Generative AI

Interest in generative AI, that is, AI techniques that involve creating or generating new data (eg, images, text, or music) using complex algorithms and machine learning models, has exploded over the last year, but generative AI itself is not a new concept. The idea can be traced back to Alan Turing’s “Imitation Game,” proposed in 1950 to test whether a machine could exhibit human-like intelligence, also known as the Turing test.1 As early as the 1960s, long before the advent of the internet, generative AI was introduced in the form of chatbots, but these were limited and could only do what they were specifically programmed to do.2 By 2014, generative adversarial networks had been developed, greatly improving the quality of images, audio, and video that could be generated.

Most recently, large companies including Amazon, Baidu, Alphabet, Meta, and Microsoft, as well as many smaller startups, have significantly increased their investment in generative AI. This technology is being used across different fields, including art, music, gaming, and fashion, and is expected to play an important future role in society and the economy at large. In this article, we focus on text-generative AI and ChatGPT in particular, as they are directly relevant for language, scientific narrative, and communication to professional and patient audiences.

Large Language Models and Hallucinations

Large language models (LLMs) are trained on large volumes of text to model statistical relationships between words in a way that allows them to generate human-like responses to prompts. These models may incorporate billions of parameters, encoding a vast web of relationships and effects that no human could possibly trace. Recently, output quality has improved rapidly as LLMs have grown in size and been trained on the sheer volume of text available on the internet. However, it is vital to remember these platforms are only generating a human-like response, with no actual understanding of the output, and for this reason they have been referred to as stochastic parrots. One important consequence is that an LLM response can present “made up” information as if it were fact—a phenomenon known as “hallucinations.” As recently as the start of this year, most AI chat platforms would fail when presented with basic mathematical or logic questions, providing something that looked like fact but was not accurate. With the rapid pace of AI development, logic and mathematical modules have already been added to some platforms to address these obvious issues, yet the underlying risk of hallucinations remains.
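To make “modeling statistical relationships between words” concrete, here is a deliberately tiny sketch (ours, not drawn from any LLM’s actual code) of next-word prediction using simple bigram counts. Real LLMs use neural networks with billions of parameters rather than a lookup table, but the core behavior is the same: pick a plausible next word given what came before, with no check on whether the result is true.

```python
import random
from collections import Counter, defaultdict

# A toy "training corpus"; real LLMs train on billions of documents.
corpus = (
    "the trial met its primary endpoint . "
    "the trial was stopped early . "
    "the drug was well tolerated ."
).split()

# Count how often each word follows each other word (bigram statistics).
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        counts = next_word_counts.get(words[-1])
        if not counts:
            break  # no observed continuation for this word
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# The output is statistically plausible, not verified fact; this is
# the mechanism behind "hallucinations," in miniature.
print(generate("the"))
```

Even this toy model can splice fragments from different sentences into a fluent but unsupported claim (eg, “the drug was stopped early”), which is exactly how plausible-sounding fabrication arises at scale.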

What is ChatGPT?

ChatGPT (released November 2022) is an AI chatbot built on an LLM: originally the generative pretrained transformer 3.5 (GPT-3.5, released March 2022), with current versions based on GPT-4 (released March 2023). It can generate human-like responses to written prompts and engage in conversations on a wide range of topics. At its core, ChatGPT is an AI technology that leverages advanced math and statistics to estimate the most probable next word after being prompted. It can be used for marketing, sales, creating content, language editing, education, conducting literature reviews, and many other time-consuming tasks. Physicians are already using it to draft responses to prior authorization denials, prepare patient education materials, and discuss treatment plans with caregivers. However, there are also concerns about this technology, including its potential to fabricate facts and references, to spread disinformation or manipulate public opinion, to be used in cyberattacks, or to automate tasks in a way that leads to job losses. Despite these concerns, the development of ChatGPT is a major step forward in the field of AI and natural language processing.
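For readers who want to experiment beyond the chat interface, the same models can be called programmatically. The following is a minimal sketch using the openai Python library as it existed at the time of writing; the model name, fields, and API surface have changed before and may change again, and the prompt shown is purely illustrative.

```python
import openai

# An API key from the OpenAI platform is required; never hard-code
# real keys in shared scripts.
openai.api_key = "YOUR_API_KEY"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a medical writing assistant."},
        {"role": "user", "content": "Explain in two sentences what a plain language summary is."},
    ],
    temperature=0.7,  # higher values give more varied (and riskier) output
)

# The reply is generated text, not verified fact; any claims or
# references it contains still require human checking.
print(response["choices"][0]["message"]["content"])
```

As with the chat interface, the output reads fluently whether or not it is correct, so a human review step belongs in any workflow built on calls like this.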

Expert Opinions

Beginning in December 2022 and ending in February 2023, prior to the release of GPT4, we asked a panel of experts in technology and publishing the same 3 questions and synthesized their responses. Comments expressed are individual opinions and do not represent the opinions of their affiliated organizations or ISMPP.

Q1: Since ChatGPT was released to the public, there has been a groundswell of interest in generative AI. What was your initial reaction?

What did you think when you first heard about ChatGPT, asked it your first question, and saw what others were doing with it? If you are like us, the authors, you were hooked from the first question and most likely spent hours, if not days, asking it all sorts of things. Maybe you even asked it to rewrite a proposal or an email, write a blog post, create a viral social media post, or asked it, “what do you know about me?”

The experts we asked were no different in their initial reactions. All were surprised by how well it worked, how easy it was to use, and that it could provide answers in a human-like way. Caroline Halford (Development Director, Springer Healthcare) said, “In terms of ChatGPT, I was initially excited… Imagine a robot that could automatically create content and plain language summaries… The time to publication would be so quick.” Caroline was not the only one to see the possibilities with this new tool. David McMinn (Managing Director, Lay Summaries Ltd) said, “[I] saw immediate potential application for the work that I’m doing, as well as in med comms more broadly.”

I was caught between thinking ‘this is an amazing tool with so much potential’ and ‘oh my, this tool can wreak more havoc across news, science, and truth.’ – Susan Willner

At the same time, others were rightly concerned with how the technology could be misused. Adam Day (Data Scientist, SAGE Publishing) shared, “my initial reaction to every generative AI announcement of recent months has been: this is obviously going to be used for research fraud. ChatGPT is not perfect; it has a tendency to ‘hallucinate,’ basically make things up to fit your prompt and make you happy. [Does this] remind you of your kids?” Prathik Roy (Product Director, Springer Nature) warned, “After all, trust and reproducibility are the cornerstone of science, and generative AI can’t yet make the distinction between validated and nonvalidated data sets.” Science needs to be seen as reliable and reproducible to thrive and gain trust among policy makers, politicians, and the public. Without trust, the vacuum is filled by conspiracies. As a publisher, Rachel Burley (Chief Publications Officer, American Physical Society) said her initial reaction to ChatGPT was “now we will need AI tools to detect the use of auto-generated text in submissions to our journals,” which she quickly followed by indicating such tools have already been introduced.

Q2: Now that we have all had some time to process and consider its implications on our work, in this space, what are your thoughts?

When we, the authors, posed this question to ChatGPT itself, it replied, “Overall, I believe that ChatGPT has the potential to transform the way we work and communicate, and I am excited to continue being a part of this progress.”

The mass adoption of ChatGPT and real impact won’t happen until we find a way to better verify the output. – Josh Nicholson

On the surface, ChatGPT seems to work remarkably well; however, given time, you may realize the data it provides can be misleading. Joanne Walker (Co-Founder, Becaris Publishing) found this out first-hand when asking it to suggest peer reviewers: “But on closer inspection, of the ‘people’ that were suggested, I could not find any of them on Medline or Google.” For Ian Mulvany (Chief Technology Officer, The BMJ), the hype is justified and these tools “operationalize the ability to program very fuzzy tasks,” but “they will drive the cost of creating plausible content down,” further increasing the risk posed by fake content. These issues pose a big challenge for publishers. Rachel Burley advises that “publishers and journals will need to develop policies and practices to minimize the negative impact.” Several have already established policies on how authors can use AI tools like ChatGPT. Although ChatGPT and other LLMs can potentially help us with work, school, and research, they do have limitations. This is where Sourav Dutta (SVP, Strategy and Corporate Development, Cactus Communications) believes humans can play a critical role: “if a human works alongside ChatGPT, the output can be more accurate and be delivered much quicker.”

Q3: In 5 years’ time, how do you see AI being used in SciComms? What lessons should we take away from our near future that can help us and our colleagues prepare?

Generative AI is moving so quickly that it is hard to guess what SciComms will look like in 5 years. There is certainly a lot of excitement around what it could become, especially around making AI solutions more trustworthy. According to Josh Nicholson (Co-Founder and CEO, scite), “showing how the output was generated in a way that is easy to understand and verify will help make research more accessible.” There also seems to be agreement among experts that ChatGPT and its peers will be instrumental in helping authors prepare manuscripts, generating plain language summaries, personalizing content to the user’s needs, and understanding—even conducting—peer review. The belief is that AI will reduce the time spent on writing, editing, and reviewing, freeing authors, editors, and reviewers to focus on more complex tasks. Susan Willner (Associate Director, Publications, American Society of Nephrology) states, “we can expect them to improve and match the quality of human-written summaries over time” and believes that AI will help reduce publication times.

At the same time, everyone involved in the business of science communication needs to make sure AI is not replacing people but, instead, making what people can do better. Jennifer Regala (Director of Publications, American Urological Association) calls on leaders in scholarly publishing to follow “emerging technologies and associated opportunities and threats so AI is used to enhance and support authors, reviewers, readers, and editors.” Although AI can speed up literature reviews, write articles in multiple tones and perspectives quickly, and analyze vast amounts of data in seconds, it cannot express empathy, understand context, or connect with other people the way a human writer can. So, what can we do to prepare? Adam Day offers 3 suggestions: (1) watch and learn, (2) help make AI tools work well with consistent open standards, and (3) deal with research misconduct—fundamentally challenging the incentives for misconduct.

Once the hype-cycle has settled down, we are likely to realize that LLMs are just another tool, an advanced tool that will complement the workflow of a researcher, editor, or anyone in and outside the scientific domain. – Prathik Roy

Generative AI gives everyone the potential to create new content at a much faster rate than was previously possible. It can improve efficiency across a wide range of processes and even contribute to the development of new products and services. However, these same strengths can also become weaknesses. The volume of content being generated within specialist fields is already vastly more than any individual could keep up with, and many questions will need to be addressed regarding the quality and originality of AI-generated content. Although the threshold of acceptability for these risks will undoubtedly vary by application, there is also the risk of a single “bad actor” using these tools to generate vast amounts of low-quality or even deliberately misleading content. These concerns about misuse will need to be addressed. Yet, even so, many think the potential benefits of generative AI outweigh its risks.

The Age of Augmented Intelligence is Now

ChatGPT is the fastest-adopted technology in history, gaining over 1 million users within the first 5 days of release. If you have not yet used it, the authors, experts, and reviewers of this article humbly ask you to take the time to try it (or one of the other generative LLMs out there). Ask it something you already know the answer to, then ask it something you have always wondered. If you do not like what you are told, tell it so and how to improve its response, and in what style—from formal English to a tabloid news report, as a letter by Edgar Allan Poe, or even as a haiku or limerick. You will know you have hit “pay dirt” when you find yourself arguing with the platform and someone in your life catches you doing so. The initial focus on verification will remain, alongside data privacy, governance, compliance, copyright, bias, and the like; but as these hurdles are addressed, they also point towards a vision for the future: what might be possible with this innovation and its successors? Make no mistake, we have squarely entered the Age of Augmented Intelligence, when Humans Plus AI does not just automate or digitize content, as past technologies did, but truly creates synergies in our work and in our worlds.

Conclusion

This article is the output of the ISMPP AI working group. We (the authors) conceived the article in December 2022 and published it in May 2023—relatively quick by publication standards. Yet, in that time, the generative AI field has already evolved, and there has been a “Cambrian explosion” of startups, apps, and new thinking that is clearing the path forward for Augmented Intelligence. We look forward to joining you on the journey and hearing about your first steps. Please feel free to reach out to the authors with your questions, comments, and experiences with generative AI at TheMAP@ismpp.org or via LinkedIn (Andy Shepherd, Matt Lewis, Chirag Jay Patel).

References

1. Turing AM. Computing machinery and intelligence. Mind. 1950;59(236):433-460.

2. Weizenbaum J. ELIZA—a computer program for the study of natural language communication between man and machine. Commun ACM. 1966;9(1):36-45.

