Simon Fry, Springer Healthcare

Artificial intelligence (AI) is being researched and applied in many industries, and it is showing promise in the clinic. Given this trend, what might be the effect of AI on the medical communications landscape and its expert practitioners in the future?

In late 2018, Gartner – a well-known information technology research company – declared that “Peak AI Hype” was upon us1, but it is not just hype. In many areas of medicine, AI has already demonstrated solid promise and, in some cases, is building a track record of success. In healthcare at least, it seems that AI is here to stay. Indeed, for observers of technology in medicine, the question has become not if, but when, and how deeply, AI will prove that it can consistently deliver outcomes profoundly better than those of a human working without AI-enabled tools. Clearly, diagnostics has the most to gain, but there is now both speculation and solid research suggesting that AI may also excel in treatment decision-making2. Will AI forever remain the Watson of the clinic, or will it eventually mature into its Sherlock Holmes? And if so, what will this mean for those who have built their professional lives documenting the evidence platform on which licensing, formulary, and prescribing decisions are based? For some insight, see this fun tally of human vs. machine achievements in medicine: https://spectrum.ieee.org/static/ai-vs-doctors.

Obstacles for AI in Health Care


In their book Prediction Machines: The Simple Economics of Artificial Intelligence3, Ajay Agrawal and colleagues describe a beautifully simple question that has long served as the economist’s equivalent of Occam’s Razor, bringing supposedly era-defining technologies like AI into focus, free of the clutter of jargon and hype: “What does AI make cheap?” The answer comes simply: AI makes prediction cheap. Agrawal explains that, far from defining a new economics and world order, “cheap prediction” will follow the same pattern as every technology that came before it, from the wheel to the microprocessor: first, society will use “lots of it”; second, society will use cheap prediction to solve myriad problems that may never before have been framed as problems of prediction. So, might cheap prediction demonstrate clear superiority in prescribing treatments, just as it appears to be doing in many areas of diagnostics? Might cheap prediction be capable of scaling into a globally available toolset that radically improves the longevity and well-being of billions of human lives?


In their 2017 article “Artificial Intelligence in Healthcare: Past, Present and Future”2, Jiang et al. concluded that, while groundbreaking progress is being made in machine-based treatment decisions, the single biggest challenge is the lack of financial incentives to address the human, social, financial, and political factors that would supply the data that is the lifeblood of machine learning. This is where Agrawal’s book again provides insight. The authors observe that when something becomes cheap, another consistently demonstrated economic effect is that it “raises the value of complementary activities.” For machine-learning-powered treatment decisions, the capture, storage, and availability of high-quality anonymized data is the single biggest enabling complementary activity. Considering the enormous potential upside of profoundly better patient outcomes, with less waste and less medical error (and fewer of their monetary and human consequences), building the data infrastructure to support the complementary activity of effective real-world medical data farming could be the single most profitable commercial activity in healthcare for the next generation.

AI’s Impact on Medical Publications and Medical Communications

As it relates to medical publications, the beginnings of how AI might influence the development and submission of manuscripts are already in view. For example, Synchrogenix, part of the Certara Group, is demonstrating that, when fed a clinical trial report (CTR), the statistics plan, and the relevant tables and figures, its AI tool can render 80% of a first draft within 24 hours. Other technology currently under wraps at Springer Medizin (the German publishing group of Springer Nature) can take a manuscript and produce an AI-powered “reader guidance system,” which would help time-strapped medical professionals and other readers quickly consume the “critical insights” from Springer Medizin’s clinical “Focus papers.” As this technology gathers momentum, a full CTR might soon be converted to “5 critical insights for decision makers,” displayed in context, with no human writing involved. The core technologies underlying these capabilities are Natural Language Processing (NLP), Natural Language Understanding (NLU), and Natural Language Generation (NLG) ‒ all of which depend heavily on progress across the spectrum of AI technologies.
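To make the NLP/NLG idea more concrete, the sketch below uses the open-source Hugging Face transformers library to distill a short summary from a longer passage. It is emphatically not the Synchrogenix or Springer Medizin tooling, the model choice is arbitrary, and the “trial” text is invented purely for illustration, but it gives a flavor of the class of technology such tools build on.

```python
# A minimal sketch of automated summarization ("critical insights" extraction)
# using the open-source Hugging Face `transformers` library. This is NOT the
# proprietary technology described in the article, and the input text is invented.
from transformers import pipeline

# Load a general-purpose summarization model (BART fine-tuned on CNN/DailyMail).
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

draft_section = (
    "In this hypothetical randomized, double-blind trial, 842 adults with "
    "moderate disease were assigned to drug X or placebo for 24 weeks. The "
    "primary endpoint, change in symptom score, improved by 4.2 points with "
    "drug X versus 1.1 points with placebo. Serious adverse events occurred "
    "in 3.1% and 2.9% of patients, respectively."
)

# Generate a short, deterministic summary of the passage.
result = summarizer(draft_section, max_length=60, min_length=20, do_sample=False)
print(result[0]["summary_text"])
```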

NLP is generally classified as one of the hard problems of AI, and it is easy to imagine why. Two-way communication with humans, spoken or written, in any of our age-old, inefficient, ambiguous, redundant languages that evolved from the guttural utterances of long-gone prehistoric ancestors feels as though it should be alien and impenetrable to AI, whose primary substrate is mathematics. For anyone inclined to agree with that supposition, there is a wake-up call: IBM’s latest AI Debater recently competed in a live debate with a human world-debating champion after just 15 minutes of scanning the web for research; the debate can be viewed online4. There is the occasional jarring mispronunciation of words, suggesting that the system “understands” nothing about its subject. But in the final assessment, the ability of this AI to summarize complex subject matter, detect logical arguments, and present them ‒ not just coherently, but in some cases with subtlety and nuance ‒ to a human audience is very impressive. IBM has published further details of how Project Debater works.


The timeliness of this debate is uncanny. In a 2011 paper, Peter Densen, MD, noted that in 1950 medical knowledge was doubling roughly every 50 years, and he projected that by 2020 it would be doubling every 73 days.5 For the many regulators, payers, and, most significantly, prescribing healthcare professionals tasked with staying apprised of all this evidence and expertise, that represents a simply untenable situation. Tools such as AI Debater, however, are now demonstrating a talent for tireless searching, distillation, curation, summarization, and presentation of empirically based, logically reasoned arguments. Far from being a harbinger of doom, could this technology save society from self-induced cognitive paralysis in the face of such information overload?
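To put that projection into perspective, simple arithmetic applied to the two doubling times quoted above (a back-of-the-envelope comparison, not a figure from the paper) shows how steep the curve becomes:

```latex
% Growth over one year for content with doubling time T (in years): G = 2^{1/T}
\[
\underbrace{2^{1/50} \approx 1.014}_{\text{50-year doubling: about 1.4\% more content per year}}
\qquad \text{vs.} \qquad
\underbrace{2^{365/73} = 2^{5} = 32}_{\text{73-day doubling: about a 32-fold increase per year}}
\]
```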

Ultimately, once AI becomes an important consumer of its outputs, the questions for the medical communications profession have to be “What does what we do actually do?” and “What are our outputs intended to achieve?” The answers can be quickly summarized: regulatory submissions, payer approvals, clinical education and decision support, and the recording of research in the academic literature. Looking broadly across this set of assets (eg, pharmacological profiles, descriptions of the disease landscape, burden-of-disease data, cost-effectiveness data, clinical guidance, narrations of clinical trial adverse events), a critical observation is that all of these communication formats are documents that are already largely templated to benefit the reader and are generally free from subjectivity, surprise, or deviation from structural norms. There is no room or need for human expression here ‒ and, ultimately, these are exactly the types of documents that nascent NLG tools such as Synchrogenix’s are likely to be good at writing.

It is very likely that regulators will already have their own AI tools trained to help sort through masses of data and information; improve the robustness, quality, and utility of technology assessments; and reduce the cost of doing so6. Prescribers, too, will migrate to combined human/machine decision-making in the not-too-distant future. In this world, it would be naive to assume that the medical communications deliverables that served a valued purpose when only humans comprised the decision-making apparatus would remain the optimum way of supporting an entirely new way of evaluating and acting on data and evidence. In the best vision of the future, in which decision-making around the application of pharmacology shifts from single pivotal decision points to continual, dynamic re-evaluation, the need to optimize the information pipeline from data creators to regulatory decision-makers and prescribers becomes an opportunity to evolve medical communications into an enabler of truly symbiotic collaboration between humans and machines, creators and consumers.


However, a roadblock to the implementation of AI is that the computational structure created during the training process is extremely difficult to reduce to human-understandable terms; it is therefore very difficult to explain how the system reached any given decision. Considering the clear opportunities for AI to be used in situations that carry risk to human life (eg, self-driving cars, medicine), AI ethicists (another new job title for the future!) are already demanding that accountability, in the form of explainability, be built into AI systems7,8. This gap between the potential power of technology to heal and the human understanding needed to deal with potential misuse and error is comfortable ground for medical communication professionals. It is easy to see how helping to explain the AI components of medical systems and products could become fertile territory for an entirely new set of communications. Perhaps here we can glimpse the necessary evolution of the medical communications profession: providing critical human-to-human understanding in a rapidly changing area, and an essential bridge to a future in which AI power is fundamental to improving health on a global scale.
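For readers curious about what “explainability” can look like in practice, the minimal sketch below uses scikit-learn’s permutation importance on an entirely synthetic, hypothetical dataset to report which inputs a trained model’s predictions actually depend on. Real clinical systems demand far more rigorous approaches, but the principle of attaching a human-readable account to a black-box prediction is the same.

```python
# A minimal sketch of post-hoc "explainability": train a black-box model on
# synthetic data, then ask which inputs its predictions actually rely on.
# The feature names are hypothetical stand-ins for clinical variables; this is
# an illustration of the idea, not a description of any real medical AI system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["age", "systolic_bp", "hba1c", "egfr"]  # hypothetical inputs

# Synthetic data in which the outcome depends mostly on "hba1c" and "egfr".
X = rng.normal(size=(500, 4))
y = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle each feature and measure how much accuracy drops.
report = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, report.importances_mean),
                          key=lambda item: -item[1]):
    print(f"{name:12s} importance = {score:.3f}")
```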

The Exciting Future Ahead

The theme of this year’s European Meeting of ISMPP, held January 22-23, 2019, in London, UK, was “Fighting Fit for the Future.” Far from a future in which AI blights humankind’s prospects, it is possible to imagine one in which AI technology is an enabler, hopefully outperforming humans and reducing the error rates in diagnostics and treatment that have beset the medical profession for generations. Consider futurist Gerd Leonhard’s observation that, “in the face of widespread automation, the value of things which cannot be automated explodes in value.”9

Readers will note the similarity between Leonhard’s idea and Agrawal’s insight that cheap prediction raises the value of complementary activities. Leonhard’s perspective, however, evokes a deeper, more impactful prediction for medicine. The ability of humans to increase the healing power of medicine through the diligent application of empathy and emotional connection with other humans has long been noted10,11, but it is perhaps still underestimated and undervalued. Could AI’s dominance in empirical decision-making usher in an era in which the real value of the human actors is founded on these softer, but traditionally harder to achieve, objectives: genuine empathy, high-quality patient communication, and trust? And how might the work of communication professionals need to be tuned, not only to promote this need for emotion and empathy, but also to take account of the fundamentally different set of cognitive skills that will define the healthcare professional of the future?

One of ISMPP’s goals is to nurture a professional body founded in ethics, transparency, and a commitment to further science and human health by upholding the highest standards in the quality of medical communication deliverables. Time will show that these objectives are included among the complementary activities that will explode in value as AI raises the bar. For those who embrace this change and are willing to evolve, there is a very exciting future ahead.


References

  1. http://www.cityam.com/265597/gartner-hype-cycle-2017-artificial-intelligence-peak-hype
  2. Jiang F, Jiang Y, Zhi H, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4). doi:10.1136/svn-2017-000101. https://svn.bmj.com/content/2/4/230
  3. Agrawal A, Gans J, Goldfarb A. Prediction Machines: The Simple Economics of Artificial Intelligence. Harvard Business Review Press; 2018.
  4. IBM Project Debater. https://www.youtube.com/watch?v=m3u-1yttrVw
  5. Densen P. Challenges and opportunities facing medical education. Trans Am Clin Climatol Assoc. 2011;122:48-58.
  6. https://www.pharmalex.com/wp-content/uploads/2018/10/TOPRA-Regulatory-Rapporteur-AI-article-Oct18.pdf
  7. https://towardsdatascience.com/explainable-ai-vs-explaining-ai-part-1-d39ea5053347
  8. https://singularityhub.com/2019/03/19/to-be-ethical-ai-must-become-explainable-how-do-we-get-there/
  9. https://youtu.be/JbPUdJorBKY?t=3887 and https://www.futuristgerd.com/2015/08/new-video-on-digital-transformation-automation-and-robotics-gerd-leonhards-keynote-at-kpmg-symposium-in-chicago/
  10. https://accessmedicine.mhmedical.com/content.aspx?bookid=1116&sectionid=62686963
  11. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3529296/

