May 2025
AI in Healthcare:
Today's Reality and Future Projections
Note: Images created with the help of generative AI technology.
The Takeover Continues
Artificial Intelligence (AI) is rapidly transforming the healthcare landscape, introducing both innovative opportunities and complex challenges for medical malpractice insurance. As AI technologies become integral to clinical decision-making, insurers are re-evaluating risk assessment models and policy structures to address emerging liabilities.
AI's role in healthcare spans from enhancing diagnostic accuracy to streamlining administrative processes. However, its integration raises critical questions about accountability and coverage. Traditional malpractice policies, often crafted before the advent of AI, may not adequately address scenarios where AI tools contribute to errors or omissions. This gap necessitates a reassessment of policy language to ensure it encompasses AI-related risks. The question of whether liability falls on the healthcare provider, the AI developer, or both remains a contentious issue, influencing underwriting practices and claims management.
In response to these challenges, insurers are adopting advanced technologies like predictive analytics and machine learning to enhance underwriting accuracy and claims processing efficiency. These tools enable insurers to identify potential risks and fraudulent activities proactively, thereby improving overall risk management.
As AI continues to evolve, the medical malpractice insurance industry must adapt to ensure comprehensive coverage that addresses both traditional and emerging risks. This dynamic landscape underscores the importance of ongoing dialogue between healthcare providers, insurers, and policymakers to navigate the complexities of AI in healthcare.
As you may have already surmised, this introduction was produced by AI. The software appears to have largely borrowed content from our partners at Western Summit.
While we cannot deny that AI technologies have quickly become part of our everyday lives, their limitations are still plainly obvious. AI can recycle existing content, and at an impressive speed and level of accuracy. It cannot, however, generate genuinely new content without producing lackluster results (or simply passing off plagiarism as creative output).
This highlights a fundamental truth about our industry: we are a niche space. Experts in medical malpractice are few and far between. While technology may alter our work (hopefully making our jobs easier over time), there is no replacing the fundamental expertise a seasoned insurance professional can bring to the conversation.

Always On: The Risk and Reward of Ambient Listening AI in Healthcare
Artificial intelligence is positioned to transform healthcare, even if AI’s most alluring and headline-grabbing promises—like predictive diagnostics that anticipate illness before symptoms appear—remain projections for the future. While much of the spotlight is on what is to come, a quiet AI transformation is underway.
A survey conducted by the Medical Group Management Association in the summer of 2024 found that 42% of medical group leaders reported using some form of ambient listening AI.1 These systems, sometimes called “AI scribes,” aim to capture interactions between physicians and patients through discreet microphones installed in examination rooms. The AI then generates suggested notes and billing codes for physicians to review and enter into the medical record.
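Under the hood, most ambient scribe products follow the same basic pipeline: convert the audio to text, have a language model draft a note from the transcript, and propose billing codes for the physician to confirm. The sketch below illustrates that flow only; every function body is a hypothetical stand-in, since each vendor's speech-to-text and drafting components are proprietary.

```python
# Illustrative "AI scribe" pipeline. All function bodies are hypothetical
# stand-ins for a vendor's proprietary components.

def transcribe(audio_path: str) -> str:
    """Speech-to-text stage (stubbed for illustration)."""
    return f"[transcript of {audio_path}]"

def draft_note(transcript: str) -> str:
    """LLM drafting stage: turn the transcript into a suggested clinical note."""
    return f"Suggested note based on: {transcript}"

def suggest_billing_codes(note: str) -> list[str]:
    """Coding stage: propose codes for the physician to confirm or reject."""
    return ["99213"]  # placeholder code, for illustration only

def process_encounter(audio_path: str) -> dict:
    note = draft_note(transcribe(audio_path))
    # Nothing enters the medical record until the physician reviews it.
    return {"note": note, "codes": suggest_billing_codes(note), "status": "pending_review"}
```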
With the promise of increased efficiency in the documentation process, it is easy to see why physicians are drawn to this technology. A 2024 American Medical Informatics Association (AMIA) survey revealed that nearly 75% of healthcare professionals believe the time and effort required for documentation impedes patient care.2 In another study, over 77% of respondents indicated they often work later than desired or take work home due to excessive documentation tasks.3 Less time spent documenting allows physicians to spend more time with patients and potentially helps combat physician burnout.
The benefits of this technology are obvious and potentially transformational for physicians, but AI also brings new risks to consider, not the least of which is patient consent. Recent evidence suggests that most patients are skeptical about the utilization of AI in healthcare. In a 2022 survey by the Pew Research Center, 60% of respondents reported they would feel uncomfortable if their provider relied on AI for their medical care.4 To help alleviate such concerns, physicians should have patients execute a detailed consent form that explains how the ambient listening system works, what is preserved, and the system’s deletion policy.
Physicians should also obtain and document a patient’s verbal consent at every visit before triggering the listening system. This ensures the patient remains comfortable having their protected health information shared with the system. Additionally, this verification is a legal necessity in states that require consent from all parties to a recorded conversation.
Discoverability is another issue physicians should be mindful of when utilizing this technology. By now, it is well known that most, if not all, malpractice litigation includes a discovery request for all relevant digital communications and metadata stored in the electronic medical record. Such data has provided fertile ground for plaintiffs’ attorneys seeking to weave a narrative in favor of their client, and ambient listening AI could be even more problematic.
While physicians are typically only privy to the AI-generated notes, ambient listening systems may also capture and store a raw audio recording of the entire patient encounter. Whether this data is retained depends on the design and configuration of the specific system. As such, physicians and practices must work with vendors to determine whether their chosen system stores complete audio recordings. If a system retains a complete audio recording of the patient-physician interaction, it will undoubtedly be discoverable in litigation.
In a worst-case scenario, there could be an inconsistency between the note in the EMR and the audio recording. Even a seemingly minor inconsistency could undermine the accuracy and reliability of the entire medical record. It may also be used to suggest that the physician failed to review the AI-generated notes adequately. Negative optics of this nature can derail otherwise defensible cases.
Against this backdrop, there is scant justification for retaining these audio logs once the AI-assisted note has been accurately added to the electronic medical record. To address this, practices should implement clearly articulated retention policies for all data captured by the AI system that is not added to the medical record. Beyond preventing the creation of unnecessarily discoverable data, a well-defined retention protocol that is consistently adhered to should ward off allegations of spoliation. Collaboration with vendors will be needed to ensure the chosen retention protocol is in place.
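In code terms, such a retention protocol can be as simple as a single rule applied on a schedule: once the physician has finalized the note, the raw audio becomes purgeable after a fixed window. A minimal sketch, with the window length and record fields assumed for illustration; the actual values belong in the practice's written policy and the vendor's configuration.

```python
from datetime import datetime, timedelta, timezone

# Assumed policy value for illustration; the real window should come from
# the practice's written retention policy, not from code.
RETENTION_WINDOW = timedelta(days=7)

def audio_is_purgeable(note_finalized_at: datetime | None, now: datetime) -> bool:
    """True when the raw recording should be deleted under the policy."""
    if note_finalized_at is None:
        return False  # note not yet reviewed and finalized; keep the audio
    return now - note_finalized_at >= RETENTION_WINDOW

# Example: a note finalized ten days ago is past the window.
finalized = datetime.now(timezone.utc) - timedelta(days=10)
assert audio_is_purgeable(finalized, datetime.now(timezone.utc))
```

The value is less in the code than in its consistency: a rule like this, applied uniformly and documented in writing, is what supports a defense against spoliation allegations.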
AI is already reshaping how healthcare operates, and these are just a few risk issues that need to be considered. As this technology evolves, its integration into everyday medical practice will only deepen. Amid these rapid advancements, physicians must remain vigilant to emerging risks, even as they navigate the often-dazzling promise of innovation.
Risk Recommendations
Physicians utilizing ambient listening AI systems should consider the following risk management steps to reduce legal risks:
1. Develop clear, documented patient consent protocols.
2. Implement policies for retention and destruction of audio data.
3. Train providers on what is being captured and how to communicate accordingly.
4. Engage legal counsel in evaluating how these systems intersect with controlling discovery rules.
1. Medical Group Management Association. Ambient AI Solution Adoption in Medical Practices: A White Paper by NextGen Healthcare. Summer 2024. https://www.mgma.com/getkaiasset/b02169d1-f366-4161-b4d6-551f28aad2c9/NextGen-AmbientAI-Whitepaper-2024-final.pdf
2. American Medical Informatics Association. “AMIA Survey Underscores Impact of Excessive Documentation Burden.” AMIA, June 3, 2024. https://amia.org/news-publications/amia-survey-underscores-impact-excessive-documentation-burden
3. Miliard, Mike. “AMIA Survey: Documentation Burden Is Impacting Patient Care.” Healthcare IT News, June 5, 2024. https://www.healthcareitnews.com/news/amia-documentation-burden-impacting-patient-care
4. Pew Research Center. “60% of Americans Would Be Uncomfortable with Provider Relying on AI in Their Own Health Care.” February 22, 2023. https://www.pewresearch.org/science/2023/02/22/60-of-americans-would-be-uncomfortable-with-provider-relying-on-ai-in-their-own-health-care/

The Advantages of AI for Claims
Claims is integrating AI in several functions to help boost efficiency and substantially reduce costs.
Confidential File Summaries
The team is working with an outside vendor, CLARA Analytics, which will use its intelligence platform to help automate the creation of confidential file summaries, documents that present key information on a claim:
- Treatment facts
- Policy limits information
- Deductible information
- Dates of loss
- Names of involved parties and treaters
- Opinions of insureds, experts, and defense counsel
- Analysis of applicable venue, judge, and plaintiff counsel
These summaries can take Claims Specialists up to 8-10 hours to create; CLARA can help reduce that time to minutes.
Automating confidential file summaries frees up time for Claims Specialists to work on medical and legal issues on files. The process also allows Claims staff to spend more time interacting with our insureds and business partners, which helps ProAssurance achieve our Carrier of Choice goals as well as our customer service goals.
Medical Record Summaries, Deposition Summaries, and Medical Chronologies
Claims also partners with several vendors that use AI to help facilitate medical record and deposition summaries and medical record chronologies.
Depending on the number of records or the length of a deposition transcript, it can take attorneys, paralegals, or Claims staff 10 or more hours to complete these summaries. Law firms also charge $110-$250 an hour for this service.
By using AI, the work is performed in minutes. At 10 hours and $110-$250 an hour, a single outsourced summary can represent $1,100-$2,500 in billable time, so the savings are substantial, and more work gets done in less time.
Claims Predictive Modeling
Claims is also working with Growth Protocol to develop a claims predictive modeling product, which will use AI to combine ProAssurance data with court data from across the country. That data will help predict specific factors related to cases, such as average settlement values, average range of verdicts, and average duration of cases. The product will also gather insight into courts’ tendencies and plaintiff counsel playbooks (what motions they file, cases won, cases lost, experts used).
Having such information quickly available to the defense team will be extremely valuable in helping develop defense strategies on files, including valuation of cases.
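Generically, this kind of product is a supervised regression problem: historical case features in, estimated outcome values out. The toy sketch below shows the shape of the idea only; the features, figures, and model choice are invented for illustration and bear no relation to Growth Protocol's product or ProAssurance data.

```python
# Toy illustration of claims predictive modeling as supervised regression.
# All features and values below are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical features per case: [venue severity index, specialty risk
# score, plaintiff firm win rate, case duration in months]
X = np.array([
    [0.8, 0.6, 0.55, 30],
    [0.3, 0.4, 0.40, 14],
    [0.9, 0.7, 0.60, 36],
    [0.5, 0.5, 0.45, 20],
])
y = np.array([750_000, 180_000, 1_100_000, 320_000])  # settlement values, $

model = GradientBoostingRegressor().fit(X, y)
print(model.predict(np.array([[0.7, 0.6, 0.50, 24]])))  # estimate for a new case
```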

An Update on Phyllis, Eastern’s AI-Powered Associate: Two Years of Success
It’s been two years since Eastern Alliance introduced Phyllis, an AI-powered digital team member, to its mailroom operations. Since her debut on April 26, 2023, Phyllis has transformed how Eastern handles insurance documents and claims processing. What began as an innovative solution to automate routine tasks has evolved into a sophisticated system that now handles almost all of Eastern’s Claims mail either fully automatically or with minimal human intervention.
Background
Phyllis emerged from a partnership with Roots Automation, creators of the world’s first language-oriented platform enabling insurance-knowledge workers to create AI-powered “digital coworkers” using speech and text. Eastern began exploring the platform in September 2022, and after an extensive training period led primarily by Jennifer Zimmerman, Eastern’s Business Systems Manager, Phyllis was ready to transform the mailroom’s operations by April 2023.
Over the course of several months, Roots Automation worked with Zimmerman as she explained exactly how the mailroom associates did their job, step by step: indexing thousands upon thousands of emails—primarily vendor bills, correspondence, state forms, and medical reports. She narrated how each one needed to be reviewed, indexed to the right document type, and sent to the right person.
As Zimmerman painstakingly explained the work, Phyllis began to learn.
She came programmed with key capabilities needed to effectively manage the documents and systems commonly found in an insurance mailroom. She holds an extensive amount of insurance-specific data and process knowledge through Roots Automation’s proprietary InsurGPT AI model—a large language model (LLM) created specifically for insurance.
She is built with AI that is capable of reading screens, documents, forms, and data like a human. She can dynamically identify and interact with objects on a computer screen, such as buttons, icons, form fields, and field names.
Leveraging advanced Natural Language Processing (NLP) to collaborate and communicate with people around document exceptions, Phyllis is continually adapting and learning.
She captures transactional data and metadata through interactions with people. If she is not “confident” in her ability to process an item, she can send that item for “review,” where one of our human team members can handle it for her. Phyllis undergoes monthly performance retraining, with the goal of increased confidence and ongoing improvement—meaning she gets smarter every month.
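That confidence-based handoff is a standard pattern in document automation, worth seeing in outline: score each item, auto-process anything above a threshold, and queue everything else for a person. The sketch below is generic; the threshold value and function bodies are illustrative assumptions, not Roots Automation's implementation.

```python
# Generic confidence-threshold routing for mailroom automation.
# The threshold and function bodies are illustrative assumptions.
CONFIDENCE_THRESHOLD = 0.90

def index_document(doc_id: str, doc_type: str) -> None:
    print(f"indexed {doc_id} as {doc_type}")  # stand-in for the real indexing step

def send_to_review_queue(doc_id: str, doc_type: str) -> None:
    print(f"queued {doc_id} ({doc_type}) for human review")

def route_document(doc_id: str, predicted_type: str, confidence: float) -> str:
    """Auto-index high-confidence items; divert the rest to a human."""
    if confidence >= CONFIDENCE_THRESHOLD:
        index_document(doc_id, predicted_type)
        return "auto_indexed"
    send_to_review_queue(doc_id, predicted_type)
    return "needs_review"
```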
Growing Capabilities
Over the past year, Phyllis’s capabilities have continued to expand.
“When we went live with our new claims platform, Insurity, it brought us claim numbers in a completely different format,” said Zimmerman. “Our new numbers look like this: 225-000123456. Phyllis learned this new format when we went live in January 2024. However, we started to notice that we were receiving mail with the ‘dash’ removed. They would send us ‘225000123456.’ Without the dash, Phyllis was not matching/finding the claim numbers, which resulted in more work for us to manually review.” Roots helped to retrain Phyllis, and she learned to recognize the number and re-insert the dash.
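The fix Zimmerman describes amounts to format normalization: accept the claim number with or without the dash and store the canonical form. A minimal sketch of the idea, assuming the three-digit-prefix format quoted above (the production logic inside Phyllis is, of course, Roots Automation's own):

```python
import re

# Accept "225-000123456" or "225000123456"; return the canonical dashed
# form. The 3+9 digit split mirrors the example format quoted above.
CLAIM_PATTERN = re.compile(r"^(\d{3})-?(\d{9})$")

def normalize_claim_number(raw: str) -> str | None:
    match = CLAIM_PATTERN.match(raw.strip())
    if match is None:
        return None  # not a recognizable claim number
    return f"{match.group(1)}-{match.group(2)}"

assert normalize_claim_number("225000123456") == "225-000123456"
assert normalize_claim_number("225-000123456") == "225-000123456"
```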
Most recently, in December 2024, Phyllis learned to index high-priority mail. “It was the week of Christmas and was a great present for the team!” said Zimmerman. They had struggled with handling 125+ high-priority tasks that had to be indexed on the same day. This proved to be a challenge, especially on Mondays if they had a team member out of the office. Now they can easily work the high-priority mail with Phyllis each day.
Impressive Results
The benefits are substantial. Year to date, Phyllis has fully automated 67% of documents with no human intervention needed. She was able to reprocess and automate another 22%. Only 11% of her tasks have required manual processing.
Prior to Phyllis, the average turnaround time for Claims mail was three days. Now mail is processed the same day.
The efficiency gains are impressive as well. Phyllis reduced processing time for “priority-five” mail from five days to just one hour, a more than 100-fold improvement. Overall, Eastern has saved more than 2,700 human hours since Q1 2023.
Most important, however: no humans were harmed in implementing Phyllis. She may do the work of several people, but she doesn’t replace our valuable human team members. “We have had no staff reductions with Phyllis,” said Harry Talbert, Eastern’s Senior Vice President, Information Systems. “The idea is to automate inefficiencies and reallocate people to other aspects of their work where they can provide more value—value that only a real person can provide.”
AI-powered digital coworkers are designed to streamline processes, not replace jobs.
Read more: “Meet Phyllis: Eastern’s AI-Powered Mailroom Associate,” ProVisions, February 2024, https://agents.proassurance.com/provisions/feb24#phyllis.

Your AI Library
In 2023, as part of our retention campaign, we offered the popular book Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, by Dr. Eric Topol, cardiologist and executive vice president of Scripps Research.
In case you missed out, we have a few copies left. Let us know if you’re interested by emailing AskMarketing@ProAssurance.com and include your mailing address.

About the Book
Topol examines the role AI will play in the future of medicine. He says that AI is quickly becoming an essential tool for cutting through the clutter to improve the speed and accuracy of diagnoses.
Deep Medicine provides an inspiring look at how deep learning algorithms can further a physician’s ability to create custom treatment plans. Topol has gathered a variety of examples of current AI use in medicine—from utilizing wearable sensor data to providing coaching via virtual assistants and reviewing medical histories via language processing. This book shows us how the power of AI can make medicine better and reveals the paradox that machine learning can actually make humans healthier—and even more human.
What Our Staff Thinks
This was a great read about how Topol sees the world of medicine changing for the better with the use of AI technology. What stuck with me was the chapter on how AI and pattern recognition could help radiologists miss fewer anomalies in the films they review. Radiology is a tough class, as we all know, and finding solutions that improve medicine and the care patients receive would benefit all of us.
As both a lawyer and a risk manager, I first read Deep Medicine five years ago and was struck by its vision of the future. At the time, the rise of artificial intelligence felt distant—now, it’s undeniably part of our present. While AI continues to evolve, the role of risk management remains grounded in assessing its real-world implications. I find that many of Dr. Eric Topol’s insights have already materialized. For our policyholders, the benefits of AI—when thoughtfully vetted, implemented, and maintained—are already making a meaningful difference to the practice of medicine. This is no longer a theoretical conversation. AI is here, and it’s reshaping the industry.
It is evident from the first chapter that Dr. Eric Topol is passionate about the subject of AI in medicine. His in-depth analysis provides interesting insight into a variety of use-cases and, I believe, presents a strong case for how this emerging technology will strengthen the art of medicine over time. If you are looking for coverage on the subject of AI in healthcare that leaves no stone unturned, this is certainly it.
Video Resources
Today's AI algorithms require tens of thousands of expensive medical images to detect a patient's disease. What if we could drastically reduce the amount of data needed to train an AI, making diagnoses low-cost and more effective? TED Fellow Pratik Shah is working on a clever system to do just that. Using an unorthodox AI approach, Shah has developed a technology that requires as few as 50 images to develop a working algorithm—and can even use photos taken on doctors’ cell phones to provide a diagnosis. In this TEDGlobal Talk, Shah shares how this new way to analyze medical information could lead to earlier detection of life-threatening illnesses and bring AI-assisted diagnosis to more healthcare settings worldwide.

In this TEDMED Talk, Greg Corrado discusses the enormous role that AI and machine learning will play in the future of health and medicine, and why doctors and other healthcare professionals must play a central role in that revolution. From distilling data insights to improving the decision-making process, Corrado sees a multitude of ways that AI and machine learning can help magnify the healing powers of doctors.

Listen as Dr. Siddhartha Mukherjee, Pulitzer Prize-winning author of The Emperor of All Maladies, interviews Dr. Eric Topol about his book, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again.

In this TEDx Talk, Dr. Kurt Zatloukal, professor of pathology at the Medical University of Graz, Austria, and head of the Diagnostic and Research Center for Molecular Biomedicine, makes the case for why medicine needs AI.

Decoding the AI Lexicon
A Glossary of Terms
There is a lot of jargon in AI. The Alan Turing Institute created a glossary that defines common terms for non-specialists, without the technical language. We have included a few of the entries here.
Artificial Intelligence (AI)
The design and study of machines that can perform tasks that would previously have required human (or other biological) brainpower to accomplish. AI is a broad field that incorporates many different aspects of intelligence, such as reasoning, making decisions, learning from mistakes, communicating, solving problems, and moving around the physical world.
Big Data
A wide-ranging field of research that deals with large datasets. The field has grown rapidly over the past couple of decades as computer systems became capable of storing and analyzing the vast amounts of data increasingly being collected about our lives and our planet. A key challenge in big data is working out how to generate useful insights from the data without inappropriately compromising the privacy of the people to whom the data relates.
Chatbot
A software application that has been designed to mimic human conversation, allowing it to talk to users via text or speech. Previously used mostly as virtual assistants in customer service, chatbots are becoming increasingly powerful and can now answer users’ questions across a variety of topics, as well as generating stories, articles, poems and more (see also ‘generative AI’).
Deepfake
Synthetic audio, video, or imagery in which someone is digitally altered so that they look, sound or act like someone else. Created by machine learning algorithms, deepfakes have raised concerns over their uses in fake celebrity pornography, financial fraud, and spreading false political information. ‘Deepfake’ can also refer to realistic but completely synthetic media of people and objects that have never physically existed, or sophisticated text generated by algorithms.
Generative AI
An AI system that generates text, images, audio, video, or other media in response to user prompts. It uses machine learning techniques to create new data that has similar characteristics to the data it was trained on, resulting in outputs that are often indistinguishable from human-created media (see ‘deepfake’).
Large Language Model (LLM)
A type of foundation model that is trained on a vast amount of textual data in order to carry out language-related tasks. Large language models (LLMs) power the new generation of chatbots and can generate text that is indistinguishable from human-written text. They are part of a broader field of research called natural language processing, and are typically much simpler in design than smaller, more traditional language models.
Machine Learning
A field of AI involving computer algorithms that can ‘learn’ by finding patterns in sample data. The algorithms then typically apply these findings to new data to make predictions or provide other useful outputs, such as translating text or guiding a robot in a new setting. Medicine is one area of promise: machine learning algorithms can identify tumors in scans, for example, which doctors might have missed.
Multi-Agent System
A computer system involving multiple, interacting software programs known as ‘agents.’ Agents often actively help and work with humans to complete a task—the most common everyday examples are virtual assistants such as Siri, Alexa, and Cortana. In a multi-agent system, the agents talk directly to each other, typically in order to complete their tasks more efficiently.
Natural Language Processing (NLP)
A field of AI that uses computer algorithms to analyze or synthesize human speech and text. The algorithms look for linguistic patterns in how sentences and paragraphs are constructed and how the words, context, and structure work together to create meaning. Applications include speech-to-text converters, chatbots, speech recognition, automatic translation, and sentiment analysis (identifying the mood of a piece of text).
Neural Network
An AI system inspired by the biological brain, consisting of a large set of simple, interconnected computational units (‘neurons’), with data passing between them as between neurons in the brain. Neural networks can have hundreds of layers of these neurons, with each layer playing a role in solving the problem. They perform well in complex tasks such as face and voice recognition.
Open Source
Software and data that are free to edit and share. This helps researchers to collaborate, as they can edit the resource to suit their needs and add new features that others in the community can benefit from. Open source resources save researchers time (as the resources don’t have to be built from scratch), and they are often more stable and secure than non-open alternatives because users can more quickly fix bugs that have been flagged up by the community. By allowing data and tools to be shared, open source projects also play an important role in enabling researchers to check and replicate findings.
Source
“Defining data science and AI,” The Alan Turing Institute, https://www.turing.ac.uk/news/data-science-and-ai-glossary.

Chatbot Cheat Sheet
Chatbots are AI solutions that simulate human-like conversations. If you have ever used your phone to ask Siri a question or asked Alexa to play your favorite song, you’ve interacted with a chatbot.
The New York Times compiled a list of phrases and concepts useful to understanding the new breed of AI-enabled chatbots like ChatGPT.
Bias
A type of error that can occur in a large language model if its output is skewed by the model’s training data. For example, a model may associate specific traits or professions with a certain race or gender, leading to inaccurate predictions and offensive responses.
Emergent Behavior
Unexpected or unintended abilities in a large language model, enabled by the model’s learning patterns and rules from its training data. For example, models that are trained on programming and coding sites can write new code. Other examples include creative abilities like composing poetry, music and fictional stories.
Hallucination
A well-known phenomenon in large language models, in which the system provides an answer that is factually incorrect, irrelevant or nonsensical, because of limitations in its training data and architecture.
For more on learning about AI, check out The New York Times’s five-part series on becoming an expert on chatbots.
Source
“Artificial Intelligence Glossary: Neural Networks and Other Terms Explained,” The New York Times, March 27, 2023, https://www.nytimes.com/article/ai-artificial-intelligence-glossary.html. (free subscription required)

The Bind Order
This selection of accounts recently bound by ProAssurance is intended to give our partners tangible examples of risk classes we have been successful in quoting and would like to see more of. The examples are anonymized, with final premiums rounded, but otherwise represent actual accounts.
| Specialty | State | Limits | Admitted Premium |
| --- | --- | --- | --- |
| NEUROLOGY | Alabama | 1M/3M | $29,100 |
| PEDIATRICS | Florida | 250K/750K | $12,100 |
| VASCULAR SURGERY | Delaware | 1M/3M | $91,600 |
| ANESTHESIOLOGY | California | 2M/4M | $96,900 |
| ORTHOPEDIC SURGERY | California | 1M/3M | $112,200 |
| PAIN MANAGEMENT | New Jersey | 1M/3M | $40,900 |
| PLASTIC SURGERY | Florida | 1M/3M | $14,900 |
| EMERGENCY MEDICINE | Alabama | 1M/3M | $10,600 |
| OTORHINOLARYNGOLOGY | Nevada | 1M/3M | $39,200 |
| PEDIATRICS | Texas | 200K/600K | $5,100 |
| CONCIERGE MEDICINE | Rhode Island | 1M/3M | $2,500 |
| GYNECOLOGY | California | 1M/3M | $29,300 |
| MID-LEVEL PROVIDER | Ohio | 1M/3M | $2,900 |
| ASSISTED LIVING | Missouri | 1M/3M | $21,700 |
| AMBULATORY SURGERY CENTER | Alabama | 1M/3M | $19,000 |
| HOME HEALTHCARE | Connecticut | 1M/3M | $10,000 |
New Business Submissions
Our standard business intake address for submissions is Submissions@ProAssurance.com. For specialty lines of business, please use one of the following: CustomPhysicians@ProAssurance.com, Hospitals@ProAssurance.com, MiscMedSubs@ProAssurance.com, or SeniorCare@ProAssurance.com. Visit our Producer Guide for additional information on our specialty lines of business.
The types of business and premium amounts are illustrative of where we have written new business and not intended to reflect actual pricing or specific appetites.
Researchers at Harvard Medical School found that a new open-source artificial intelligence tool is diagnosing patients as accurately as leading proprietary models — like OpenAI’s GPT-4 — for the first time.
In a recently published study, the open-source model Llama 3.1 405B was tested against GPT-4 on a set of 70 cases, beating GPT-4 on the correctness of both the first suggested diagnosis and the final diagnosis. (The Harvard Crimson)
Artificial intelligence-backed notetaking assistants are likely reducing clinician burnout, but the tools’ financial impact for health systems is still unclear, according to an analysis published by the Peterson Health Technology Institute.
Early adopters of the scribes, which typically record providers’ conversations with patients and draft a clinical note, report they appear to lessen burnout and cognitive load associated with documentation. The tools could also improve patient experience by ensuring clinicians don’t have to focus on taking notes during an appointment. (Healthcare Dive)
In a Duke Health-led survey, patients who were shown messages written either by artificial intelligence (AI) or human clinicians indicated a preference for responses drafted by AI over a human. That preference was diminished, though not erased, when told AI was involved.
The study, published March 11 in JAMA Network Open, showed high overall satisfaction with communications written both by AI and humans, despite their preference for AI. This suggests that letting patients know AI was used does not greatly reduce confidence in the message. (Medical Xpress)
OpenAI, the maker of ChatGPT, released an open-source benchmark designed to measure the performance and safety of large language models in healthcare.
The large data set, called HealthBench, goes beyond exam-style queries and tests how well artificial intelligence models perform in realistic health scenarios, based on what physician experts say matters most, the company said in a blog post. (Fierce Healthcare)
AI has the potential to expand access to high-quality medical guidance. For people in remote or underserved areas, AI-powered tools could help triage symptoms or offer early insights. But that’s a far cry from replacing the role of trained physicians. Medicine isn’t just about information. It’s also about context, judgment, empathy and experience.
The danger lies in assuming that because an AI can deliver facts, it can replace care. That kind of thinking leads to over-reliance, which then leads to underinvestment in the clinical workforce and ultimately worse outcomes. (The Hill)
Insurers and insurtechs are increasingly touting AI-driven solutions, promising faster underwriting, enhanced claims experiences and improved customer engagement. Yet, many are merely rebranding automation as intelligence. While this may not always be intentional, the line between innovation and false advertising can become blurred. Transparency is crucial. Customers deserve to know what they’re really getting, and insurers and insurtechs need to ensure their claims about AI are both ethical and accurate. Misconceptions about AI in insurance are rife, making it challenging to discern real innovation from a marketing spin. (Forbes)
As insurers pursue growth strategies targeting younger demographics and invest in artificial intelligence to improve operations, they face a common obstacle – trust.
Recent research from GlobalData revealed that skepticism toward both insurance providers and AI technologies is shaping consumer attitudes and impeding adoption. (Insurance Business Magazine)

The AI Paradox: Using AI to Discuss AI
While having dinner with my cousin Jessica, a dermatologist, I asked for her perspective on using artificial intelligence (AI) to screen patients. She responded dogmatically: “My attending said that AI will never be able to diagnose as well as the experienced eye of a physician.”
I challenged her assertion with some quick math: “How many skin lesions will a dermatologist see in their professional lifetime? 500,000? A million? AI doesn’t just 'see' a lesion; it breaks down thousands of data points per image. So, if an AI tool is trained on one million lesions, and each lesion produces hundreds to thousands of data points, you’re looking at billions of data points. Do you know any dermatologists who can do that?” She laughed and said, “Right ... tell the lawyers.”
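For what it's worth, the back-of-the-envelope figure in that exchange checks out at the top of the quoted range:

$$10^{6}\ \text{lesions} \times 10^{3}\ \text{data points per lesion} = 10^{9}\ \text{data points}$$

That is a billion data points, roughly three orders of magnitude beyond the 500,000 to one million lesions a dermatologist might see in a career.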
The Invisible AI Revolution
Here’s the irony of modern healthcare: Many physicians remain openly skeptical about artificial intelligence while unknowingly using it every day. From a hospital’s EHR system that may include Clinical Decision Support (CDS) to that “smart” ventilator in the ICU, AI algorithms are already embedded in healthcare infrastructure—often with little transparency about who bears liability when things go wrong. While they’re enhancing patient care, they also raise new liability questions:
- Who’s responsible if the AI miscalculates?
- Did the clinician override the AI appropriately?
- Is the institution monitoring AI behavior as it would human staff?
As an MPL insurance agent, you are expected to navigate this complex landscape and provide clear guidance to clients. The challenge? How do you prepare for business conversations about AI when the field is evolving faster than any one human can track?
Your Secret Weapon: Using AI to Discuss AI
To calm my cousin’s knee-jerk rejection of AI’s utility, I reached for my iPhone and asked ChatGPT to list the liability risks associated with AI-aided diagnosis. In seconds, it returned a list that included diagnostic error liability, informed consent issues, standards of care impact, and documentation requirements. Jessica again laughed and said, “So now you’re using the very thing that’s causing liability issues to explain the liability issues?”
Here’s where the paradox becomes your advantage: You can use AI tools like ChatGPT, Google Gemini, or Microsoft 365 Copilot to prepare for sales calls and business meetings about AI (a scripted version of the first prompt appears after this list):
- Before meeting with a client, use AI to summarize complex use cases specific to their specialty. Try prompts like: “Summarize the liability risks of using AI-powered predictive analytics in the emergency department.”
- Ask the AI tool to provide you with possible questions to use during the conversation based on those liability risks from the first step: “What questions might I ask a radiologist to develop a conversation about professional liability when using AI-assisted imaging?”
- Translate legal jargon into clinical reality: “Rephrase this AI exclusion clause in terms a cardiologist would relate to.”
- Roleplay responding to tough questions you might be asked: “Play the role of a neurologist concerned about using AI to analyze brain scans. Ask me challenging questions about liability and coverage.”
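For those who prefer to build prompts like these into a repeatable workflow, the same idea can be scripted against a chat API. Here is a minimal sketch using the OpenAI Python SDK; the model name is an assumption, and any comparable provider would do:

```python
# Scripted version of the meeting-prep prompt above, using the OpenAI
# Python SDK (requires OPENAI_API_KEY in the environment). The model
# name is an illustrative assumption.
from openai import OpenAI

client = OpenAI()

prompt = ("Summarize the liability risks of using AI-powered "
          "predictive analytics in the emergency department.")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```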
A word of caution: While AI can help you prepare, it shouldn’t replace your judgment or experience. Always review the AI’s responses, verify all facts, and bring your humanity to the conversation.
A New Opportunity
While some healthcare professionals, like my cousin Jessica, remain skeptical about incorporating AI into their practice, the liability risks remain. That’s where you come in. Your clients need you to help guide them through the paradox where technology creates both the risk and the means to understand it. Master the balance between technological tools and human judgment, combine it with ProAssurance’s vast resources, and you’ll position yourself as the indispensable advisor your clients trust to help them navigate this complex landscape.
Written by Mace Horoff of Medical Sales Performance. Mace Horoff is a representative of Sales Pilot. He helps sales teams and individual representatives who sell medical devices, pharmaceuticals, biotechnology, healthcare services, and other healthcare-related products to sell more and earn more by employing a specialized healthcare system. Have a topic you’d like to see covered? Email your suggestions to AskMarketing@ProAssurance.com.

Risk Management May Releases

With looming physician shortages and increased autonomy for certain Advanced Practice Providers (APPs), the relationships between supervising physicians and APPs are more critical than ever. Analysis of closed claims supports the idea that, while individual members of the healthcare team maintain patient care responsibilities, risk reduction strategies aimed at enhanced communication, comprehensive documentation, and structured expectations increase patient safety and improve quality of care and defensibility in the event of a claim.
Read More
The Allegation
The defendants failed to inform the mother of the risks of vaginal delivery with probable fetal macrosomia and failed to timely and correctly perform the maneuvers to relieve the shoulder dystocia.
Read More
Brian Cools Has Retired
Brian Cools, Brand Strategist, retired this month. Brian served in various roles at ProAssurance over the past nine years, focusing on the development and management of the ProAssurance brand. Prior to his work at ProAssurance, Brian served as a trusted vendor doing marketing work for over 20 years.
Brian’s work can be seen on everything from banner stands and ProAssurance’s trade show booth to point-of-sale materials, the website, and right here in ProVisions. Brian was a driving force behind converting what was once a simple monthly newsletter into a themed magazine.
Thank you, Brian, for all of your creative ideas through the years, and all of the fun along the way. We wish you well on all of your future endeavors.
- The ProVisions team
ProVisions Team
- Communications
- Design
- Digital Marketing