
February  2026
The Future of Healthcare



About Our Issue

Balancing Innovation with Liability

We are well into the digital revolution of healthcare. Trends such as the integration of AI into patient care and healthcare administration, advances in sensors and patient monitoring, and the use of digital tools to share and evaluate healthcare information are less a vision of the future than day-to-day reality.

The ProAssurance 2026 customer appreciation giveaway features Future Care: Sensors, Artificial Intelligence, and the Reinvention of Medicine by Dr. Jag Singh. This text provides an optimistic view of where these trends could take the current healthcare system—balanced with real-world examples and the cautions that must accompany rapid change and growth.

This issue zeros in on key trends discussed in the book and highlights real-world questions and risk management considerations flagged by ProAssurance’s Risk Management team—emphasizing the balance between innovation in healthcare practice and potential liability in this evolving medical landscape.

AI IN A HEALTHCARE PRACTICE:

A Game Plan for Success

In the 2024 article “AI Will Be as Common in Healthcare as the Stethoscope,” Dr. Robert Pearl, former CEO of the Permanente Medical Group, opined that artificial intelligence would soon emerge as a powerful equalizer—helping to address physician burnout while advancing the quality of patient care. The article explains that AI can process vast, nearly limitless volumes of data to personalize care recommendations and support optimal treatment plans and outcomes. That optimism, however, also carries an implicit pressure to adopt AI as quickly as possible. While many articles emphasize AI’s potential to reduce administrative and clinical burdens, far fewer address a critical question: How should healthcare practices begin the process of selecting and implementing the right AI solution for their specific needs? The following practical steps are intended to help guide practices forward.

1) Look Inward Before Outward

No practice can—or should—attempt to address every operational challenge or deploy every available AI application at once. Leadership should begin by engaging key stakeholders, regardless of practice size, to assess the organization’s specific needs and capacity. Which administrative or clinical tasks are placing the greatest strain on staff? Should initial efforts focus on billing, documentation, patient no-shows, or inventory management? Practices should also evaluate which AI vendors provide robust implementation, training, and ongoing IT support. Will the transition be managed internally, or will outside expertise be needed to address legal, workflow, or technical considerations? Aligning project scope with the practice’s project management capabilities is essential to successful adoption. 

2) Build From the Ground Up

Adopting AI is a strategic, forward-looking decision, but rushed implementation, misjudged priorities, or overestimating organizational readiness can undermine what might otherwise be a sound business choice. Introducing AI into the clinical environment requires a strong foundation and a clear understanding of the long-term commitment involved. In many cases, it is prudent to begin with administrative applications—such as scheduling optimization, staffing support, or supply chain management—before moving into clinical decision support. AI tools that directly influence patient care warrant heightened scrutiny to ensure ethical, safety, and legal standards are met. Thoughtfully laying this groundwork is critical to realizing sustainable long-term benefits. 

3) “Optional” Means Go

In sports, “optional” practices often mean full participation—and the same principle applies to AI adoption. Once an organization commits to an AI solution, leadership and staff must be fully engaged. Successful implementation depends on understanding workflows, staying current with system updates, and developing the skills necessary to use AI as a complement to clinical judgment and operational processes. Whether in a solo practice or a large health system, prioritizing consistent training, monitoring competency, and dedicating time for staff education will improve outcomes. Ongoing education throughout each phase of implementation fosters shared ownership and confidence. When staff fully understand the purpose, benefits, limitations, and documentation requirements of the AI solution, they are better positioned to raise questions and identify concerns early. 

4) Make a Thoughtful Decision

The rapid growth of AI developers and products makes careful evaluation more important than ever. Practices should resist the temptation to rush adoption and instead devote sufficient time to identifying solutions that align with their operational needs and patient populations. Due diligence should include confirming that the AI product’s underlying data sets are appropriate and representative of the practice’s patients. Physicians and administrators should feel empowered to thoroughly assess how the product is trained, what data it relies upon, and how its outputs are generated. Vendor support, implementation timelines, and workflow integration are also critical considerations. Because AI can influence patient care, ethical and safety considerations must remain paramount. If a solution does not meet the practice’s standards, walking away—regardless of marketing appeal—is a prudent decision. 

5) Get It in Writing

Regardless of size, any practice adopting AI should do so under a formal, written agreement with the chosen system vendor reviewed by legal counsel. Key contractual provisions warranting careful review include data ownership and retention, payment terms, indemnification and liability, services and deliverables, HIPAA compliance and encryption requirements, privacy and non-discrimination clauses, remedies for misinformation or system errors, vendor update obligations, recalibration protocols, and ongoing IT maintenance responsibilities. 

Adopting AI in healthcare represents a significant organizational step. By thoughtfully assessing needs, building a strong foundation, committing to training, conducting rigorous due diligence, and securing well-structured contracts, practices can leverage AI’s benefits while minimizing risk. Taking these steps now can help position a practice for long-term success in an increasingly AI-enabled healthcare landscape. 


RESOURCE FEATURE

Beyond the Hype: Ambient AI in Practice

In this episode of Rapid Risk Review (ProAssurance’s risk management podcast), host Bradley Byrne discusses the implementation and impact of ambient listening AI in healthcare with Brandon Teenier, CFO of Aspire Allergy and Sinus. They explore the motivations for adopting this technology, physicians’ and patients’ reactions, compliance considerations, and the importance of vendor selection. The conversation highlights the benefits of AI in reducing clinician burnout and improving patient experiences, as well as the need for ongoing training and support. They also address ethical concerns regarding patient consent and the importance of maintaining human oversight in AI-generated documentation.

TUNE IN


The Promise and Challenge of Incorporating mHealth into Clinical Settings

From the days of attaching pedometers to shoes, consumers have sought ways to track their health and fitness in real time. The trend accelerated with the launch of mass-market health trackers like Fitbit in 2008 and the Apple Watch in 2015. From those beginnings, mobile health (mHealth) devices—consumer wearables such as smartwatches, bands, and rings, and even smart clothing—have helped users monitor a myriad of health and fitness data in the pursuit of a healthy lifestyle. However, as popular as these devices are, they have not had a prominent place in clinical settings. That may be changing.

Consumer wearables incorporating AI have advanced far beyond the step counters of yore to high-tech devices that help users monitor a variety of health markers such as blood pressure, heart rate and rhythm, sleep patterns, blood oxygen saturation (SpO2), stress levels, arrhythmias, glucose levels, and sleep apnea.1 The more advanced wearables can also process the data on the device using AI to provide insights and alerts to the user and their caregivers.1 Hospitals are integrating AI with clinical therapy, and physicians are increasingly incorporating wearables into patient care plans.1

In recent years, devices have been gaining approval from U.S. regulators. These are just a few high-visibility examples of devices receiving FDA clearance:

  • 2018: Empatica’s Embrace smart device for seizure-monitoring use in children
  • 2023: The Empatica platform for cardiac digital biomarkers
  • 2024: Apple’s hearing-aid feature on their AirPods Pro 2, as well as sleep apnea detection on compatible Apple Watches
  • 2025: Apple’s hypertension detection tool available on some Apple Watch devices

Additionally, the FDA in January 2026 issued guidance on low-risk general wellness products—including wearable devices—to clarify that they are not subject to stringent medical device regulation. General wellness products include tools geared toward maintaining a healthy lifestyle that either (a) do not reference diseases or medical conditions or (b) help to reduce the risk or impact of certain diseases or medical conditions.2 With this regulatory clarification, entry to the market for general wellness devices could get easier. When asked about the clinical accuracy of non-clinical devices, FDA Commissioner Marty Makary responded, “Let the market decide ... Let doctors choose from a competitive marketplace which ones they’d recommend for their patients.”2

While this new FDA guidance may ease entry into the wellness market for more devices, obstacles remain for widespread use in clinical settings.

Clinician Buy-In and Skepticism

A survey of both clinical care and holistic/wellness providers showed a tempered interest in integrating patient-generated health data (PGHD) into clinical workflows and patient interactions, though respondents did see this data as a complement to traditional clinical data. Holistic/wellness providers were particularly interested in using wearables and PGHD for behavioral change. Both groups had concerns about workload implications and apprehension about distilling vast amounts of PGHD of varying validity into clinically actionable information.3 The study concluded that health data collected from consumer-grade wearable devices could be used to improve patient outcomes—particularly to promote health access and equity. However, concerns such as the burden on staff of already busy clinics to process the data, and on clinicians to interpret it, could still prove to be barriers. The study therefore concluded that optimal integration of consumer-grade wearable devices into clinical workflows will require investing in health systems’ infrastructure.3

Data Security and Privacy

Healthcare data is among the most valuable data on the black market. Unlike credit cards and bank accounts, which can readily be canceled and reissued, medical records cannot be recreated so easily, so stolen data can be exploited for a longer period of time. This makes healthcare data a tempting target for cybercriminals.

Patient privacy is another challenge. Because AI-based systems—including consumer wearables and clinical systems—require large volumes of clinical data to train their algorithms, they pose a privacy risk to patients, who may not be aware that their data is being shared in this way. There’s evidence that these concerns may be warranted. In 2021, for example, more than 61 million fitness tracker records from Fitbit, Apple, and potentially other devices were exposed in a breach at GetHealth, a now-defunct New York-based health and wellness company. The company enabled users to unify data from multiple wearables, medical devices, and apps—including names, birthdates, weight, height, gender, and geographic location.

Even beyond cyberattacks, sharing agreements may leave patient data used in ways of which patients are unaware.4 AI technology is often developed first in academic research labs and then transferred to commercial entities for real-world applications; because of this transfer, the technology and its underlying data often end up owned and controlled by private entities.4 It has also been shown that even anonymizing or deidentifying data may not be sufficient, as algorithms have demonstrated the ability to re-identify it—effectively nullifying the data scrubbing and compromising privacy. One study showed an ability to re-identify data at a rate of 85.6 percent for adults and 69.8 percent for children.4 This raises questions about whether existing privacy consent processes remain valid.
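For readers curious how “scrubbed” data can be re-identified at all, the basic mechanism is quasi-identifier linkage: fields left in a de-identified dataset (birthdate, ZIP code, gender) are matched against an outside dataset that still carries names. The sketch below is purely illustrative—every name and value is invented, and the cited study’s methods were far more sophisticated—but it shows why removing names alone offers little protection:

```python
# Illustrative sketch of quasi-identifier linkage (all data invented).
# A "de-identified" wearable dataset keeps birthdate, ZIP, and gender,
# which together can uniquely fingerprint a person.

anonymized = [
    {"birthdate": "1984-03-12", "zip": "35203", "gender": "F", "avg_hr": 72},
    {"birthdate": "1990-07-01", "zip": "10001", "gender": "M", "avg_hr": 81},
]

# A separate dataset (e.g., a marketing list) that includes identities.
public = [
    {"name": "Jane Doe", "birthdate": "1984-03-12", "zip": "35203", "gender": "F"},
    {"name": "John Roe", "birthdate": "1990-07-01", "zip": "10001", "gender": "M"},
]

def reidentify(anon_rows, known_rows):
    """Attach names to anonymized rows by matching on quasi-identifiers."""
    def key(row):
        return (row["birthdate"], row["zip"], row["gender"])
    lookup = {key(r): r["name"] for r in known_rows}
    # A match on all three fields re-links the health data to an identity.
    return [{**row, "name": lookup.get(key(row))} for row in anon_rows]

for row in reidentify(anonymized, public):
    print(row["name"], "->", row["avg_hr"])
```

The defense is not merely deleting names but reducing the uniqueness of the remaining fields (for example, generalizing birthdate to birth year or truncating ZIP codes), which is the intuition behind HIPAA’s Safe Harbor de-identification criteria.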

The shift from using patient data in clinical care to using it for commercialization and AI model training “brings up questions about whether the current methods for obtaining consent from patients are suitable for these new purposes.” When the patient doesn’t know all the entities who own and use their private data or how that data is used—and that anonymized data can be re-identified—it’s difficult to see how true consent for its use can exist.

Conclusion

Wearables and health apps are becoming more prevalent among consumers, and their data more clinically accurate. If the privacy issues can be resolved, new research studies and clinical infrastructure advances may provide a more complete picture of how effective wearables and apps could be in clinical use.

References
  1. Zulfkar Qadrie, et al., “Wearable Technology in Healthcare: Opportunities, Challenges, and Future Directions,” Smart Wearable Technology 1, A12 (2025): 1–17, DOI: 10.47852/bonviewSWT52025578.

  2. Anuja Vaidya, “FDA to Ease Healthcare Wearables Oversight,” TechTarget, January 7, 2026.

  3. Selene S. Mak, et al., “Integrating Consumer-Grade Wearable Devices and Patient-Generated Health Data into Clinical Care: Perspectives from Healthcare Professionals at a Learning Health System,” Journal of General Internal Medicine (2025), DOI: 10.1007/s11606-025-09876-x.

  4. Blake Murdoch, “Privacy and Artificial Intelligence: Challenges for Protecting Health Information in a New Era,” BMC Medical Ethics 22, 122 (2021), DOI: 10.1186/s12910-021-00687-3.


A Look at AI Successes in Healthcare

There continues to be plenty of buzz and optimism surrounding AI and its potential to revolutionize healthcare. Take the use of large language models (LLMs) in high-risk specialties like hematology and oncology, for one example. A recent evaluation by physicians showed Gemini 2.5 Pro and GPT o4-mini models outperforming human benchmarks in identifying and correcting classification and medication dosage errors in clinical documentation. The results suggest value for these LLMs as assistants to physicians in documentation review and quality control. Documentation errors, such as overlooked lab values or incorrect medication dosages, are alarmingly frequent due to human limitations such as fatigue, heavy workloads, and time constraints. These errors can lead to serious adverse events, including treatment delays and costly readmissions. Applying these models, in combination with physician review, can potentially improve documentation accuracy and, in turn, treatment quality and safety for patients.1

AI is playing significant roles already, with advancing successes across multiple dimensions of care.

“AI is no longer an experiment,” states Ben Shahshahani, PhD, Chief AI Officer at Cleveland Clinic. “It’s a real, scalable tool that can support patients, providers and health systems—improving outcomes, reducing stress for caregivers, and making care more accessible and efficient for everyone involved.”

At the same time, these innovations come with a healthy dose of skepticism, plus some reassurance and support for physicians and healthcare teams. These individuals possess the most comprehensive and complete picture of their patients’ health and can provide them the safest, most informed treatment options. Cleveland Clinic praises these tools not as a replacement for the expertise of a trusted medical provider but as enhancements. When used responsibly, and with protections in place for data security and patient privacy, AI is making care more practical, accessible, and more personal.2

Easing the Administrative Burden

AI helps automate and streamline routine tasks and workflows, improving efficiencies and alleviating the administrative headaches for healthcare teams brought on by paperwork, billing, and scheduling (to name a few). Generative tools can summarize patient notes, draft reports and referrals, and minimize manual data entry. These tasks are simplified and operational costs come down. Providers have more time to spend with their patients and more freedom to devote to their practice.3

Quicker, More Confident Diagnosis (with Clearer Imaging)

AI technologies give physicians more support in diagnostic decision-making and treatment recommendations. Diagnostic tools can spot and flag early warning signs of disease in imaging such as X-rays, ultrasounds, and MRIs—subtle signs that even seasoned physicians can miss. With rising patient volumes and increasing pressure to maintain operational efficiency while producing clear radiographic images, providers benefit from equipment that streamlines workflows. Providers can scan more patients in less time, and AI-enabled cameras improve patient positioning, which enhances image quality while reducing radiation exposure. Large amounts of patient data become actionable information for physicians, enabling them to make more confident diagnoses and develop more effective courses of treatment. In turn, patients experience less anxiety during and after office visits. There are fewer false positives, which means fewer patient callbacks—and less reason to worry.2,3

Faster Care in an Emergency

When seconds matter, AI can flag a medical emergency, such as a stroke or pulmonary embolism, the moment a radiographic image is taken. An automatic alert is sent immediately to the care team, allowing faster triage and mobilization and coordination of care resources.

Tracking Health Changes

Long-term tracking and monitoring of conditions give providers the “big picture” of their patient’s health and enable more informed and consistent treatment planning. This is particularly helpful for nodules or masses, where changes in size can indicate cancer. AI can monitor and identify changes between scans, or note variations in measurements from one radiologist to another. An alert can notify the physician about these changes, indicating that another look may be necessary.2

Personalized Care

Providers have tools that can analyze and interpret very large amounts of patient data (including health records, genetic profiles, and imaging from past patients with similar conditions). AI algorithms and natural language processing (NLP) can extract information from structured data (such as that from electronic records) and compare that to information in unstructured data (such as in discharge summaries or narrative patient notes). These comparisons can reveal risk factors or other relevant insights about a condition that may otherwise not have been noticed. Providers are aided in developing ongoing and preventive care plans that are holistic and tailored more precisely to each patient’s unique needs, and at each stage of their condition.3

Speaking of more personalized care, AI encourages supportive self-care that complements the support patients already get from their providers. Helpful chatbots, virtual health assistants, and intuitive, conversational apps help individuals play a more proactive and engaged role in their own health and well-being. These features enable easy appointment scheduling, health monitoring, and medication reminders. Patients can also set up telehealth visits and get personalized education on healthy lifestyle choices like diet, exercise, and sleep. Mental health support is also available and is now more accessible with these applications.2,3

Research Support

Advancements in data analytics are helping researchers interpret increasing amounts of data more easily, and this may lead to potential breakthroughs for the future of medicine. Researchers can use data insights to develop new medications and more effective treatments. They can gain a better understanding of how diseases work, predict risks and treatment successes, and uncover and track larger health trends.2

References
    1. Peter May, et al., “Artificial Intelligence-Assisted Error Detection in Complex Clinical Documentation: Leveraging Large Language Models to Enhance Patient Safety in Oncology,” JCO Clinical Cancer Informatics, American Society of Clinical Oncology Journal, January 6, 2026, https://ascopubs.org/doi/10.1200/CCI-25-00194.
    2. Cleveland Clinic, “How AI Is Being Used in Healthcare—and What It Means for You,” December 22, 2025, https://health.clevelandclinic.org/ai-in-healthcare.
    3. Philips, “Seven key benefits of AI in healthcare for patients and healthcare professionals,” December 16, 2025, https://www.philips.com/a-w/about/news/archive/features/2025/seven-key-benefits-of-ai-in-healthcare-for-patients-and-healthcare-professionals.html.
Further Reading

The Joint Commission and the Coalition for Health AI (CHAI): Guidance on Responsible Use of AI in Healthcare

ProAssurance, ProVisions, AI and Medical Liability, February 2024

Medmarc Insurance:

 

2026 RETENTION CAMPAIGN

Preparing for Tomorrow’s Medicine

Building and sustaining strong relationships with our insureds remains central to ProAssurance’s marketing strategy. Each year, our customer retention campaign is designed to thank policyholders for their trust while providing a timely, relevant resource that supports their professional lives. The campaign has become a cornerstone of our retention efforts and a proven driver of engagement.

A Proven Approach to Retention

Our annual retention initiative centers on a complimentary resource—most often a thoughtfully selected book—offered to insureds in the standard market, typically small- to medium-sized groups, approximately 90 days prior to renewal. These direct mail pieces include a response offer, allowing insureds to request the resource at no additional cost.

For more than a decade, this approach has delivered consistent results. We target a 20 percent response rate and have seen reply rates as high as 25 percent, with renewal rates for responders exceeding those of non-responders by as much as 10 percent. The data continues to reinforce the value of investing in meaningful, relevant touchpoints with our insureds.

Campaign Theme: Discover What’s Next in Healthcare

Healthcare is evolving at an unprecedented pace. Artificial intelligence, advanced sensors, and digital health technologies are reshaping how physicians diagnose, treat, and care for patients. To help our insureds stay informed and prepared, this year’s retention campaign focuses on the future of medicine and the innovations redefining clinical practice.

This year’s complimentary book is Future Care: Sensors, Artificial Intelligence, and the Reinvention of Medicine by Dr. Jag Singh. The book explores the digital transformation of healthcare and its implications for both patient care and medical practice, examining the rise of virtual care, the expanding role of sensors, and the growing impact of artificial intelligence.

Whether readers are curious about emerging technologies or seeking practical insight into how digital transformation may affect their practice, Future Care offers a thoughtful, accessible look at what lies ahead.

Why This Book

Each year, the complimentary resource selected for our retention campaign aligns with one of three guiding themes: physicians writing about the practice of medicine, patient safety and risk management, or a timely trend shaping healthcare.

Future Care squarely addresses the third category. As artificial intelligence, digital health tools, and data-driven care rapidly move from concept to clinical reality, many physicians are navigating both opportunity and uncertainty. Singh offers a balanced, accessible exploration of these technologies while staying grounded in clinical experience.

By offering Future Care, we aim to equip our insureds with context and insight into innovations that will influence patient expectations, care delivery, and risk considerations in the years ahead.

How the Campaign Works

The 2026 retention campaign follows a structured, multi-touch approach:

  • Mailing 1: An introductory mailing to the practice manager at each office location
  • Mailing 2: A thank-you note to insureds with reply card to request the book
  • Mailing 3: A thank you card with reminder to request book
  • Mailing 4: An email reminder each quarter to insureds who have yet to respond

Mailings are spaced approximately one month apart, and insureds who respond are removed from subsequent follow-ups. Campaign performance is actively monitored at each stage, allowing for adjustments to optimize engagement. A final reminder email concludes the campaign.

Looking Ahead

From physician-authored reflections and patient safety insights to timely explorations of emerging trends, our retention resources are carefully selected to align with what matters most to healthcare professionals. The 2026 campaign continues this tradition by addressing one of the most significant forces shaping medicine today.

By combining relevant thought leadership with practical risk management tools, we aim to strengthen relationships, support our insureds’ evolving needs, and reinforce ProAssurance’s role as a trusted partner—today and into the future.

If you’d like to read the book yourself, email AskMarketing@ProAssurance.com and provide your mailing address. We will provide ongoing campaign updates in future issues of ProVisions.

TALES FROM THE RISK MANAGEMENT HELPLINE

AI in Practice

When a cutting edge, disruptive technology like AI is introduced into an industry, there are bound to be questions about its implementation, best practices, and pitfalls. When the industry is healthcare—where the smallest event can impact patient health and safety and where medical liability is lurking behind every decision—the consequences are multiplied.

That’s why ProAssurance has a team of highly skilled risk management consultants who spend time on our Helpline advising our insureds and helping them find answers, often providing links to ProAssurance Risk Management resources for deeper dives.

Below are some of the types of questions callers have asked about AI in their practices and some of the suggestions provided by our Helpline team. The topics range from documentation and ambient listening to AI scribing, privacy, and even one about using an Amazon Echo Dot to stream music into a pediatric exam room. (Who knew?)

  • “A policyholder contracting with a radiology group was unhappy with the results from their AI decision support program and wants to know how to document this.”

  • “A practice plans to delete all AI suggestions and was asking for advice.”

  • “An OB group called to discuss the security risks of utilizing a phone app such as Freed AI for AI scribing (which requires copying and pasting) versus DAX (which is very expensive). Since they use the Greenway patient portal, they decided to look into Greenway’s solution, which is HIPAA compliant and does not require copy and pasting.”

  • “The president of an insured practice emailed asking for an ‘AI playbook’ for AI transcription risks. The representative answered his questions and provided a link to our AI webinar.”

  • “A caller explained they are in the process of incorporating AI dictation software into the practice. He wanted to discuss the consent process, recording, and overall risk management considerations. We discussed this, and additional resources were sent in follow-up.”

  • “A caller was considering an AI program to help with their documentation but wanted to know the risks. We advised that the biggest risk is failure to proofread to ensure that a note is thorough and accurate.”

  • “A caller wanted information about using an AI chart system.”

  • “A caller wanted to know if she needs patient consent to use an ambient listening program for AI documentation.”

  • “A caller had questions regarding AI scribing including informed consent, storage and transfer of information, note accuracy, and integration into the EMR.”

  • “A caller asked about using AI to help with documentation. We advised that MDs need to thoroughly review every note before signing off. They are responsible for their note content.”

  • “A Helpline representative provided advice regarding the use of AI to transcribe into the medical record, including patient consent, review and sign-off of all encounters, HIPAA/privacy concerns, and use of legal counsel prior to signing a contract.”

  • “A ProAssurance underwriter relayed a general question about the risks related to using AI scribe technology. He wanted to know if we had any resources about general risks. We sent him links to a 2-Minute video and a seminar about AI as well as a summary of risk management considerations. The representative encouraged him to pass along our direct contact to any insureds with this question.”

  •  “An insured Practice Manager emailed us in response to our ‘AI in Healthcare’ webinar, asking if it was OK for them to stream music into pediatric exam rooms or labs where patient information is discussed. They are using the Amazon Echo Dot.”

  • “A ProAssurance agent emailed on behalf of an insured who was trying AI in the office to 'record' and transcribe office visits to create a note in the medical record.”

  • “A ProAssurance underwriter was asked by an agent about the risks and considerations with using ambient AI scribes.”

  • “A caller was using an AI scribe to document parts of the medical record. The practice is currently getting consent from the patient at every visit and asked if they could get consent yearly when they get all other yearly forms signed. We advised yearly consent is fine, and they should include the consent process with their AI policy.”

  • “A caller had questions about using AI for chart review and creating summaries but also wanted to ask specifics about the coverage his policy would provide when doing this.”

  • “An insured was requesting assistance with enforcing a new policy/procedure for cell phone usage in the exam room and was worried about the AI component which, if not checked, may turn into a breach of PHI. We discussed their current HR policy on cell phones and suggested expanding upon that, documenting education, and expanding their code of conduct attestation to include the use of cell phones while at work.”

  • “A hospital contact called seeking guidance concerning their AI policy and procedure. We sent resources to get her started.”

  • “An insured was considering using ambient listening and wanted to discuss the pros and cons. Our representative sent the insured a copy of our article on AI ambient listening and also discussed best practices should they elect to implement this technology.”

  • “A caller had questions about consent for AI scribes.”


The Bind Order

This selection of accounts ProAssurance bound recently is intended to give our partners tangible examples of risk classes we’ve been successful in quoting and would like to see more of. These examples are anonymized with final premium rounded, but otherwise present actual accounts.

SOLO PHYSICIANS

ENDOCRINOLOGY
Florida
Limits: $250k/$750k
Admitted
Premium: $11,000

FAMILY PHYSICIAN
Kansas
Limits: $500k/$1.5M
Admitted
Premium: $14,000

OTORHINOLARYNGOLOGY
New Jersey
Limits: $1M/$3M
Admitted
Premium: $36,000

OPHTHALMOLOGY
Nevada
Limits: $1M/$3M
Admitted
Premium: $19,000

ORTHOPEDIC SURGERY
Nevada
Limits: $1M/$3M
Admitted
Premium: $59,000

GYNECOLOGY
Virginia
Limits: $2.75M/$8.25M
Admitted
Premium: $12,000

MEDICAL GROUPS

PATHOLOGY
Alabama
Limits: $1M/$3M
Admitted
Premium: $17,000

RADIOLOGY
Wisconsin
Limits: $1M/$3M
Admitted
Premium: $8,000

ANESTHESIOLOGY
California
Limits: $1M/$3M
Admitted
Premium: $6,400

PEDIATRICS
Pennsylvania
Limits: $500k/$1.5M
Admitted
Premium: $58,000

INTENSIVE CARE
Wisconsin
Limits: $1M/$3M
Admitted
Premium: $61,000

INTERNAL MEDICINE
California
Limits: $1M/$3M
Admitted
Premium: $3,600


SENIOR CARE

SKILLED NURSING
Texas
Limits: $1M/$3M
E&S
Premium: $178,000

FACILITIES

EMERGENCY MEDICINE
Virginia
Limits: $2.75M/$8.25M
Admitted
Premium: $1,512,000

MISCELLANEOUS MEDICAL

BEHAVIORAL HEALTH
Rhode Island
Limits: $1M/$3M
E&S
Premium: $18,000

MID-LEVEL PROVIDER
Michigan
Limits: $1M/$3M
E&S
Premium: $3,000

URGENT CARE
Alabama
Limits: $1M/$3M
E&S
Premium: $9,000


New Business Submissions 

Our standard business intake address for submissions is Submissions@ProAssurance.com. For specialty lines of business, please use one of the following: CustomPhysicians@ProAssurance.com, Hospitals@ProAssurance.com, MiscMedSubs@ProAssurance.com, and SeniorCare@ProAssurance.com. Visit our Producer Guide for additional information on our specialty lines of business.

The types of business and premium amounts are illustrative of where we have written new business and not intended to reflect actual pricing or specific appetites.

Get all past editions of The Bind Order on our Marketing Materials page.

Misuse of AI Chatbots Tops Annual List of Health Technology Hazards

Report also sounds the alarm on insufficient planning for systems outages, substandard medical products, missed recalls of home diabetes management devices, and more.

Artificial intelligence (AI) chatbots in healthcare top the 2026 list of the most significant health technology hazards. The report is prepared annually by ECRI, an independent, nonpartisan patient safety organization. (ECRI)

Read more →

40 Million Americans Use ChatGPT for Healthcare: Report

More than 40 million Americans use ChatGPT daily to ask questions about healthcare, according to a new report from OpenAI that highlights how patients and clinicians are increasingly turning to AI to navigate a complex and strained U.S. healthcare system.

The report, “AI as a Healthcare Ally: How Americans Are Navigating the System With ChatGPT,” was shared with Becker’s by an OpenAI spokesperson. It is based on anonymized ChatGPT message data and OpenAI-led research. (Becker’s Health IT)

Read more →

Top Healthcare AI Trends in 2026

While health systems will continue their AI rollout, use of the technology could evolve amid intensifying competition from EHRs, fragmented regulations and growing M&A opportunities. (Healthcare Dive)

Read more →

ChatGPT for Healthcare, Claude AI Pose Governance Challenges

Experts advise healthcare organizations to ask: "Who will be accountable for decisions influenced by artificial intelligence – the clinician, department, vendor or hospital? And what will be needed to defend AI-influenced decisions?" (Healthcare IT News)

Read more →

63 Healthcare Providers Call for Stronger Safeguards in National Data Exchange Frameworks

Sixty-three healthcare providers across the U.S. are urging stronger oversight and transparency in national health data exchange frameworks, warning that current safeguards are inadequate to protect patient privacy.

In a Jan. 22 letter to Mariann Yeager, CEO of The Sequoia Project, the organizations called for changes to the Trusted Exchange Framework and Common Agreement, or TEFCA, and Carequality, two major interoperability frameworks used to exchange patient health information. (Becker’s Hospital Review)

Read more →

Study: AI Can Flag Cognitive Decline in Clinical Notes Nearly on Par with Humans

A study using agentic artificial intelligence to detect early signs of cognitive decline in unstructured medical records found the technology achieved near-expert performance without any human guidance.

Mass General Brigham researchers built a multi-agent workflow that relied on five debating AI agents using large language models (LLMs) from Meta: Llama and Med42. The data were based on 200 real MGB patients and more than 3,300 clinical notes. (Fierce Healthcare)

Read more →


You Can’t Change History … But You Can Try to Change Perspective


If you’ve sold into healthcare for any length of time, you know “new” is a double-edged sword.

Sometimes you’re the one introducing new products. Other times, you’re defending against competitors with new products and advising customers to slow down and think through why the “latest and greatest” might not be so great once they see real data.

When I was the one pushing “new,” I'd lean into the upside. Faster. Cheaper. More efficient. More convenient. I'd paint a picture of how much better things would be for the doctor and the patient. But ...

When selling against new, I did the opposite. The gloom surfaced as I talked about unintended consequences and asked “what if …” questions. I'd try to appeal to a customer’s common sense with questions such as, "How much do we really know yet? Do your patients want to be part of an informal clinical study?"

Medicine Without Walls

Healthcare is changing faster than at any point in history. We thought the internet was disruptive, but this is different. Medicine used to be one-on-one, face-to-face. Office visits. House calls. Clear boundaries. Now we have telemedicine, remote patient monitoring, wearables, and patients taking advice from AI tools as to whether they even need to see a doctor at all.

Last week, my wife walked into my office with a nosebleed. Not a drip—a gusher. After showing her how to pinch below the nasal bone to stop the bleeding, I logged into Google Gemini and asked how long to wait for bleeding to stop before seeking care. Then I asked whether urgent care or the emergency department made sense given her clinical history.

Fortunately, the bleeding stopped within 15 minutes. All good.

But it hit me later ... how many patients are doing this after their physician has already prescribed treatment or given instructions? And if they compare advice, make the wrong call, and never mention it—who’s liable when things go south?

A Target Rich Environment for Interesting Conversations

The healthcare stakeholders you speak with about professional liability and expanding healthcare technology tend to fall into three predictable groups:

The first group is already worried. These are the physicians and practice leaders who feel the risk is increasing with AI, remote monitoring, and patient self-triage. There's little need to push hard with this group. Ask questions and let them talk. “What concerns you?” “Where do you feel exposed?” They don’t need convincing—they need someone who can help them make sense of their liability concerns and feel protected.

The second group isn’t worried at all. Some believe that AI reduces liability. Better documentation. Earlier detection. More data. This is where you can gently introduce some concerns. "What happens with missed alerts?” “How do patients interpret ‘remote monitoring’?” It's not about being negative; it's about helping them see reality.

The third group hasn’t really thought about it. As of right now, this is likely the largest group. Engage them with simple scenarios about a patient following AI advice rather than discharge instructions, or a delayed response to an RPM alert. Then ask, “Have you ever thought through how liability plays out in that situation?” You're not selling coverage yet; you're opening the door.

Technologies like AI and remote patient monitoring haven’t eliminated risk; they’ve just shifted it to new places. And if you’re an MPL agent, that makes you more valuable than ever—if you’re willing to help clients think clearly about those shifts before talking coverage.

I learned early on that “new” cuts both ways. Today, the agents who win are the ones who help their clients see both sides—and make sure they’re protected when new falls short.  



Written by Mace Horoff of Medical Sales Performance.

Mace Horoff is a representative of Sales Pilot. He helps sales teams and individual representatives who sell medical devices, pharmaceuticals, biotechnology, healthcare services, and other healthcare-related products to sell more and earn more by employing a specialized healthcare selling system.

Have a topic you’d like to see covered? Email your suggestions to AskMarketing@ProAssurance.com.


Risk Management Updates

NEW RISK OFFERING:
Resident Rundown Podcast

This podcast series, hosted by Barbara Hunyady, JD, CPHRM, covers the medical malpractice insurance concerns on the minds of residents, fellows, and other early-in-career doctors. These six episodes cover topics such as when and how to get malpractice insurance, how premiums are calculated, what happens when you’re in a lawsuit, how to protect your personal assets, and how to avoid getting sued in the first place. 

Episodes are available on Spotify, Apple Podcasts, and iHeartRadio. You can also browse all of the episodes on the Risk Management website. 

VIEW EPISODES

Keep Up-to-Date on All Our Risk Management Resources

Our weekly risk management newsletter features the latest releases from ProAssurance’s Risk Management department—as well as highlights from our expansive online library of tools and publications. Join our email list.

OUR 50TH YEAR

The Big Reveal: This Year’s Sock Design Got Extra Funky

If you haven’t come across our ProAssurance-branded socks, you might just be living under a (pet) rock. Each year, the designers in the ProAssurance MPL Marketing department come up with a new pattern for our most popular piece of branded merch—and this year we landed on something totally far out.

As part of our 50th anniversary celebration going on throughout 2026, we decided to truly embrace the 70s aesthetic and designed a pair of nifty vintage tube socks. Since we are veering from the norm, it took a few iterations to truly nail the look, but we hope you’ll agree that the final result is out of sight.

Socks will be available at ProAssurance events and in our trade show booth as part of our ongoing one-sock trick. If you’re not familiar, our missing sock campaign is when we put a single sock in the attendee bag picked up at registration. That sock’s tag directs the attendee to our booth to collect its match. This has been a fun way to draw in booth visitors where we’ll talk all things ProAssurance.

If having a few nifty giveaways would help you sell or renew more insurance, our agency partners are welcome to request a few pairs for your use. Simply email AskMarketing@ProAssurance.com with what you need, where you need it sent, and the timeline for your request.

News & Updates

Rate Changes

We are committed to responsible pricing that reflects the current risk environment. A review of our rate plan and rating factors has resulted in an updated rate strategy for new and renewal accounts in the states listed below.

The following updates, which have been filed and approved, may impact insureds. Note the carrier and effective dates below:

MISSOURI (ProAssurance, effective March 1, 2026)

A base rate increase of 4.7%. After revisions to some classes and specialties, claims free credits, and risk management credits, the overall rate increase is 2.4%.

MISSOURI (NORCAL, effective March 1, 2026)

A base rate increase of 5.0%. After revisions to some classes and specialties, the overall rate increase is 5.6%.

OKLAHOMA (NORCAL, effective April 1, 2026)

A base rate increase of 8.0%. After revisions to some classes and specialties, the overall rate increase is 6.7%.

ProVisions Team

Emily Kelly-Gillingham, Communications & Digital Marketing Director
Kristen Hensley, Communications Supervisor
Scott Spinola, Senior Communications Coordinator
Kaelin O'Reilly, Communications Specialist
Erica Hess, Manager, Creative Services
Kat McPeak, Creative Services Coordinator
Erik Seelman, Senior Graphic Design Specialist
Andrew Segura, Digital Marketing Supervisor
Bethany Ulle, Senior Digital Marketing Specialist