February ProVisions

February 2024 
AI and Medical Liability



About Our Issue

AI has become a buzzword attached to a variety of tech-related endeavors in 2024. It’s perhaps become overused, or at the very least used without the proper context.

AI, or artificial intelligence, specifically refers to software programs designed to mimic human thought patterns—programs that analyze datasets to solve whatever problem the user asks them to tackle.

At its core, this isn’t new. Most of us have typed our symptoms into WebMD to see why our throat hurts or taken advantage of the autocomplete options for that text we’re sending. What is new is the scale at which these software options are being implemented. Like most tech advances, it is becoming faster and cheaper to roll out AI upgrades, which means an increasing number of industries and companies are coming to rely on it.

Insurance and healthcare are both areas that have become targets for AI growth. This is no surprise, as both industries heavily utilize data interpretation to make decisions—the foundation of any AI program. However, both the insurance and healthcare industries are highly regulated and ultimately rely heavily on human interpretation to decipher the nuances of the situation at hand.

While AI today is increasingly associated with the generation of articles and images, artificial intelligence is a science, not an art. It can interpret things, reorganize them, or combine them, but it cannot create anything completely new. This is where the concerns lie.

There is no doubt that artificial intelligence will continue to make its way into our day-to-day operations, but knowing its limits will be essential to incorporating the tech successfully. In this issue we will explore some of the ways AI is already being used in our industry—and by the healthcare professionals we serve—as well as some interesting ways to explore the quickly growing world of AI.

ProAssurance does not endorse the AI products mentioned in this issue, or specific uses of AI technology. This publication is intended to act as an educational resource on the trends developing with this growing area of technology.


Can You Spot the AI?

One of the articles in this issue was written by an AI content generator. We edited the verbiage slightly to ensure it matched the ProAssurance style guide but left the information in the text accurate to what the algorithm generated.

Can you spot which article came from the ProAssurance editorial team, and which one is from a robot? We’ll reveal the results later in this issue.

Artificial Intelligence (AI)-Generated Healthcare Content

Understanding the Limitations

Artificial intelligence (AI), including chatbot tools like the popular ChatGPT, has made possible many useful applications in the healthcare sphere. ChatGPT’s ability to generate human-like responses to natural language inputs has made it an attractive tool for professional and student writers.1 The application can help develop high-quality, informative content in the form of articles, reports, blogs, tweets, and emails.2 This content may be produced in less time than traditional writing, and the burden of arduous research tasks can be reduced. In the fields of medicine and science, healthcare providers, researchers, and academics can access valuable medical education; supplement record documentation; and produce journal articles, clinical studies, and research papers with assistance from the tool.1

ChatGPT’s natural language processing model builds on older technologies of speech recognition, predictive text capability, and deep learning.3 It can function as a search engine, providing direct responses to user queries by applying specific criteria to locate appropriate resources. ChatGPT can aid in topic generation and provide translation for some medical and technical jargon. Because its algorithm is “trained” on a robust dataset of conversational text, the tool can address and generate practical written responses for a broad range of prompts, capturing many of the nuances and variations unique to human speech. It can also present language that is clear, easy to follow, often eloquent, and in the appropriate, specified structure.1

While AI tools like ChatGPT present significant advantages for writers, these applications are not without shortcomings. AI-generated content raises the following concerns4:

  • Authorship and Accountability
  • Inaccuracies and Errors
  • Biases and Prejudices
  • Lack of Regulations; Privacy and Security
  • Dependence and Job Displacement

Moreover, developing and fine-tuning the ChatGPT algorithm necessitates the collection and analysis of huge volumes of text data from across the internet. Notably, these data collections have been relatively sporadic, covering information only up to September 2021 and, with the newer model, up to April 2023. This may result in the information generated by ChatGPT being erroneous or out of date, or perpetuating an incomplete or distorted picture of the subject matter.1,5 Misinformation may be overlooked or unknown, and inadvertently passed on in published work.2 As AI implementations become even more commonplace, both readers and writers should be mindful to question the validity and reliability of content and familiarize themselves with the functional limitations of chatbots like ChatGPT.1

The Limitations and Concerns of ChatGPT-Generated Content

Authorship and Accountability

AI-generated content invites questions about authorship and accountability and, specifically, whether tools like ChatGPT should be applied in research and writing, including healthcare works. Credit for published material has traditionally been given to the individual contributors for their work in applying intelligence to idea generation, research and analysis, design, and execution. It is suggested that definitions of authorship may need to be revisited and specified, considering that the use of ChatGPT and other AI tools in the healthcare ecosystem is only growing. However, most journals will not allow designation of ChatGPT as an author, suggesting that although the tool mimics human thought progression and language and can create a logical, well-developed piece of writing in an appropriate format, it may not have the capability to produce information that is 100% reliable. As AI is non-human, it cannot be held responsible for its content in the same way as individuals with intention and legal obligations.1,4

Supporting the argument of accountability is an acknowledgment of the continued need for human intervention with use of these tools, despite their impressive capabilities. Specifically, processes like editing and applying reason and specialized expertise lie beyond the product’s scope of training but are nevertheless essential in writing. It may be acceptable, however, and even beneficial, for writers to include references to such AI tools along with the other resources they have used in the development of their work. Doing so might establish greater transparency while allowing the author to claim appropriate responsibility for the validity of their content. Further, such citations may bring awareness to the merits of AI resources like ChatGPT as supplemental assistants to the research and writing processes.1,4

As AI algorithms evolve with new and expanding data collections, opportunities for misuse and plagiarism emerge. In one study, plagiarism detection software and a detection tool used to identify AI-generated content (an “AI output detector”) were applied to 50 research abstracts that were generated solely by ChatGPT. ChatGPT had created these abstracts following its review of excerpts from journals like JAMA and The New England Journal of Medicine. The plagiarism detection software found no plagiarism by ChatGPT, while the AI output detector recognized only 66% of the abstracts as being AI-created. It is encouraging that ChatGPT was not found to have plagiarized the journal articles. However, as ChatGPT seemed able to pass through the AI output detector checks with relative ease, it may be deduced that an individual reader would be unable to make the differentiation.1

Inaccuracies and Errors

Accuracy and reliability of text generated by AI models depend on the quality of data used in training the models. ChatGPT, like any AI model, may have errors or biases built into its core algorithm and, as a result, its output based on these inaccuracies will sometimes be incorrect.1 Language models are inherently intricate, complex, and potentially difficult to understand. A user may lack the foresight or knowledge necessary for gauging the correctness of an AI-generated answer or spotting specific errors, especially if the user is not aware of how the tool arrived at its conclusions.4 There may be ambiguities in the user’s prompt or question (i.e., vague wording, meandering, or unfocused speech), resulting in an answer that is, in turn, also ambiguous.1 In addition, using preset calculations to parse through data and select the “best” answer in mere fractions of a second—even when there is no clear or easy answer available—can result in incomplete, skewed information. These types of outputs, known as AI “hallucinations,” are presented as factual but are really more of an improvised best guess generated by the chatbot, and have a high potential for inaccuracy.6

ChatGPT has a limited ability to apply deductive reasoning in its approach to answers, or to deconstruct and prioritize answers to layered questions. It can have trouble inferring underlying meanings or handling complex, “niche” topics. This weakness becomes even more challenging in detailed areas of science and medicine, which require subject matter expertise and an acute awareness and ability to analyze the constant changes and developments characteristic of these fields. Though ChatGPT is skilled in performing some language translation and adjustment to make medical conditions and treatment terminology more digestible for the average person, the tool may have a hard time interpreting or “understanding” certain medical phrases or jargon specific to a lesser-known subject or subspecialty.7

Biases and Prejudices

Data used in the development of AI algorithms may be limited, over-representing or under-representing certain groups, genders, ages, races, and cultures.8 A close examination will reveal that such an overgeneralized and unbalanced database fails to properly include certain populations; therefore, the results from AI chatbots may be unreliable as applied to those groups. The potential biases and discriminatory attitudes that may be apparent in data collected across the web, and that inform the outputs generated by tools like AI chatbots, reflect not only society’s culture but also the culture of the technological innovators behind the AI-assisted product. A lack of diversity among these teams, as well as collective misconceptions or prejudices, can become “embedded” in product development, meaning that the product may exclude sizable groups of the population. An unintentional flaw in the product design or in the algorithm’s data input can also yield such biases. These biases perpetuate when AI presents flawed conclusions to users, who may rely upon and pass along that skewed information. Large, varied groups and underrepresented communities should be included in research studies to create more diverse training sets for new algorithms. Doing so will allow ChatGPT and similar tools to provide more accurate, reliable, and inclusive results.9

Lack of Regulations; Privacy and Security

Training of algorithms for ChatGPT and other chatbot tools incorporates access to extensive datasets, which may include health information, particularly if the AI tool is utilized across healthcare facilities through the sharing of patient information. A central concern with utilizing health information is the privacy and security of the details within that gathered data, which may be vulnerable to hackers and data breaches. When the underlying data for an AI algorithm contains health information of an actual person, utilizing only properly de-identified data that does not contain protected health information of any individual will avoid violations of HIPAA and breaches of privacy.

With no universal guidelines in place to govern the use, efficacy, implementation, and auditing of newer AI tools like ChatGPT in the healthcare sector, legal and ethical debates circulate around the handling and quality of data, patient consent, and confidentiality. A lack of clarity about data models and algorithms, plus inadequate training on the user functions of AI equipment, invites warranted skepticism and presents a need for greater transparency and education across healthcare organizations.

It is suggested that collaboration among AI innovators, security experts, and policymakers, as well as healthcare clinicians and providers, is necessary to develop and implement rules, regulations, and guidelines that address these novel issues of transparency and security and provide a smoother integration of AI into clinical practices. Specifications in these guidelines could include restrictions on data usage and the sharing of information and impose quality control measures for de-identification, encryption, and anonymization. These specifications would help ensure privacy and security while maintaining quality of patient care and compliance with existing national healthcare regulations.8,9

Dependence and Job Displacement

There is a legitimate concern about dependence and overreliance on AI-assisted tools, especially if their algorithm models are flawed, contain biases, or are simply outdated. Leaning too heavily on these tools can result in missed errors and a complacency around fact checking and quality assurance for documentation and other important practical applications in healthcare. In the production of healthcare and scientific written content, creativity, personal experience, and an individual voice contribute to quality and originality; overreliance on AI raises a deep concern that these attributes may be lost when using a tool like ChatGPT. Content generated through a chatbot should be reviewed and edited for factual merit, quality, grammar, consistency, and timeliness.

As AI technology advances in functionality and versatility, researchers and writers may fear job loss or a reduction of employment opportunities. However, the elements common to valuable written pieces illustrate integral contributions that can only come from individual authors: demonstrated depth of knowledge, critical and applied thinking, anecdotes and specific deductive reasoning, and a personal connection to the audience. These are human attributes that cannot be fully replicated or recreated by any technology. ChatGPT and other chatbot tools currently work best alongside humans, serving as resources and tools that make the processes of writing and research smoother and more manageable.4,8

References

1. Tirth Dave, Sai Anirudh Athaluri, Satyam Singh, “ChatGPT in Medicine: An Overview of Its Applications, Advantages, Limitations, Future Prospects, and Ethical Considerations,” Frontiers in Artificial Intelligence 6 (May 4, 2023), https://www.frontiersin.org/articles/10.3389/frai.2023.1169595/full.

2. Jodie Cook, “6 Giveaway Signs of ChatGPT-Generated Content,” Forbes, Dec. 6, 2023, https://www.forbes.com/sites/jodiecook/2023/12/06/6-giveaway-signs-of-chatgpt-generated-content/?sh=10b8c9181e7d.

3. “The Benefits of AI in Healthcare,” IBM Education, July 11, 2023, https://www.ibm.com/blog/the-benefits-of-ai-in-healthcare/.

4. Alexander S Doyal et al., “ChatGPT and Artificial Intelligence in Medical Writing: Concerns and Ethical Considerations,” Cureus Journal of Medical Science 15(8) (August 10, 2023), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10492634/#:~:text=Some%20suggested%20uses%20of%20ChatGPT,in%20the%20writing%20of%20medical.

5. Aaron Mok, “ChatGPT Is Getting an Upgrade That Will Make It More Up to Date,” Business Insider, Nov. 6, 2023, https://www.businessinsider.com/open-ai-chatgpt-training-up-to-date-gpt4-turbo-2023-11#:~:text=ChatGPT%20users%20will%20soon%20have,at%20its%20first%20developer%20day.

6. Sindhu Sundar and Aaron Mok, “How Do AI Chatbots Like ChatGPT Work? Here’s a Quick Explainer,” Business Insider, Oct. 14, 2023, https://www.businessinsider.com/how-ai-chatbots-like-chatgpt-work-explainer-2023-7.

7. Bernard Marr, “The Top Limitations of ChatGPT,” Forbes, Mar. 3, 2023, https://www.forbes.com/sites/bernardmarr/2023/03/03/the-top-10-limitations-of-chatgpt/?sh=5b49a2158f35.

8. Josh Nguyen and Christopher A. Pepping, “The Application of ChatGPT in Healthcare Progress Notes: A Commentary From a Clinical and Research Perspective,” Clinical and Translational Medicine 13(7) (July 2, 2023), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10315641/.

9. Bangul Khan et al., “Drawbacks of Artificial Intelligence and Their Potential Solutions in the Healthcare Sector,” Biomedical Materials & Devices 1-8 (Feb. 8, 2023), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9908503/.

Regional Leaders Weigh In On AI

The Question: How do you utilize AI in your work? If it does not impact you now, how do you anticipate it impacting you or your clients in the future?

SOUTHWEST

  • Claims: Laura Ekery
  • Underwriting: John Alexander
  • Risk Management: Jennifer Freeden

Claims: Laura Ekery

The Claims department is optimistic about the potential to use AI in all aspects of our business, from evaluating and identifying risks, to promoting management of risks, analyzing appropriate reserves, and forecasting claim outcomes. ProAssurance is actively investigating how and where AI can be used to improve service by enhancing our insureds’ digital experience and encouraging retention and enhanced services to each insured. Claims’ efforts will focus on enhanced efficiency and effective evaluation of all aspects of claims. These aspects currently include analysis of attorney fees and outcomes and efficient review of medical records.

AI and its endless possibilities offer great promise for the future. We intend to harness its potential to become better, faster, and more effective.

Underwriting: John Alexander

It seems like AI could speed up many basic underwriting processes. I am learning and absorbing all that I can about AI so that I understand the specific impacts it will bring to our business. I anticipate AI will produce time savings, and this would enable underwriters to devote more attention to aspects of their work that depend on human expertise. I read how AI will bring improved efficiency to the paperwork involved with medicine; however, I remain guarded because it seems to also present higher risks related to data security.

Risk Management: Jennifer Freeden

Seeing the wave of AI continue to affect the healthcare industry, the Risk Management team just launched a webinar in December 2023 to cover the topic. We are at a place where there are more questions than answers, but AI is not going away. I am just starting to dabble with AI in my own work, mostly to help summarize meetings. However, our insureds are faced with more questions than ever about “if,” “how,” and “when” to implement AI into healthcare practices, and how best to use it if it is already integrated. It is vital that they understand the benefits and risks of utilizing AI and consider what is best for their patients and staff. AI is evolving each day, and we need to stay on top of the safety trends.

MIDWEST

  • Claims: Mike Severyn
  • Underwriting: Debbie Farr
  • Risk Management: Tina Santos
  • Business Development: Doug Darnell

Claims: Mike Severyn

Currently, neither I nor the Claims department uses AI significantly in our day-to-day activities with our insureds and business partners. However, I see this changing in the future and predict AI will have a significant impact on the MPL industry. I see AI being used to assist or supplement claim activities such as jury profiling, medical record tabbing or bookmarking, setting of reserves, and more.

In the future I see AI dramatically impacting healthcare, which will impact the MPL industry as well. For example, I believe AI will be used to develop diagnoses and treatment plans for patients. I believe AI will be used to assist or replace healthcare providers in certain situations. Who will the patient sue then? The robot that provided the information, the developer who created the software, or the healthcare provider who relied on the information? Or all three?

Underwriting: Debbie Farr

When asked about my usage of AI, I initially thought, "I don't use AI." Upon reflection, I realized that I do. My most frequent interaction with AI is through Siri, whom I consult daily for spelling and informational queries. Recently I've also been introduced to ChatGPT, which has proven to be incredibly helpful in various tasks:

  • Project Outlining: ChatGPT can generate outlines for projects, providing a structured framework that can then be filled in with details upon request.
  • Writing Assistance: Whether it's drafting a letter or crafting a message, ChatGPT can refine and enhance the content to better suit my needs. It's capable of adjusting its suggestions based on my preferences.
  • Reviews and Feedback: If I’m seeking alternatives or fresh perspectives, ChatGPT can assist in refining my comments or feedback, offering diverse language choices to avoid repetition.

ChatGPT serves as a versatile and reliable tool for enhancing productivity and communication. And yes, I used ChatGPT to refine this message!

Risk Management: Tina Santos

I have used ChatGPT on a very limited basis, mostly to enhance title creation and program descriptions for educational content. In the future I see generative AI taking a more prominent role in Risk Management. As our insureds start utilizing different applications for medical record documentation, image interpretation, and even clinical support, the risk of misinterpretation could escalate. Our department will have to explain the risks versus benefits of this technology. While AI does offer the advantage of providing analysis of complex scenarios and differential diagnoses clinicians may not be familiar with, there is no substitute for someone’s real-world experience and education. Utilized as an enhancement in different contexts, AI has the potential to change and improve the healthcare landscape.

Business Development: Doug Darnell

I’m still writing my own reports. I see this as the next EMR, or cyber or privacy wave. There doesn’t seem to be any slowing of this advancement once it gets moving, and its widespread use seems to be a question of “when” rather than “if.” The important thing is to understand the limitations of generative AI before its widespread adoption and implementation into medicine. Those who move too quickly into this area may learn, like today’s school kids, that this isn’t the answer for every exam or problem. I think it will be important for ProAssurance to understand the varied uses of AI along the healthcare spectrum to be able to advise our clients of the myriad risks associated with its use. The benefits to doctor and patient alike are impossible to ignore; however, we must account for bumps in the road to progress.

SOUTHEAST

  • Risk Management: Mallory Earley
  • Business Development: Seth Swanson
  • Claims: Frank Bishop
  • Underwriting: Christine Vaz

Risk Management: Mallory Earley

The Risk Management department does not utilize AI in our line of business yet, but there is potential in the future. Our insureds are already using AI in the form of scheduling software, summarizing clinical notes and medical records, and even using AI as a tool in diagnosis. Our department recently had a national webinar on Artificial Intelligence, and the main takeaways are to ensure understanding of its role and the implications of utilizing such advances while weighing the risks, benefits, and alternatives.

Although the buzz is around AI, the use of technology and tools is not new to the medical industry. This advancement in particular is another area to understand and validate prior to implementation while also recognizing its limitations. We do not anticipate AI assuming the role of risk manager or provider any time soon, but the advances could improve services and timeliness.

Business Development: Seth Swanson

AI can potentially impact malpractice insurance for doctors by enhancing diagnostics and reducing errors, leading to improved patient outcomes. However, as AI becomes more integrated into healthcare, legal and ethical considerations may arise, influencing insurance policies and premiums based on the evolving landscape of medical technology and its associated risks.

AI’s positive impacts include assisting doctors in diagnostics, personalized treatment plans, and administrative tasks. It can analyze vast amounts of medical data, providing quick and accurate insights and ultimately improving decision-making. However, there are concerns about job displacement and, as previously mentioned, ethical considerations related to the use of AI in healthcare. Striking the right balance between AI assistance and human expertise is crucial for maximizing the benefits for doctors and patients alike.

Claims: Frank Bishop

AI presents fascinating possibilities for streamlining and enhancing medical liability claims work. It's still essential to navigate this space with caution and a keen understanding of both its potential and limitations. ProAssurance Claims management is currently reviewing different AI platforms for implementation into our Claims space. Some of the potential areas where AI can currently aid Claims include:

  • Document organization and analysis: AI can sort through vast amounts of medical records and other relevant materials, identifying key information and flagging potential issues faster than manual review. It can also prepare medical chronologies.
  • Predictive analytics: AI algorithms can analyze past cases and identify patterns to predict the likelihood of success, the settlement values, and areas of weakness in the opposing party's case.
  • Causation analysis: AI can help assess the connection between a provider's actions and the patient's harm, providing valuable insights for case evaluation and settlement negotiations.
  • Literature review and evidence gathering: AI can quickly identify and retrieve relevant expert reports, scientific studies, and other resources for use by expert witnesses.
  • Data analysis and visualization: AI can analyze complex medical data and present it in clear, understandable formats for internal review committee presentations.

Even within this response, approximately 75% of it was generated by AI.

Underwriting: Christine Vaz

AI is a hot topic in Underwriting these days, with the most common discussion being: how do we—and can we—utilize AI in the underwriting process? As we have seen in many industries, AI can be extremely useful when it comes to automating processes and gathering data. These are two areas in which I see AI being utilized now at ProAssurance and which will continue to develop in the future to enhance the customer experience.

The second uncertainty around AI and underwriting is how we cover a provider or facility that is utilizing AI in its operations. Like many things in underwriting, this is a gray area. There are current discussions around who, what, where, when, and how to cover this exposure. Each situation is unique, and the experiences of the providers and organizations differ. The reliance on the technology used to form a diagnosis varies among provider types, and the technology itself is changing regularly. As we refine coverage and answer these questions, our insureds will need to understand their own usage and communicate it to their broker or carrier. This will help ensure that they are covered in the event of a future claim and that they incorporate proper risk management to mitigate the risks associated with AI.

NORTHEAST

  • Claims: Mark Lightfoot
  • Underwriting: Tim Pingel
  • Risk Management: Michele Crum

Claims: Mark Lightfoot

We are not yet using AI in Claims. We are looking into some applications including, for example, medical chronologies. AI may be, and probably already is, being used in medical care, particularly in the diagnostic setting.

Underwriting: Tim Pingel

While we aren’t using AI currently, we are starting to use data analytics, culling through all the information we have to help us determine trends in specialties. This allows us to make better decisions when it comes to risk selection. As those tools are further developed, we will be able to automate parts of the underwriting process, which will help improve decision-making and the time it takes to make those decisions.

Risk Management: Michele Crum

In Risk Management we are being asked to provide educational information about AI and the risks those using AI need to know. The research and experiences used today become outdated quickly, so our team believes we need to update our work at least every six months due to the rapid growth in use.

WEST

  • Claims: Gina Harris
  • Underwriting: Lucy Sam
  • Risk Management: Katie Theodorakis
  • Business Development: Andrea Linder

Claims: Gina Harris

We know that computers can look at old radiology films and “find” masses that cannot be seen with the naked eye. We also have a good sense that computers can predict when something unseen might evolve into something sinister. How will AI impact the standard of care for medicine?

Underwriting: Lucy Sam

An item that I mentioned at the Leadership Elite event was the possibility of AI somehow reducing the number of burdensome administrative tasks for providers and ultimately improving morale (e.g., assisting with medical record documentation and other paperwork).

Regarding the radiology example, ideally the radiologist and AI work in tandem to avoid missing items.

Risk Management: Katie Theodorakis

We do not currently use AI in Risk Management on the West regional team. Our biggest challenge with AI is staying on top of what is out there so we can educate physicians. Our fourth quarter Risk Management national webinar was all about AI. Physicians are already asking for updated education because the environment is changing so fast. There is great potential in AI, but with the potential comes risks.

Business Development: Andrea Linder

At this point, I do not see a use for AI in Business Development. I’m sure there are sales applications out there using AI, but I’ve not seen or heard of any. Our process is unique because our customers (brokers) are not the end users.

AI and Medical Malpractice Insurance:

Navigating Risks and Opportunities

Artificial Intelligence, also known as AI, has revolutionized various industries, and healthcare is no exception. From diagnosing diseases to streamlining administrative tasks, AI applications promise to enhance the efficiency and effectiveness of medical practices. However, with the integration of AI into healthcare comes a new set of challenges, particularly in the realm of medical malpractice insurance. This article explores the impact of AI on medical malpractice insurance, examining both the risks and opportunities associated with the use of AI in the healthcare sector.

Risks of AI in Healthcare

1. Diagnostic Errors: AI systems used for medical diagnoses can sometimes misinterpret information, leading to inaccurate conclusions. Diagnostic errors, if attributed to AI, may pose a significant risk in terms of medical malpractice claims.

2. Lack of Explainability: Many AI algorithms operate as "black boxes," making it challenging to understand how they reach specific decisions. This lack of transparency can complicate the attribution of errors, potentially impacting medical malpractice cases.

3. Data Security and Privacy Concerns: The use of AI in healthcare relies heavily on patient data. Any breach of data security or privacy violations could result in legal consequences, affecting both healthcare providers and medical professional liability insurers.

Opportunities and Mitigating Strategies

1. Improved Diagnostics and Patient Outcomes: Despite the risks, AI has the potential to significantly improve diagnostic accuracy and patient outcomes. Insurers may reward healthcare providers who demonstrate successful AI integration with lower premiums.

2. Advanced Risk Assessment Models: AI enables the development of sophisticated risk assessment models, allowing insurers to better understand and predict potential liabilities. This can lead to more customized and accurate insurance coverage.

3. Regulatory Compliance and Standards: Establishing clear regulatory frameworks and industry standards for AI in healthcare can mitigate risks. Compliance with these standards may serve as a defense in medical malpractice cases, emphasizing adherence to best practices.

As AI continues to reshape the healthcare landscape, the relationship between technology and medical malpractice insurance evolves. While there are inherent risks associated with AI, proactive measures such as improved transparency, advanced risk assessment models, and regulatory compliance can help mitigate these challenges. As the healthcare industry embraces the potential of AI, insurers and healthcare providers must work collaboratively to navigate this evolving landscape and strike a balance between innovation and risk management.

References

"Artificial Intelligence for Medical Diagnosis: Opportunities and Challenges," Journal of the American Medical Association, 2021.

"Explainable Artificial Intelligence for Medical Diagnosis: The Key to Trust, Adoption, and Accountability," Health Information Science and Systems, 2021.

"Machine Learning Applications in Risk Assessment and Decision-Making: A Review," Journal of Risk and Financial Management, 2020.

"Regulatory Aspects of Artificial Intelligence in Health Care," New England Journal of Medicine, 2022.

"Security and Privacy Challenges in Healthcare: A Case Study of Using AI in Smart Hospitals," IEEE Access, 2019.

"The Impact of Artificial Intelligence on Medical Malpractice Liability," Journal of the American Medical Association, 2020.

Patient Privacy in the AI Era:

Healthcare’s New Data Dynamics

The influence of artificial intelligence on the medical field will be significant and rapid. By 2030, it is estimated that the AI healthcare market will reach nearly $200 billion.1 With this breakthrough technology's swift advancement and adoption, lawmakers and courts must address a wide range of novel and challenging issues. At or near the top of this list are questions surrounding the use of patient data, specifically patient medical data.

Patient medical data is crucial for AI healthcare systems because it enables them to learn from diverse health records, improving their accuracy in diagnosing and treating patients. However, integrating this data into AI systems presents unique challenges and requires careful consideration of privacy laws. The recent case of Dinerstein v. Google, 73 F.4th 502 (7th Cir. 2023), highlights many of these associated complexities and provides a potential roadmap for other jurisdictions to follow.

As part of a research partnership, the University of Chicago Medical Center (UCMC) shared anonymized patient records with Google to facilitate the development of AI-driven predictive health models. A former UCMC patient sued both entities, alleging multiple legal violations, including breach of contract, invasion of privacy, and consumer protection breaches. The U.S. District Court for the Northern District of Illinois dismissed the case, finding that the patient failed to state a claim and thus did not have standing to sue.

On appeal, the Seventh Circuit affirmed the dismissal, holding that the patient had not suffered a tangible injury due to the disclosure of his medical records. In reaching this conclusion, the court noted that most patient identifiers had been removed from the medical records, which sufficiently anonymized the data provided to the AI system.

In examining the possibility of future harm, the court acknowledged the theoretical possibility of identifying the patient by correlating his medical data with geolocation information but deemed such identification unlikely. In rejecting this potential “future re-identification” argument, the court noted that the alleged risk of future re-identification was not sufficiently imminent and could neither support monetary damages nor injunctive relief.2

Dinerstein suggests that proper de-identification of medical records will be essential in reducing legal exposure when sharing patient data with AI systems. Conveniently, this approach aligns with the most well-known patient privacy law, the Health Insurance Portability and Accountability Act (HIPAA). Once data is de-identified according to HIPAA standards,3 it is no longer considered Protected Health Information (PHI) and is not subject to HIPAA's use and disclosure restrictions.

While Dinerstein provides welcome guidance on this emerging issue, it remains to be seen if other jurisdictions will adopt a similar approach regarding patient privacy claims involving AI. Until then, providers should consider the following risk management steps to reduce legal risk when using patient medical data in AI systems:

1. Implement Robust Data De-identification: Use rigorous methods to de-identify patient data before using it in AI systems (a brief illustrative sketch follows this list).
2. Ensure HIPAA Compliance: Adhere strictly to HIPAA guidelines for handling and sharing patient data.
3. Obtain Patient Consent: Inform patients how their data will be used and obtain their consent.
4. Develop Clear Policies: Establish and enforce clear policies for data handling and usage in AI applications.
5. Perform Regular Audits and Assessments: Conduct regular audits of data use and AI algorithms to ensure compliance with legal and ethical standards.
6. Stay Informed about Legal Developments: Keep abreast of changes in laws and regulations related to patient data and AI.
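For readers curious what the first step looks like in practice, below is a minimal sketch of Safe Harbor-style de-identification. It is illustrative only: the field names and the strip_identifiers function are invented for this example, and a real program would need to address all 18 Safe Harbor identifier categories (or rely on expert determination and validated tooling) before any data reaches an AI system.

```python
# Illustrative sketch only: hypothetical field names, not a certified
# HIPAA Safe Harbor implementation.
from datetime import date

# A few of the 18 Safe Harbor identifier categories, used here as examples.
DIRECT_IDENTIFIERS = {"name", "street_address", "phone", "email", "mrn", "ssn"}

def strip_identifiers(record: dict) -> dict:
    """Return a copy of the record with example identifiers removed or generalized."""
    cleaned = {key: value for key, value in record.items() if key not in DIRECT_IDENTIFIERS}

    # Safe Harbor keeps only the year of dates; ages of 90 and over are aggregated.
    if "birth_date" in cleaned:
        birth_year = cleaned.pop("birth_date").year
        age = date.today().year - birth_year  # approximate age from year only
        cleaned["age"] = "90+" if age >= 90 else age

    # ZIP codes are truncated to the first three digits (the exception for
    # sparsely populated areas is omitted in this sketch).
    if "zip" in cleaned:
        cleaned["zip3"] = cleaned.pop("zip")[:3]

    return cleaned

if __name__ == "__main__":
    patient = {
        "name": "Jane Doe",
        "mrn": "123456",
        "birth_date": date(1950, 6, 1),
        "zip": "60637",
        "diagnosis": "type 2 diabetes",
    }
    print(strip_identifiers(patient))  # identifiers dropped; date and ZIP generalized
```

The point is the kind of transformation involved: direct identifiers are removed entirely, while quasi-identifiers such as dates and ZIP codes are generalized so that individual patients are far harder to re-identify—the sort of de-identification the Dinerstein court found persuasive.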


References

1. “AI in Healthcare Market Size Worldwide 2021–2030,” Statista, September 28, 2023, https://www.statista.com/statistics/1334826/ai-in-healthcare-market-size-worldwide.
2. Dinerstein v. Google, LLC, 73 F.4th 502 (7th Cir. 2023).
3. HIPAA recognizes two methods for de-identification: the Expert Determination Method and the Safe Harbor Method. The former involves a qualified expert confirming the risk of re-identification is minimal, while the latter requires the removal of specific identifiers as outlined by the regulation.


Meet Phyllis: Eastern's AI-Powered Mailroom Associate

We all have aspects of our jobs that are routine, even tedious. If you could delegate the boring, manual parts of your day-to-day, would you? That’s exactly what Eastern Alliance, part of ProAssurance Group, has done in its mailroom.

Meet Phyllis. She is an AI-powered, digital team member that can think, read, and intuit just like a human. She understands how to interact with insurance documents, systems, and processes so well that she indexes more than 80% of Eastern’s Claims mail.

Roots Automation
Phyllis was named by two of Eastern’s long-time mailroom associates, Ellen Leiphart and Rose Muchmore, but the idea for her came from a company called Roots Automation. They created the Roots Autonomous Workforce Platform—the world’s first language-oriented platform that enables insurance-knowledge workers to create AI-powered “digital coworkers” using speech and text.

Eastern began looking at the platform in September 2022 and decided to “hire” one of these digital coworkers for its mailroom. After an extensive training period led primarily by Jennifer Zimmerman, Eastern’s Business Systems Manager, Phyllis went live on April 26, 2023.

How to Train Your Robot
Over the course of several months, Roots Automation worked with Zimmerman as she explained exactly how the mailroom associates did their job, step by step, indexing thousands upon thousands of emails—primarily vendor bills, correspondence, state forms, and medical reports. She narrated how all of them needed to be reviewed and indexed to the right document type and sent to the right person.

As Zimmerman painstakingly explained the work, Phyllis began to learn.

She came programmed with key capabilities needed to effectively manage the documents and systems commonly found in an insurance mailroom. She holds an extensive amount of insurance-specific data and process knowledge through Roots Automation’s proprietary InsurGPT™ AI model—a large language model (LLM) created specifically for insurance.

She is built with AI that is capable of reading screens, documents, forms, and data like a human. She can dynamically identify and interact with objects on a computer screen, such as buttons, icons, form fields, and field names.

Leveraging advanced Natural Language Processing (NLP) to collaborate and communicate with people around document exceptions, Phyllis is continually adapting and learning. She captures transactional and metadata through interactions with people. If she is not “confident” in her ability to process an item, she can send that item for “review,” where one of our human team members can handle it for her.

Phyllis undergoes monthly performance retraining, with the goal of increased confidence and ongoing improvement—meaning she gets smarter every month. Currently Phyllis is successfully processing more than 80% of the items she encounters. By December 2023 Phyllis had processed more than 64,000 items.
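For the technically curious, the “not confident, send for review” behavior described above follows a common automation pattern: classify each item, act automatically only when the model’s confidence clears a threshold, and route everything else to a person. The sketch below is a generic illustration of that pattern, not Roots Automation’s actual code; the classifier stub, threshold value, and labels are all assumptions made for the example.

```python
# Generic confidence-threshold routing sketch; not Roots Automation's implementation.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # assumed value, chosen only for illustration

@dataclass
class Prediction:
    document_type: str  # e.g., "vendor_bill", "state_form", "medical_report"
    confidence: float   # model's estimated probability for that label

def classify(document_text: str) -> Prediction:
    """Stand-in for the AI model that reads and labels a piece of mail."""
    # A real system would call a trained model here; this stub simply
    # pretends anything mentioning "invoice" is a vendor bill.
    if "invoice" in document_text.lower():
        return Prediction("vendor_bill", 0.93)
    return Prediction("unknown", 0.40)

def route(document_text: str) -> str:
    """Index the item automatically or send it to a human review queue."""
    prediction = classify(document_text)
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-indexed as {prediction.document_type}"
    return "sent to human review queue"

if __name__ == "__main__":
    print(route("Invoice #4417 for radiology services"))  # auto-indexed
    print(route("Handwritten note about a prior claim"))  # human review
```

The human decisions made on reviewed items can then feed later training cycles, which is presumably part of how the monthly retraining keeps nudging Phyllis’s processing rate upward.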

Advantages of Automation
The benefits of an AI-powered team member are many. Phyllis can automate repetitive tasks, improve processes and workflows, enable quicker decision making, reduce human error, and be available 24/7.

“Prior to Phyllis, the average turnaround time for Claims mail coming into the mailroom was always three days,” said Zimmerman. “Occasionally, around the holidays or if someone was on vacation, we may have pushed to four or five days. With Phyllis, we are at same day processing. The obvious benefit of a digital coworker is that she can process incredibly quickly and do it all the time. She never gets sick or takes a day off.

“Most importantly, though, she eliminates the mundane, manual part of the work. Our associates can now take on new tasks because they have more time.”

Phyllis may do the work of three people, but she doesn’t replace our valuable human team members. The concern about AI replacing humans is a real one. But according to Harry Talbert, Eastern’s Senior Vice President, Information Systems, we needn’t worry. “We had no staff reductions with Phyllis,” he said. “The idea is to automate inefficiencies and reallocate people to other aspects of their work where they can provide more value—value that only a real person can provide.”

AI-powered digital coworkers are designed to streamline processes, not replace jobs.

Will She Replicate?
Phyllis has received a glowing performance evaluation. Is it possible that a Phillip might join our team in the future? Talbert says it’s likely.

“I think we’ll continue to seek opportunities to utilize the platform across the company to improve efficiencies and maximize our resources,” he said. “We’re budgeting for future projects, and asking, what makes the most sense?”

Clearly digital team members can save real people time, but we don’t want our service levels to suffer. Utilizing machine intelligence is only smart if it improves our customer interactions and frees people up to perform the work that needs a human touch.

About Eastern
Since its inception in 1997, Eastern has been committed to creatively serving the specialty workers’ compensation insurance needs of a diverse population of businesses and organizations. Through its primary operating subsidiaries—Eastern Alliance, Eastern Re, and Inova Re—Eastern is a specialty underwriter of workers’ compensation insurance products with an unwavering focus on delivering innovative solutions and services to the specialty workers’ compensation market. Learn more at EasternAlliance.com.

Retention Campaign Update

Each year we prioritize our marketing efforts to acquire and retain customers by looking for ways to further develop relationships. One of our most popular strategies is the complimentary book we offer our insureds as a thank you for being a loyal customer. The December 2023 issue of ProVisions introduced our latest retention campaign offering—The Laws of Medicine: Field Notes from an Uncertain Science by Dr. Siddhartha Mukherjee.

2023 Campaign Results

The 2023 campaign has officially ended and featured the popular book Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, by Dr. Eric Topol. The book explored the role artificial intelligence will play in the future of medicine and how deep learning algorithms can further a physician’s ability to create custom treatment plans.

We are pleased with the campaign’s success, showing an overall response rate of nearly 14%, with additional replies expected.

To receive any of our complimentary books, email AskMarketing@ProAssurance.com letting us know your mailing address. We appreciate everything you do to place and retain business with ProAssurance. If you have any questions about the retention campaign, please reach out.


 

Learn More: AI in Medicine

New On-Demand CME Course

AI in Healthcare: Examining the Matrix of Risk & Opportunities

A thorough understanding of artificial intelligence, machine learning, and deep learning is important as we examine the risks and opportunities of their utilization in healthcare. We will define these terms as we explore current and future uses of AI in healthcare. We will also help your clients to identify potential risks to patient safety that could arise as they incorporate AI into their practices. These risks could potentially lead to medical professional liability claims. We hope the risk management strategies provided help decrease liability risk and improve patient safety when AI is utilized.

Objective
This enduring educational activity will support your clients’ ability to:

  • Understand the current and future uses of AI in healthcare.
  • Identify patient safety issues and liability risks associated with incorporating AI into healthcare.
  • Implement strategies to decrease liability risk and improve patient safety when utilizing AI in a practice.

Physicians interested in taking this seminar can sign in at ProAssurance.com. Select “Physician Online Seminars” from the “Seminars” menu to gain access to our CME course library.

What to Look For When You Read

Identifying Red Flags in ChatGPT-Generated Content

A careful examination of a piece of writing, whether authored by human, bot, or unknown, is recommended to help ensure quality and reliability of content. Because of the advancements in AI technology and the accompanying prevalence of ChatGPT and other chatbot tools in the production of articles, research studies, and other written forms, it can be difficult for readers to differentiate a traditionally written work from one composed by a chatbot. Below is a basic guide for identifying some of the red flags you may note in content generated by ChatGPT. The list is not exhaustive but should serve as a starting point for noticing these potential problem areas. It is also advised that readers conduct their own research to verify content as well as cited sources.

Inaccuracies and fabricated references

These include factual errors or incomplete information that fail to adequately address or support the topic or subject matter of the article or piece. These errors may be due to the complexity of the subject matter, the wording of the query, or the quality of ChatGPT’s input data (i.e., it may be limited, biased, or outdated). The piece may also contain improperly formatted citations and references that are irrelevant, inadequate, unreliable, or simply made up. ChatGPT may have difficulty differentiating which sources are fitting and timely for the topic and which are not.1,2

Misunderstanding of context

This scenario includes difficulty grasping or prioritizing the objective of a work, using an appropriate tone for the intended audience, or staying on topic, leading to irrelevant information and inaccuracies, especially with longer or more complex content. The AI tool may take a long time to arrive at the main point, using a wordy introduction and presenting added information that does not support the aim of the piece. ChatGPT is not able to apply common sense or reason and may lack the discretion to omit extraneous information, recognize a lack of information, or keep content focused.3,4

Missing emotional intelligence

This potential error also relates to the misunderstanding of context. ChatGPT is unable to feel or interpret nuances of emotion. As a result, its writing may feel unintentionally detached, lack personality, or seem unempathetic where the subject matter calls for sensitivity or tact. As chatbots are essentially robots that mimic human learning and speech, there are no personal stories, humor, or anecdotes, nor mentions of real people or life experiences that might create connection with an audience.4

Lack of expert knowledge

ChatGPT may have difficulty with specific topics or niche areas of study. When it does not “know” about new developments or changes in the field, the results are often generic thinking or advice. The writer, ChatGPT, shares no strong opinion or position on a topic one way or another, and there is no reference to or application of self-reflection or personal knowledge that might aid in interpreting the subject matter. Further, because it is trained on millions of pieces of data, ChatGPT software is inundated with a multitude of viewpoints and does not have a foolproof capacity to discern what may be true or false, right or wrong. While it works on an answer, a bot may apply trendy catch phrases to boost a statement, such as “unleash,” “buckle up,” or “surging ahead,” but these phrases do little to deliver new information or to develop an argument or conclusion about the subject matter.3

Biased language

Content produced by ChatGPT may draw on information collected from limited datasets that over- or underrepresent certain races, genders, cultures, and age groups, presenting information that is incomplete or distorted, and therefore unreliable. Latent biases perpetuate as they are rooted in culture and the flow of information across the internet, and discriminatorily skewed data may be woven into the development of the ChatGPT product. These biases may not be obvious to the reader relying on the data. However, readers may notice language that contains stereotypes, lacks an appropriate level of sensitivity, or comes across as generalized. Content might also reference specific population groups to the exclusion of others or mischaracterize an underprivileged community. ChatGPT tools, like other AI products, need monitoring and regulations that optimize bias detection while widening the pool of participants represented in training data, so that results are more accurate, expansive, and inclusive.4,5

Predictable structure

ChatGPT’s content may have a template feel to it, consisting of five or six sections with a consistent number of sentences in each paragraph. There is typically an introduction, actionable topic sentences used as headings, and development paragraphs that often end with a summarizing statement. As stated, ChatGPT has difficulty generating longer and more complex content, so simpler, brief pieces like short summaries or bullet points may be easier for it to manage. The ChatGPT article will usually have an ending statement or disclaimer as well, reminding you to “take ethics into consideration.”4,6
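A few of these flags, such as the trendy catch phrases and the template-like structure, are surface-level enough that a reader could screen for them mechanically before a closer read. The toy sketch below does exactly that; the phrase list and thresholds are invented for illustration, and it is a reading aid rather than anything approaching a reliable AI detector.

```python
# Toy screening aid only: invented phrase list and thresholds, not an AI detector.
import statistics

CATCH_PHRASES = ["unleash", "buckle up", "surging ahead"]  # examples named in the article

def screen_for_red_flags(text: str) -> list[str]:
    """Return surface-level warnings that suggest a closer human read is warranted."""
    flags = []
    lowered = text.lower()

    for phrase in CATCH_PHRASES:
        if phrase in lowered:
            flags.append(f"trendy catch phrase: '{phrase}'")

    # Very uniform paragraph lengths can hint at a template-like structure.
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    if len(paragraphs) >= 4:
        lengths = [len(p.split()) for p in paragraphs]
        if statistics.pstdev(lengths) < 0.15 * statistics.mean(lengths):
            flags.append("paragraphs are unusually uniform in length")

    return flags

if __name__ == "__main__":
    sample = "Buckle up: AI is surging ahead.\n\nIn conclusion, take ethics into consideration."
    for warning in screen_for_red_flags(sample):
        print(warning)
```

No such check is conclusive; heuristics like these only tell you where to slow down and read carefully, and verifying facts and cited sources remains essential.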

Something seems … not right

“Humans are clever,” says Forbes Senior Contributor and AI coach Jodie Cook. “If something feels off with something someone has written, your instinct is probably right.”3

References

1. Tirth Dave, Sai Anirudh Athaluri, Satyam Singh, “ChatGPT in Medicine: An Overview of Its Applications, Advantages, Limitations, Future Prospects, and Ethical Considerations,” Frontiers in Artificial Intelligence 6 (May 4, 2023), https://www.frontiersin.org/articles/10.3389/frai.2023.1169595/full.

2. Giri Viswanathan, “ChatGPT Struggles to Answer Medical Questions, New Research Finds,” CNN, Dec. 10, 2023, https://www.cnn.com/2023/12/10/health/chatgpt-medical-questions/index.html.

3. Jodie Cook, “6 Giveaway Signs of ChatGPT-Generated Content,” Forbes, Dec. 6, 2023, https://www.forbes.com/sites/jodiecook/2023/12/06/6-giveaway-signs-of-chatgpt-generated-content/?sh=10b8c9181e7d.

4. Bernard Marr, “The Top Limitations of ChatGPT,” Forbes, Mar. 3, 2023, https://www.forbes.com/sites/bernardmarr/2023/03/03/the-top-10-limitations-of-chatgpt/?sh=5b49a2158f35.

5. Bangul Khan et al., “Drawbacks of Artificial Intelligence and Their Potential Solutions in the Healthcare Sector,” Biomedical Materials & Devices 1-8 (Feb. 8, 2023), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9908503/.

6. “AI Writing Detection: Red Flags,” Montclair State University, accessed Feb. 6, 2024, https://www.montclair.edu/faculty-excellence/teaching-resources/clear-course-design/practical-responses-to-chat-gpt/red-flags-detecting-ai-writing/.

News & Updates
Prepare for Insurance Industry Disruption

You’ve seen it happen before. New entrants arrive and disrupt whole industries that once seemed impervious to change. Google and Amazon jump to mind, but let’s not forget how Progressive disrupted auto insurance by adding a new data source—actual driving behavior—and analyzing its associated risks. While personal auto insurance led the analytics transformation, commercial insurers are successfully applying big data and advanced analytics to underwriting and claims management for small- and medium-sized enterprises. (MPL Association)

Read more →

DOJ’s Healthcare Probes of AI Tools Rooted in Purdue Pharma Case

Prosecutors have started subpoenaing pharmaceutical and digital health companies to learn more about generative technology’s role in facilitating anti-kickback and false claims violations, said three sources familiar with the matter. It comes as electronic health record vendors are integrating more sophisticated artificial intelligence tools to match patients with particular drugs and devices. It’s unclear how advanced the cases are and where they fit in the Biden administration’s initiative to spur innovation in healthcare AI while regulating to promote safeguards. (Bloomberg Law)

Read more →

How AI Will—and Won’t—Change Healthcare

Why it matters: The scale of change that AI could bring to healthcare not only impacts patients but also the millions of people the system employs—who will ultimately shape how widely it’s adopted.

The big picture: Recent breakthroughs in AI technology are coming up against a healthcare system that is very resistant to change, in no small part because of how heavily it’s regulated and the trillions at stake. (Axios)

Read more →

AI’s Big Test: Making Sense of $4 Trillion in Medical Expenses

Hospitals and insurers are racing to find new artificial intelligence tools to give them an edge in billing and processing their part of the $4 trillion in medical expenses Americans accrue each year. As one of the largest parts of the U.S. economy undergoes perhaps its biggest transition in decades, billions of dollars are at stake—not only for healthcare providers and insurers, but also for the government, which handles millions of Medicare and Medicaid claims every year. (Politico)

Read more →

Measuring the Impact of AI in the Diagnosis of Hospitalized Patients

Question: How is diagnostic accuracy impacted when clinicians are provided artificial intelligence (AI) models with image-based AI model explanations, and can explanations help clinicians when they are shown systematically biased AI models? (JAMA Network)

Read more →

Most Doctors Have Not Yet Tried AI but Are 'Cautiously Optimistic' About the Benefits

The hype around healthcare artificial intelligence has reached a fever pitch, but most doctors are holding back from trying it out in their medical practice, for now.

A recent survey by Elation Health found that 67% of primary care physicians have not yet tried an AI-powered medical scribe solution and are looking to electronic health record vendors to guide them to the best option that integrates with their system. (Fierce Healthcare)

Read more →


Staking Your Leadership Position in AI Medical Liability Protection


My brilliant and innovative friend Peter called one evening to discuss some sales and marketing strategies for his latest brainchild—an ingenious app he created to revolutionize dermatology. Drawing from his success in implementing artificial intelligence into telephone communications and airline scheduling software, Peter unveiled a cutting-edge screening tool to combat malignant skin cancers.

After talking to Peter, I phoned my cousin Jennifer, a practicing dermatologist, to get her thoughts about his creation. She dismissed it almost immediately. During residency one of her attending physicians opined that AI could never diagnose as accurately as a human. I countered with Peter's assertion that an experienced dermatologist may witness thousands of "data points" over time, but an AI database has millions. Jennifer responded, "That may be, but I'm steering clear of AI until the technology is proven and mainstream."

As an MPL insurance agent, your role is not to convince healthcare stakeholders of AI's benefits but to help them understand and effectively manage the risks. However, healthcare professionals (HCPs) like my cousin Jennifer may dismiss the topic until AI technology is commonplace. Don't be deterred! Commit to a leadership position in AI medical liability protection starting now.

Awareness Never Comes Too Soon
Artificial intelligence is a hot topic, and its increasing presence in healthcare is undeniable. While some HCPs may shy away from implementation, keeping up with liability issues is essential.

Showcasing how analogous providers have embraced AI while managing the risks is an effective way to draw attention. Given that HCPs might not relate to technologies and protocols they don't use, drawing parallels with their past experiences, such as when they adopted new clinical protocols, surgical techniques, and medical technologies, reminds them that adoption is closer than it appears.

Engaging the AI Agnostics

When dealing with AI-agnostic individuals (like my cousin), the focus should shift to education. Establishing awareness about medical liability and artificial intelligence is crucial, even for healthcare professionals who may not currently utilize AI technologies in their practice. Physicians don't always know what they don't know, and agents who keep them informed about the subtle aspects of liability provide invaluable support and insights.

Most HCPs find case studies irresistible. Look for real-life examples illustrating the value of a comprehensive MPL policy that includes AI coverage. This gives a practical understanding of how coverage safeguards clinicians and healthcare entities in real clinical scenarios.

Position for Success as AI Evolves
Positioning ProAssurance, your agency, and yourself as thought leaders at the crossroads of healthcare, AI, and MPL insurance presents a chance to carve out a distinctive presence in the marketplace.

Start by continuously educating yourself about healthcare AI issues and related liability. This ensures you have the knowledge and fluency to discuss relevant topics as the technology integrates more deeply into patient care. Prospects and clients will recognize your expertise, fostering a sense of recognition and trust.

Highlighting ProAssurance's educational resources on artificial intelligence showcases the role of MPL insurance in mitigating AI-related risks. Combined with your personal insights about policy coverage and ProAssurance's risk assessment services, you'll portray yourself as an agent who covers all aspects of MPL liability.

Seizing the Opportunity
While AI introduces new levels of risk, it also presents an unparalleled opportunity for MPL insurance agents to stand out and lead in an industry undergoing transformative changes. By showcasing your expertise and ProAssurance's educational and risk management resources, you can position yourself as a trusted advisor ready to help HCPs navigate the complex landscape of AI risk in healthcare (even the resistant ones like my cousin Jennifer).


 

Written by Mace Horoff of Medical Sales Performance.

Mace Horoff is a representative of Sales Pilot. He helps sales teams and individual representatives who sell medical devices, pharmaceuticals, biotechnology, healthcare services, and other healthcare-related products to sell more and earn more by employing a specialized healthcare system.

Have a topic you’d like to see covered? Email your suggestions to AskMarketing@ProAssurance.com.

 

Spot Our AI

Having a Bit of Fun

Half the fun of the new AI trend is poking the publicly available tools to see what they can do. We’ve rounded up a few of our favorites.


Cleanup.pictures
Need to remove an ex from the family wedding photos? Wish there wasn’t a garbage can behind your son in your photo of his epic soccer victory? Or simply want to eliminate someone’s name badge to make a photo less identifiable? Say no more! Cleanup.pictures helps you delete unwanted items from photos with no Photoshop skills required.
https://cleanup.pictures/

Interior AI
Want to redecorate your living room, but have no idea where to start? Interior AI lets you try a variety of styles on a photo of your space. Just swap a few things around and send off to the decorator.
https://interiorai.com/
