In November 2022, artificial intelligence (AI) drew widespread attention with the launch of ChatGPT, a “chatbot” developed by OpenAI. Users can converse with the chatbot in a fairly sophisticated way, asking questions that steer the conversation toward answers of any desired type, length, style and format.

Although AI has the potential to transform many fields, including health care, there’s still some controversy about its use. One worry is that AI will eventually eliminate some types of jobs. But the possible positive impact of AI may outweigh this concern and others.

Applications to health care
A 2023 article published in Frontiers in Artificial Intelligence noted that, though AI has been used for some time in customer support and data management, its use in health care and medical research has been relatively restricted.

In health care, the article suggests, potential applications range from assistance in selecting research topics to helping professionals with clinical and laboratory diagnosis. But many limitations and ethical problems haven’t yet been resolved, including credibility, plagiarism and potential medico-legal complications. Most problematic, AI can produce inaccurate and unreliable results.
Even so, using AI might benefit physicians in some areas of health care, including:

  • Integrating basic data and drafting correspondence to, for example, other physicians or insurers (the physician would still need to proofread the results for accuracy),
  • Translating medical jargon for patients,
  • Recording physician-patient conversations and summarizing them into reports,
  • Dictating notes and providing summaries with key details, such as symptoms, diagnoses and treatments,
  • Pulling relevant information from patient records — for example, lab results or imaging reports,
  • Providing conversational approaches to collecting information from patients,
  • Helping with appointment scheduling, and
  • Providing medicine dosage and prescription renewal reminders to patients.

AI also can suggest appropriate treatment options and identify possible drug interactions. In addition, it can assist in analyzing data received from patients’ wearables, sensors and other monitoring devices. And it may be able to help keep you up to date on new developments in your areas of expertise.

Garbage in, garbage out
It’s also important to keep in mind some of the potential downsides of AI use in health care. Patients almost certainly will try to use chatbots to self-diagnose. This is already a trend with internet searches and platforms such as WebMD. But chatbots that respond quickly to specific requests might make it even more likely that patients will use and trust the results, even when they shouldn’t. If your patients use a chatbot and mention their findings to you, use your expertise and experience to manage their expectations and express appropriate skepticism about the personal use of AI for medical purposes.

Patients need to understand that how questions are organized and presented to chatbots influences the response. There’s a long-standing computer science maxim that isn’t heard as much these days: garbage in, garbage out. In short, computers, even AI systems, can work only with the data they receive. Chatbots are “trained” by exposing them to a wide variety of data and scenarios. Invariably, biases and conflicts will be part of both the training and actual use. That’s why it remains important that a human being, especially an expert such as a physician or nurse, verifies the results of anything related to health care.

Many possibilities
Currently, it’s unlikely that AI tools will entirely replace physicians. But these tools may make your medical practice more efficient, which could result in cost savings for both you and your patients. Explore the many possibilities carefully.

Content from Thomson Reuters