Advances in artificial intelligence (AI) dominate recent headlines. Although hospital systems and private practices are unlikely to adopt ChatGPT or the similar consumer-facing tools the general public hears about in the news, other types of AI play an increasing role in the delivery of care.
Some of the more common examples include:
- Scheduling. Analytics can help predict busier times so that staffing can be adjusted appropriately.
- Dictation. AI tools can listen to the patient/doctor conversation during a visit and convert it into text to save in the patient’s file.
- Diagnostics. Software can look for patterns and symptoms that may warrant additional testing for a specific illness.
Proponents argue that the use of AI can help reduce the rate of physician burnout. As we demand more and more of our healthcare professionals, AI could serve as a tool to ease administrative burdens. If successful, this could allow physicians to focus on their patients instead of other tasks.
What happens when something goes wrong?
AI is a tool, and not just any tool: a tech tool. Anyone who has tried out new technology knows that it often comes with issues in the early stages, and mistakes are likely. Unfortunately, in this setting a mistake means more than the frustration of getting a computer to work. It could mean life or death for a patient.
Like any tool used in practice, the one who wields it is likely responsible for the results. Physicians are therefore wise to tread carefully with this new technology. When using dictation tools, for example, review the notes for accuracy before moving on to the next patient. A failure to do so could serve as evidence in the event of an investigation by the Texas Medical Board.
Attorney John Rivas is responsible for this communication.