Rammya Mathew: Could artificial intelligence be the key to transforming general practice?
BMJ 2024; 384 doi: https://doi.org/10.1136/bmj.q661 (Published 19 March 2024) Cite this as: BMJ 2024;384:q661
All rapid responses
Dear Editor,
Dr Mathew writes a compelling article regarding the potential use of AI to automate triage within primary care. Effective triage is a complex task requiring considerable clinical experience to ensure patients are signposted to the healthcare team best placed to address their needs. It is therefore not surprising that experienced GPs are perhaps best suited to triage within the primary care setting. Any effective AI technology would require extensive training and testing before it could be deemed safe enough to be deployed in triaging patient symptoms.
Having said that, I do see other emerging roles for AI within primary care. Primary care records, already in electronic format, can be diverse and overwhelming to a GP who is not familiar with the patient. I see a role for machine learning and natural language processing in generating concise, insightful medical summaries tailored to the particular symptom the patient presents with. This could drastically reduce the time spent sifting through years of medical records and also reduce errors caused by missing key information relevant to a patient's management.
We have seen how recommendation engines used by corporate giants such as Amazon and Netflix have transformed online shopping. Within primary care, recommendation engines could play a variety of roles, including suggesting options drawn from clinical guidelines and tailored to age, risk profile, or previous clinical decisions. For example, patients with allergies to first and second line medications could be offered a third line medication. This could help automate the repeat prescribing that GPs routinely carry out when reviewing patients.
Artificial intelligence can also be used in the early detection of disease through predictive modelling. A wealth of data sits within electronic records - numerical values, free text, clinical codes, abbreviations, and more - that can be utilised by AI to predict which individuals are at higher risk of developing certain diseases. Some of this predictive modelling is already taking place through initiatives such as the QRISK algorithm, but it really ought to be automated and fully integrated with clinical records software such as EMIS and SystmOne. Similar predictive models have proved useful in identifying high risk groups and targeting them for preventive interventions [1]. A more detailed article on applications of AI within primary care can be found in [2].
All in all, I feel we have barely scratched the surface of the potential of AI in primary care. In light of the recently announced £3.4bn investment by Jeremy Hunt, I do hope GPs are encouraged to participate in innovative projects that could see AI become an adjunct to clinical decision making.
References
1. Majekodunmi AO, Thorne C, Malyuta R, Volokha A, Callard RE, Klein NJ, Lewis J; European Paediatric HIV/HCV Co-infection Study Group in the European Pregnancy and Paediatric HIV Cohort Collaboration and the Ukraine Paediatric HIV Cohort Study in EuroCoord. Modelling CD4 T cell recovery in hepatitis C and HIV co-infected children receiving antiretroviral therapy. Pediatr Infect Dis J. 2017 May;36(5):e123-e129. doi: 10.1097/INF.0000000000001478
2. Majekodunmi A. Artificial intelligence: current and future impact on general practice. InnovAiT. September 2021;14(12). https://doi.org/10.1177/17557380211045101
Competing interests: No competing interests
Dear Editor
Dr Mathew writes of her hope that AI could triage "better than a doctor". I have already written a response saying that using AI for triage has medicolegal risks, and Dr Wake has also responded saying that triage needs "the human touch".
However, it now seems to me that there is a deeper conceptual reason why it is logically impossible for AI to triage "better than a doctor" - at least under any conception of AI that is not a science-fiction "miracle robot".
To understand why, consider the basis of most currently-used AI systems: machine learning. (The logic in fact applies to all currently-conceived AI systems equally, but I will focus on the most popular type.)
Machine learning works like this: the machine studies multiple cases involving inputs and outputs and works out which combinations of inputs correspond to which outputs.
This works well when the output is, say, "biopsy confirms cancer" vs "biopsy rules out cancer". In that case, the inputs might be a symptom list, laboratory results, and radiology images. After studying cases, the machine learning program "knows" which inputs predict which outputs, and can then tell you which patients are likely to have cancer confirmed at biopsy.
So let's imagine taking this machine to a triage scenario. Suppose for the sake of argument triage has hitherto been done by doctors. The machine will study already-completed triage cases to work out which inputs predict which outputs. The inputs will largely focus on demographics, reported symptoms, and perhaps the patients' actual words.
Fine. But what are the outputs? Presumably they are "doctor triages patient to same day face-to-face consultation"; "doctor triages patient to routine telephone consultation", etc.
Great, so our machine now "knows" that presenting with crushing central chest pain predicts an output of "doctor triages patient to 999 call". But now the reason it cannot be better than a doctor is evident: the machine has learned to predict / reproduce the doctors' decisions.
It may now be faster than a triage doctor. It may now be cheaper than a triage doctor. But surely it cannot be "better than a doctor"? After all, the degree to which its programming has been successful IS the degree to which its outputs match the doctors' decisions it studied.
By definition, for this machine to make a different decision to what a triage doctor would have chosen would be for this machine to have made an error.
There is no concept of the "correct triage outcome" beyond what a doctor might deem correct. Whether or not the machine has triaged well or badly can only be judged by a doctor reviewing its decisions. So the conclusion seems inescapable that AI triage cannot be better than a doctor.
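The circularity described above can be seen in a minimal sketch (illustrative Python using scikit-learn; the features, triage labels, and all data are invented for the example): a classifier trained on doctors' past triage decisions can only ever be scored by how often it reproduces those same decisions.

```python
# Illustrative only: a model trained on doctors' triage decisions
# is evaluated against those very decisions, so "success" is
# defined as agreement with the doctors it learned from.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "inputs": e.g. age and a symptom-severity score (invented).
X = rng.normal(size=(500, 2))

# Synthetic "outputs": the triage category a doctor chose
# (0 = routine telephone consultation, 1 = same-day face-to-face).
doctor_decisions = (X[:, 1] > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, doctor_decisions, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# The only available yardstick is agreement with the doctor labels:
# any disagreement is, by construction, counted as a model error.
agreement = accuracy_score(y_test, model.predict(X_test))
print(f"Agreement with doctor labels: {agreement:.2f}")
```

Note that there is no second column in the data recording the "truly correct" triage outcome: the doctor's decision is both the training signal and the test answer, which is exactly the point made above.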
Competing interests: No competing interests
Dear Editor
Dr Mathew discussed the use of AI to help triage in general practice. Our practice has started using a human GP to do this: patients complete a standardised form, which is then reviewed and sifted appropriately.
Having the most senior member of the team assess and assign management of every clinical problem that presents to the GP seems a sensible move. It stops the 8am rush for appointments and reduces the issue of those who shout the loudest being seen first. The system also takes clinical responsibility away from non-clinical team members and gives it to those who are trained to make such decisions.
It's still early days, but overall this has been well received by our team and patients alike. But here is the catch: you need GPs to make the decisions. Their clinical experience and training allow assessment of the whole clinical picture, including past medical history, social history, and an understanding of the specific skills of the team around them. This improves efficiency, allocates appointments to the right person at the right time, and also helps with continuity of care. As a triage system, I'm not sure how AI could improve on this, because the human touch is built into the process - something AI cannot offer.
Dr Anna Wake, GP partner.
Competing interests: No competing interests
Dear Editor
Dr Mathew has great hopes for the use of AI in GP triage. The question I would like her to consider is: is she willing to rely on it unsupervised?
At the practice where I work, we have AI (specifically, an LLM) embedded in triage through the Klinik platform. However, every form submitted here is also read by a human, in case the LLM has missed key information or (more commonly) reacted with unnecessary alarm to an innocuous history.
If every form is to be reviewed by a human regardless, the benefit of the LLM is modest - it merely chooses the order in which forms are looked at.
For an LLM to have a "transformative" effect on access, we would have to use it "unsupervised", without human oversight. I would be interested to know whether other practices are doing so, as I believe this would be taking a significant medicolegal risk.
In my opinion, LLM technology is not currently fit to operate unsupervised in a scenario as high-stakes as medical triage, and I have grave doubts that it ever will be.
Competing interests: No competing interests
Re: Rammya Mathew: Could artificial intelligence be the key to transforming general practice?
Dear Editor,
The benefits of AI will, I am sure, be unimaginable, but I would like to make a couple of comments.
Dr Mathew points out that AI could make insightful suggestions that would be helpful with complex electronic records. I regularly use GP records but frequently find them wrong. Surely it would be better for AI first to help correct GP notes, rather than drawing incorrect insights from them. Once this is no longer a problem, it would be time to consider my second point.
People want to be cared for by humans who are fallible but who can explain why they acted and felt as they did, and who take professional responsibility. This is fundamental to care, not just to repairing mistakes. Asking a programmer to do these things is neither currently possible nor appropriate.
However, AI providing insights could well be used as a way to absolve programmers of this responsibility. Whatever route we decide to go down, patients will need to be fully informed, and they should ultimately decide.
Kind regards,
Dr Oliver Sykes
Competing interests: No competing interests