You have to ask the right question when evaluating complex interventions
Some of the conclusions in the report and editorial are not justified by the evidence presented. The problem is inherent to any evaluation of a complex intervention and raises big questions about how to report evidence about such interventions.
The editorial summarises the results like this: "There was distinct variation among practices in how well the system functioned. Some noted large reductions in workload while others experienced big increases." But it then goes on to say: "Although the delay to see or speak to a doctor is greatly reduced by the introduction of telephone first systems, overall workload for doctors increases. The marked reduction in time spent consulting in surgery is more than compensated for by an increase in time spent on telephone consultations."
The problem is that the second part uses the average result across all practices to make a statement about whether the intervention works or not, but this isn't consistent with the degree of variation, which shows, at the very least, that the benefits are sometimes large. Unless we understand the reason for the variation we cannot conclude that the average result really measures whether the intervention works.
The problem arises because there are two major sources of variation here. One is the population variation among GP practices (analogous to the metabolic and physiological variation among patients in an RCT of a drug). The other is variation in the intervention itself: unlike a drug trial, where the intervention can be carefully controlled (we check that the pills are all the same, and so on), you cannot control the variation in the way telephone triage was managed by the practices. If this were a drug trial, we would be testing pills with unknown variation in their content and strength. What this means in practice is that the outcomes cannot be attributed simply to the use of telephone triage: they may depend heavily on how each practice managed its use of it.
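To make the statistical point concrete, here is a minimal sketch in Python using purely hypothetical numbers (they are not taken from the study): an average increase in workload can coexist with large savings in a minority of practices.

```python
# Purely illustrative, assumed numbers (not data from the study): the change
# in weekly GP hours after adopting telephone triage in ten imaginary practices.
changes = [-6, -5, -4, +1, +2, +2, +3, +3, +4, +5]   # hours per week

mean_change = sum(changes) / len(changes)
print(f"Mean change: {mean_change:+.1f} hours/week")            # +0.5: workload up "on average"
print(f"Practices saving time: {sum(c < 0 for c in changes)} of {len(changes)}")

# The positive average conceals that three practices made large savings;
# without knowing why practices differ, the mean says little about whether
# the intervention "works".
```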
An analogy might help (this one was used by the respected economist John Kay to illustrate a common error in business strategy). Imagine you have the ambition to be a great concert violinist. You observe that the great performers have certain characteristics in common: they all wear evening dress, they all have expensive violins, for example. You reason that to become a great violinist you must copy those visible characteristics so you buy yourself expensive clothes and an Amati violin. You are still a useless violinist.
The lesson is that copying the visible characteristics is not what makes you a great violinist: that would require you to copy the things the great violinists did that you cannot see (practising for ten hours a day for a decade and having a good teacher, for example).
The same is true for complex healthcare interventions in general and telephone triage in particular. The easy-to-see part is that some form of telephone triage is done. What is harder to tell is what else a practice has to get right to get any benefit from it. How well do they manage the process? How do they learn what can be resolved over the phone and what needs a face-to-face appointment? How do they redesign the practice workflow to exploit the benefits of the intervention? It is worth noting that a similar problem arises in healthcare (and every other industry, for that matter) when trying to get benefits from the use of IT. Unless the work is reorganised to exploit the benefits of the technology, there won't be any. So many studies report that computers are a waste of time, despite the fact that they could yield benefits if the complicated additional task of redesigning the work were also undertaken.
In telephone triage we can tell whether the system will work from a handful of characteristics of the system: What proportion of calls can be resolved over the phone? How long does the average call take? How long are face-to-face appointments? It is easy to construct a simple model showing, from these numbers, whether there is an overall time saving. But all those numbers depend on skill and experience and can be improved with good management. Those factors are far from visible when testing telephone triage. If the practice doesn't have the skill and doesn't monitor the numbers to drive improvement, there won't be significant benefits (and this isn't made easier by the fact that one major vendor of GP clinical systems appears to be unable to record how long telephone calls last). It isn't just adding telephone triage to the system that yields the benefits: it is getting those other factors right.
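As a rough illustration of the kind of simple model described above, here is a sketch in Python. The figures (proportion of calls resolved on the phone, call length, appointment length) are assumptions chosen for illustration only, not values reported in the study.

```python
# A minimal sketch of a telephone-first workload model. All numbers below are
# illustrative assumptions, not figures from the study.

def time_per_patient_phone_first(p_resolved, call_mins, f2f_mins):
    """Average GP minutes per patient contact under a telephone-first system.

    Every patient gets a phone call; those not resolved on the phone also
    need a face-to-face appointment.
    """
    return call_mins + (1 - p_resolved) * f2f_mins


baseline_f2f_mins = 10.0   # assumed length of a face-to-face appointment

# A practice that resolves many calls quickly sees a net saving...
well_run = time_per_patient_phone_first(p_resolved=0.6, call_mins=4.0,
                                         f2f_mins=baseline_f2f_mins)
# ...while one with long calls and few resolutions sees extra workload.
poorly_run = time_per_patient_phone_first(p_resolved=0.3, call_mins=7.0,
                                          f2f_mins=baseline_f2f_mins)

print(f"Well-run practice:   {well_run:.1f} min/patient vs {baseline_f2f_mins} baseline")
print(f"Poorly-run practice: {poorly_run:.1f} min/patient vs {baseline_f2f_mins} baseline")

# Break-even condition: call_mins < p_resolved * f2f_mins.
```

The break-even condition at the end is exactly the kind of number a practice can monitor and improve with good management, which is why the same intervention can save time in one practice and add work in another.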
So to argue from the average across a large number of practices that telephone triage doesn't work is a logical mistake. It is obvious from the report that some practices saw big benefits and time savings. It is also obvious that most practices saw much faster access for patients. A general claim that triage systems don't work is a logical leap when the data show that they sometimes do.
We should not be claiming that this study shows phone triage doesn't yield benefits to GPs; we should be asking what factors mattered to give benefits in some places and not others.
The report and editorial are right, though, that policy makers have an endearing but dangerous belief in "magic bullets". Pushing telephone triage won't help unless the complicated set of supporting management practices that make it work is understood, and it rarely is by the people who make policy. Unfortunately this is a common error that greatly inhibits many efforts to improve quality and efficiency across the whole NHS.
It would also be good if the vendors of these systems were clearer about the need to fine-tune the way the systems are operated to maximise the benefits (full disclosure: I currently support one of them, and one of my goals is to develop better ways of providing analysis to practices to help them improve the way they operate).
In summary, the right conclusion from the study should not be "phone triage doesn't work" but "phone triage sometimes yields big benefits and we really need to start asking which factors make it work in some places and not in others".
Competing interests:
GP Access is a client of mine, though I'm commenting in a personal capacity.