Friday, 12 July 2013

Does failure to respond to antipyretic drugs tell you anything useful about how serious an infection is? Answer: no

This was a question that we considered when drawing up the NICE Guidelines on the treatment of fever in children, and we concluded the answer is no.  A recent review in Archives of Disease in Childhood has looked at 8 papers which each addressed the same question, and concluded the same thing.

It is always good when other people confirm your opinion, particularly so in this case, because this is a real 'old chestnut': "if the temperature comes down quickly they are likely to be ok".  Because serious infection is rare this appears to be true: most children whose temperature comes down quickly are indeed ok; but then so are most children whose temperature does not come down quickly.  They are all likely to be ok.  The key when reading papers which assess diagnostic tools is not to look at the sensitivity and specificity, but rather the positive and negative predictive values (PPV and NPV respectively).  This is not the time or the place to go into the difference in detail, but essentially the predictive values tell you what you want to know.  For example, the PPV answers the question "if I test positive, am I likely to be ill?", whereas the sensitivity answers the question "if I am ill, am I likely to test positive?"  So take care when you are reading or being given sensitivities and specificities.
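A quick worked example makes the point.  The numbers below are illustrative only (they are not from the review): suppose 1% of febrile children have a serious illness, and "failure to respond to antipyretics" is treated as a test for serious illness with 80% sensitivity and 60% specificity.  Even with that apparently respectable sensitivity, the PPV turns out to be tiny, because serious illness is rare:

```python
# Illustrative (assumed) numbers: a cohort of 10,000 febrile children,
# 1% prevalence of serious illness; "failure to respond to antipyretics"
# treated as a test with 80% sensitivity and 60% specificity.
n = 10_000
prevalence = 0.01
sensitivity = 0.80   # P(fails to respond | seriously ill)
specificity = 0.60   # P(responds | not seriously ill)

ill = n * prevalence          # 100 children with serious illness
well = n - ill                # 9,900 children without

tp = sensitivity * ill        # ill, and failed to respond
fn = ill - tp                 # ill, but responded
tn = specificity * well       # well, and responded
fp = well - tn                # well, but failed to respond

ppv = tp / (tp + fp)          # P(ill | failed to respond)
npv = tn / (tn + fn)          # P(well | responded)

print(f"PPV: {ppv:.1%}  NPV: {npv:.1%}")  # PPV: 2.0%  NPV: 99.7%
```

So on these assumed figures, only about 2% of the children who fail to respond are actually seriously ill, and 99.7% of responders are fine: both groups are overwhelmingly likely to be ok, which is exactly why response to antipyretics tells you so little.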

Back to this paper.  So response to antipyretics is not useful in identifying serious illness in children.  A couple of other observations.  Firstly, the papers are old; does this matter?  Not really, apart from the appearance of aspirin and the absence of ibuprofen, and there is no particular reason to think ibuprofen should behave differently.  Secondly, two of the papers are by the same authors and, crucially, use the same patients.  Duplicate publication can be a big problem: including the same patients in meta-analyses, where the studies are combined statistically, can lead to erroneous conclusions.  One study found that including duplicated data led to a 23% overestimation of the efficacy of one drug.

The problem when reading reviews is that you often don't know this.  In this case the authors noticed it and discussed it; however, when all you have is a forest plot of results, it can sometimes take a sharp eye to spot.
