Review articles offer the chance to put together the work of a lot of clever people to try and get better answers. But there are holes in the literature they just can’t cover. Andrew Weatherall looks at a recent review on dexmedetomidine with the headline figure of “better outcomes”. Hooray, right? Well …
How much do you like new stuff? I love new stuff. Looks better. Smells better. Feels better. Is better, right? Stuff.
I don’t think I’m alone here either. Most people I know like new stuff. Even after it’s been new for too long to be called new, people like the word new. There was a radio station in my neck of the woods that called itself new for pretty much the whole of my schooling. That’s a long time to pretend you’re fresh. It never even sounded new when it rebranded to new in the first place.
Sometimes medicos are just as weird about things that are pretty specific to our field. For anaesthetists drugs fall right into this category and that’s probably partly because genuinely new drug options are few and far between. I’m sure it comes from a genuine desire to find ways to do things better too.
So here’s an example a lot of people will recognise – dexmedetomidine. Adding to its durability as “the new thing” is that it hasn’t rolled out everywhere at the same time. Papers keep hitting the literature exploring the many benefits of this apparent wonder drug. Better hearts, natural sleep, pain relief, lack of delirium, perfect breathing, neuroprotection. Bit disappointing that it doesn’t make a coffee in the operating room really.
While agents are relatively new you also tend to see some pretty positive coverage in the literature although often none of those early papers have big enough numbers to reach firm conclusions. Enter the review article.
Recently Pan et al produced a paper with the catchy title “Outcomes of dexmedetomidine treatment in pediatric patients undergoing congenital heart disease surgery: a meta-analysis” which hit the presses at Pediatric Anesthesia. What they were looking to see was whether there was evidence, as there is in adults, of better cardiac outcomes for those given dexmedetomidine in their perioperative juice. The mixing exercise aims to get past the “please Sir, I want some more”.
This paper even has a big box up the top saying that it adds some new information: that “Perioperative dexmedetomidine treatment improves the outcomes in children undergoing congenital heart disease surgery, including more stable haemodynamics, shorter ventilation duration, and lesser incidence of postoperative agitation, and rescue analgesia.”
Now it looks like the mixing was done pretty rigorously here. They are certainly thorough in their description of the methodology. The bit that interests me is the choices made along the way, because every choice obviously impacts what you can say.
For this review, the papers included had to be in kids having heart surgery, they had to receive dexmedetomidine and have a comparison arm (other anaesthetics or placebo) and then they specified some pretty specific outcomes – blood pressure, heart rate, duration of ventilation/ICU/hospital stay, post-operative opioid requirement, perioperative blood glucose and serum cortisol and the incidence of postoperative delirium, bradycardia and hypotension.
Phew. That is quite a number of outcomes, but not many of them look really that, well, cardiac. Really they’re mostly looking at general outcomes in cardiac patients I guess. Sure there are a few outcomes looking at haemodynamics, but they won’t be able to say anything about mortality of course, nor reoperations, pacing issues or inotrope use you’d think.
The thing to keep in mind with such a review is that when it says “it improves outcomes”, they can only mean “it improves the outcomes we chose to focus on”. You can’t assume those outcomes were the ones you were hoping to see.
The next step of interest is in the sorting. 353 initial studies. 88 out for being duplicative. 237 kicked into a ditch on the basis of their titles or abstracts. That leaves 28. 14 more were kicked out because they reported on other outcomes, weren’t full text or used the same patients. I can’t tell you what those other outcomes were by the way. It’s not mentioned.
So here’s one challenge when we look at this methodologically well conducted review: most of the available info is on the floor. The researchers have to set down rules of engagement and that excludes lots of information. What would that stuff have shown? No idea. Maybe nothing, but maybe something that we can’t sift out as an excluded item. Even with the impressive numbers included (1055 in the dex group and 1174 in the controls), we can’t know half of the stuff that is already published and we’re not going to see it included here.
Even with the ones they do report, there are differences in what they were reporting in the first place so in the analysis it doesn’t always feel like you’re comparing apples with another colourful fruit. It might be more like apples and oysters.
Five studies look at mean blood pressure. Five look at systolic blood pressure and only nine of the fourteen present heart rate information. Any conclusions on haemodynamics seem to get weaker and weaker. I can see that tachycardia (and variability with surgical stimulation) happens less with dexmedetomidine but can I really be sure that one of the bazillion papers lying on the cutting room floor isn’t a bit more circumspect?
That’s not to say that the reported outcomes aren’t of interest. Fewer ventilator days and lower rates of delirium are interesting. The fact that overall ICU stay and admission to hospital aren’t reduced is also interesting.
It’s just that like all reviews there are holes in the middle. There is a hole created by those choices to arrive at the chosen few. And the authors have to make choices in the most rational way they can.
Another hole is put there by all those people who have never written down anything about their experience. There’ll be more clinicians in that category than there are those who went to the effort of producing the research. You can’t possibly blame the reviewers for all those silent anaesthetists. All you can do is acknowledge that they are out there.
Of course a bunch of those silent clinicians, had they put their keyboards to use, would likely have mentioned a few negatives because there is certainly a tendency for positive findings to find a lot more space in the literature. We just can’t know.
Then even beyond that big hole there are other issues. That’s the context in which the individual study authors work at the time of their research, and the bit that comes later. For that we should get on to talking about radiology too.
Then and Now
When you read that cardiac review, try and figure out which cardiac patients got dexmedetomidine, and which ones didn’t. Was there an underlying set of conditions that pretty much didn’t get it? Were the exclusions in each study such that only particular types of patients got into the randomisation algorithm? I can’t really pull it off so I’d be pretty keen for someone to point it out.
Once it’s in the review what you sometimes lose is context and why clinicians chose a particular treatment for a particular group of patients. Sometimes knowing that sort of background is very informative as to how applicable that particular bit of publishing is to your practice. Which is where we should sidestep into radiology.
If you take the trouble to do a bit of literature searching on uses for dexmedetomidine, you’ll come across a few papers headed by Dr Keira Mason who works at Boston Children’s Hospital. They’re an interesting read because they explore the use of the agent in a slightly different setting – sedation in the radiology suite, particularly for imaging.
Something that leaps out at you when you read them is that the use of dexmedetomidine described is a bit different if you’ve only read the cardiac literature. Where you might be used to loading doses of 0.5-1 mcg/kg and infusions around the 0.5 mcg/kg/hr mark, in the CT paper mentioned below you’ll see loading doses of 2 mcg/kg over 10 minutes (with an aim to target a particular Ramsay sedation score) then an infusion of 1 mcg/kg/hr. For the MRI work it is up at 3 mcg/kg loading with a 2 mcg/kg/hr infusion.
Now if you put that into a more general review on the ways dexmedetomidine is used that might seem strange. Why such big doses for a simple set of internal happy snaps? Can those doses really be the best? (Hint in passing: used as a sole agent, doses change.) Context is the key.
A lot of this context is available in the papers, but I actually had the chance to chat with the very generous Dr Mason about this and many related things recently. I am pretty sure it was generosity because otherwise Boston is seriously quiet and devoid of entertainment on a Friday evening.
When you consider these publications, it’s worth taking the time to appreciate that context. The radiology sedation set-up at Boston was, back in the day, delivered almost entirely by nurses. People, including hospital administrators, sometimes forget how much hard work goes into sedation. Delivering good sedation care asks different things of clinicians and is heavily influenced by the setup you’re doing it in.
Over the years they’d worked through a variety of options such as chloral hydrate, benzodiazepines and pentobarbital. Each agent has its own issues of course, and meeting the requirements of a radiology service where several sedations may be happening at once and you’re trying to guarantee stillness for the picture while not really requiring any respiratory support is challenging.
In this context, developing a plan with dexmedetomidine worked well. The doses were modified over time and experience showed that the respiratory status really did stay very stable. Yes the blood pressure decreased along with heart rate but not to a worrying degree in their experience. There was still a rate of movement requiring a bolus dose of adjuvant, but it was pretty low. Some kids were sedated afterwards and took a while to get wakeful. They were safe from a respiratory point of view while that happened though. A lot more wins than losses on that ledger.
Which is why the other reason I followed up with Dr Mason might seem a bit surprising. Boston doesn’t really use dexmedetomidine in this way in the radiology suite any more. Where did I hear that? Not in the literature.
I heard that story from another practitioner out of Boston, Dr Barry Kussman, late last year. He was also very generous (must be something in the tea over there) and he mentioned as a passing comment that in radiology they’d moved away from it and everyone was using propofol. The reasons? Well some of the kids still moved a bit. Some of those kids really did stay sleepy. Plus it is just a bit slower.
So that was a key part of the reason I followed up with Dr Mason. Why the shift when the papers suggested it worked so well?
It’s context again. The whole service finally got swept up by anaesthesia and given resources to do things with the benefit of full anaesthesia teams. In this context, propofol or whichever other option to essentially produce general anaesthesia suddenly has a lot of things going for it. You don’t have to slowly load to get to a good sedation level. You can administer it in a fashion to pretty much remove the risk of patient movement because you have anaesthetists there 1-on-1 to deal with any respiratory or other issues that might arise. Alternatively you can just turn it into general anaesthesia. The kids tend to wake up quickly either way.
So change the context from one where only sedation is supported and even then with not many staff, to having a full general anaesthetic option and of course you’d review how you did it. That’s how you continue to develop your high level care. You can’t really compare dex as a sole agent aiming never to exceed sedation with a full anaesthetic care option. Apples and oysters. Context.
Of course I wouldn’t have known there’d been a change in practice at all if I hadn’t bumped into Barry at a conference then made a phone call to go to the source. Go to the literature alone and you’d easily get the impression that the system in that particular place hadn’t changed. Our literature fails us when it comes to later story developments. There’s no real space for updates down the track. Where’s the journal that shares the “where are they now” moments?
I don’t just want to hear from clinicians when they’re passionate enough about exploring the use of an agent to run a whole study on it. I want to hear from them five years later so I can hear which lessons have stuck in their head from experience and how their practice has changed over that time. What’s the experience of those whose papers contributed to our original literature review now that they’ve been using it for a while? That sort of work is very hard to find.
That problem lurks in the background for any review. How many of these authors still do things like they reported way back when? What has experience taught them in the meantime? Was Yoda cooler when he was kind of mysterious before we saw his battle moves in the prequels? Alright, I guess it’s acceptable for the literature not to cover Yoda in detail.
It’s impractical to think every clinical group who does research will revisit the topic every two years of course. A facility for authors to add follow-up information down the track would be a welcome addition though.
I think the key message from this cardiac review paper isn’t even about the dexmedetomidine. If you don’t use dexmedetomidine, it doesn’t have enough to make you change course. If you do, you’ll find encouragement.
This paper drives home the problems with interpreting reviews. The best systematic review can’t answer all your questions on a topic. On my reading the rigour here is pretty impressive but the reviewers can never look at the papers the less enthusiastic practitioner didn’t publish. That knowledge is lost.
We’re left with some classic problems from review articles:
- Positive papers tend to be published, so the literature available to you can be skewed in unknown ways from the start.
- The review article authors make decisions about which papers to exclude. Obviously they report in a transparent fashion and provide reasons, but the reader of the review article doesn’t get to know which of those decisions might remove patients relevant to their own clinical practice.
- Not everyone was looking for the same thing in the first place. Analysis is difficult.
- People modify and change their practice over time, but any of those changes are harder to find in the literature, a little like the radiology story covered above. The review will be great at informing you about what they were doing a while back, without knowing how that has changed.
For the longest time I thought review articles offered more than the rest of the literature. I figured that most times the adjustments you made allowed you to deal pretty reasonably with the challenges of combining all those slightly disparate bits of research to make a whole.
I didn’t quite understand the scope of all the missing bits. And if you don’t bump into someone who knows a person who knows a thing, you might never get the updates you need to hear. I guess it’s an example of the need to look beyond that little grey box that talks about the better outcomes.
As always, it is strongly recommended you check out the original literature and form your own opinion. The aim here is to foster friendly chat amongst people who like kids’ anaesthesia.
If you like the stuff on this site you can always fill in the bit where you’ll get e-mails when a new post pops up.
I need to extend a very sincere thanks to Dr Keira Mason, who really was amazingly generous with her knowledge, experience and time when called by some random anaesthetist from Sydney. It was a pretty amazing case of finding a new colleague in kids’ anaesthesia all the way around the world just from a shared clinical interest.
A thanks also to Dr Barry Kussman for his time and clever thinking.
Of course you should always go to the source literature and see what you think. The Pan review which seems well conducted is here:
And here’s something from Keira Mason on CT scanning and dexmedetomidine:
And again on higher doses for MRI sedation:
That image of the black hole is from the Creative Commons area of flickr and was posted just like this by the NASA Goddard Space Flight Center (yes that hurt to spell “Centre” that way).
Another thing … You’re probably more used to comparing apples to oranges. It turns out that’s thought to be a later variation on the original proverb from John Ray’s collection of 1670, which talks about comparing apples to oysters. It’s funny that etymology texts mention that as the source though, when you consider that I think The Taming of the Shrew predates that by about 80 years and contains the following exchange in Act IV, scene ii:
… First, tell me, have you ever been at Pisa?
Ay, sir, in Pisa have I often been,
Pisa renowned for grave citizens.
Among them know you one Vincent?
I know him not, but I have heard of him;
A merchant of incomparable wealth.
He is my father, sir; and, sooth to say,
In countenance somewhat doth resemble you.
[Aside] As much as an apple doth an oyster,
and all one.
That Biondello. Always with the cheeky asides.
Anyway, come for the paeds anaesthesia, stay for the Shakespeare.