I had the opposite reaction: how on earth has no one looked into this!?
But if you dig into it, it's horrifying how little "evidence" goes into treating a lot of common medical conditions. For another trivial example, the stuff in Sudafed PE (phenylephrine) http://www.sciencedirect.com/science/article/pii/S1081120610... is much worse than the pseudoephedrine it replaced.
I wish the NIH (and other funding agencies) would consider running big, confirmatory trials on a lot of "obvious" things that affect people every day. But the money clearly isn't there, and it's politically difficult to get it there:
There are a fair number of ignorant Congressmen in the US who love to rant about 'wastes' of government money on studies like these.
What are the phases in this study? There's no explanation of the phases in the abstract:
"Phase 1 results showed a difference between phenylephrine and placebo that was 64% of the difference between pseudoephedrine and placebo, substantially greater than the 17% difference observed for all phases. "
Edit: Phase 1 may mean "Testing of drug on healthy volunteers for dose-ranging", per Wikipedia:
That study uses a crossover design, which means that everyone gets all three treatments (in this case, phenylephrine, pseudoephedrine, and placebo), but in different phases. You might get the placebo this week and the pseudoephedrine next week, while I get the opposite. Each one of these time steps is a "phase".
A similar study might use a "batch design" instead, where the subjects are each tested once, after being given a single treatment. They might put you in the pseudoephedrine group, while I get the placebo.
Crossover designs have a big advantage in that they are more robust to individual variability, and thus have more statistical power to detect differences. In essence, the crossover design lets you compare the average of each subject's difference between conditions, while a batch design forces you to examine the difference between the average scores of each group.
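You can see the power difference with a quick simulation. This is a hedged sketch with made-up numbers (baseline spread, effect size, and noise are all assumptions, not values from the study): subjects differ a lot from each other, but each subject's paired difference cancels that out, so the crossover standard error tracks the measurement noise rather than the between-subject spread.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30  # subjects (hypothetical)

# Hypothetical congestion scores: big idiosyncratic baselines,
# a modest drug effect, and small measurement noise.
baseline = rng.normal(50, 10, n)  # between-subject variability
drug_effect = 2.0
noise = 1.0

placebo = baseline + rng.normal(0, noise, n)
drug = baseline - drug_effect + rng.normal(0, noise, n)

# Crossover: each subject is their own control, so the baseline
# cancels in the paired difference.
within = placebo - drug
se_crossover = within.std(ddof=1) / np.sqrt(n)

# Batch: two separate groups, so the baseline variability stays in
# and inflates the standard error of the difference in group means.
placebo_b = rng.normal(50, 10, n) + rng.normal(0, noise, n)
drug_b = rng.normal(50, 10, n) - drug_effect + rng.normal(0, noise, n)
se_batch = np.sqrt(placebo_b.var(ddof=1) / n + drug_b.var(ddof=1) / n)

print(f"crossover SE: {se_crossover:.2f}")
print(f"batch SE:     {se_batch:.2f}")
```

With these assumed numbers the crossover standard error is roughly an order of magnitude smaller, which is exactly the "more power from the same subjects" argument above.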
This power comes at a price: you have to be careful that the different treatments do not interact, so that you can correctly associate causes and effects. None of the treatments remain in the subjects' system for a week, but the worry is that people might remember* how well pseudoephedrine worked and "downgrade" the other treatments. One way to check this is to analyze the first phase of your data, where each subject has had only one treatment, as a batch design, which is what they reported there.
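The phase-1 check amounts to subsetting the crossover data down to each subject's first measurement and treating it as an ordinary between-subjects comparison, since at that point nobody has a previous treatment to "remember". A minimal sketch with simulated data (all the numbers here are assumptions for illustration, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000  # subjects (large, so the between-subjects estimate is stable)

baseline = rng.normal(50, 10, n)
true_effect = 2.0

# Randomize treatment order: half get the drug in phase 1, half placebo.
order = rng.permutation(n)
drug_first = order[: n // 2]
placebo_first = order[n // 2:]

# Phase-1 scores only: one measurement per subject, no carryover possible.
phase1 = baseline.copy()
phase1[drug_first] -= true_effect
phase1 += rng.normal(0, 1, n)

# Analyze phase 1 as a batch design: difference in group means.
est_phase1 = phase1[placebo_first].mean() - phase1[drug_first].mean()
print(f"phase-1 estimate of the drug effect: {est_phase1:.1f}")
```

If this carryover-free estimate roughly agrees with the full-crossover estimate, the "memory" worry looks small; in the study above it didn't agree (64% vs 17%), which is why they flagged it.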
* The measurement here is a self-reported scale of nasal congestion. Note that they don't do this for most of the quantitative/physiological measurements.