This month’s Arthritis and Rheumatism is now available, online at least. I’m not sure why I didn’t get the TOC email, but I found out through the grapevine and have taken a preliminary look through it. Sorry, you can only read it if you have a prescription.

Jumping Jehosephat! It’s packed with good stuff this month! I’ve downloaded about 15 articles, at least half of which I’d like to read and maybe review in some detail! That’s more than average for me. I’ll try to do the quick summary sometime this weekend, and try to review at least a few of these articles over the next few weeks, but I can’t make any guarantees.

In the meantime though, there’s a letter to the editors that really caught my eye. If you have a subscription, take a look. If not, here’s the heavily edited version. (I’m really not sure of my legal rights to reproduce this, so I’m going to err on the side of caution by editing out exactly what study and drugs are being referred to, and treat it in the abstract.)

To the Editor:
We read the article by [some guys] about the 2-year results of the [X Trial]. It is reported that the data presented are the continuing double-blind followup of the same cohort reported in the 2004 article in the [Y Journal] by [some other guys], in which 24-week and 52-week data had been presented.

There are important problems with the [X Trial] 2-year report. These are discussed below.

1. There is no indication in the previous article that the [X Trial] was designed as a 3-year, double-blind, randomized study as stated in the current report…

2. Related to the above, it is not clear how blindedness was maintained after the first year…

3. A more disturbing issue is the fact that the authors knew that one modality of treatment, [medication A], was superior to the others while the patients continued into the second year…

4. Both the previous and the current reports underplay the fact that 40% of the study patients had received [medication B] in the past, albeit during the period prior to the 6 months leading to trial enrollment. We understand from the initial report that these patients specifically had not been inadequate responders to [medication B] and had not had clinically important toxic effects from [medication B]. On the other hand, we still do not know why they had specifically discontinued [medication B] treatment.

5. … It must be noted that the primary clinical efficacy end point, on which the power calculation for the study was based … have been changed in the current report…

6. We note that 9 of the 14 authors who had written the initial report or the [Trial X] investigators were not among the authors of the second report…

Wow! This is about as strongly worded a letter as you get in the medical literature. The authors of it are basically accusing the authors of the study of scientific misconduct and possibly ethical misconduct. Let me explain, point by point, why:

1) If this was designed as a three year study, that should be outlined in the initial report. There are methodologic issues here about when you analyze data and how you monitor safety, and if this was supposed to be a 3 year trial it should be clear that they were presenting interim data from the 1st year. If it was designed to be a 1 year trial, then where did all this extra data come from? Was there IRB (institutional review board) approval for the extension?

2) Double-blindness is the keystone of clinical trials. If it ain’t double blind (i.e. neither the subjects nor the investigators know who’s getting what treatment), then you really have to question the validity of the results. The point here is that blinding is usually broken after the end of the study. So if the results were presented the first time, then the authors presumably had access to who was taking what. And if the second trial was also supposed to be ‘double blind’, then the authors have to account for how they maintained blindness after reporting the results of the first study. Otherwise, we have to presume that they knew who was getting what, which casts doubt on the results of both.

3) The first study showed that treatment with medications A and B together was superior to treatment with medication B alone. If the authors knew this, shouldn’t they have stopped before the second year of the trial and offered everyone taking just medication B both medications A and B? Scientifically, information from continuing the study might be valuable, but it is very hard to justify ethically. If you know one treatment is better than the other, you ethically have to tell patients that this is the case and offer them the better treatment. I think this is a major accusation.

4) All the subjects in both studies were taking medication B. The difference is that half of them also got medication A, and the group taking both did better. The study authors apparently were trying to study people who had NOT taken medication B in the past. This is because they were trying to see how both A and B compared to B in treatment-naive patients. To be cynical for a second, drug companies want patients with new-onset disease to take their drugs; that way they earn more money. But in the rheumatology world, it’s much more common to take a ‘standard’ or proven-effective drug first (here B) and only add the ‘new’ drug (A) if the patient is not responding adequately to the first. Thus the drug companies don’t make maximum profits. If they can prove that there’s a distinct advantage to patients by taking both drugs up front, more docs prescribe their drugs earlier.

So the point here is that if these meds were supposed to be given to naive patients, why had 40% of subjects been treated with drug B before, and if they were treated with drug B before, why did they stop? Maybe B was really ineffective after all, or these patients had to stop B because of side effects. What’s going on here? This may not be a big deal practically (may not be, but might be too – it’s hard to know), but it’s another question.

5) Changing the primary outcome: This is a big no-no. The method by which investigators measure outcome should stay the same from the beginning to the end of the trial. If this is really a 3 year trial, and was planned that way, then why isn’t the end point the same in the two reports? It suggests the authors may be fishing for some measure that shows the drug works because the originally designed endpoint failed to show a benefit at this point.

If it was originally a one year trial, then why isn’t it clear that this is an extension? And why aren’t they using the same endpoint? What the hell is going on that they’re not telling us?

This is the major reason why the FDA now requires clinical trials to be registered before investigators can enroll patients. It forces scientists to stick to the original protocol. Otherwise there may be an incentive to change what you’re doing in mid-stream (or change what you said you did at the end) so that you can find some kind of positive result, even though the one you hoped to find turned out to be negative. Changing what you’re going to call your primary result after the fact is no longer considered acceptable.
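To see why endpoint-switching matters so much, here’s a toy simulation (my own illustration, not from either paper – the numbers are made up): if a drug actually does nothing, one pre-specified endpoint gives a false-positive ‘significant’ result about 5% of the time, but if investigators are free to pick, after the fact, whichever of five outcome measures happened to come out positive, that rate climbs to over 20%.

```python
import random
import statistics

def false_positive_rate(n_endpoints, n_trials=2000, n_patients=100, z_crit=1.96):
    """Simulate null trials (the drug has NO effect) and count how often
    at least one of n_endpoints comes out 'statistically significant'."""
    random.seed(0)
    hits = 0
    for _ in range(n_trials):
        significant = False
        for _ in range(n_endpoints):
            # Two arms drawn from identical distributions: any apparent
            # 'effect' is pure noise.
            a = [random.gauss(0, 1) for _ in range(n_patients)]
            b = [random.gauss(0, 1) for _ in range(n_patients)]
            se = (statistics.pvariance(a) / n_patients
                  + statistics.pvariance(b) / n_patients) ** 0.5
            z = (statistics.mean(a) - statistics.mean(b)) / se
            if abs(z) > z_crit:
                significant = True
        if significant:
            hits += 1
    return hits / n_trials

print(false_positive_rate(1))  # ~0.05: one pre-specified primary endpoint
print(false_positive_rate(5))  # ~0.23: cherry-pick the best of five endpoints
```

Pre-registration closes this loophole precisely because the second number, not the first, is what you get when the primary endpoint can be renamed after the data are in.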

6) If Trial X was designed and implemented as a 3 year trial, then you’d presume that most of the investigators should be the same. So where did these 9 investigators go? Did they leave? Did they refuse to sign off on the paper? What happened?

So that’s why I find this such an interesting letter. I should point out that I have read neither of the two studies referred to above and therefore do not know whether what the letter-writers claim is true, partly true, or completely false. There may be a completely rational explanation for all of their ‘charges’.

But you know the other interesting thing? There is no response from the authors of the second study! It’s standard practice in the medical literature for a journal to publish a letter asking for more information or criticizing a published study, and the study authors are then given a chance to respond. The letters printed immediately above and below this one both have author responses. So where’s the response here? These are pretty big accusations! I can’t explain this …

Whew! So that turned into a long post, and I was intending a short one! But this is a good A&R issue, so hopefully you’ll hear about more of the studies. C’mon back now, y’hear?