NOTE: I had originally planned on posting Part II of a series on cancer screening. However, something came up on Friday that, in my estimation, requires a timely response. I should also inform readers that, because next Monday is a holiday here in the U.S., I haven’t yet decided whether I will be doing a post next week or not. Stay tuned and check back.
I get e-mail.
Sometimes the e-mail is supportive. Other times, as you might imagine, given some of my posts, it is anything but. On Friday afternoon, I happened to notice an e-mail from an “admirer” of mine that said something like this:
You are a complete jack-ass.
– Generation Rescue
Appended to the e-mail was a link to an article on the Age of Autism (AoA) blog.
Generation Rescue, as you may recall, is an organization that promotes the idea that vaccines cause autism, and this e-mail almost certainly came from the founder and head of GR, a man named J.B. Handley. In case you don’t know who he is, Handley is a man who is, even by the standards of antivaccinationists, incredibly boorish and possessed of a bull-in-a-china-shop manner that alienates even some potentially sympathetic people, although parents who believe that vaccines cause autism seem to love him. He is also quite–shall we say?–flexible in his notions of how vaccines cause autism. Until about a year ago, the Generation Rescue website proclaimed:
Generation Rescue believes that childhood neurological disorders such as autism, Asperger’s, ADHD/ADD, speech delay, sensory integration disorder, and many other developmental delays are all misdiagnoses for mercury poisoning.
About a year ago, it changed to:
We believe these neurological disorders (“NDs”) are environmental illnesses caused by an overload of heavy metals, live viruses, and bacteria. Proper treatment of our children, known as “biomedical intervention”, is leading to recovery for thousands.
The cause of this epidemic of NDs is extremely controversial. We believe the primary causes include the tripling of vaccines given to children in the last 15 years (mercury, aluminum and live viruses); maternal toxic load and prenatal vaccines; heavy metals like mercury in our air, water, and food; and the overuse of antibiotics.
The kind interpretation is that GR was changing its hypothesis because the data being published consistently and strongly refuted the myth that mercury in vaccines somehow causes autism. In reality, though, it’s fairly clear that GR was pivoting effortlessly to a hypothesis that not only was nearly completely unfalsifiable but also allowed GR to continue to blame vaccines for autism, which is what it’s really about. More recently, as I have pointed out before, antivaccinationist rhetoric has pivoted even further and equally effortlessly to blame unspecified “toxins” or “combinations of toxins” in vaccines. Be that as it may, having felt the love, I have to admit that Mr. Handley sure does know how to charm a guy. When he draws my attention so politely to some abstracts that he clearly considers to be very important evidence, how can I refuse to take a look? After all, Mr. Handley himself apparently very much wanted to point me in the direction of these three abstracts, and it would be downright churlish of me to deny him and refuse to look at the studies with as open a mind as possible.
Besides, I was curious; after I had received Mr. Handley’s e-mail, I asked around and found that other believers in the claim that vaccines cause autism had been sending precisely this same link describing precisely these same three abstracts to other bloggers who frequently write about autism and vaccines and who share my conclusion that the science just doesn’t support the idea that vaccines somehow cause autism. In addition, the article linked to was written by Dan Olmsted, a very credulous reporter who claims that the Amish don’t vaccinate and don’t get autism (when in fact they do both) and who conveniently overlooked a special needs clinic in the heart of Amish country that treats Amish children with some forms of autism. Although not quite as endearingly overwrought in his rhetoric as Kent Heckenlively‘s verbiage, Mr. Olmsted’s articles are usually still instructive as excellent lessons in how not to interpret scientific studies. So onward we go! Here’s how the article begins:
The first research project to examine effects of the total vaccine load received by children in the 1990s has found autism-like signs and symptoms in infant monkeys vaccinated the same way. The study’s principal investigator, Laura Hewitson from the University of Pittsburgh, reports developmental delays, behavior problems and brain changes in macaque monkeys that mimic “certain neurological abnormalities of autism.”
The findings are being reported Friday and Saturday at a major international autism conference in London.
Although couched in scientific language, Hewitson’s findings are explosive. They suggest, for the first time, that our closest animal cousins develop characteristics of autism when subjected to the same immunizations – such as the MMR shot — and vaccine formulations – such as the mercury preservative thimerosal — that American children received when autism diagnoses exploded in the 1990s.
The first thing I noticed here, before even reading more, is just how much antivaccinationists have “sculpted,” if you will, their “hypothesis” (if you can call it that). No longer do they write about “it’s the mercury, stupid” or how autism and autism spectrum disorders are all “misdiagnoses for mercury poisoning.” They’ve now added the new wrinkle of blaming vaccines in general and the “vaccination schedule” for… well, it’s not entirely clear what, although they do clearly believe it’s bad and frequently go on and on about “too many, too soon.” That brings us to the second thing that I noticed, which is that they are no longer claiming that vaccines “cause” autism. They are touting this study as showing that vaccines somehow induce behavioral and brain changes that mimic “certain neurological abnormalities of autism.” Couple that with their efforts to imply that there are many children out there with a rare mitochondrial disorder that makes them susceptible to being rendered autistic by vaccines, and I marvel at how fluid antivaccinationists are in their use of language, as long as it can somehow be twisted into implying that vaccines cause autism.
I’m not going to discuss Olmsted’s commentary on these abstracts any further, because Olmsted has demonstrated unequivocally time and time again that he wouldn’t know a good scientific study if it bit him in his hindquarters. Besides, the editors of AoA kindly posted the actual text of the three abstracts that were presented at the International Meeting for Autism Research (IMFAR). Why bother with a secondhand description colored with biased commentary when I can cut through his obfuscation and go straight to the source? The three abstracts were:
- Pediatric Vaccines Influence Primate Behavior, and Amygdala Growth and Opioid Ligand Binding, Friday, May 16, 2008: IMFAR. L. Hewitson, Obstetrics, Gynecology and Reproductive Sciences, University of Pittsburgh, Pittsburgh, PA; B. Lopresti, Radiology, University of Pittsburgh, Pittsburgh, PA; C. Stott, Thoughtful House Center for Children, Austin, TX; J. Tomko, Pittsburgh Development Center, University of Pittsburgh, Pittsburgh, PA; L. Houser, Pittsburgh Development Center, University of Pittsburgh, Pittsburgh, PA; E. Klein, Division of Laboratory Animal Resources, University of Pittsburgh, Pittsburgh, PA; C. Castro, Obstetrics, Gynecology and Reproductive Sciences, University of Pittsburgh, Pittsburgh, PA; G. Sackett, Psychology, Washington National Primate Research Center, Seattle, WA; S. Gupta, Medicine, Pathology & Laboratory Medicine, University of California – Irvine, Irvine, CA; D. Atwood, Chemistry, University of Kentucky, Lexington, KY; L. Blue, Chemistry, University of Kentucky, Lexington, KY; E. R. White, Chemistry, University of Kentucky, Lexington, KY; A. Wakefield, Thoughtful House Center for Children, Austin, TX.
- Pediatric Vaccines Influence Primate Behavior, and Brain Stem Volume and Opioid Ligand Binding, Saturday, May 17, 2008: IMFAR. A. J. Wakefield, Thoughtful House Center for Children, Austin, TX; C. Stott, Thoughtful House Center for Children, Austin, TX; B. Lopresti, Radiology, University of Pittsburgh, Pittsburgh, PA; J. Tomko, Pittsburgh Development Center, University of Pittsburgh, Pittsburgh, PA; L. Houser, Pittsburgh Development Center, University of Pittsburgh, Pittsburgh, PA; G. Sackett, Psychology, Washington National Primate Research Center, Seattle, WA; L. Hewitson, Obstetrics, Gynecology and Reproductive Sciences, University of Pittsburgh, Pittsburgh, PA.
- Microarray Analysis of GI Tissue in a Macaque Model of the Effects of Infant Vaccination, Saturday, May 17, 2008: IMFAR. S. J. Walker, Institute for Regenerative Medicine, Wake Forest University Health Sciences; E. K. Lobenhofer, Cogenics, a Division of Clinical Data; E. Klein, Division of Laboratory Animal Resources, University of Pittsburgh; A. Wakefield, Thoughtful House Center for Children, Austin, TX; L. Hewitson, Obstetrics, Gynecology and Reproductive Sciences, University of Pittsburgh, Pittsburgh, PA
But before I dive in, readers should understand that these three abstracts were poster presentations. In the biomedical field, poster presentations are the lowest form of “publication” of one’s data, with the highest being publication in a good-quality, high-impact, peer-reviewed journal. Indeed, several meetings that I attend fairly regularly accept nearly every abstract that is submitted as a poster. It is from this pool that reviewers decide which abstracts are good enough and/or interesting enough to become oral presentations. That’s not to say that many posters, especially at the AACR meeting (a meeting I attend almost every year), aren’t excellent. Many are very impressive, but that’s because, at least at the AACR meeting, the ratio of posters to oral presentations is quite high (which is why it would be foolish of me to disparage posters in general, given that I personally have presented several posters at various meetings, including AACR). Relatively few abstracts make the cut to become oral presentations, though. That being said, I do notice that the standards to which IMFAR appears to subject posters do not seem to be particularly stringent. Thus, given that these are only abstracts submitted as posters, I take them less seriously than I would an oral presentation and much less seriously than a research article in a good, high-impact peer-reviewed journal. More importantly, the publication of these abstracts as full papers in such a journal would allow me to examine in detail the methodology, which is only sketchily described in the abstracts. So, even though they are not full scientific papers, discussing these posters is justified because anti-vaccine activists are clearly touting them as some sort of compelling evidence that Vaccines Are Evil Baby-Destroying Weapons of Mass Destruction.
So, on to the abstracts themselves! The first thing that became apparent when I read the abstracts is that they really all appear to describe aspects of one study, the results of which have been split into three different abstracts. In the science biz, this is known as divvying up one’s data into MPUs (minimal publishable units). It’s an unfortunately common practice, but not in and of itself necessarily an indicator of bad science. Because all too many granting agencies and tenure committees seem to be better bean counters than judges of quality and significance when examining a researcher’s publication record, lots of researchers do it. However, we as scientists do tend to look askance at the practice when it is done too blatantly, especially if duplicate or replicative data is included with the MPUs to pad them.
What next leaps to mind in looking at the abstracts themselves is that there are 13 monkeys in the “vaccine” group and only three in the control group. The authors do not explain or justify why there are such unequal numbers of subjects in the two groups or why they didn’t simply assign eight monkeys to each group, which would have required the same total number of animals. Similarly, there is no mention of how the monkeys were assigned to one group or the other (randomization, anyone?), whether the experimenters were blinded to experimental group and to which shots were vaccine or placebo, whether the monkeys were weight- and age-matched, or any of a number of other controls that careful researchers routinely use when setting up animal experiments. Given these factors, and particularly the small numbers (only three monkeys in the control group), I can fairly safely conclude right off the bat that the study almost certainly doesn’t have the statistical power necessary to find convincing evidence of an effect of vaccination on any of the parameters measured. Let’s put it this way: I do experiments with mouse tumor models, and if I allowed such a large mismatch between the number of controls and the number of experimental animals, I would be highly unlikely to get any results in which I could have confidence.
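To make the allocation point concrete, here is a small Monte Carlo sketch of my own (not anything from the study; the trait frequencies of 70% in vaccinated animals and 10% in controls are hypothetical values chosen only to represent a large true effect) comparing the power of a 13-versus-3 design with an 8-versus-8 design using the same sixteen animals:

```python
import math
import random

def fisher_p(k1, n1, k2, n2):
    """Two-sided Fisher exact p-value for a 2x2 table:
    k1/n1 animals with the trait in group 1, k2/n2 in group 2."""
    N, K = n1 + n2, k1 + k2
    def prob(j):  # probability that group 2 contributes j of the K "trait" animals
        i = K - j
        if i < 0 or i > n1 or j > n2:
            return 0.0
        return math.comb(n1, i) * math.comb(n2, j) / math.comb(N, K)
    obs = prob(k2)
    # Two-sided: sum the probabilities of all tables at least as extreme.
    return sum(p for j in range(n2 + 1) if (p := prob(j)) <= obs + 1e-12)

def power(n_exp, n_ctrl, p_exp=0.7, p_ctrl=0.1, sims=2000, alpha=0.05, seed=42):
    """Fraction of simulated experiments reaching p < alpha."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        k_exp = sum(rng.random() < p_exp for _ in range(n_exp))
        k_ctrl = sum(rng.random() < p_ctrl for _ in range(n_ctrl))
        if fisher_p(k_exp, n_exp, k_ctrl, n_ctrl) < alpha:
            hits += 1
    return hits / sims

print(f"13 vs 3: power ~ {power(13, 3):.2f}")
print(f" 8 vs 8: power ~ {power(8, 8):.2f}")
```

With these made-up trait frequencies, the balanced 8-versus-8 design detects the difference far more often than the 13-versus-3 design, despite using exactly the same number of animals.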
On the other hand, maybe it’s a good thing that there weren’t more monkeys in the study, given the questionable ethics of subjecting infant monkeys to so many repeated procedures in the service of a dubious hypothesis. This concern does not even take into account the number of injections given to the monkeys as vaccines or placebos. Let’s take a look to see what I mean:
- From Pediatric Vaccines Influence Primate Behavior, and Amygdala Growth and Opioid Ligand Binding: “Amygdala growth and binding were measured serially by MRI and by the binding of the non-selective opioid antagonist [11C]diprenorphine, measured by PET, respectively, before (T1) and after (T2) the administration of the measles-mumps-rubella vaccine (MMR).” In other words, these monkeys were subjected to repeated PET scans and MRIs. That means they had to be restrained and anesthetized each time these studies were done.
- From Microarray Analysis of GI Tissue in a Macaque Model of the Effects of Infant Vaccination: “Infant male macaques were vaccinated (or given saline placebo) using the human vaccination schedule. Dosages and times of administration were adjusted for differences between macaques and humans. Biopsy tissue was collected from the animals at three time points: (1) 10 weeks [pre-MMR1], (2) 14 weeks [post-MMR1] and, (3) 12-15 months [at necropsy]. Whole genome microarray analysis was performed on RNA extracted from the GI tissue from 7 vaccinated and 2 unvaccinated animals at each of these 3 time points (27 samples total).” So these same monkeys were also subjected to at least two colonoscopies as infants. At least, I assume it was colonoscopies; the abstract doesn’t say.
That’s a lot of procedures and injections for these poor infant monkeys to undergo for what appears to be an experiment in which the distribution of animals between control and experimental groups is such that the study is highly unlikely to produce meaningful results. Where on earth was the University of Pittsburgh’s Institutional Animal Care and Use Committee (IACUC)? (The IACUC at our institution requires us to justify the number of animals requested with a statistical power analysis or, in the case of preliminary experiments, with a strong scientific rationale for the number requested.) I can’t help but wonder whether these experiments were unnecessarily painful. Worse, these monkeys were clearly euthanized at a very young age, which strikes me as particularly unethical unless there is a really good reason. Primates are not mice.
Ethics and animal treatment aside, let’s get back to the studies. After all, you never know. They may actually be decent science after all in spite of the problems with the allocation of research animals; we have to judge the studies on their methodology, data analysis and conclusions and whether their conclusions are justified by the data. True, this is hard to do from just abstracts (and not particularly informative or quantitative abstracts at that!), but fairness demands that we give the investigators the benefit of the doubt, at least for the moment. Moreover, there is sufficient information in the abstracts to allow some fairly firm conclusions about the quality of these studies.
Once again, perhaps the most critical variable that isn’t discussed is whether proper blinding was used. Someone named Kelli Ann Davis, who acts as though she has inside information, has claimed that the study was blinded. I have my doubts. Of course, perhaps the investigators and caregivers were blinded, but were they correctly blinded? Proper blinding of investigators is particularly important for behavioral studies, but it’s also important for any sort of imaging study or examination of the histopathology of biopsies. For behavioral studies, the investigators absolutely needed to be blinded to which monkeys had received placebo and which had received vaccine, both when administering the injections and especially when observing and measuring behavior. Similarly, the radiologists who interpreted the MRI scans had to be blinded as to which scans came from which group, as did the pathologists who interpreted the biopsy results; otherwise subtle (and not-so-subtle) biases can creep in and affect the results. Moreover, it’s unclear whether a saline injection is an adequate placebo for vaccination. Saline injections don’t hurt very much; some vaccines do. This problem alone could potentially account for differences in behavior: if the monkeys receiving vaccines hurt more, they might become more fearful and withdrawn.
Here’s another question that I had: What are the life expectancy and developmental time to maturity of these monkeys? In other words, did the investigators scale down the time between injections in proportion to the difference in development between humans and these monkeys? So I looked it up. Rhesus macaque monkeys live around 25 years, and males reach sexual maturity by around four years of age, approximately one quarter of the time it takes human males to reach sexual maturity and one third of the lifespan of an average human male. That means, if I interpret correctly the methodology claiming to “adjust for age,” that these monkeys could have received a lot of shots in a really short period of time.
Another issue is that it is not clear how many different behaviors were examined in total, and many of the behaviors reported as being different between the vaccinated and unvaccinated monkeys are not particularly “autistic”-seeming. In any case, given the extremely small sample size, it is not at all surprising that there were positive findings by chance alone. Indeed, just to satisfy my curiosity, I took a very simple model, in which the examination of a single trait in the monkeys was a “yes-no” question, and then ran a Fisher’s Exact Probability test for a control group of three in which zero exhibit the trait and an experimental group of 13, in which I modeled different numbers of monkeys exhibiting the trait and checked the resulting p-value. To achieve statistical significance at the usual p=0.05 level, 10/13 of the vaccinated monkeys, or 77%, would have to exhibit a given trait, with none of the control monkeys exhibiting it. That’s a huge number, and that’s for a hypothetical trait that is a noncontinuous variable with two possible values; i.e., a “yes-no” question, as in “yes, the monkey exhibits that behavior” or “no, the monkey does not exhibit that behavior.” If we start looking at traits that rely on quantitative measurements of a behavior, in other words a continuous variable such as behavior frequency, then a control group of three is clearly totally inadequate; even a relatively small variance would make it very difficult for any trait to achieve statistical significance other than by random chance. The differences would have to be quite large and the variances, particularly in the control group, quite small.
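This calculation is easy to reproduce. Here is a minimal sketch using only Python’s standard library, enumerating the hypergeometric probabilities behind Fisher’s exact test directly rather than relying on a stats package:

```python
from math import comb

def fisher_two_sided(k_vax, n_vax, k_ctrl, n_ctrl):
    """Two-sided Fisher exact p-value for a yes/no trait seen in
    k_vax of n_vax vaccinated and k_ctrl of n_ctrl control animals."""
    N, K = n_vax + n_ctrl, k_vax + k_ctrl
    def table_prob(j):  # j = control animals showing the trait
        i = K - j
        if i < 0 or i > n_vax or j > n_ctrl:
            return 0.0
        return comb(n_vax, i) * comb(n_ctrl, j) / comb(N, K)
    obs = table_prob(k_ctrl)
    return sum(p for j in range(n_ctrl + 1) if (p := table_prob(j)) <= obs + 1e-12)

# 0/3 controls show the trait; how many of the 13 vaccinated monkeys must?
for k in range(8, 14):
    print(f"{k}/13 vaccinated vs. 0/3 controls: p = {fisher_two_sided(k, 13, 0, 3):.3f}")
```

Sure enough, with 0/3 controls showing the trait, 10/13 vaccinated animals is the smallest count that crosses the p = 0.05 threshold.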
Finally, let’s look at the microarray. I’ve actually done gene expression profiling before using microarrays and am experienced with PCR and quantitative real time PCR. I can’t say I’m an expert at cDNA microarrays, but I’ve picked up a few principles over the years. Let’s see what the investigators say about this study in the last abstract:
Whole genome microarray analysis was performed on RNA extracted from the GI tissue from 7 vaccinated and 2 unvaccinated animals at each of these 3 time points (27 samples total).
Results: Histopathological examination revealed that vaccinated animals exhibited progressively severe chronic active inflammation, whereas unexposed animals did not. Gene expression comparisons between the groups (vaccinated versus unvaccinated) revealed only 120 genes differentially expressed (fc >1.5; log ratio p<0.001) at 10 weeks, whereas there were 450 genes differentially expressed at 14 weeks, and 324 differentially expressed genes between the 2 groups at necropsy.
One thing that leaps out at me immediately is the question of why specimens from only slightly more than half of the vaccinated monkeys and only two of the three unvaccinated monkeys were evaluated. No explanation is given for why samples from the other six vaccinated monkeys and the remaining unvaccinated monkey weren’t studied, and that alone makes me suspicious of the results.
The second thing that leaps out is the cutoff the investigators used for identifying differentially expressed genes. It’s not entirely clear from the abstract, but they appear to have used a cutoff of a 1.5-fold increase or decrease in the level of a given gene’s transcript. In other words, a gene qualifies as “differentially expressed” if its messenger RNA (mRNA) level in the vaccinated group is at least 1.5 times, or less than 1/1.5 (0.67 times), its level in the unvaccinated group. If I’m interpreting correctly how they did this, that’s a pretty loose standard for deciding whether a gene is differentially expressed, especially in a first-pass experiment with very low sample numbers. Let’s put it this way: In one of the microarray experiments I did, all the genes of interest that I looked at had ratios of over 6, and one had a ratio of over 200, which, not surprisingly, really got our attention. Although sometimes we will accept 1.5-fold differences, in general when doing a microarray experiment we ignore on the first pass any gene with less than a two-fold change, and we prefer to see three-fold or greater changes in expression levels of the messenger RNA. This is especially true when one uses a log ratio to calculate each gene’s relative expression level, given that the log ratio is prone to large changes due to error at low expression levels. This would be particularly true in a dataset that includes only two control samples, which is the absolute minimum on which any sort of statistics can be done and totally inadequate for an experiment like this, except as a very preliminary and exploratory study. The selection of this liberal cutoff strongly suggests to me that the investigators might have been trying to pad the number of differentially expressed genes.
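To illustrate how much the choice of cutoff matters, here is a toy filter in Python (the gene names and expression ratios are entirely made up for illustration; none of this is data from the study):

```python
# Hypothetical vaccinated/control expression ratios, for illustration only.
ratios = {
    "geneA": 6.2, "geneB": 2.4, "geneC": 1.7, "geneD": 1.55,
    "geneE": 1.1, "geneF": 0.9, "geneG": 0.62, "geneH": 0.45,
    "geneI": 0.3, "geneJ": 1.02,
}

def differentially_expressed(ratios, fold):
    """Genes whose ratio is at least `fold` up, or at least `fold` down
    (i.e., ratio <= 1/fold)."""
    return sorted(g for g, r in ratios.items() if r >= fold or r <= 1.0 / fold)

print("1.5-fold cutoff:", differentially_expressed(ratios, 1.5))
print("2-fold cutoff:  ", differentially_expressed(ratios, 2.0))
```

In this toy example, relaxing the threshold from two-fold to 1.5-fold nearly doubles the “hit” list, which is exactly why the choice of cutoff deserves scrutiny.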
Of course, it is possible that the investigators were referring to a true log2 ratio of 1.5 (i.e., a 2^1.5-fold, or roughly 2.8-fold, change), which they may very well have been doing. This would still be fairly liberal for an experiment with only two samples in the control group. Remember, cDNA microarray expression profiling looks at thousands of genes at the same time; without truly rigorous statistics, there will be dozens (if not hundreds) of false positives if no correction is made for multiple comparisons. That’s one reason why in microarray experiments it is absolutely critical to verify any “positive” findings for genes that are up- or downregulated by doing at minimum:
- Reverse transcription quantitative real-time PCR (RT-qPCR)
- Western blot or immunoprecipitation to verify that the difference observed in the mRNA level is also seen at the protein level (assuming, of course, a suitable antibody is available)
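The multiple-comparisons problem mentioned above is easy to demonstrate with a quick sketch (my own illustration; the gene count is an assumption based on the abstract’s description of a whole-genome array, and under the null hypothesis of no real differences, p-values are uniformly distributed, which is what this simulation assumes):

```python
import random

rng = random.Random(0)
n_genes = 20_000   # roughly the scale of a whole-genome array (an assumption)
alpha = 0.001      # the per-gene threshold quoted in the abstract

# Under the null hypothesis (no real expression differences at all),
# each gene's p-value is uniformly distributed on [0, 1].
null_p_values = [rng.random() for _ in range(n_genes)]
false_positives = sum(p < alpha for p in null_p_values)
print(f"'Significant' genes from pure noise: {false_positives} "
      f"(expected about {round(n_genes * alpha)})")

# A Bonferroni correction for a 5% family-wise error rate shrinks the
# per-gene threshold to 0.05 / 20,000:
bonferroni = 0.05 / n_genes
survivors = sum(p < bonferroni for p in null_p_values)
print(f"Genes surviving Bonferroni correction: {survivors}")
```

Roughly twenty genes pass the p &lt; 0.001 threshold from noise alone, which is why raw counts of “differentially expressed” genes are uninterpretable without a multiple-comparison correction and independent validation.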
Indeed, in the aforementioned experiment that I did, we used the microarray as a discovery tool and then validated several of the genes we were interested in, genes indicative of inhibition of a signaling pathway, using several different modalities.
Did the authors validate any of the differentially expressed genes they found using these techniques? I see no mention that they did. If they didn’t, their results mean very little, except perhaps for genes with differential expression ratios of greater than 10 (a very small number, I’d bet). Once again, cDNA microarray experiments are tricky and prone to producing false positives in terms of finding genes that are differentially expressed between a control and a test group. Any findings must–and I can’t repeat this enough–must be validated, or at the very least the changes in mRNA levels of a subset of the genes must be validated by other techniques, particularly when there are so few replicates on which to do statistics. Another curious thing is that the investigators report only raw numbers of mRNA transcripts that were up- or downregulated; usually investigators report a few of the specific genes that were most differentially regulated and the ratios by which they differed. All they say is that the genes are consistent with “inflammation,” but that could mean a lot of things, and we don’t diagnose inflammation through the use of cDNA microarrays. Also, in a time course experiment like this, it would be of great interest to know whether the same genes were elevated at each time point and whether they were continuing to increase or had peaked and come back down. If there is no consistent pattern, chances are that what the investigators were observing was noise. Raw numbers of genes going up or down mean little; the identities of the genes, how much they change, and the pattern over time are what matter.
Overall, judging from the abstracts so helpfully provided by AoA, I am, alas, underwhelmed. These three studies appear to be nothing more than a rehash of previously discredited work, v.2.0, except this time with monkeys. I suppose it’s possible, albeit unlikely, that the science in the actual study will turn out to be better than what is represented in the abstracts. For that, we will have to wait for the actual papers to be published–if they ever are published, which is by no means certain, given what I can glean of the quality (or, more correctly, the lack thereof) of the science presented in these abstracts. If anyone reading this actually attended IMFAR and saw the posters, I’d love to hear your account of what was actually reported. Please post it in the comments or e-mail me.
Finally (and I saved this for absolutely last because I wanted to address the substance of the abstracts first without being accused of basing my criticism primarily on ad hominem attacks), who did this experiment? One name stands out: Andrew Wakefield. Yes, it’s the same Andrew Wakefield whose incompetent science led to a scare that caused MMR vaccination rates to plummet in the U.K. ten years ago. In addition, a blogger informs us a bit about some of the other authors:
The primary author seems to be Laura Hewitson of Pittsburgh University. She is registered on that page as a Doctor.
Also listed as an author according to AoA is one AJ Wakefield. Enough said about that!
Lastly, there is Steve Walker, who did a poster presentation at a past IMFAR (I can’t recall which one) that also appeared to offer support for the MMR hypothesis. Oddly, that poster presentation never made it into any kind of peer-reviewed journal.
Also oddly enough, Hewitson appears to have a respectable publication record and has presented multiple times at the meeting of Defeat Autism Now! (DAN!), an observation that makes me wonder how she got roped into these studies. Apparently she has an autistic son, and that may be coloring her decisions. Unfortunately, Dr. Hewitson wouldn’t be the first researcher whose personal brush with autism led her down the path of questionable science. It appears that such may be the case with her.
Indeed, having learned that she has an autistic son, I really, truly wanted to give Dr. Hewitson the benefit of the doubt as I read these abstracts, assuming that perhaps her love for her son was affecting her scientific judgment and that she might not have known what she was getting into when she collaborated with Andrew Wakefield. Sadly, I then discovered what seems to be a very serious and apparently undisclosed conflict of interest, as a reader has informed me. Not only is Dr. Hewitson married to Dan Hollenbeck, who sits on the board of directors of SafeMinds (which would not in and of itself be a major conflict of interest), but she and her husband are litigants in the Autism Omnibus proceedings (see #437):
437. Laura Hewiston and Dan Hollenbeck on behalf of Joshua Hollenbeck, Dallas, Texas, Court of Federal Claims Number 03-1166V
If this conflict of interest was undisclosed, it is a violation of INSAR’s stated disclosure policy:
INSAR requires authors to disclose their sources of contributed support (commercial, public, or private foundation grants, and off-label use of drugs, if any). INSAR also requires authors to signify whether there may be a real or perceived conflict of interest. Any potential for financial gain that may be derived from reported work may constitute a potential conflict of interest.
Note that the instructions say “any” potential financial gain and “…real or perceived conflict of interest”! I’d say that being a plaintiff in a massive legal action, heard before the National Vaccine Injury Compensation Program, alleging that vaccine injury, specifically some combination of mercury and other factors (I’m never quite clear which), caused autism in the litigants’ children qualifies as a rather major conflict of interest, wouldn’t you? This conflict of interest is not listed on the AoA posting of the abstracts, which means either that AoA left it out when republishing the abstracts or that Dr. Hewitson did not report it to INSAR when submitting or finalizing the abstracts. The first possibility would not surprise me, as AoA is a font of misinformation in service of antivaccination ideology; the second possibility saddens me because, if true, it would indicate that an apparently once-talented researcher has taken a major step down the road to academic and professional ruin. I really hate to see that.
But it goes beyond even that. The same blogger has also figured out that not only is Dr. Hewitson married to Dan Hollenbeck, but that Dan Hollenbeck works for Dr. Wakefield at Thoughtful House in information technology and that his website is also part of Thoughtful House. As the blogger put it:
So, here we are with three poster presentations from a woman who has an autistic son, affiliated with DAN!, is married to the Thoughtful House IT guy (who also happens to be on the Board of Directors of SafeMinds) and these afore-mentioned poster presentations are also co-authored by Andrew Wakefield. I wonder just how impartial this science can be?
It’s hard not to answer: Not very.
Again, I find it very sad to see an apparently once-promising researcher fall to these levels, but it’s irritating as well because of the double standard involved. Indeed, the reaction of antivaccination activists will be instructive. They often lambaste, for instance, Dr. Paul Offit, a staunch defender of the vaccine program, because he has in the past received grants and consulting fees from vaccine manufacturers. It is not at all a bad thing to point out potential conflicts of interest and ties to big pharma, although it is unfortunate that antivaccinationists go overboard with rhetoric bordering on calls for violence. However, Dr. Offit has never tried to hide this potential conflict of interest, as far as I’m aware, and that’s key. Undisclosed conflicts of interest are far more damaging to science than disclosed ones: once a conflict is disclosed, skepticism can be ratcheted up appropriately, whereas undisclosed conflicts hide the knowledge necessary to judge how much skepticism is appropriate. In any case, I wonder whether the crew at AoA, now that they have been informed of Dr. Hewitson’s multiple conflicts of interest, will be as harsh on her as they are on Dr. Offit and any other vaccine scientist who has ever had ties to a pharmaceutical company. Somehow I doubt it.
Unfortunately, this discovery about Dr. Hewitson goes a long way toward answering the question: Why are these particular abstracts being touted by antivaccinationists now? In other words: Cui bono? The answer is probably twofold. First, the Autism Omnibus proceeding is up and running again. Thus far, the plaintiffs haven’t done so well in terms of supporting science; they desperately need more “ammunition” in the form of studies, no matter how poor their quality. These observations suggest that litigants in the Autism Omnibus may be producing data and studies to be used in favor of the litigants while obfuscating the origins of that data, and, together with the questionable treatment of the animals in this study, they lead me to conclude that not only are these studies almost certainly bad science, but they are bad science in the cause of winning the Autism Omnibus. Second, ex-Playmate and comedienne-turned-antivaccinationist activist Jenny McCarthy, along with Generation Rescue and other antivaccinationist groups, will be descending upon Washington, DC on June 4 for a protest rally, and this study is useful to them. True, the abstracts had to be submitted to IMFAR months ago, but McCarthy had been talking about a protest at the CDC late last year before changing plans earlier this year to a march on Washington. In any case, even if the original intent wasn’t to boost the efforts to blame “toxins” in vaccines for all sorts of evils, the way these abstracts are being distributed now, even to those of us most likely to be skeptical of any research claiming to support the contention that vaccines cause autism, suggests plans for a publicity campaign based on these studies, probably in concert with McCarthy’s march. Look for them to be there.
I will conclude this lengthy post by taking the opportunity to thank Mr. Handley profusely. If he hadn’t sent me such a gloating e-mail, I might never have learned of these abstracts or bothered to review them. He led me to this analysis of the bad science and conflicts of interest behind this research, and I sincerely hope that his fellow travelers in the antivaccination movement will accord him all the accolades he deserves for it. I never could have written this post without him.