
An open letter to Jim Carrey

22 Apr

Today on The Huffington Post, actor Jim Carrey posted his thoughts about autism and vaccines. With his very first paragraph, it became apparent how little Carrey understands the issues involved:

Recently, I was amazed to hear a commentary by CNN’s Campbell Brown on the controversial vaccine issue. After a ruling by the ‘special vaccine court’ saying the Measles, Mumps, Rubella shot wasn’t found to be responsible for the plaintiffs’ autism, she and others in the media began making assertions that the judgment was in, and vaccines had been proven safe. No one would be more relieved than Jenny and I if that were true. But with all due respect to Ms. Brown, a ruling against causation in three cases out of more than 5000 hardly proves that other children won’t be adversely affected by the MMR…

Point one, Mr Carrey: the vaccine issue is only controversial to adherents of your belief system. Within scientific, medical, legal, autistic and parental circles, it's not even slightly controversial.

Point two: the three cases were chosen – by the plaintiffs' legal team – to represent their absolute best chance of winning. If they had won, there was an excellent chance that all the cases suggesting MMR as causation would simply have 'won' automatically. That's why it's called an Omnibus.

Point three, regarding the MMR, it has been firmly established that:

a) The data supporting the MMR hypothesis was fixed.

b) The science supporting the MMR theory was badly wrong – both badly done and exposed to contaminants.

You might also note that the court was not attempting to see whether the children were 'adversely affected by the MMR'; it was looking to see – using the three cases the legal team representing the families thought were the absolute best – whether MMR caused autism. It didn't. That's probably why Campbell Brown found it easy to say the MMR hypothesis was dead and buried.

You go on to say, Mr Carrey, that:

Not everyone gets cancer from smoking, but cigarettes do cause cancer. After 100 years and many rulings in favor of the tobacco companies, we finally figured that out.

Yes, we did – and do you know how? With _good science_ – just like the science that established, in the three MMR test cases, that the MMR didn't cause autism. And it's fascinating that you bring up this parallel to the smoking issue and then, later in your blog post, invoke the name of Bernadine Healy – whose 'more sensible voice' you say you'd rather listen to. Did you know Healy used to be a member of TASSC?

TASSC was created in 1993 by the APCO Worldwide public relations firm, and was funded by tobacco company Philip Morris (now Altria)….

According to Sheldon Rampton and John Stauber in their article How Big Tobacco Helped Create “the Junkman”, one of the forerunners of TASSC at Philip Morris was a 1988 “Proposal for the Whitecoat Project,” named after the white laboratory coats that scientists sometimes wear. The project had four goals: “Resist and roll back smoking restrictions. Restore smoker confidence. Reverse scientific and popular misconception that ETS (passive smoking) is harmful. Restore social acceptability of smoking.”

[own inserts]

Is that what you consider a sensible voice, Mr Carrey? Someone who supported the tobacco agenda?

Moving on, you say:

If we are to believe that the ruling of the ‘vaccine court’ in these cases mean that all vaccines are safe, then we must also consider the rulings of that same court in the Hannah Polling and Bailey Banks cases, which ruled vaccines were the cause of autism and therefore assume that all vaccines are unsafe. Clearly both are irresponsible assumptions, and neither option is prudent.

First and foremost, the vaccine court did not rule at all in the Hannah Poling case; HHS conceded. And what they conceded was that Hannah Poling was damaged by vaccines, resulting in 'autism-like features'. In fact, when we look at the one piece of medical science carried out on Hannah Poling (co-authored by her own father), we see that only three of the symptoms described as being the result of vaccine injury appear in the DSM-IV diagnostic criteria for autism.

As for Bailey Banks, this is a perfect illustration both of how the vaccine court in the USA was designed to work and of how terrible the evidence was in the three MMR test cases.

The Banks ruling (subtitled 'Non-autistic developmental delay', by the way) drew a line of causation from vaccine to PDD-NOS. It was able to do this because the burden of proof for any science presented to the vaccine court is '50% plus a feather'. In other words, the claim just has to be more likely than not; causation does not need to be demonstrated to a scientific standard.

What doesn't seem in doubt is that Bailey was injured by a vaccine, which resulted in a condition called ADEM. The judge in the case then went on to accept the plaintiff's position that the ADEM in turn caused PDD-NOS. He did this seemingly because there was no evidence to the contrary – i.e. no evidence that ADEM *doesn't* cause PDD-NOS.

In any scientific setting – including a civil court in the US – this would never have been accepted. The plaintiff would have had to demonstrate that ADEM *did* cause PDD-NOS. And a search of PubMed reveals nothing for 'ADEM autism' or 'ADEM PDD'.

So, in the Banks case, because there was no evidence that ADEM does not cause PDD-NOS, they won. In any venue bar the vaccine court, the Banks family would not have won their case. There is no science to support the idea that ADEM causes autism.

Bearing this '50% plus a feather' standard in mind, it is clear just how utterly dreadful the evidence supporting the idea that MMR caused autism was. Not only could the plaintiffs not provide any evidence that MMR causes autism, the respondents produced reams of evidence showing that it clearly doesn't.

You carry on Mr Carrey to say:

I’ve also heard it said that no evidence of a link between vaccines and autism has ever been found. That statement is only true for the CDC, the AAP and the vaccine makers who’ve been ignoring mountains of scientific information and testimony. There’s no evidence of the Lincoln Memorial if you look the other way and refuse to turn around. But if you care to look, it’s really quite impressive. For a sample of vaccine injury evidence go to http://www.generationrescue.org/lincolnmemorial.html.

Your analogy is ridiculous. I could go to any library and find evidence for the Lincoln Memorial without ever seeing it. In fact, what your analogy demonstrates is exactly how blinkered you and your colleagues are – able to face only one direction at a time.

The evidence you present as supporting a link between vaccines and autism is equally ridiculous and blinkered. I simply don't have the time to tackle the mountain of misinformation presented on the page you link to; suffice it to say there's not a single section that doesn't contain a major error. Most of them have been tackled on this and other blogs over the years.

Next you say:

In all likelihood the truth about vaccines is that they are both good and bad. While ingredients like aluminum, mercury, ether, formaldehyde and anti-freeze may help preserve and enhance vaccines, they can be toxic as well. The assortment of viruses delivered by multiple immunizations may also be a hazard. I agree with the growing number of voices within the medical and scientific community who believe that vaccines, like every other drug, have risks as well as benefits and that for the sake of profit, American children are being given too many, too soon. One thing is certain. We don’t know enough to announce that all vaccines are safe!

Mr Carrey, *vaccines do not contain anti-freeze* – for goodness' sake, even Jay Gordon, Evan's paediatrician, knows that! Did you also know that (to quote myself):

There's also aluminium in breast milk, so let's compare the two.

According to this paper (which is from 1990 – any more up-to-date papers welcomed), the amount of aluminium in breast milk is 49 µg/L. The average amount of breast milk expressed per day is 0.85 litres.

This means that breast milk delivers 41.65 µg of aluminium per day.

Now, according to this paper, there is between 125 and 850 µg of aluminium per dose in a vaccine.

So, for a 6-year-old, the total aluminium from the vaccine schedule is between 2,125 and 14,450 µg.

In real terms, this means that after between 51 and 346 days of breast feeding, a baby will have taken on board the same amount of aluminium as a 6-year-old gets from the total US vaccine schedule.

Now, I couldn't find out which vaccines contained the lower amount and which the higher. Even so, this means that even if every aluminium-containing vaccine a 6-year-old has contains the highest possible amount, within a year of breast feeding a baby will have matched it.

Or, to put it another way, an anti-vax tree-hugger soccer mom who doesn't vaccinate her baby will, after one year of breast feeding, have given him the same amount of aluminium he would have had from six years of vaccinations.

And that's, of course, without even touching on the fact that:

In the Earth’s crust, aluminium is the most abundant (8.13%) metallic element, and the third most abundant of all elements (after oxygen and silicon)

And it is found naturally occurring in sea water, fresh water, the human body, and so on.

[Regarding formaldehyde]… There's also formaldehyde in apples, apricots, bananas and… ah, I lost interest. Lots of stuff. Including the human body.

So – how much is in vaccines?

According to this, and using it in combination with the US vaccine schedule referenced above, we can see that the total amount of formaldehyde in vaccines from the schedule for a 6-year-old child is 1.2016 mg (again, do your own maths and correct me if I'm wrong).

For comparison with that 1.2 mg across all vaccines for a 6-year-old, one banana contains 16.3 mg of formaldehyde.
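If you want to check the arithmetic above for yourself, here is a short sketch (mine, not from the original posts – the variable names are made up) that simply redoes the sums with the figures quoted:

```python
# Redo the aluminium and formaldehyde sums quoted above.
# All input figures are the ones cited in the post; only the arithmetic is mine.

BREAST_MILK_AL_UG_PER_L = 49.0   # µg/L of aluminium in breast milk (1990 paper)
MILK_PER_DAY_L = 0.85            # litres of breast milk expressed per day

al_per_day_ug = BREAST_MILK_AL_UG_PER_L * MILK_PER_DAY_L
print(f"Aluminium from breast milk: {al_per_day_ug:.2f} µg/day")   # 41.65

# Total aluminium across the US vaccine schedule for a 6-year-old,
# using the 125-850 µg per-dose range quoted above.
SCHEDULE_AL_LOW_UG = 2_125
SCHEDULE_AL_HIGH_UG = 14_450

days_low = SCHEDULE_AL_LOW_UG / al_per_day_ug
days_high = SCHEDULE_AL_HIGH_UG / al_per_day_ug
print(f"Days of breast feeding to match the schedule: "
      f"{days_low:.0f} to {days_high:.0f}")   # ≈ 51 to 347

# Formaldehyde: the whole schedule for a 6-year-old vs a single banana.
SCHEDULE_FORMALDEHYDE_MG = 1.2016
BANANA_FORMALDEHYDE_MG = 16.3
ratio = BANANA_FORMALDEHYDE_MG / SCHEDULE_FORMALDEHYDE_MG
print(f"One banana has {ratio:.1f}x the formaldehyde of the whole schedule")
```

The tiny discrepancy with the '346 days' quoted above is just rounding; the point stands either way.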

Mr Carrey, you've got to stop throwing these scaremongering non-facts around. It's damned irresponsible, for a start.

Lastly Mr Carrey, you say:

If the CDC, the AAP and Ms. Brown insist that our children take twice as many shots as the rest of the western world, we need more independent vaccine research not done by the drug companies selling the vaccines or by organizations under their influence. Studies that cannot be internally suppressed.

In terms of autism, if you want to make a big deal out of the fact that 'our children take twice as many shots as the rest of the western world', then please consider this: the UK has fewer shots than you. We also have a higher autism prevalence than you – 1 in 100 vs 1 in 150.

And please also don't invoke silly conspiracy theories. Think about how science works. A study is done, funded by Eli Lilly, for example. It is peer reviewed, found to be of good quality, and published in, let's say, the NEJM. Now, *every single reader of that study* can see exactly what methods and means were used to reach the study's conclusions. I ask you, Mr Carrey: how much more independent can you get? How much more transparent? Basically anyone, anywhere can try to replicate that same study's results. If they can, and a few others can, the results are good. If nobody can (think Andrew Wakefield), then the results must be bad.

And for goodness' sake, man, grow up – who is 'suppressing' what study, exactly? Have you _any_ evidence at all that any study has ever been internally suppressed? Or are you just throwing this stuff out to scare people?

Mr Carrey, I loved The Truman Show, but this isn't it. There's no god-like figure overseeing every aspect of your life and wanting to control it. I ask you: get in contact with an actual scientist and go through your concerns with them. At the very least, they'll be able to stop you saying silly things like 'there's anti-freeze in vaccines'.

Thrown under the bus…but for a good cause, right?

21 Apr

America is a wonderful place. Where else can someone publish absolute garbage, refuse to retract it, accuse the government of being involved in a massive conspiracy – and still end up on a government committee?

I am speaking of Lyn Redwood. She is one of the coauthors on ‘Autism: a novel form of mercury poisoning’. This was ‘published’ in Medical Hypotheses. I put ‘published’ in quotes because Medical Hypotheses is a pay-to-publish pseudo-journal that has no review (peer or otherwise) at all. OK, the editor does check that the authors are talking about something medical, and makes sure that some sort of narrative is put together. But, scientifically? No review. Too many people, especially those parents with new autism diagnoses for their children, are unaware that “Medical Hypotheses” ‘papers’ have no place next to actual research papers.

If that piece of junk science wasn't enough, Ms. Redwood was also a co-author on another less-than-worthless Medical Hypotheses 'paper', Thimerosal and autism? A plausible hypothesis that should not be dismissed. The first author on that "paper" was Mark Blaxill. Truly, one of the scary moments in the Omnibus proceeding came when the research head of ARI (Autism Research Institute) referred to Mark Blaxill as "brilliant". No exaggeration – that was a frightening thought to this listener. Mr. Blaxill is probably rather bright and likely good at whatever he does professionally. But the idea that the information is traveling from him to the research head of the Autism Research Institute rather than the other way around is just scary.

The time to publish retractions of these papers was years ago. Yet both papers are still out there, and new parents usually won't find out for a long time that those papers are junk.

Besides promoting bad science, what do Ms. Redwood and Mr. Blaxill have in common? Well, the Interagency Autism Coordinating Committee, for one thing.

Ms. Redwood sits on the Interagency Autism Coordinating Committee. This group helps coordinate the US Government's research efforts on autism. Rather than fight for better understanding and services for, say, adults, the poor, or minorities with autism, Ms. Redwood filled meeting after meeting (after meeting) with struggles to get the wording of the Strategic Plan as close as possible to a government admission that vaccines cause autism.

Mark Blaxill sits on one of the working groups for the IACC, probably placed there by Ms. Redwood. Mr. Blaxill, also a co-author on a number of papers that any reasonable person would have retracted by now, has wasted considerable meeting time with long, insulting ramblings. I know there are people who appreciated Mr. Blaxill’s speeches, but I consider likening the other people on the committee to holocaust denialists insulting. Maybe I misinterpreted his repeated use of the phrase “Epidemic Denialists”. If so, I bet I’m not the only one. Somehow, I don’t think I’m wrong. It appears to be an insulting and deliberate choice of phrases.

Unfortunately for the under-counted communities – adults with autism, the poor with autism, minorities with autism, a number of our own – they present an "inconvenient truth" to people like Mark Blaxill and Lyn Redwood. They demonstrate that the numbers groups like SafeMinds use to promote the faux autism epidemic are terribly flawed. If we are still under-counting people with autism in the U.S., how can we use the counts from the California Regional Centers or from education data as "evidence" of an "epidemic"?

I know I wrote about this issue recently. But, reading the expert report by Dr. Rodier, and writing about it, I realized anew that a few individuals have caused this harm. And, those few individuals could (and should) work hard to correct that harm.

So, instead of calling on the IACC to fund research that could help the under-counted, Ms. Redwood and Mr. Blaxill got this paragraph:

Research on environmental risk factors is also underway. An Institute of Medicine workshop held in 2007 summarized what is known and what is needed in this field (Institute of Medicine of the National Academies, 2007). Numerous epidemiological studies have found no relationship between ASD and vaccines containing the mercury based preservative, thimerosal (Immunization Safety Review Committee, 2004). These data, as well as subsequent research, indicate that the link between autism and vaccines is unsupported by the research literature. Some do not agree and remain concerned that ASD is linked or caused by vaccination through exposure to Measles Mumps Rubella (MMR), imposing challenges to a weakened immune system, or possibly due to mitochondrial disorder. Public comment to the Committee reflected opposing views on vaccines as a potential environmental cause. Those who are convinced by current data that vaccines do not play a causal role in autism argue against using a large proportion of limited autism research funding toward vaccine studies when many other scientific avenues remain to be explored. At the same time, those who believe that prior studies of the possible role of vaccines in ASD have been insufficient argue that investigation of a possible vaccine/ASD link should be a high priority for research (e.g., a large-scale study comparing vaccinated and unvaccinated groups). A third view urges shifting focus away from vaccines and onto much-needed attention toward the development of effective treatments, services and supports for those with ASD.

Let’s just pull that last sentence out for emphasis, shall we?

A third view urges shifting focus away from vaccines and onto much-needed attention toward the development of effective treatments, services and supports for those with ASD.

It’s odd to me–I would have fought that language if I were Lyn Redwood. I would have pointed out that I have a broader perspective than just vaccines, and that I also care about development of effective treatments, services and supports. Isn’t it just a little sad that the people who are pushing the vaccine connection don’t have the view that effective treatments, services and supports for those with ASD’s are a top priority?

But, it wasn’t their top priority. It still isn’t. In the end, Lyn Redwood and Mark Blaxill, people who are on the IACC to represent the interests of the entire stakeholder community, threw the underrepresented autistic communities under the bus.

Bernadine Healy gets it wrong

17 Apr

Following Bernadine Healy’s April 14th post in USNews, Orac dealt her a dollop of respectful insolence which is a very good read, as are the comments.

However, I wanted to do a kind of accounting of Healy's post, to see just how firm a grasp she has on the whole situation. So, let's start.

McCarthy and Carrey and two colleagues from the autism advocacy group she founded, Generation Rescue…

Oops. Sentence two, first error. McCarthy did not found Generation Rescue; JB and Lisa Handley did.

…and parents are raising legitimate concerns, yet unanswered…

I have been on the front line of this debate for the last six years. Once upon a time, the question 'do vaccines cause autism' _was_ a legitimate one to ask. But that question has been asked and answered. Since about 2003/4, there have been _no_ legitimate concerns raised by parents or anyone else. The MMR question turned out to be both a con and the result of bad science. The thiomersal question is simply a defunct hypothesis, given that thiomersal was largely removed from vaccines by 2002 and yet autism rates continue to climb. There have been desperate attempts to rebrand the autism/vaccine question (when you know you're right and yet turn out to be wrong, be 'right' about something else instead) into questions about 'greening' vaccines – even though simple searching reveals that newborns take in most vaccine ingredients either naturally or via breast feeding – or about the 'hellacious' vaccine schedule, despite the fact that the UK, for example, has a higher rate of autism (1 in 100 vs 1 in 150) but fewer vaccinations.

This controversy might be resolved if we can focus on a few big questions, with an open mind…

Mistake number three. There is no controversy. In the field of _science_, asking the _scientific question_ 'do vaccines cause autism', there is no controversy at all. What there is is a very good, well-executed media campaign to manufacture one. However, the facts remain the facts: no vaccine, no vaccine ingredient and no vaccine schedule, either solely or together, causes autism. There is simply no sound science to support that set of ideas. If there is a controversy, it is why the media continue to let people stoke the fire of this idea.

Influenza vaccine, mandated here starting at age 6 months…

Mistake number four. As far as I can tell, the flu vaccine is not mandatory in the US. Certainly, this article covering the 2008/09 flu season states:

It will not be mandatory for every child to have the flu shot…

Onward.

…a study from Canada last year found that delaying the diphtheria, tetanus, and pertussis vaccination just a few months decreased by 50 percent the risk that a child develops asthma…

Mistake number five. This has absolutely no bearing on autism. The article is entitled 'The Vaccines-Autism War: Détente Needed', not 'the vaccines, asthma, maybe-other-stuff-as-and-when-I-think-of-it, autism war'. As such, this strawman argument has nothing to do with autism.

(Side note: Healy says we should read two doctors' thoughts on the pros and cons of a flexible vaccine schedule. It may come as no surprise that the doctor who thinks the US needs a flexible vaccine schedule is 'Vice chair, Section on Complementary and Integrative Medicine' of the AAP.)

The goal is to get all kids appropriately vaccinated…

Mistake number six. The organisation Healy references at least twice, Generation Rescue, has this on the front page of its Facebook group:

“I found that the whole vaccine business was indeed a gigantic hoax…” –Dr Kalokerinos MD June 1995

“There are significant risks associated with every immunization and numerous contraindications that may make it dangerous for the shots to be given to your child…” – Dr. Robert Mendelsohn, MD, pediatrician

Onward again.

…Hannah Poling, for example, who has an underlying mitochondrial disorder and developed a sudden and dramatic case of regressive autism after receiving nine immunizations, later determined to be the precipitating factor…

Mistake number seven. Nowhere – repeat, nowhere – has it been published that Hannah Poling's vaccines were the 'precipitating factor' in her autism. If anyone thinks it has been, I would like a link to that document. I've been asking for this for over a year now, and no one has ever managed to show me where it is stated.

What _has_ been said is that, following her vaccines, Hannah showed 'features of autism'. As I have said numerous times, 'features of autism' is not interchangeable with autism. If it were, then the medical report co-written by four doctors, including Hannah Poling's father Jon Poling, would have simply said 'autism'. In fact, this medical case study listed over 20 symptoms, of which only three appear in the DSM-IV diagnostic criteria for autism. She may well have been autistic, and she was determined to have been vaccine-damaged, but that does not automatically mean one caused the other – and, given the absence of the many other symptoms needed to reach a diagnosis of autism, we can see that it did not.

And again, onward:

Other children may have a genetic predisposition to autism, a pre-existing neurological condition worsened by vaccines, or an immune system that is sent into overdrive by too many vaccines, and thus they might deserve special care. This approach challenges the notion that every child must be vaccinated for every pathogen on the government’s schedule with almost no exception…

Not exactly a mistake here, but this is very misleading. It's well known _already_ that some kids _do_ have conditions that make vaccination inadvisable. Less than 30 seconds of searching the CDC website led me to the appropriate information. I think it is incredibly disingenuous, and very ignorant, of Healy to comment in the manner she has.

Onward we trudge through the morass.

Paul Offit, an infectious-disease expert from the University of Pennsylvania who has been a frequent spokesman and adviser on vaccine policy (and by his admission has become wealthy by developing the now mandated rotavirus vaccine)

Mistake number eight. The Rotavirus vaccine has never been mandated anywhere that I can see.

So this is Dr Bernadine Healy, a scientist with 125 records in PubMed. Impressive, until you realise that, just like this one, they are 125 blog entries from US News. At eight mistakes per entry, that means Healy has got, on average, 1,000 mistakes into PubMed.

Good going Bernadine.

Speaking clearly

12 Apr

The Canadian Journal of Medicine has an interesting article by MacDonald and Picard pleading for clearer language from academics. In particular, they note the potential for different audiences to draw different conclusions from the 2001 Institute of Medicine report looking at the relationship between the MMR vaccine and autism.

The conclusion in the executive summary of the 2001 Institute of Medicine report about the relation between the measles–mumps–rubella vaccine and autism provides an excellent example of potentially confusing academic language. This conclusion was carefully crafted in precise academic language — so precise that, depending on the reader, different conclusions can be drawn. The academic involved in vaccine research, familiar with the scientific principle that the null hypothesis cannot be proven, would conclude that this report does not find evidence that the measles–mumps–rubella vaccine causes autism. A health care worker reading this same statement may not be sure what to make of it: maybe the vaccine causes autism in some children, just not often. The politician may wonder whether supporting public programs for measles–mumps–rubella vaccination at this time is justified. The journalist may reasonably interpret the conclusion as saying that the vaccine is a cause of autism, albeit an infrequent one. The antivaccine lobby, for its part, would be delighted that this respected academic body has given support to its claims that the vaccine can cause autism.

Go and have a read, and compare the actual IOM conclusion with MacDonald and Picard's proposed conclusion. Their arguments have merit, particularly when it comes to the media's interpretation of study or report findings. I'm less sure it will make a difference to the anti-vaccine movement, who will deliberately misrepresent studies to prove the complete opposite of what they say – but we can at least stop giving them easy ammunition with which to dupe others.

Omnibus Expert: Patricia Rodier

10 Apr

Autism just plain isn’t mercury poisoning. When can we move on?

Even some of the people who loudly promoted the mistaken idea that “autism is just a misdiagnosis for mercury poisoning” have backed off. But, the groups that promote autism as vaccine injury are packrats: once they’ve collected an idea, bad or not, they won’t ever let it completely go.

Some of you will be thinking, dang, another mercury post. I agree, there are a lot of good arguments against blogging about the mercury-autism connection any more. For one, it gives the idea press that it just doesn’t deserve.

I do think this is worth posting about, though. "This" is the expert report from Dr. Patricia Rodier, submitted to the Autism Omnibus Proceeding. In a single document, we now have an expert on both mercury toxicology and autism – not a faux expert or, worse, a businessperson or public-relations person, but an actual, bona fide expert in both fields. In other words, we have a good document to give to people who are being snowed under by the misinformation campaign promoting autism as mercury poisoning.

When Patricia Rodier testified in the Autism Omnibus Proceeding, I was very impressed – and I blogged it right away. I remember at the time telling a friend that it was good to finally see someone officially debunking things like Sally Bernard et al.'s paper, Autism: a novel form of mercury poisoning. My friend pointed out that any college freshman in science (and most not in science) should be able to tear that "paper" apart.

Unfortunately, “should be able to tear the paper apart” isn’t enough. Many people don’t have the time and/or energy. So, many people still think that paper is valid. Let’s face it, that “paper” should have been retracted by the authors long ago, but they still soldier on with the “autism is mercury poisoning” message.

Dr. Rodier's qualifications are quite good, as is her summary:

As a research scientist who has studied both the toxic effects of methylmercury in animals and autism in children and animal models, I believe I am qualified to evaluate the scientific merit of the allegation.

She may be the only person in the world who has studied both mercury toxicity and autism.

What does she think? In a nutshell:

My conclusion is that the allegation has no scientific support and is highly improbable

Dr. Rodier notes that the claimed similarity between autism and mercury poisoning isn't even close.

In other words, because the symptoms of methylmercury poisoning are not similar to those of autism, the authors have tried to construct a new, hypothetical kind of mercury poisoning from symptoms of toxicity of other mercury species and symptoms never reported for any kind of mercury exposure. The hypothesis is not based on facts; instead, the facts are being selected, manipulated, and shaped to fit the hypothesis. The hypothesis is then offered as evidence. But hypotheses are not evidence.

Ouch. Ouch, that is, if you are someone promoting autism-as-mercury-poisoning.

Dr. Rodier can back up her words, as we discussed in the previous blog post. But let's say that again: Dr. Rodier uses research-based facts, not manipulated hypotheses, to come to her conclusion.

I need to get a clean copy of that document, one that looks as good as the information it contains. That document needs to get into the hands of people being lured by the pseudo scientists promoting autism as mercury poisoning.

Thank you, Dr. Rodier, for putting yourself on the line to testify. Thanks also to HHS for allowing these reports to be made public.

More Hot Air about HBOT

6 Apr

A few weeks ago, BMC Pediatrics published an article that purports to show that Hyperbaric Oxygen Therapy (HBOT) can produce “…significant improvements in overall functioning, receptive language, social interaction, eye contact, and sensory/cognitive awareness..” in autistic children. This study (Rossignol et al, 2009) is billed as a “…multicenter, randomized, double-blind, controlled trial.”

It’s all that and much, much less.

Let’s start by looking at the six “centers” where this research was carried out.

The Centers

The International Child Development Research Center (ICDRC):

This imposing name is attached to a rather less imposing edifice. The ICDRC, the brainchild of Dr. Jeffrey Bradstreet, is located in a strip mall in Melbourne, Florida, where it not only carries out "cutting-edge research" but also sells a complete line of "supplements" and treats autistic children with a dizzying array of "alternative", "biomedical" and "integrative" therapies, including HBOT.

Daniel Rossignol MD (Family Practice), Lanier Rossignol (Nurse Practitioner) and Scott Smith (Physician’s Assistant) were the authors from the ICDRC.

The Center for Autism Research and Education (CARE):

This “center” is located in Phoenix, Arizona and has – according to its website – a single practitioner, Cynthia Schneider, MD (OB/Gyn), who is also an author on this paper. One of the “integrative” therapies this “center” offers is HBOT.

One of the other authors, Sally Logerquist, is a PhD psychologist who – according to the paper – is also associated with CARE, but also appears to run social skills therapy groups for autistic children using the “Logerquist Excellent Attitude Program” (LEAP).

True Health Medical Center:

It’s rather difficult to find anything about this “center”, apart from the fact that it is located in Naperville, Illinois – in what appears to be an office complex. Anju Usman, MD (Family Practice) is the author associated with this location.

Neubrander Center:

Although not officially called a “center”, the office of James Neubrander, MD (Pathology) is apparently one of the “centers” of this study. His office is located in the Menlo Park Mall (near Macy’s) and offers – you guessed it! – HBOT as a treatment for autism.

Princess Anne Medical Associates:

A Family Practice medical group in Virginia Beach, Virginia, this “center” is the home of Eric Madren, MD (Family Practice). It’s not clear if this four-physician practice offers HBOT.

The Rimland Center for Integrative Medicine:

A small, one-physician “center” in Lynchburg, Virginia, this is the practice location of author Elizabeth Mumper, MD (Pediatrics). Not surprisingly, this “center” sells HBOT services for autistic children.

So, of the six “centers” involved in this study, five are single-physician operations. The remaining “center” has two physicians (three, if you count the naturopath).

I’m underwhelmed.

Well, what about the research itself? Maybe that’s better than the “facilities” might suggest. Let’s take a look.

The Subjects

This study initially enrolled 62 children (33 treatment; 29 control), but only 29 of the treatment group and 26 of the control group finished all 40 sessions. For reasons that pass my understanding, one treatment subject who only finished 9 sessions was included in the analysis. The authors stated that including this subject did not alter results, which raises the question: “Why did they include this subject if it made no difference?”

Outcome measures

The authors used the Aberrant Behavior Checklist (ABC), the Clinical Global Impression (CGI) scale and the Autism Treatment Evaluation Checklist (ATEC) as their outcome measures. The ABC and CGI are widely accepted for use in autism treatment trials; the ATEC is not.

The ABC is a 58-question checklist of – surprise! – aberrant behaviors which are each given a score from “0” (“not at all a problem”) to “3” (“severe problem”). This test has been used – and validated – in a number of disorders, including autism. It gives a global score as well as five subscales: a total of six measures.

The CGI is a generic rating scale used in a variety of clinical trials. For each parameter (e.g. “overall functioning”, “sleep pattern”), the rater gives a score of between “1” (“very much improved”) and “7” (“very much worse”). The authors had both the treating physician and the parents rate the subjects on overall improvement and eighteen discrete parameters: a total of 38 measures in all (19 by the physician and 19 by the parents).

The ATEC was developed by Bernie Rimland and Stephen Edelson and has not been validated. In fact, it has only been used in two published studies – one by Rossignol et al. The ATEC has 25 questions on which the evaluator rates the subject on either a three-point (“not true”, “somewhat true”, “very true”) or four-point (“not a problem”, “minor problem”, “moderate problem”, “serious problem”) scale. It provides a total score and four subscales: a total of five measures.

In all, each subject had a total of 49 evaluation measures (CGI scores and the change in ABC and ATEC scores), of which 47 are independent. The importance of this will become apparent in the section on statistical analysis.

Analysis

As I mentioned above, the decision to include one treatment subject who only completed nine sessions was curious. Why they included this subject and not any of the other three treatment subjects and three control subjects who also failed to complete the entire course of the study is concerning. The smart thing – and the proper response – would have been to drop this subject from analysis.

The authors’ method of analyzing the CGI scales was also curious. Rather than simply using the scores as they were provided, they took the scores and subtracted them from four (the “no change” score). There are a few problems with this.

For starters, the scores are not linear – the difference between “much improved” and “very much improved” is not necessarily the same as between “no change” and “minimally improved”. Nor is the difference between “no change” and “much improved” twice the difference between “much improved” and “very much improved”. For that reason, these types of numerical scores are often referred to as “pseudo-numbers”.

This may seem like nit-picking, but it is a serious concern. Imagine, if you will, that the numbers were replaced by colors. Is the difference between green and orange twice the difference between orange and red? If half of a population of birds are blue and the other half are yellow, is the “average” bird green? The simple fact is that it is not appropriate to treat these “scores” as though they were real numbers, to be added, subtracted and averaged.

Secondly, it appears that the authors used parametric statistics for their analysis of the CGI scores. This is a problem since – as I indicated above – it is nonsensical to do math on pseudo-numbers. I don’t have the raw numbers, so it isn’t possible for me to calculate the absolute impact of this mistake for all of the CGI subclasses, but I can figure out the raw numbers for one group, so let’s look at that one.

It took a little work, but the authors gave enough clues to tease out the raw numbers in the physician “overall functioning” CGI score. The treatment group had an “average” of 2.87 and the control group’s “average” was 3.62; using the unaltered data, a t-test [Note: not an appropriate use of the t-test] gives a p-value of 0.0006, not far from what the authors report. When a more appropriate statistical test [Mann-Whitney U-test] is used, the p-value is 0.002, very different from the reported 0.0008. While this is still less than the threshold p-value of 0.05, see below for a discussion of multiple comparisons.
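Since the authors’ raw CGI scores aren’t published, here is a purely illustrative sketch – with made-up scores, not the study’s data – of how a parametric t-test and the rank-based Mann-Whitney U-test can give different p-values for the same ordinal data:

```python
# Hypothetical CGI "overall functioning" scores (1 = very much improved,
# 7 = very much worse). Invented for illustration only -- these are NOT
# the raw data from Rossignol et al.
from scipy import stats

treatment = [1, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4]
control = [3, 3, 3, 4, 4, 4, 4, 4, 4, 5, 5]

# Parametric test: treats the ordinal "pseudo-numbers" as real numbers
t_stat, p_ttest = stats.ttest_ind(treatment, control)

# Rank-based test: uses only the ordering of the scores
u_stat, p_mwu = stats.mannwhitneyu(treatment, control, alternative="two-sided")

print(f"t-test p = {p_ttest:.4f}; Mann-Whitney U p = {p_mwu:.4f}")
```

The two tests generally disagree, and with small samples and heavily tied ordinal scores the disagreement can be large enough to matter – which is exactly the point above.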

All of these statistical analyses of the CGI scores ignore the fact that these are pseudo-numbers and need to be treated as discrete groups rather than as actual numbers. In truth, even the ABC and ATEC scores should have been treated this way, as well, although it is fairly common practice to treat such multi-factor scores as real numbers. A Chi-square test or Fisher Exact test would be the ideal test, but the problem with that is that the treatment group has one score of “1” (very much improved) and the control group doesn’t. Likewise, the control group has two subjects with a score of “5” (minimally worse) and the treatment group has none. This prevents a Chi-square or Fisher test from comparing each score independently.

One solution is presented by the authors themselves, although they apparently didn’t use it. In their discussion of the CGI, the authors said:

“Children who received a score of ‘very much improved’ or ‘much improved’ on the physician CGI overall functioning score were considered to be ‘good responders’ to treatment.”

If we “bin” the scores into “good responders” and “others”, we find that there were 9 (out of 30 – 30%) “good responders” in the treatment group compared to 2 (out of 26 – 8%) in the control group. Unfortunately, this is not a statistically significant difference (p = 0.08) in the (Yates) Chi-square test and barely reached significance (p = 0.05, but see below) in the Fisher Exact test.
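For those who want to check the arithmetic at home, the binned 2×2 table can be run through both tests using scipy (the counts are exactly those given above):

```python
# "Good responders" vs. "others", per the post: 9 of 30 treatment, 2 of 26 control
from scipy import stats

table = [[9, 30 - 9],   # treatment: responders, non-responders
         [2, 26 - 2]]   # control: responders, non-responders

odds_ratio, p_fisher = stats.fisher_exact(table)  # Fisher Exact test
# chi2_contingency applies the Yates correction by default for 2x2 tables
chi2, p_yates, dof, expected = stats.chi2_contingency(table)

print(f"Fisher exact p = {p_fisher:.3f}; Yates chi-square p = {p_yates:.3f}")
```

Run as-is, this reproduces the figures above: Fisher p ≈ 0.05 and Yates chi-square p ≈ 0.08.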

An even bigger problem in the statistical analysis was the failure to correct for multiple comparisons. This problem was brought up by one of the reviewers, and the authors responded by eliminating a table. They did not make the appropriate corrections.

The reason that multiple comparisons are a problem is that the analysis for statistical significance is based on probability. If the probability (the p-value) that the difference between the two groups (treatment and control) is due to random chance is equal to or less than 5%, that difference is considered to be “statistically significant” and accepted as real. That means that there is still a chance – up to 5%, depending on the p-value – that the difference is due to chance and not real.

If multiple comparisons are made on the same group of subjects, the probability that one (or more) of them will be “statistically significant” by chance starts to climb. If 14 comparisons are made, the chance of an erroneous “statistical significance” is over 50%. If 47 independent comparisons are made – as in this study – the chance of an erroneous “statistical significance” is over 90%.

For this reason, it is standard procedure to apply a correction for multiple comparisons. The most well-known (and simplest) of these is the Bonferroni Correction, which changes the threshold for statistical significance by dividing it by the number of comparisons. In the case of this study, the threshold (normally p less than or equal to 0.05 or 5%) is reduced to 0.001.
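The arithmetic behind these percentages is simple enough to check at home:

```python
# Family-wise error rate: the probability of at least one false positive
# among m independent comparisons, each tested at alpha = 0.05
alpha = 0.05

def familywise_error(m: int, alpha: float = 0.05) -> float:
    return 1 - (1 - alpha) ** m

print(f"14 comparisons: {familywise_error(14):.0%} chance of a spurious result")
print(f"47 comparisons: {familywise_error(47):.0%} chance of a spurious result")
print(f"Bonferroni threshold for 47 comparisons: {alpha / 47:.5f}")
```

With 47 independent comparisons, the family-wise error rate is about 91%, and the Bonferroni-corrected threshold is roughly 0.001 – the numbers used above.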

Applying the appropriate correction for multiple comparisons changes the results of this study significantly. Only the physician CGI scores for overall functioning and receptive language reach significance – and these numbers are already suspect because they were improperly handled to begin with. In fact, as I have shown above, the CGI “overall functioning” p-value wouldn’t reach significance. It is possible that – if the proper statistical tests were used – the CGI score for “receptive language” would also not reach significance.

Another curious thing. The authors asked the parents after the study whether they thought their child was in the treatment or the control group. Rather than say that the parents’ guesses were no better than random chance (i.e. 50%), the authors stated:

“…there was no significant difference between the two groups in the ability of the parents to correctly guess the group assignment of their child.”

As I said, this was a curious way to put it. As I read it, all this says is that each group of parents was equally able to guess which group their child was assigned to. That could be 50% accuracy (which would be equal to chance), but 90% or 99% accuracy – if both groups were that accurate – would also fit that description.

Now, this could simply be clumsy phrasing by the authors, or it could be a way to make it sound like their blinding was successful when it actually was not.

Summary

This study may have collected some useful data, but its analysis of that data rendered it useless. The CGI scores – where the only statistically significant result was (possibly) seen – were improperly manipulated and the wrong statistical analysis was used.

The other issue is that there is no discussion of why HBOT is thought to be superior to providing the same partial pressure of oxygen at room pressure. This study used 24% oxygen at 1.3 atm, which gives the same partial pressure of oxygen as 31% at sea level. This concentration of oxygen can be easily attained with an oxygen mask or simple oxygen tent – both of which are vastly less expensive than HBOT.
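The equivalence is just partial pressures (Dalton’s law); the arithmetic is trivial to verify:

```python
# Partial pressure of O2 in the chamber vs. the equivalent fraction at sea level
fio2_chamber = 0.24   # oxygen fraction used in the study
pressure_atm = 1.3    # chamber pressure, in atmospheres

po2 = fio2_chamber * pressure_atm   # O2 partial pressure: 0.312 atm
equivalent_fio2 = po2 / 1.0         # fraction giving the same pO2 at 1 atm

print(f"Equivalent sea-level O2 fraction: {equivalent_fio2:.1%}")
```

That is, 24% oxygen at 1.3 atm delivers the same oxygen partial pressure as roughly 31% oxygen at sea level – well within the reach of a simple mask.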

If the authors are arguing that the mild pressure of their inflatable HBOT chambers contributes to the treatment effect, they need to look at the literature on cell membrane compressibility. For those who want to do the calculations at home, the bulk modulus of water (the major component of cells) is 21,700 atm. This means that a 0.3 atm increase in pressure will reduce the cell volume by 0.0014%. The bulk modulus of the lipid bilayer in cell membranes is around 30,000 atm. This means that an increase of 0.3 atm pressure causes a 0.0010% reduction in membrane volume. These are well below the threshold for any clinical effects.
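In code, the same calculation (using the bulk moduli quoted above, which are approximate values):

```python
# Fractional volume change under a small pressure increase: dV/V = dP / K
dP = 0.3             # atm, overpressure of the inflatable chamber
K_water = 21_700     # atm, bulk modulus of water
K_membrane = 30_000  # atm, approximate bulk modulus of a lipid bilayer

water_compression = dP / K_water        # ~0.0014%
membrane_compression = dP / K_membrane  # ~0.0010%

print(f"water: {water_compression:.4%}; membrane: {membrane_compression:.4%}")
```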

Real pressure effects on the central nervous system are seen at pressures over 19 atm. These effects are:

dizziness
nausea
vomiting
postural and intention tremors
fatigue and somnolence
myoclonic jerking
stomach cramps
decreased intellectual and psychomotor performance
poor sleep with nightmares
increased slow wave and decreased fast wave activity in EEG

None of these effects could be construed as “improvements”, even in autism.

So, this study fails to answer the following questions about HBOT and autism:

[1] Does HBOT improve any feature of autism?
[2] If so, is HBOT any better than supplemental oxygen (which is much cheaper)?

The only real effect of this study was to give a cover of legitimacy to practitioners who are already using HBOT to “treat” autism.

Prometheus

Mild hyperbaric therapy for autism – Shh!…don’t say it’s expensive

30 Mar

When I recently wrote about the new HBOT-for-autism study (Rossignol et al. 2009)1, I took issue with the unlikely treatment pressures claimed for at least one of the study locations. While a potential methodological weakness, this is probably a fairly small problem in light of potential issues with blinding and with interpreting the results as quantitatively and objectively meaningful with respect to autism. But let’s set those potential issues aside for a moment.

Let’s assume that treatment with slightly enriched air (24% vs. 21% oxygen) in an inflatable hyperbaric chamber pressurized to 4 PSI2,3 above ambient atmospheric pressure, could confer some sort of benefit to an autistic child.

I’m not suggesting that it does confer benefit. I’m asking readers to set aside any knowledge of hemoglobin’s role in oxygen transport, as well as any knowledge of real hyperbaric oxygen therapy (breathing 100% oxygen at greater than 1 ATA)4, and evaluate a simpler proposition. Accept the proposal that some sort of benefit is scientifically possible, but then ask yourself a fairly simple question:

Equivalent oxygen delivery to 24% O2 at 4 PSI above ambient atmospheric pressure in an inflatable hyperbaric chamber can be achieved with simple oxygen therapy (an oxygen mask) at a fraction of the cost5 – so why is a study of the hyperbaric version of this increased oxygen important?

One possibility: studying what’s already for sale

While some might call it being on the “cutting edge”, others may consider it putting the cart before the horse. No matter how you see it, it’s no secret that some Defeat Autism Now practitioners were already selling this type of hyperbaric oxygen therapy well before this study came out. It should be noted that this study’s authors did disclose this conflict of interest with respect to derivation of revenue in their clinical practices from HBOT.

DAR, LWR, SS, CS, AU, JN, EMM, and EAM treat individuals with hyperbaric treatment in their clinical practices and derive revenue from hyperbaric treatment.

Lisa Jo Rudy over at autism.about.com6 had additional comments about the subject:

Dr. Rossignol is “the” proponent of HBOT, and has been speaking at conferences all over the world in support of the treatment. Clearly, he has a personal and professional stake in seeing that the outcomes of a research study are positive.

The present study was funded by the International Hyperbarics Association, a trade group of private hyperbaric therapy centers. Clearly, they have a similar stake in seeing positive outcomes.

While there may certainly be an aspect of genuine scientific interest in understanding if this type of hyperbaric oxygen therapy is beneficial for autistic kids, I think there may also be a certain degree of assumption that it is. After all, why would a practitioner already be selling something if they didn’t “believe” it worked? Given the stated conflicts of interest, it doesn’t seem implausible that the authors might have an interest in seeing a long-term revenue stream that could come from additional, and deeper, pockets than those of parents willing to “believe” and pay – despite the lack of really convincing scientific evidence at this point.

Consider the following portions of an interview with Dr. Dan Rossignol7:

We chose 1.3 ATA because a lot of children with autism are currently receiving this dose and we are hoping to prove that it works.

“Hoping to prove that it works.”

Dr. Rossignol’s point does not seem unclear. HBOT is popular, and he is, in his own words, “hoping to prove that it works”. This is a valid reason, I suppose, if he is also open to the possibility that it may not, or that it may be a completely moot point if something on the order of one tenth of the cost can do the same thing. Following Dr. Rossignol’s communication about the hope to “prove that it works”, the interviewer asks:

How is the insurance situation coming along?

Insurance situation? Coming along? Was this situation already a well-known “work in progress” back in 2006 (e.g. had it been decided by some, prior to the science, that “mild” HBOT for autism does work, and that insurance reimbursement is really the goal now)? Let’s see if we can get Dr. Rossignol’s take on this.

Well, obviously, HBOT is not approved for autism, but we hope to get there. Interestingly, if you take the ABC scale and look at the lethargy subset score, we saw a 49% improvement in symptoms at 1.5 ATA with a p-value of 0.008. If you look at the New England Journal of Medicine study on risperidone from 2002, there was a 56.9% improvement on the ABC irritability subscale with a p-value < 0.001. So the results we had on these 6 children with 1.5 ATA approached the percentage improvement seen with a drug approved for the use in autism. We just need to be able to reproduce these type of findings in a placebo study.

Hopefully when we finish these studies and show that hyperbaric therapy works, then insurance reimbursement will follow.

I don’t necessarily see a genuine scientific perspective here, but that could just be me. I get more of a vibe (at least from this interview) that the interest may lie more in “finishing” the studies and showing “that hyperbaric therapy works”, rather than actually finding out, with really good quality scientific methodology, whether or not it really does work. I’ll acknowledge that I could be wrong about this. Do you think readers will have noticed that the study result mentioned for comparison was from 1.5 ATA, and probably totally irrelevant to the 1.3 (or less) studies?

Is it just me, or would it seem naive to expect that a few studies like the recent one are really going to catalyze insurance reimbursement in the long run? I get the impression that many parents may believe this. Insurance companies work to achieve cost efficiencies. One of the ways they do this is by reimbursing at higher rates for equivalent things at lower costs – hospital stays in contracted facilities, generic drugs as compared to name-brand versions, etc. Why on earth would an insurance company reimburse for a 4-5% increase in blood oxygen content for a couple of hours at a time, in an inflatable hyperbaric chamber (at a few thousand dollars a month), when the identical oxygen increase could be delivered with a simple oxygen mask (for under $200 a month)?

You don’t have to take my word for this comparison of oxygen delivery, you can take Dr. Rossignol’s acknowledgement in that same interview:

Some people have criticized using mild hyperbarics at 1.3 ATA because they state that when compared to this pressure, you can get just as high an oxygen concentration in the blood with oxygen by face mask without a chamber. And this may be true in some cases.

In fact, it’s true in most (if not all) cases. The physics of partial pressures does not discriminate. But there may be more to the story.

Squeeze in some hope

After acknowledging the reality of the partial pressure comparison problem, Dr. Rossignol continues:

However, we must remember we are dealing with 2 separate components with HBOT — the oxygen and the pressure. So it appears that many of the effects of HBOT are from the increased oxygen, but we cannot dismiss the pressure effect. I think we need more studies on this as well.

So “many of the effects” are from the oxygen increase, but we can’t dismiss the pressure effect? What pressure effect? Is there a demonstrated significant clinical effect for autism from a very slight, and very temporary, increase in atmospheric pressure alone?

Although I suppose it is possible, a clinically significant effect for autism at such low pressures doesn’t seem likely at all. If it turns out that I am incorrect, this may be good news for some of the parents of autistic children in several U.S. cities: Albuquerque, NM (5312′ AMSL), Aurora, CO (5471′ AMSL), Colorado Springs, CO (6035′-7200′ AMSL), Denver, CO (5280′ AMSL), Reno, NV (4505′ AMSL), and Salt Lake City, UT (4226′ AMSL), to name a few. Something as simple as a move to a city closer to sea level might provide increases in atmospheric pressure not unlike those provided by the inflatable hyperbaric chambers. If there were some beneficial effect of slight additional atmospheric pressure for autism, surely there would have been some observations (anecdotal or media reports) over the years of families with autistic children who moved from states like Colorado to lower-elevation states like California – and noticed. Who knows? Perhaps this is something yet to be uncovered.
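A back-of-the-envelope comparison (my own arithmetic, not from the post’s sources, using the isothermal barometric formula – which is only an approximation) suggests the pressure gained by such a move is in the same ballpark as the chamber’s overpressure:

```python
# Rough comparison: ambient pressure at Denver's elevation vs. sea level,
# next to the ~4 PSI overpressure of an inflatable chamber.
import math

SCALE_HEIGHT_M = 8_400     # approximate atmospheric scale height, meters
denver_m = 5_280 * 0.3048  # Denver's elevation (5,280 ft) in meters

p_denver = math.exp(-denver_m / SCALE_HEIGHT_M)  # in atm (sea level = 1.0)
gain_from_move = 1.0 - p_denver                  # ~0.17 atm
chamber_overpressure = 4 / 14.7                  # 4 PSI in atm, ~0.27

print(f"Denver ambient: {p_denver:.2f} atm; moving to sea level gains "
      f"{gain_from_move:.2f} atm; the chamber adds {chamber_overpressure:.2f} atm")
```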

So, aside from the fact that an identical oxygen increase can be achieved with simple O2 therapy without a hyperbaric chamber at all (and at a fraction of the cost), and aside from the point that the minute pressure increase (while certainly possible in a strict scientific sense) isn’t known to be a likely candidate to significantly clinically impact autism, is there anything else about this newest HBOT-for-autism study that may merit some critical thought? Maybe, but it’s really just a side-note (perhaps interesting to some, but not terribly relevant to the science itself).

Who farted in the HBOT chamber? (Shh!…Don’t say it’s expensive)

The original manuscript8 for this study contained what I thought was an appropriately realistic comment from the authors in the conclusion. This comment has value in terms of practical knowledge that readers who are not familiar with hyperbaric oxygen therapy would probably find useful. What follows is the first-draft conclusion of this study with that comment emphasized.

Hyperbaric treatment is a relatively time-intensive treatment and can be costly. However, given the positive findings of this study, and the shortage of proven treatments for individuals with autism, parents who pursue hyperbaric treatment as a treatment for their child with autism can be assured that it is a safe treatment modality at the pressure used in this study (1.3 atm), and that it may improve certain autistic behaviors. Further studies are needed by other investigators to confirm these findings; we are aware of several other planned or ongoing studies of hyperbaric treatment in children with autism.

Again, Lisa Jo Rudy over at autism.about.com notes:

No insurance company will cover the very high cost of HBOT for autism, as it is considered an experimental and unproven therapy.

But the above conclusion is not the conclusion that appeared in the peer-reviewed, edited version. Here it is:

Given the positive findings of this study, and the shortage of proven treatments for individuals with autism, parents who pursue hyperbaric treatment for their child with autism can be assured that it is a safe treatment modality at the pressure used in this study (1.3 atm), and that it may improve certain autistic behaviors. Further studies are needed by other investigators to confirm these findings; we are aware of several other planned or ongoing studies of hyperbaric treatment in children with autism.

Why would the authors remove that valuable bit of practical knowledge about time requirements and high cost? Apparently due to a comment from referee #3 for this paper.

Discretionary Revisions

Page 24 In view of the highly positive findings of this study and the fact that no other trial has demonstrated such benefits under strictly controlled conditions to open the conclusions with negative comments demeans the study. Many other inventions used for ASD children are equally time consuming and hyperbaric treatment need not be expensive.

Authors: “The negative comments were removed from the conclusion.”

Opening the conclusion with negative comments demeans the study? Such comments don’t really touch the content of the study itself; what the now-absent comment did do was provide some practical perspective – quite likely, very accurate practical perspective. Why would referee #3 suggest that the practical comments demean the study? Perhaps it was meant that the comments demean the use of mild hyperbaric oxygen therapy as an autism treatment (therefore actually demeaning a desired interpretation of this study)? That would seem a real possible concern, since the justification offered has absolutely nothing to do with the study itself, and doesn’t amount to much more than logical fallacy and simple assertion.

“Many other inventions used for ASD children are equally time consuming…”

This is about as basic an example of the “two wrongs make a right” fallacy as can be presented. Two wrongs don’t make a right. Just because other interventions are also time consuming does not mean a researcher is unjustified in adding – or shouldn’t add – the practical point that HBOT is relatively time consuming. Further, if the authors are aware of such a potential practical issue, it could be argued that ethics would dictate that it be mentioned. Other treatments presenting similar impracticalities do not automatically relieve any potential ethical responsibility in this regard.

“…hyperbaric treatment need not be expensive.”

Compared to what? Hyperbarics in a gold-plated hyperbaric chamber? If there is no significant effect for autism from the brief, small increase in pressure in one of these inflatables, the increased oxygen delivered by providing 24% O2 at 4 PSI above ambient atmospheric pressure is easily matched (or exceeded) with simple O2 therapy. In short, this type of hyperbaric treatment would be the hard way, and the expensive way, to achieve the results.

Referee #3 also added the following comment:

The reviewer has a preference for the word treatment rather than ‘therapy’. In view of the proven changes that relate to increased inpsired fractions of oxygen it is suggested that treatment would be preferable.

Authors: “The word “therapy” has been replaced with “treatment” throughout the paper.”

The “T” in the acronym “HBOT” does, in fact, represent the word “therapy” in medical usage. I happen to think the terms “treatment” and “therapy” are fairly interchangeable in the context of drug delivery, but I do wonder if there is any significance to such a preference. Is this a semantics issue that has the potential to impact the perceptions of those who make decisions about insurance coverage for autism? But I digress. So what’s up with these comments from referee #3 – comments with a little fallacious reasoning, that express possible concern about the perception of a high price tag for mild hyperbaric oxygen therapy, and that communicate a preference for the word “treatment” over “therapy”?

I honestly don’t know. What I can tell you is that referee #3 was Philip James, MD. Dr. James is a professor in the field of hyperbaric medicine and hails from the U.K. He appears to have published quite a bit in the field of hyperbaric medicine as well.

According to the International Hyperbarics Association website:

Dr. James is responsible for founding the Hyperbaric Trust in the United Kingdom which promotes the treatment of cerebral palsy and the brain injured child and was responsible for having the National Health Service pay for this therapy.

Dr. James (Referee #3) appears to have been categorized (with a doctor profile) as a medical advisor to International Hyperbarics Association back in February of 2006 (shortly before this study9 began). Hey wait a minute, there’s that name again – International Hyperbarics Association. Where have I seen that before? Oh yeah, in the study itself:

We are grateful for the work of Shannon Kenitz of the International Hyperbarics Association (IHA) for an unrestricted grant which funded this study, which included use of hyperbaric chambers and funding for all hyperbaric technician salaries during the study. The IHA had no involvement in the study design, collection, analysis, interpretation of data, writing of the manuscript, or in the decision to submit the manuscript for publication.

I’m not sure how the International Hyperbarics Association defines itself exactly – are its listed medical advisors excluded from that definition? That would seem likely.

As of this writing, Dr. Rossignol is listed as a medical advisor at the IHA website. 10
As of this writing, Dr. Neubrander is listed as a medical advisor at the IHA website. 10
As of this writing, Dr. James (referee #3), is categorized as a medical advisor at the IHA website with a physician profile page.11,12

Side notes aside, where to from here?

So all in all, it seems that “mild” HBOT-for-autism researchers may have their work cut out for them. Although probably not very likely, it is possible that a small, temporary change in atmospheric pressure could do something for autism; that should be studied next, and then better replications should follow.

In the long run, it will be difficult to ignore the scientific fact that simple oxygen therapy alone can easily provide identical increases in blood oxygen content, at a fraction of the cost of mild hyperbaric oxygen therapy (as it is currently being studied for autism). If HBOT-for-autism proponents think insurance companies should step up to pay for an expensive treatment that provides a 4-5% increase in blood oxygen (without scientifically establishing a benefit of the small and temporary pressure increases), they might do well to consider these famous words (most recently from Barack Obama) – “You can put lipstick on a pig. It’s still a pig.”

Notes:

1 Hyperbaric treatment for children with autism: a multicenter, randomized, double-blind, controlled trial

Click to access 1471-2431-9-21.pdf

2 Medical device pre-market notification (FDA-cleared)

Click to access K001409.pdf

3 Manufacturer product sheet

Click to access vitaeris-lowres2007-8.pdf

4 Definition of Hyperbaric Oxygen Therapy
http://www.uhms.org/ResourceLibrary/Indications/tabid/270/Default.aspx

5 Hyperbarics and Hypotheses

Schooling and Statement

6 Hyperbaric Oxygen as a Treatment for Autism: Let the Buyer Beware
http://autism.about.com/b/2009/03/14/hyperbaric-oxygen-as-a-treatment-for-autism-let-the-buyer-beware.htm

7 Interview with Dr. Dan A. Rossignol: Hyperbaric Oxygen Therapy Improves Symptoms in Autistic Children

Click to access Rossignol%20HBOT%20Medical%20Veritas%202.pdf

8 Pre-publication history
http://www.biomedcentral.com/1471-2431/9/21/prepub

9 ClinicalTrials.gov Identifier: NCT00335790
http://clinicaltrials.gov/ct2/show/NCT00335790

10 Medical Advisors
http://www.ihausa.org/

11 Index of /docs
http://www.ihausa.org/docs/

12 International Hyperbarics Association Medical Advisor – Professor Philip B. James, M.D.
http://www.ihausa.org/docs/james.html

Features of autism

29 Mar

I was planning on writing something about this for the 1 year anniversary of when the Department of Justice concession to Hannah Poling was leaked.

Why wait until now? Because it was basically impossible to discuss this last year. Immediately after the leak, the phrase “features of autism” was made into a running joke. The vaccines-cause-autism people all made great fun of how the government coined the phrase, presumably to avoid using the simple word, autism.

Anyone want to go back and look at the document now? Search for the word “features”.

First hit:

Dr. Zimmerman observed that [Hannah Poling] watched the fluorescent lights repeatedly during the examination and would not make eye contact. Id. He diagnosed [Hannah Poling] with “regressive encephalopathy with features consistent with an autistic spectrum disorder, following normal development.”

Note that that’s in quotes: “features consistent with an autistic spectrum disorder”. That’s right: Andrew Zimmerman, Hannah Poling’s own neurologist, used the phrase “features of autism” about her long before the Department of Justice ever did.

This is the same Andrew Zimmerman who submitted an expert report on Hannah Poling. This is the same Andrew Zimmerman who wrote an expert report, for the government side, in the Autism Omnibus Proceeding.

Not the only place “features” is mentioned in the Rule 4(c) report, either:

Second Hit:

[Hannah Poling] was evaluated by Alice Kau and Kelley Duff, on May 16, 2001, at CARDS. Pet. Ex. 25 at 17. The clinicians concluded that [Hannah Poling] was developmentally delayed and demonstrated features of autistic disorder.

So, why is it surprising that the Department of Justice would write:

In sum, DVIC has concluded that the facts of this case meet the statutory criteria for demonstrating that the vaccinations [Hannah Poling] received on July 19, 2000, significantly aggravated an underlying mitochondrial disorder, which predisposed her to deficits in cellular energy metabolism, and manifested as a regressive encephalopathy with features of autism spectrum disorder.

That’s the third place that “features” is used in the concession document. But, hey, it isn’t funny to talk about Hannah Poling’s own specialists describing her as having “features” of an autistic spectrum disorder.

It is very easy to make more out of this than is warranted by the scant information we have available. We don’t know what is in the rest of the documents that were provided as part of the case. What we do know is that the U.S. government did not create the phrase “features of autism” to describe Hannah Poling.

Pardo letter on neuroinflammation

26 Mar

Neuroinflammation was a big subject in the Autism Omnibus. This was especially true in the “second theory of causation” hearings, which concentrated on thimerosal-containing vaccines as a possible causative factor in autism.

Here’s a slide from Dr. Aposhian’s presentation at the Omnibus, which shows the basic logic flow:

[Slide 76 of Dr. Aposhian’s Omnibus presentation]

Or, to put it simply: thimerosal is metabolized to ethylmercury, which deposits mercury ions in the brain, causing neuroinflammation, which in turn causes autism.

Yes, there are a lot of missing steps in proving this idea. But, for now, let’s just think about neuroinflammation. The term, as Dr. Aposhian makes clear in his report and slides, is relatively new, having been coined in the 1990s. Dr. Aposhian and others spent a lot of time discussing neuroinflammation, astrocytes and glial cells.

The research on neuroinflammation in regard to autism comes mainly from researchers at Johns Hopkins. In particular, Dr. Aposhian cites (on slide 79 of his presentation) the Vargas paper, pulling a quote:

Vargas et al., Neuroglial activation and neuroinflammation in the brain of patients with autism. Ann Neurol 57:67-81, 2005:

“Our findings indicate that innate neuroimmune reactions play a pathogenic role in an undefined proportion of autistic patients…”

It is important to note that one of the authors on that paper was Dr. Andrew Zimmerman, whose expert report Kev recently blogged. That’s right, Dr. Zimmerman prepared an expert report for the government. The anchor author on the Vargas paper was Dr. Carlos Pardo.

It turns out that a letter from Dr. Pardo is included in the Omnibus docket as well. Here’s the introduction paragraph from Dr. Pardo’s letter:

As per our conversation last year, I would like to clarify some of the concepts regarding the role of neuroimmune response in the brain of patients with autism and the potential significance of such findings in the pathogenesis and pathobiology of the disorder.

Good–he’s trying to clarify some points of his paper. Just the sort of letter we want to read. It is rather thick on the science. Let me cherry pick one sentence, if I may:

These findings are inconsistent with the hypothesis of a potential toxic effect on astrocytes by neurotoxins or toxic material.

It strikes me that the families with claims in the vaccine court are in a really difficult position. Their lawyers and experts are arguing the thimerosal causation issue largely on the idea of neuroinflammation. The problem is that the key people in neuroinflammation and autism research are experts for the other side.

The thimerosal cases depend on neuroinflammation. Does anyone else see this as a really tough battle to fight, given that the few world experts on the subject disagree with the contention that neuroinflammation in autistic brains is due to neurotoxins?

Is there an autism epidemic? – the latest science

25 Mar

A new paper from Eric Fombonne has been published online ahead of print at the journal Pediatric Research. It will apparently appear in the print version of the journal some time after April.

The title is ‘Epidemiology of pervasive developmental disorders’ and, as the name suggests, Fombonne looks at all the available quality epidemiology he can find relating to PDDs.

This article reviews the results of 43 studies published since 1966 that provided estimates for the prevalence of Pervasive Developmental Disorders, including Autistic Disorder, Asperger Disorder, Pervasive Developmental Disorder Not Otherwise Specified, and Childhood Disintegrative Disorder.

Combining all these categories, Fombonne presents a prevalence of 60-70/10,000.

For autistic disorder, Fombonne says:

The correlation between prevalence and year of publication was statistically significant and studies with prevalence over 7/10,000 were all published since 1987. These findings point towards an increase in prevalence estimates in the last 15-20 years.

For PDD-NOS, Fombonne explains that it is next to impossible to get accurate prevalence rates as:

This group has been much less studied in previous epidemiological studies…

Again, for Asperger syndrome, Fombonne says that AS-specific epidemiological studies are sparse but, in something of a surprise:

By contrast, other recent autism surveys have consistently identified smaller numbers of children with AS than those with autism within the same survey. In 9 out of 10 such surveys, the ratio of autism to AS prevalence in each survey was above unity, suggesting that the prevalence of AS was consistently lower than that for autism. How much lower is difficult to establish from existing data, but a ratio of 3 or 4 to 1 would appear an acceptable, albeit conservative, conclusion based on this limited available evidence. This translates into a prevalence proportion for AS which would be ⅓ to ¼ that of autism. We therefore used for subsequent calculations an estimate of 6/10,000 for AS, recognizing the strong limitations of available data on AS.
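The arithmetic behind that AS estimate is easy to check. A quick back-of-envelope sketch in Python, assuming an autistic-disorder prevalence of roughly 20/10,000 (Fombonne’s approximate figure for autistic disorder; an assumption on my part, not stated in the quoted passage):

```python
# Hypothetical check of the AS estimate. autism_per_10k is an assumed
# figure (~20/10,000 for autistic disorder), not quoted above.
autism_per_10k = 20.0

# A 3-or-4-to-1 autism:AS ratio means AS prevalence is 1/3 to 1/4
# that of autism.
for ratio in (3, 4):
    as_per_10k = autism_per_10k / ratio
    print(f"ratio {ratio}:1 -> AS estimate {as_per_10k:.1f}/10,000")
```

The 6/10,000 figure Fombonne adopts sits inside that 5.0–6.7/10,000 range, on the conservative side.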

Lastly, for CDD:

Eight studies provided data on childhood disintegrative disorder (CDD). Prevalence estimates ranged from 0 to 9.2/100,000. The pooled estimate based on eight identified cases and a total surveyed population of 406,660 children, was 2.0/100,000. The upper-bound limit of the associated confidence interval (4.0/100,000) indicates that CDD is a very rare condition, with about 1 case to occur for every 103 cases of autistic disorder.
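The pooled figure in that quote is simple arithmetic – total cases over total surveyed population. A quick check (Python):

```python
# Reproducing the pooled CDD estimate quoted above:
# 8 identified cases in a combined surveyed population of 406,660 children.
cases = 8
population = 406_660

pooled_per_100k = cases / population * 100_000
print(f"pooled CDD prevalence: {pooled_per_100k:.1f}/100,000")  # ~2.0/100,000
```

The quoted upper confidence bound (4.0/100,000) comes from an exact interval on the 8 observed cases, which I haven’t reproduced here.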

Fombonne then tackles the question everyone wants an answer to – is there an autism epidemic?

In order to answer this accurately, he explains that there has to be tight control over estimates of incidence (the number of new cases occurring in a population over a period of time) and prevalence (the proportion of individuals in a population who have a defined disorder at a given time). Failure to control these gives false results. Bearing this in mind, Fombonne goes through the five approaches taken so far to try and determine whether there’s an autism epidemic or not.
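The incidence/prevalence distinction carries the whole argument, so here is a toy illustration with made-up numbers (all figures hypothetical, purely to show the two formulas):

```python
def prevalence(existing_cases, population):
    """Proportion of a population that has the condition at a point in time."""
    return existing_cases / population

def incidence_rate(new_cases, person_years):
    """New cases per unit of person-time at risk."""
    return new_cases / person_years

# Hypothetical town of 100,000 people: 650 existing cases on census day,
# 40 new diagnoses over one year of follow-up.
prev = prevalence(650, 100_000)      # 65 per 10,000
inc = incidence_rate(40, 100_000)    # 4 per 10,000 person-years

print(f"prevalence: {prev * 10_000:.0f}/10,000")
print(f"incidence:  {inc * 10_000:.0f}/10,000 person-years")
```

Better screening can raise measured prevalence overnight without a single new case occurring; only a rise in incidence would mean an epidemic.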

1) Referral Statistics.
Time trends in referral statistics are not reliable. They fail to control for things such as referral patterns, availability of services, heightened public awareness, decreasing age at diagnosis and changes over time in diagnostic concepts and practices. An example of the problems with referral statistics:

Strong evidence of “diagnostic switching” was produced in California and in all US states indicating that a relatively high proportion of children previously diagnosed as having mental retardation were now identified as having a PDD diagnosis. Decreased age at diagnosis has also been shown to contribute to the rising numbers of children diagnosed with PDD. In the UK, Jick and Kaye (62) have shown that the incidence of specific developmental disorders (including language disorders) decreased by about the same amount that the incidence of diagnoses of autism increased in boys born from 1990-1997. A more recent UK study has shown that up to 66% of adults previously diagnosed with developmental language disorders would meet diagnostic criteria for a broad definition of PDD.

2) Comparison of cross-sectional epidemiological surveys
If I’m understanding his point here (and please correct me if I’m not), Fombonne is saying that too many epidemiological studies are uniquely designed – not enough attempt to replicate a previous study – and hence:

The most convincing evidence that method factors could account for most of the variability in published prevalence estimates comes from a direct comparison of 8 recent surveys conducted in the UK and the USA. In each country, 4 surveys were conducted around the same year and with similar age groups. As there is no reason to expect huge between-area differences in prevalence, prevalence estimates should therefore be comparable within each country. However, there was a six-fold variation in prevalence for UK surveys, and a fourteen-fold variation in US figures. In each set of studies, high estimates derived from surveys where intensive population-based screening techniques were employed whereas lower prevalence proportions were obtained from studies relying on passive administrative methods for case finding. Since no passage of time was involved, the magnitude of these gradients in prevalence can only be attributed to differences in case identification methods across surveys.

3) Repeat surveys in defined geographical areas
So this is the opposite of the above – these are studies that replicate earlier ones as closely as possible. The issue here is that there are simply not _enough_ of these studies to form a definite conclusion. It may be worth noting, though, that of the two sets of studies Fombonne highlighted as being carried out in exactly the same way, in exactly the same place, on exactly the same age cohort – just at two different times – one showed no increase in prevalence, whilst the other showed no increase at four sites and an increase at two sites.

4) Successive birth cohorts
This means that in very large surveys with a wide age range, if the proportion of people who have autism rises across successive birth cohorts, this _could_ be a rise in incidence and therefore a good hint that there is an epidemic. I say _could_ as other possible causes need to be ruled out first.

…two large French surveys [used this method]. The surveys included birth cohorts from 1972 to 1985…, and, pooling the data of both surveys, age-specific prevalence showed no upward trend.

A US survey _did_ show an upward trend but:

…the increase was not specific to autism. These analyses also showed a marked period effect that identified the early 1990s as the period where the prevalence estimates started to go up in all ages and birth cohorts, coinciding closely with the inclusion of PDDs in the federal Individual with Disabilities Educational Act (IDEA) funding and reporting mechanism in the US.

5) Incidence studies
The few incidence studies did show incidence trends rising over short periods of time. As noted in point 4) above, this _could_ be attributed to an autism epidemic. However –

…none of these investigations could determine the impact of changes over time in diagnostic criteria, improved awareness and service availability on the upward trend.

Contrary to what people who _want_ there to be an autism epidemic might claim, these are non-trivial reasons. It stands to reason that if (for example) Birmingham, UK – the country’s second city – goes from having zero service availability and no means of diagnosis in 1960 to having numerous types of service availability, both publicly and privately funded, and a _lot_ of means of diagnosis in 2000, there will be a _lot_ more autistic people in Birmingham. A hell of a lot. When we then consider that the diagnostic criteria have widened massively, we go from a hell of a lot more autistic people to a _whole hell_ of a lot. If we _also_ consider that people who used to carry one kind of diagnosis are now being switched to autism, then we go from a whole hell of a lot to a descriptive term beyond my ability. This isn’t even science – it’s basic common sense. The only issue is, ‘a whole hell of a lot’ is not a very accurate measurement.

Fombonne closes by saying that – based on the available data – we still cannot really say one way or the other if there has been an autism epidemic. Remember when you read the quote below that it’s _incidence_ that gives us an epidemic.

Current evidence does not strongly support the hypothesis of a secular increase in the incidence of autism but power to detect time trends is seriously limited in existing datasets. Whilst it is clear that prevalence estimates have gone up over time, this increase most likely represents changes in the concepts, definitions, service availability and awareness of autistic-spectrum disorders in both the lay and professional public. To assess whether or not the incidence has increased, method factors that account for an important proportion of the variability in prevalence must be tightly controlled. The possibility that a true change in the underlying incidence has contributed to higher prevalence figures remains, however, to be adequately tested.