Archive | Science

It’s different for girls

13 Apr

One of the puzzling things about autism has always been the disparity between the sexes.  Boys have always been more susceptible than girls. This is not in itself unusual. There are gender differences affecting a whole range of conditions and, if this New York Times article is correct, men frequently come off worse.

But if boys are more susceptible you might have assumed that as the severity of the condition increase this disparity would become more marked. In fact you would have assumed wrong. According to this source:

The greater severity and lower frequency of autism in females has been cited as evidence for a multifactorial polygenic mode of inheritance with differential loading by sex, which predicts greater severity in the less frequently affected sex.

Greater severity is usually taken to include severe cognitive impairment as well, and the greater the degree of cognitive impairment, the closer the ratio between boys and girls. But there are problems with this model. David Skuse has argued that the association between cognitive impairment and autism is not because they share a common cause but simply because if you have both conditions you are more likely to be seen by a clinician and get a diagnosis. More able people may be just as autistic but have coping strategies that enable them to avoid a diagnosis. And if girls have better coping strategies than boys, they will be overrepresented amongst autistic people without cognitive impairment who are missed by the system.

Last week Woman’s Hour broadcast a segment on Asperger syndrome that took up this argument and suggested that there may be as many girls as boys on the spectrum. Most of them are not getting a diagnosis because they present in ways that are unfamiliar to clinicians who are used to seeing the condition in boys. The programme is no longer available, but Sunday’s Observer carried a two-page spread on the same story.

Experts like Judy Gould and Tony Attwood cited by the Observer still believe there is a gender difference, but they estimate that it is only 2.5:1. Asperger girls may be more passive than boys. They do not assert their difference or draw attention to themselves. Instead they observe and copy other people’s behaviour. Their special interests may be intense but are also likely to be more socially acceptable: reading fiction, following soaps, celebrity culture – the sort of thing that lots of other girls do – and so they do not stand out.

Conformity comes at a cost. The Observer quotes Tony Attwood’s estimate that 20% of anorexic girls are undiagnosed autistics. Then there is self-harm and other evidence of psychological stress. There are important differences between men and women. They need to be understood and respected. But it does not help autistic women if autism is described as an extreme male brain syndrome. The Observer ends by quoting Professor David Skuse, who believes that:

“if we can prove the ratio of boys to girls is as high as many of us suspect, it would be as significant a milestone in this field as the discovery that the condition is on a spectrum.”

The Somali Minnesotan Autism Epidemic

13 Apr

One of the most intriguing scenarios to pop up from the so-called autism epidemic happened in Minnesota in the US. It was noted that there were what seemed to be disproportionately high numbers of second-generation Somali children in Minnesotan schoolrooms. Of course, this was immediately latched on to by the usual suspects, despite the cautions issued by health authorities that until proper epidemiological studies had been done it would be impossible to say whether this really was a cluster worth investigating or just a coincidence.

On the 31st March, the Minnesota Department of Health (MDH) released the study. It’s a fairly substantial read. One thing immediately struck me:

Administrative prevalence of Somali children, ages 3 and 4, who participated in the MPS ECSE ASD programs was significantly higher than for children of other races or ethnic backgrounds

OK, so this is what the families were saying. But a strong note of caution is needed here. This is basically CDDS all over again. Just as some believed that an increasing number of reports to CDDS meant that there was an autism epidemic, and just as CDDS said its reports really shouldn’t be used to study these things, MDH are also saying:

Because of the study’s limitations, it is not proof that more Somali children have autism than other children…

This is a vital point. Back in 2005, James Laidler made the clear point that Department of Education data on autism are not reliable for tracking autism prevalence.

Sadly, the new Age of Autism editor, Abdulkadir Khalif, either misunderstood or elected to ignore this issue when he said:

It was obvious from the numbers that the issue of prevalence has finally been settled, and that there definitely is a cluster of autism in Minneapolis

It is clear from the report that Khalif has grossly overstated the case. Firstly, the issue of prevalence is far from settled. MDH seem to be solely using educational data which, as pointed out by Laidler, is not reliable for tracking autism prevalence. Indeed the phrase ‘administrative prevalence’ used by MDH reflects this. ‘Administrative prevalence’ refers solely to numbers of kids in educational programs. This is a clear distinction from ‘prevalence’ which is the proportion of individuals in a population who suffer from a defined disorder. Using only educational data gives a distorted picture.

As has been shown, the USDE data on autism are at odds with studies of autism prevalence, largely because the criteria used by the school districts (the source of the USDE data) to categorize children as autistic are neither rigorous nor consistent. They are inconsistent over time, as are the medical criteria, and are inconsistent from region to region. The USDE data are not reliable for tracking the prevalence of autism, and they in fact never were meant to fill this need.

Secondly Khalif uses the word ‘cluster’ whereas the MDH report does not use it at all. And it is not a word that should be used in such a throwaway fashion – it has a distinct epidemiological meaning.

So clearly, contrary to Khalif’s assertion that the issue of prevalence has been settled, it has not. Contrary to his statement that there is a cluster, there has been no such epidemiological assessment or statement.

Here are some more quotes from the MDH report that Abdulkadir Khalif either chose to ignore or never actually read:

The fact that a child is participating in an ASD early childhood program is an indicator of educational need, but that child may or may not have a medically diagnosed ASD.

Further, Minnesota’s public school open enrollment policy allows children to attend special education programs in school districts where they are not residents. This raised the question of whether participation rates for Somali children might appear higher than the participation rates for non-Somali children because of an influx of Somali children who are not residents of the Minneapolis school district attending MPS ECSE programs for ASD.

Data on variability of ASD prevalence by race, ethnicity, and SES is limited and inconclusive, and apparent differences between racial and ethnic populations may largely be due to differences in case finding and service provision.

Across all assumptions and ASD program types, administrative ASD prevalence estimates for Somali children were uniformly higher than the U.S. parental reported ASD prevalence, but most of the 95% confidence intervals corresponding to the administrative prevalence estimates for Somali children contained the value of the U.S. parental reported ASD prevalence estimate – suggesting that the 2005-2006 administrative ASD prevalence for Somali children might be no different from what would be expected in the U.S. population of children ages 3 and 4 based on parental report.

So what does this mean?

It means that there are no firm answers and that Khalif is simply wrong to assert that there are.
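
The confidence-interval reasoning in the MDH quote above can be sketched in a few lines. The counts and the reference value below are invented for illustration; they are not the MDH or national figures.

```python
import math

# Wilson score interval for a proportion -- a standard 95% CI for a
# prevalence estimate built from a count of cases out of n children.
def wilson_ci(cases, n, z=1.96):
    p = cases / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

lo, hi = wilson_ci(6, 600)   # hypothetical: 6 cases among 600 children
reference = 0.0075           # hypothetical comparison prevalence

# If the CI contains the reference value, the estimate "might be no
# different from what would be expected" -- the MDH's point exactly.
print(f"95% CI: ({lo:.4f}, {hi:.4f}); contains reference: {lo <= reference <= hi}")
```

Even though the point estimate (1%) is above the reference value, the interval is wide enough that no difference can be claimed – which is why administrative counts alone settle nothing.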

It’s always been one of the great puzzles to me that a section of (mostly) parents who demand accurate answers fast cannot seem to understand that there _are_ no accurate answers until the science – proper science – has been done. And that takes time. What legacy do these parents want to leave the autism community? Hasty, inaccurate mistakes? Or well-planned and rigorous science that helps build the growing knowledge we already have?

Omnibus Expert: Patricia Rodier

10 Apr

Autism just plain isn’t mercury poisoning. When can we move on?

Even some of the people who loudly promoted the mistaken idea that “autism is just a misdiagnosis for mercury poisoning” have backed off. But, the groups that promote autism as vaccine injury are packrats: once they’ve collected an idea, bad or not, they won’t ever let it completely go.

Some of you will be thinking, dang, another mercury post. I agree, there are a lot of good arguments against blogging about the mercury-autism connection any more. For one, it gives the idea press that it just doesn’t deserve.

I do think this is worth posting about, though. “This” is the expert report from Dr. Patricia Rodier, submitted to the Autism Omnibus Proceeding. In a single document, we now have an expert on both mercury toxicology and autism. Not faux experts, or worse, businesspeople and public relations people, but an actual, bona fide expert in both fields. In other words, we have a good document to give to people who are being snowed under by the misinformation campaign promoting autism as mercury poisoning.

When Patricia Rodier testified in the Autism Omnibus Proceeding, I was very impressed – and I blogged it right away. I remember at the time telling a friend that it was good to finally see someone officially debunking things like Sallie Bernard et al.’s paper, Autism: a novel form of mercury poisoning. My friend pointed out that any college freshman in science (and most not in science) should be able to tear that “paper” apart.

Unfortunately, “should be able to tear the paper apart” isn’t enough. Many people don’t have the time and/or energy. So, many people still think that paper is valid. Let’s face it, that “paper” should have been retracted by the authors long ago, but they still soldier on with the “autism is mercury poisoning” message.

Dr. Rodier’s qualifications are quite good. Her summary is quite good:

As a research scientist who has studied both the toxic effects of methylmercury in animals and autism in children and animal models, I believe I am qualified to evaluate the scientific merit of the allegation.

She may be the only person in the world who has studied both mercury toxicity and autism.

What does she think? In a nutshell:

My conclusion is that the allegation has no scientific support and is highly improbable

Dr. Rodier notes that the comparison that autism and mercury poisoning appear similar isn’t even close.

In other words, because the symptoms of methylmercury poisoning are not similar to those of autism, the authors have tried to construct a new, hypothetical kind of mercury poisoning from symptoms of toxicity of other mercury species and symptoms never reported for any kind of mercury exposure. The hypothesis is not based on facts; instead, the facts are being selected, manipulated, and shaped to fit the hypothesis. The hypothesis is then offered as evidence. But hypotheses are not evidence.

Ouch. Ouch, that is, if you are someone promoting autism-as-mercury-poisoning.

Dr. Rodier can back up her words, as we discussed in the previous blog post. But, let’s say that again, Dr. Rodier uses research based facts, not manipulated hypotheses, to come to her conclusion.

I need to get a clean copy of that document, one that looks as good as the information it contains. That document needs to get into the hands of people being lured by the pseudo scientists promoting autism as mercury poisoning.

Thank you, Dr. Rodier for putting yourself on the line to testify. Thanks also to the HHS for allowing these reports to be made public.

Lies, Damned Lies and Science

7 Apr

Prior to starting a family I was already a skeptic and a subscriber to Skeptical Inquirer, which means I looked twice at the “helpful” information I got from all sides. There were predictions of whether my baby would be a boy or a girl. There were guesses at what might have caused his seizures (like the fact that I drank milk, offered as an explanation after the “helpful” person was told he was an infant only on breastmilk). There were assurances that he would talk “when he was ready”, or that someone’s cousin four times removed, or Einstein, did not talk until age three, five, or thirty-five. (By the way, the Einstein story is a total myth, don’t believe it – and don’t accept your child being compared to Einstein; he really had several flaws. See Private Lives of Albert Einstein – sitting in therapy waiting rooms does provide lots of reading opportunities.) Most recently I was told to try craniosacral therapy (as if a light head massage is going to “fix” the pathways in the brain that make him different!).

There is a tendency for people to give unsolicited advice to young parents, and that seems to double when it involves a child with a disability. Not only does the amount of advice double, but the relative distance from reality increases exponentially. While a parent of a typically developing child would be told to buy a certain organic baby food or to try some kind of “teach your baby calculus” computer program, the parent of child on a different developmental path would be encouraged to try a myriad of supplements, various odd treatments and to try the “miracle cure” they heard from some famous guy on the news.

How does the parent of a newly diagnosed child wade through the “help” and determine what is real and what is hype? Well, there is help. It is not a cure-all, but a book that shows how to look at science and separate fact from hype: Lies, Damned Lies and Science by Sherry Seethaler, a science writer and educator at the University of California, San Diego.

It is not a long book, and it is separated into short chunks that explain the basics of science, why disputes in science are not really a bad thing, how to interpret numbers, who the stakeholders on an issue are, and why they are important. She includes many real-world examples and even comparisons to situations in the Harry Potter books, where Harry excels at potion-making by using the notes in the margins of an old book.

In Chapter 7, “Fun Figures”, there is a subsection titled “Ask whether a statistical change reflects reality or the way the data were collected.” Readers of this blog should be very familiar with the example she uses. They will also be familiar with the tactics described in Chapter 9, “All the Tricks in the Trade”, especially the section on pseudo experts.

This is a quick, actually quite fun book to read. It can be a bit repetitive, but that repetition in part makes the concepts easier to understand. I knew much of the information (like the statistics), but I still learned a great deal. Check it out of your local library, or even purchase a copy for those friends and relatives who keep giving you all sorts of “advice.”

More Hot Air about HBOT

6 Apr

A few weeks ago, BMC Pediatrics published an article that purports to show that Hyperbaric Oxygen Therapy (HBOT) can produce “…significant improvements in overall functioning, receptive language, social interaction, eye contact, and sensory/cognitive awareness..” in autistic children. This study (Rossignol et al, 2009) is billed as a “…multicenter, randomized, double-blind, controlled trial.”

It’s all that and much, much less.

Let’s start by looking at the six “centers” where this research was carried out.

The Centers

The International Child Development Research Center (ICDRC):

This imposing name is attached to a rather less imposing edifice. The ICDRC, the brainchild of Dr. Jeffrey Bradstreet, is located in a strip mall in Melbourne, Florida, where it not only carries out “cutting-edge research” but also sells a complete line of “supplements” and treats autistic children with a dizzying array of “alternative”, “biomedical” and “integrative” therapies, including HBOT.

Daniel Rossignol MD (Family Practice), Lanier Rossignol (Nurse Practitioner) and Scott Smith (Physician’s Assistant) were the authors from the ICDRC.

The Center for Autism Research and Education (CARE):

This “center” is located in Phoenix, Arizona and has – according to its website – a single practitioner, Cynthia Schneider, MD (OB/Gyn), who is also an author on this paper. One of the “integrative” therapies this “center” offers is HBOT.

One of the other authors, Sally Logerquist, is a PhD psychologist who – according to the paper – is also associated with CARE, but also appears to run social skills therapy groups for autistic children using the “Logerquist Excellent Attitude Program” (LEAP).

True Health Medical Center:

It’s rather difficult to find anything about this “center”, apart from the fact that it is located in Naperville, Illinois – in what appears to be an office complex. Anju Usman, MD (Family Practice) is the author associated with this location.

Neubrander Center:

Although not officially called a “center”, the office of James Neubrander, MD (Pathology) is apparently one of the “centers” of this study. His office is located in the Menlo Park Mall (near Macy’s) and offers – you guessed it! – HBOT as a treatment for autism.

Princess Anne Medical Associates:

A Family Practice medical group in Virginia Beach, Virginia, this “center” is the home of Eric Madren, MD (Family Practice). It’s not clear if this four-physician practice offers HBOT.

The Rimland Center for Integrative Medicine:

A small, one-physician “center” in Lynchburg, Virginia, this is the practice location of author Elizabeth Mumper, MD (Pediatrics). Not surprisingly, this “center” sells HBOT services for autistic children.

So, of the six “centers” involved in this study, five are single-physician operations. The remaining “center” has two physicians (three, if you count the naturopath).

I’m underwhelmed.

Well, what about the research itself? Maybe that’s better than the “facilities” might suggest. Let’s take a look.

The Subjects

This study initially enrolled 62 children (33 treatment; 29 control), but only 29 of the treatment group and 26 of the control group finished all 40 sessions. For reasons that pass my understanding, one treatment subject who only finished 9 sessions was included in the analysis. The authors stated that including this subject did not alter the results, which raises the question: “Why did they include this subject if it made no difference?”

Outcome measures

The authors used the Aberrant Behavior Checklist (ABC), the Clinical Global Impression (CGI) scale and the Autism Treatment Evaluation Checklist (ATEC) as their outcome measures. All except the ATEC are widely accepted for use in autism treatment trials.

The ABC is a 58-question checklist of – surprise! – aberrant behaviors, each given a score from “0” (“not at all a problem”) to “3” (“severe problem”). This test has been used – and validated – in a number of disorders, including autism. It gives a global score as well as five subscales: a total of six measures.

The CGI is a generic rating scale used in a variety of clinical trials. For each parameter (e.g. “overall functioning”, “sleep pattern”), the rater gives a score between “1” (“very much improved”) and “7” (“very much worse”). The authors had both the treating physician and the parents rate the subjects on overall improvement and eighteen discrete parameters: a total of 38 measures in all (19 by the physician and 19 by the parents).

The ATEC was developed by Bernie Rimland and Stephen Edelson and has not been validated. In fact, it has only been used in two published studies – one by Rossignol et al. The ATEC has 25 questions on which the evaluator rates the subject on either a three-point (“not true”, “somewhat true”, “very true”) or four-point (“not a problem”, “minor problem”, “moderate problem”, “serious problem”) scale. It provides a total score and four subscales: a total of five measures.

In all, each subject had a total of 49 evaluation measures (CGI scores and the change in ABC and ATEC scores), of which 47 are independent. The importance of this will become apparent in the section on statistical analysis.

Analysis

As I mentioned above, the decision to include one treatment subject who only completed nine sessions was curious. Why they included this subject and not any of the other three treatment subjects and three control subjects who also failed to complete the entire course of the study is concerning. The smart thing – and the proper response – would have been to drop this subject from analysis.

The authors’ method of analyzing the CGI scales was also curious. Rather than simply using the scores as they were provided, they took the scores and subtracted them from four (the “no change” score). There are a few problems with this.

For starters, the scores are not linear – the difference between “much improved” and “very much improved” is not necessarily the same as between “no change” and “minimally improved”. Nor is the difference between “no change” and “much improved” twice the difference between “much improved” and “very much improved”. For that reason, these types of numerical scores are often referred to as “pseudo-numbers”.

This may seem like nit-picking, but it is a serious concern. Imagine, if you will, that the numbers were replaced by colors. Is the difference between green and orange twice the difference between orange and red? If half of a population of birds are blue and the other half are yellow, is the “average” bird green? The simple fact is that it is not appropriate to treat these “scores” as though they were real numbers, to be added, subtracted and averaged.

Secondly, it appears that the authors used parametric statistics for their analysis of the CGI scores. This is a problem since – as I indicated above – it is nonsensical to do math on pseudo-numbers. I don’t have the raw numbers, so it isn’t possible for me to calculate the absolute impact of this mistake for all of the CGI subclasses, but I can figure out the raw numbers for one group, so let’s look at that one.

It took a little work, but the authors gave enough clues to tease out the raw numbers in the physician “overall functioning” CGI score. The treatment group had an “average” of 2.87 and the control group’s “average” was 3.62; using the unaltered data, a t-test [Note: not an appropriate use of the t-test] gives a p-value of 0.0006, not far from what the authors report. When a more appropriate statistical test [Mann-Whitney U-test] is used, the p-value is 0.002, very different from the reported 0.0008. While this is still less than the threshold p-value of 0.05, see below for a discussion of multiple comparisons.
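
To see how a rank-based test differs from averaging the category codes, here is a minimal sketch with invented CGI-style scores (lower = more improved). These numbers are illustrative only, not the study’s raw data.

```python
# Invented CGI-style ordinal scores (1 = very much improved ... 7 = very
# much worse); illustrative only, not the study's data.
treatment = [1, 2, 2, 3, 3, 3, 4, 4]
control   = [3, 3, 4, 4, 4, 4, 5, 5]

def mann_whitney_u(x, y):
    """U statistic for x: count of pairs where x outranks y (ties = 1/2).
    Rank-based, so it never treats the category codes as real numbers."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi < yj:        # lower score = more improved = a "win"
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

u = mann_whitney_u(treatment, control)
n1, n2 = len(treatment), len(control)
print(f"U = {u} out of a possible {n1 * n2}")
```

The U statistic depends only on which group’s scores rank higher, not on the arithmetic distance between “much improved” and “very much improved” – which is exactly the property pseudo-numbers require.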

All of these statistical analyses of the CGI scores ignore the fact that these are pseudo-numbers and need to be treated as discrete groups rather than as actual numbers. In truth, even the ABC and ATEC scores should have been treated this way, as well, although it is fairly common practice to treat such multi-factor scores as real numbers. A Chi-square test or Fisher Exact test would be the ideal test, but the problem with that is that the treatment group has one score of “1” (very much improved) and the control group doesn’t. Likewise, the control group has two subjects with a score of “5” (minimally worse) and the treatment group has none. This prevents a Chi-square or Fisher test from comparing each score independently.

One solution is presented by the authors themselves, although they apparently didn’t use it. In their discussion of the CGI, the authors said:

“Children who received a score of ‘very much improved’ or ‘much improved’ on the physician CGI overall functioning score were considered to be ‘good responders’ to treatment.”

If we “bin” the scores into “good responders” and “others”, we find that there were 9 (out of 30 – 30%) “good responders” in the treatment group compared to 2 (out of 26 – 8%) in the control group. Unfortunately, this is not a statistically significant difference (p = 0.08) in the (Yates) Chi-square test and barely reached significance (p = 0.05, but see below) in the Fisher Exact test.
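
The binned comparison can be checked with a self-contained Fisher exact test. The 9/30 vs 2/26 counts come from the text above; the implementation is a standard hypergeometric sketch, not the authors’ software.

```python
from math import comb

# Binned 2x2 table from the text: "good responders" vs others.
#                good   other
# treatment        9     21
# control          2     24

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value: sum the probabilities of every
    table with the same margins that is no more likely than the
    observed one (the standard hypergeometric formulation)."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    def p_table(x):
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

p = fisher_exact_two_sided(9, 21, 2, 24)
print(f"Fisher exact p = {p:.3f}")   # close to the 0.05 reported above
```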

An even bigger problem in the statistical analysis was the failure to correct for multiple comparisons. This problem was brought up by one of the reviewers, and the authors responded by eliminating a table. They did not make the appropriate corrections.

The reason that multiple comparisons are a problem is that the analysis for statistical significance is based on probability. If the probability (the p-value) that the difference between the two groups (treatment and control) is due to random chance is 5% or less, that difference is considered “statistically significant” and accepted as real. That still leaves a 5% (or less – look to the p-value) chance that the difference is due to chance and not real.

If multiple comparisons are made on the same group of subjects, the probability that one (or more) of them will be “statistically significant” by chance starts to climb. If 14 comparisons are made, the chance of an erroneous “statistical significance” is over 50%. If 47 independent comparisons are made – as in this study – the chance of an erroneous “statistical significance” is over 90%.

For this reason, it is standard procedure to apply a correction for multiple comparisons. The most well-known (and simplest) of these is the Bonferroni Correction, which changes the threshold for statistical significance by dividing it by the number of comparisons. In the case of this study, the threshold (normally p less than or equal to 0.05 or 5%) is reduced to 0.001.
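
The figures quoted above are easy to verify:

```python
# Chance of at least one spurious "significant" result among m
# independent comparisons, each tested at alpha = 0.05.
def familywise_error_rate(m, alpha=0.05):
    return 1 - (1 - alpha) ** m

fwer_14 = familywise_error_rate(14)   # over 50%
fwer_47 = familywise_error_rate(47)   # over 90%
bonferroni = 0.05 / 47                # corrected threshold, ~0.001

print(f"FWER(14) = {fwer_14:.2f}, FWER(47) = {fwer_47:.2f}, "
      f"Bonferroni threshold = {bonferroni:.4f}")
```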

Applying the appropriate correction for multiple comparisons changes the results of this study significantly. Only the physician CGI scores for overall functioning and receptive language reach significance – and these numbers are already suspicious because they were improperly handled to begin with. In fact, as I have shown above, the CGI “overall functioning” p-value wouldn’t reach significance. It is possible – if the proper statistical tests were used – that the CGI score for “receptive language” would also not reach significance.

Another curious thing: the authors asked the parents after the study whether they thought their child was in the treatment or the control group. Rather than say that the parents’ guesses were no better than random chance (i.e. 50%), the authors stated:

“…there was no significant difference between the two groups in the ability of the parents to correctly guess the group assignment of their child.”

As I said, this was a curious way to put it. As I read it, all it says is that each group of parents was equally able to guess which group their child was assigned to. That could be 50% accuracy (which would be equal to chance), but 90% or 99% accuracy – if both groups were that accurate – would also fit that description.

Now, this could simply be clumsy phrasing by the authors, or it could be a way to make it sound like their blinding was successful when it actually was not.
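
Two invented scenarios make the ambiguity concrete: both satisfy “no significant difference between the two groups” in guessing accuracy, but only one is consistent with successful blinding.

```python
# Hypothetical outcomes (counts invented, not from the paper). In both,
# the groups guess equally well -- "no significant difference between
# the two groups" -- but only the first matches chance-level (50%) guessing.
scenarios = {
    "blinding held":   {"treatment": (15, 30), "control": (13, 26)},
    "blinding failed": {"treatment": (27, 30), "control": (23, 26)},
}

results = {}
for name, groups in scenarios.items():
    accuracy = {g: correct / n for g, (correct, n) in groups.items()}
    gap = abs(accuracy["treatment"] - accuracy["control"])
    results[name] = (accuracy, gap)
    print(f"{name}: {accuracy}, between-group gap = {gap:.3f}")
```

In the second scenario the parents guess correctly 90% of the time in both groups, yet the authors’ statement would still be literally true.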

Summary

This study may have collected some useful data, but its analysis of that data rendered it useless. The CGI scores – where the only statistically significant result was (possibly) seen – were improperly manipulated and the wrong statistical analysis was used.

The other issue is that there is no discussion of why HBOT is thought to be superior to providing the same partial pressure of oxygen at room pressure. This study used 24% oxygen at 1.3 atm, which gives the same partial pressure of oxygen as 31% at sea level. This concentration of oxygen can be easily attained with an oxygen mask or simple oxygen tent – both of which are vastly less expensive than HBOT.
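
The equivalence is just Dalton’s law: the partial pressure of oxygen is the oxygen fraction times the total pressure.

```python
# pO2 = oxygen fraction * total pressure (Dalton's law)
p_chamber = 0.24 * 1.3   # 24% oxygen at 1.3 atm
p_mask    = 0.31 * 1.0   # 31% oxygen at sea level (1 atm)
print(f"chamber pO2 = {p_chamber:.3f} atm, mask pO2 = {p_mask:.3f} atm")
```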

If the authors are arguing that the mild pressure of their inflatable HBOT chambers contributes to the treatment effect, they need to look at the literature on cell membrane compressibility. For those who want to do the calculations at home, the bulk modulus of water (the major component of cells) is 21,700 atm. This means that a 0.3 atm increase in pressure will reduce the cell volume by 0.0014%. The bulk modulus of the lipid bilayer in cell membranes is around 30,000 atm. This means that an increase of 0.3 atm pressure causes a 0.0010% reduction in membrane volume. These are well below the threshold for any clinical effects.
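
For those doing the calculation at home, it is one line per material:

```python
# Fractional volume change under a pressure increase dP: dV/V = dP / K,
# where K is the bulk modulus (both expressed in atm here).
def pct_compression(dp_atm, k_atm):
    return dp_atm / k_atm * 100.0

water_pct = pct_compression(0.3, 21_700)   # cell interior, ~0.0014%
lipid_pct = pct_compression(0.3, 30_000)   # membrane bilayer, ~0.0010%
print(f"water: {water_pct:.4f}%  lipid: {lipid_pct:.4f}%")
```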

Real pressure effects on the central nervous system are seen at pressures over 19 atm. These effects are:

dizziness
nausea
vomiting
postural and intention tremors
fatigue and somnolence
myoclonic jerking
stomach cramps
decreased intellectual and psychomotor performance
poor sleep with nightmares
increased slow wave and decreased fast wave activity in EEG

None of these effects could be construed as “improvements”, even in autism.

So, this study fails to answer the following questions about HBOT and autism:

[1] Does HBOT improve any feature of autism?
[2] If so, is HBOT any better than supplemental oxygen (which is much cheaper)?

The only real effect of this study was to give a cover of legitimacy to practitioners who are already using HBOT to “treat” autism.

Prometheus

The vaccine debate has a real cost

31 Mar

Of course, this isn’t news. But usually the cost is characterized as the danger to public health.

What about the cost to people with autism? Here’s a blog post from the Simons Foundation (as an aside – that is a real autism organization). They are quoting Cathy Lord and Paul Shattuck.

Given the diversity of the panel’s members, the strategic plan was, unsurprisingly enough, hotly debated, and continues to be scrutinized.

Most of the debate centers around the plan’s emphasis on environmental risk factors. Lord says this came at the cost of research on more worthwhile topics, such as how to expand treatment services to low-income families — a project for which she was hoping to be funded.

“It’s gone, just gone. I was pretty astonished to see that that had disappeared,” she says.

The report also doesn’t emphasize studying autism’s course beyond childhood, notes Shattuck. “The amount of money that goes into understanding services and aging and supporting people in their daily lives seems disproportionately small,” he says.

One of the problems with the vaccines-cause-autism groups is that they really don’t advocate for people with autism. They have abandoned entirely people of low income and minorities (except where they can be used for political gain).

It isn’t just that groups like SafeMinds, Generation Rescue and the rest can’t be bothered to spend the time worrying about minorities or adults. It’s the fact that the data those groups use to support the “epidemic” makes ZERO sense when you consider minorities.

Consider this: the “rate” of autism is 0.3 per 1,000 for Hispanics in Wisconsin, but 10.6 for Whites in New Jersey.

Why isn’t Generation Rescue calling for an investigation of the Hispanics of Wisconsin? Shouldn’t they want to know what is “protecting” that subgroup from autism?

They don’t care, they don’t want to bring attention to the Hispanics in Wisconsin (or the under represented minorities across the nation), because it blows a big hole in the “epidemic”. Obviously we still aren’t counting all the people with autism in our prevalence estimates. How can we rely on the historical data that shows an “epidemic” if we aren’t doing a good job even now?

We’ve covered this many times in the past. It is one thing when the damage caused is more abstract. But when it becomes very real, when minorities are being left out in the cold, it is an outrage.

Hours and hours were spent in the IACC meetings wordsmithing the vaccine language. To groups like SafeMinds and people like Lyn Redwood, the Strategic Plan was a political document. It was a statement by the government, and it was critical to get as much “admission” of autism being caused by vaccines as was possible. So what if another generation of minorities gets mislabeled with Intellectual Disability or some other Special Education category when SafeMinds was able to get the IACC to admit that many parents think vaccines cause autism?

This is what happens when pseudo “Vaccine-injury” advocates pretend to be Autism advocates and take seats at the table. Lyn Redwood put her own interests and those of her organizations ahead of the well-being of people with autism.

That’s just plain wrong.

Mild hyperbaric therapy for autism – Shh!…don’t say it’s expensive

30 Mar

When I recently wrote about the new HBOT-for-autism study (Rossignol et al. 2009)1, I took issue with unlikely claimed treatment pressures for at least one of the study locations. While a potential methodological weakness, this is probably a fairly small problem in light of potential issues with blinding and interpretation of the results as quantitatively and objectively meaningful with respect to autism. But let’s set those potential issues aside for a moment.

Let’s assume that treatment with slightly enriched air (24% vs. 21% oxygen) in an inflatable hyperbaric chamber pressurized to 4 PSI2,3 above ambient atmospheric pressure, could confer some sort of benefit to an autistic child.

I’m not suggesting we assume that it does confer benefit. I’m asking readers to set aside any knowledge of hemoglobin’s role in oxygen transport, as well as any knowledge of real hyperbaric oxygen therapy (breathing 100% oxygen at greater than 1 ATA)4, and evaluate a simpler proposition. Accept that some sort of benefit is scientifically possible, but then ask yourself a fairly simple question:

Compared to 24% O2 at 4 PSI above ambient atmospheric pressure in an inflatable hyperbaric chamber, equivalent oxygen delivery can be achieved with simple oxygen therapy (an oxygen mask) at a fraction of the cost5 – so why is a study of the hyperbaric version of this increased oxygen important?
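To make the equivalence concrete, here is a quick back-of-the-envelope sketch (my own arithmetic using the study’s nominal figures, not anything from the paper itself): the partial pressure of inspired oxygen is simply the oxygen fraction multiplied by the absolute pressure.

```python
# Back-of-the-envelope comparison (my own figures, not from the study):
# inspired O2 partial pressure = O2 fraction x absolute pressure.

AMBIENT_ATA = 1.0      # sea-level atmospheric pressure
CHAMBER_ATA = 1.3      # nominal chamber pressure claimed in the study
CHAMBER_FIO2 = 0.24    # 24% oxygen inside the chamber

# Partial pressure of O2 in the chamber, in atmospheres
chamber_po2 = CHAMBER_FIO2 * CHAMBER_ATA      # 0.312 ATA

# The O2 fraction a simple mask would need at ambient pressure
# to deliver the same partial pressure:
equivalent_fio2 = chamber_po2 / AMBIENT_ATA   # i.e. ~31% O2

print(f"Chamber pO2: {chamber_po2:.3f} ATA")
print(f"Equivalent mask FiO2 at 1 ATA: {equivalent_fio2:.0%}")
```

A basic face mask can comfortably deliver more than 31% oxygen at ambient pressure, which is the whole point of the question above.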

One possibility: studying what’s already for sale

While some might call it being on the “cutting edge”, others may consider it putting the cart before the horse. No matter how you see it, it’s no secret that some Defeat Autism Now practitioners were already selling this type of hyperbaric oxygen therapy well before this study came out. It should be noted that this study’s authors did disclose this conflict of interest with respect to derivation of revenue in their clinical practices from HBOT.

DAR, LWR, SS, CS, AU, JN, EMM, and EAM treat individuals with hyperbaric treatment in their clinical practices and derive revenue from hyperbaric treatment.

Lisa Jo Rudy over at autism.about.com6 had additional comments about the subject:

Dr. Rossignol is “the” proponent of HBOT, and has been speaking at conferences all over the world in support of the treatment. Clearly, he has a personal and professional stake in seeing that the outcomes of a research study are positive.

The present study was funded by the International Hyperbarics Association, a trade group of private hyperbaric therapy centers. Clearly, they have a similar stake in seeing positive outcomes.

While there may certainly be an aspect of genuine scientific interest in understanding whether this type of hyperbaric oxygen therapy is beneficial for autistic kids, I think there may also be a certain degree of assumption that it is. After all, why would a practitioner already be selling something if they didn’t “believe” it worked? Given the stated conflicts of interest, it doesn’t seem implausible that the authors might have an interest in seeing a long-term revenue stream that could come from additional, and deeper, pockets than those of parents willing to “believe” and pay – despite the lack of really convincing scientific evidence at this point.

Consider the following portions of an interview with Dr. Dan Rossignol7:

We chose 1.3 ATA because a lot of children with autism are currently receiving this dose and we are hoping to prove that it works.

“Hoping to prove that it works.”

Dr. Rossignol’s point does not seem unclear. HBOT is popular, and he is, in his own words, “hoping to prove that it works”. This is a valid reason, I suppose, if he is also open to the possibility that it may not, or that it may be a completely moot point if something on the order of one tenth of the cost can do the same thing. Following Dr. Rossignol’s communication about the hope to “prove that it works”, the interviewer asks:

How is the insurance situation coming along?

Insurance situation? Coming along? Was this situation already a well-known “work in progress” back in 2006 (i.e. had it been decided by some, prior to the science, that “mild” HBOT for autism does work, and that insurance reimbursement is really the goal now)? Let’s see if we can get Dr. Rossignol’s take on this.

Well, obviously, HBOT is not approved for autism, but we hope to get there. Interestingly, if you take the ABC scale and look at the lethargy subset score, we saw a 49% improvement in symptoms at 1.5 ATA with a p-value of 0.008. If you look at the New England Journal of Medicine study on risperidone from 2002, there was a 56.9% improvement on the ABC irritability subscale with a p-value < 0.001. So the results we had on these 6 children with 1.5 ATA approached the percentage improvement seen with a drug approved for the use in autism. We just need to be able to reproduce these type of findings in a placebo study.

Hopefully when we finish these studies and show that hyperbaric therapy works, then insurance reimbursement will follow.

I don’t necessarily see a genuine scientific perspective here, but that could just be me. I get more of a vibe (at least from this interview) that the interest may lie more in “finishing” the studies and showing “that hyperbaric therapy works”, rather than in actually finding out, with really good quality scientific methodology, whether or not it really does work. I’ll acknowledge that I could be wrong about this. Do you think readers will have noticed that the study result mentioned for comparison was from 1.5 ATA, and probably totally irrelevant to the 1.3 (or less) studies?

Is it just me, or would it seem naive to expect that a few studies like the recent one are really going to catalyze insurance reimbursement in the long run? I get the impression that many parents may believe this. Insurance companies work to achieve cost efficiencies. One of the ways they do this is by reimbursing at higher rates for equivalent things at lower costs – hospital stays in contracted facilities, generic drugs as compared to name-brand versions, etc. Why on earth would an insurance company reimburse for a 4-5% increase in blood oxygen content for a couple of hours at a time in an inflatable hyperbaric chamber (at a few thousand dollars a month), when the identical oxygen increase could be delivered with a simple oxygen mask (for under $200 a month)?

You don’t have to take my word for this comparison of oxygen delivery, you can take Dr. Rossignol’s acknowledgement in that same interview:

Some people have criticized using mild hyperbarics at 1.3 ATA because they state that when compared to this pressure, you can get just as high an oxygen concentration in the blood with oxygen by face mask without a chamber. And this may be true in some cases.

In fact, it’s true in most (if not all) cases. The physics of partial pressures does not discriminate. But there may be more to the story.

Squeeze in some hope

After acknowledging the reality of the partial pressure comparison problem, Dr. Rossignol continues:

However, we must remember we are dealing with 2 separate components with HBOT — the oxygen and the pressure. So it appears that many of the effects of HBOT are from the increased oxygen, but we cannot dismiss the pressure effect. I think we need more studies on this as well.

So “many of the effects” are from the oxygen increase, but we can’t dismiss the pressure effect? What pressure effect? Is there a demonstrated significant clinical effect for autism from a very slight, and very temporary, increase in atmospheric pressure alone?

Although I suppose it is possible, a clinically significant effect for autism at such low pressures doesn’t seem likely at all. If it turns out that I am incorrect, this may be good news for some of the parents of autistic children in several U.S. cities: Albuquerque, NM (5312′ AMSL), Aurora, CO (5471′ AMSL), Colorado Springs, CO (6035′-7200′ AMSL), Denver, CO (5280′ AMSL), Reno, NV (4505′ AMSL), and Salt Lake City, UT (4226′ AMSL), to name a few. Something as simple as a move to a city closer to sea level might provide increases in atmospheric pressure not unlike those provided by the inflatable hyperbaric chambers. If there were some beneficial effect of slight additional atmospheric pressure for autism, surely there would have been some observations (anecdotal or media reports) over the years of families with autistic children who moved from states like Colorado to lower-elevation states like California – and noticed. Who knows? Perhaps this is something yet to be uncovered.
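To put rough numbers on that altitude comparison, here is a sketch using the standard-atmosphere barometric formula (my own estimate with textbook constants; the elevations come from the list above):

```python
# Rough station-pressure estimate from the standard atmosphere model:
# P/P0 = (1 - 2.25577e-5 * h)^5.25588, with h in metres.
def station_pressure_atm(elevation_ft: float) -> float:
    h_m = elevation_ft * 0.3048            # feet -> metres
    return (1 - 2.25577e-5 * h_m) ** 5.25588

denver = station_pressure_atm(5280)        # ~0.82 atm
sea_level = station_pressure_atm(0)        # 1.00 atm

# Pressure gained by moving from Denver to sea level:
gain = sea_level - denver                  # ~0.18 atm

# Added pressure from a 4 PSI inflatable chamber (4 / 14.696 PSI per atm):
chamber_gain = 4 / 14.696                  # ~0.27 atm

print(f"Denver station pressure: {denver:.2f} atm")
print(f"Move-to-sea-level gain:  {gain:.2f} atm")
print(f"Chamber added pressure:  {chamber_gain:.2f} atm")
```

On these rough numbers, moving from Denver to sea level gains about 0.18 atm – the same order of magnitude as the chamber’s added pressure.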

So, aside from the fact that an identical oxygen increase can be achieved with simple O2 therapy without a hyperbaric chamber at all (and at a fraction of the cost), and aside from the point that the minute pressure increase (while certainly possible in a strict scientific sense) isn’t known to be a likely candidate to have a significant clinical impact on autism, is there anything else about this newest HBOT-for-autism study that may merit some critical thought? Maybe, but it’s really just a side-note (perhaps interesting to some, but not terribly relevant to the science itself).

Who farted in the HBOT chamber? (Shh!…Don’t say it’s expensive)

The original manuscript8 for this study contained what I thought was an appropriately realistic comment from the authors in the conclusion. This comment has value in terms of practical knowledge that readers who are not familiar with hyperbaric oxygen therapy would probably find useful. What follows is the first-draft conclusion of this study with that comment emphasized.

Hyperbaric treatment is a relatively time-intensive treatment and can be costly. However, given the positive findings of this study, and the shortage of proven treatments for individuals with autism, parents who pursue hyperbaric treatment as a treatment for their child with autism can be assured that it is a safe treatment modality at the pressure used in this study (1.3 atm), and that it may improve certain autistic behaviors. Further studies are needed by other investigators to confirm these findings; we are aware of several other planned or ongoing studies of hyperbaric treatment in children with autism.

Again, Lisa Jo Rudy over at autism.about.com notes:

No insurance company will cover the very high cost of HBOT for autism, as it is considered an experimental and unproven therapy.

But the above conclusion is not the conclusion that appeared in the peer-reviewed, edited version. Here it is:

Given the positive findings of this study, and the shortage of proven treatments for individuals with autism, parents who pursue hyperbaric treatment for their child with autism can be assured that it is a safe treatment modality at the pressure used in this study (1.3 atm), and that it may improve certain autistic behaviors. Further studies are needed by other investigators to confirm these findings; we are aware of several other planned or ongoing studies of hyperbaric treatment in children with autism.

Why would the authors remove that valuable bit of practical knowledge about time requirements and high cost? Apparently due to a comment from referee #3 for this paper.

Discretionary Revisions

Page 24 In view of the highly positive findings of this study and the fact that no other trial has demonstrated such benefits under strictly controlled conditions to open the conclusions with negative comments demeans the study. Many other inventions used for ASD children are equally time consuming and hyperbaric treatment need not be expensive.

Authors: “The negative comments were removed from the conclusion.”

Opening the conclusion with negative comments demeans the study? Such comments don’t really touch the content of the study itself; what the now absent comment did do was provide some practical perspective – quite likely, very accurate practical perspective. Why would referee #3 suggest that the practical comments demean the study? Perhaps it was meant that the comments demean the use of mild hyperbaric oxygen therapy as an autism treatment (therefore actually demeaning a desired interpretation of this study)? That would seem a real concern, since the justification offered has absolutely nothing to do with the study itself, and doesn’t amount to much more than logical fallacy and simple assertion.

“Many other inventions used for ASD children are equally time consuming…”

This is about as basic an example of the “two wrongs make a right” fallacy as can be presented. Two wrongs don’t make a right. Just because other interventions are also time consuming does not mean a researcher is unjustified in adding, or shouldn’t add, the practical point that HBOT is relatively time consuming. Further, if the authors are aware of such a potential practical issue, it could be argued that ethics would dictate that it be mentioned. Other treatments presenting similar impracticalities do not automatically relieve any potential ethical responsibility in this regard.

“…hyperbaric treatment need not be expensive.”

Compared to what? Hyperbarics in a gold-plated hyperbaric chamber? If there is no significant effect for autism from the brief and small increase in added pressure in one of these inflatables, the increased oxygen delivered by providing 24% O2 at 4 PSI above ambient atmospheric pressure is easily matched (or exceeded) with simple O2 therapy. In short, this type of hyperbaric treatment would be the hard way, and the expensive way, to achieve the results.

Referee #3 also added the following comment:

The reviewer has a preference for the word treatment rather than ‘therapy’. In view of the proven changes that relate to increased inpsired fractions of oxygen it is suggested that treatment would be preferable.

Authors: “The word “therapy” has been replaced with “treatment” throughout the paper.”

The “T” in the acronym “HBOT” does, in fact, stand for the word “therapy” in medical usage. I happen to think the terms “treatment” and “therapy” are fairly interchangeable in the context of drug delivery, but I do wonder if there is any significance to such a preference. Is this a semantics issue that has the potential to influence the perceptions of those who make decisions about insurance coverage for autism? But I digress. So what’s up with these comments from referee #3 – comments with a little fallacious reasoning, that express possible concern about the perception of a high price tag for mild hyperbaric oxygen therapy, and a comment that communicates a preference for the word “treatment” over “therapy”?

I honestly don’t know. What I can tell you is that referee #3 was Philip James, MD. Dr. James is a professor in the field of hyperbaric medicine and hails from the U.K. He appears to have published quite a bit in the field of hyperbaric medicine as well.

According to the International Hyperbarics Association website:

Dr. James is responsible for founding the Hyperbaric Trust in the United Kingdom which promotes the treatment of cerebral palsy and the brain injured child and was responsible for having the National Health Service pay for this therapy.

Dr. James (Referee #3) appears to have been categorized (with a doctor profile) as a medical advisor to International Hyperbarics Association back in February of 2006 (shortly before this study9 began). Hey wait a minute, there’s that name again – International Hyperbarics Association. Where have I seen that before? Oh yeah, in the study itself:

We are grateful for the work of Shannon Kenitz of the International Hyperbarics Association (IHA) for an unrestricted grant which funded this study, which included use of hyperbaric chambers and funding for all hyperbaric technician salaries during the study. The IHA had no involvement in the study design, collection, analysis, interpretation of data, writing of the manuscript, or in the decision to submit the manuscript for publication.

I’m not sure how the International Hyperbarics Association defines itself exactly – are its listed medical advisors excluded from that definition? That would seem likely.

As of this writing, Dr. Rossignol is listed as a medical advisor at the IHA website. 10
As of this writing, Dr. Neubrander is listed as a medical advisor at the IHA website. 10
As of this writing, Dr. James (referee #3), is categorized as a medical advisor at the IHA website with a physician profile page.11,12

Side notes aside, where to from here?

So all in all, it seems that “mild” HBOT-for-autism researchers may have their work cut out for them. Although probably not very likely, it is possible that a small temporary change in atmospheric pressure could do something for autism; that should be studied next, and then better replications should follow.

In the long run, it will be difficult to ignore the scientific fact that simple oxygen therapy alone can easily provide identical increases in blood oxygen content, at a fraction of the cost of mild hyperbaric oxygen therapy (as it is currently being studied for autism). If HBOT-for-autism proponents think insurance companies should step up to pay for an expensive treatment that provides a 4-5% increase in blood oxygen (without scientifically establishing a benefit of the small and temporary pressure increases), they might do well to consider these famous words (most recently from Barack Obama) – “You can put lipstick on a pig. It’s still a pig.”

Notes:

1 Hyperbaric treatment for children with autism: a multicenter, randomized, double-blind, controlled trial

Click to access 1471-2431-9-21.pdf

2 Medical device pre-market notification (FDA-cleared)

Click to access K001409.pdf

3 Manufacturer product sheet

Click to access vitaeris-lowres2007-8.pdf

4 Definition of Hyperbaric Oxygen Therapy
http://www.uhms.org/ResourceLibrary/Indications/tabid/270/Default.aspx

5 Hyperbarics and Hypotheses

Schooling and Statement

6 Hyperbaric Oxygen as a Treatment for Autism: Let the Buyer Beware
http://autism.about.com/b/2009/03/14/hyperbaric-oxygen-as-a-treatment-for-autism-let-the-buyer-beware.htm

7 Interview with Dr. Dan A. Rossignol: Hyperbaric Oxygen Therapy Improves Symptoms in Autistic Children

Click to access Rossignol%20HBOT%20Medical%20Veritas%202.pdf

8 Pre-publication history
http://www.biomedcentral.com/1471-2431/9/21/prepub

9 ClinicalTrials.gov Identifier: NCT00335790
http://clinicaltrials.gov/ct2/show/NCT00335790

10 Medical Advisors
http://www.ihausa.org/

11 Index of /docs
http://www.ihausa.org/docs/

12 International Hyperbarics Association Medical Advisor – Professor Philip B. James, M.D.
http://www.ihausa.org/docs/james.html

Is there an autism epidemic – the latest science

25 Mar

A new paper from Eric Fombonne is in electronic print at the journal Pediatric Research. It will apparently be published in the paper version of the journal some time after April.

The title is ‘Epidemiology of pervasive developmental disorders’ and, as the name suggests, Fombonne looks at all the available quality epidemiology he can find relating to PDDs.

This article reviews the results of 43 studies published since 1966 that provided estimates for the prevalence of Pervasive Developmental Disorders, including Autistic Disorder, Asperger Disorder, Pervasive Developmental Disorder Not Otherwise Specified, and Childhood Disintegrative Disorder.

Combining all these categories together Fombonne presents a prevalence of 60-70/10,000.

For autistic disorder, Fombonne says:

The correlation between prevalence and year of publication was statistically significant and studies with prevalence over 7/10,000 were all published since 1987. These findings point towards an increase in prevalence estimates in the last 15-20 years.

For PDD-NOS, Fombonne explains that it is next to impossible to get accurate prevalence rates as:

This group has been much less studied in previous epidemiological studies…

Again, for Aspergers, Fombonne says that AS specific epidemiological studies are sparse but, in something of a surprise:

By contrast, other recent autism surveys have consistently identified smaller numbers of children with AS than those with autism within the same survey. In 9 out of 10 such surveys, the ratio of autism to AS prevalence in each survey was above unity, suggesting that the prevalence of AS was consistently lower than that for autism. How much lower is difficult to establish from existing data, but a ratio of 3 or 4 to 1 would appear an acceptable, albeit conservative, conclusion based on this limited available evidence. This translates into a prevalence proportion for AS which would be ⅓ to ¼ that of autism. We therefore used for subsequent calculations an estimate of 6/10,000 for AS, recognizing the strong limitations of available data on AS.

Lastly, for CDD:

Eight studies provided data on childhood disintegrative disorder (CDD). Prevalence estimates ranged from 0 to 9.2/100,000. The pooled estimate based on eight identified cases and a total surveyed population of 406,660 children, was 2.0/100,000. The upper-bound limit of the associated confidence interval (4.0/100,000) indicates that CDD is a very rare condition, with about 1 case to occur for every 103 cases of autistic disorder.
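The pooled CDD estimate is easy to check with simple arithmetic on the figures quoted above (the 1:103 ratio implied by it is my own derived illustration, not a quote from the paper):

```python
# Quick arithmetic check of the pooled CDD estimate quoted above.
cases = 8
population = 406_660

per_100k = cases / population * 100_000
print(f"Pooled CDD prevalence: {per_100k:.1f} per 100,000")  # ~2.0

# A 1-in-103 ratio to autistic disorder would imply an autistic
# disorder prevalence of roughly 200/100,000 (i.e. ~20/10,000):
print(f"Implied autistic disorder prevalence: {per_100k * 103:.0f} per 100,000")
```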

Fombonne then tackles the question everyone wants an answer to – is there an autism epidemic?

In order to answer this accurately, he explains that there has to be tight control over incidence estimates (the number of new cases occurring in a population over a period of time) and prevalence (the proportion of individuals in a population who suffer from a defined disorder). Failure to control these gives false results. Bearing this in mind, Fombonne goes through the five approaches taken so far to try and determine whether there’s an autism epidemic or not.

1) Referral Statistics.
Trends in time for referral statistics are not reliable. They fail to control for things such as referral patterns, availability of services, heightened public awareness, decreasing age at diagnosis and changes over time in diagnostic concepts and practices. An example of the issues from referral statistics is:

Strong evidence of “diagnostic switching” was produced in California and in all US states indicating that a relatively high proportion of children previously diagnosed as having mental retardation were now identified as having a PDD diagnosis. Decreased age at diagnosis has also been shown to contribute to the rising numbers of children diagnosed with PDD. In the UK, Jick and Kaye (62) have shown that the incidence of specific developmental disorders (including language disorders) decreased by about the same amount that the incidence of diagnoses of autism increased in boys born from 1990-1997. A more recent UK study has shown that up to 66% of adults previously diagnosed with developmental language disorders would meet diagnostic criteria for a broad definition of PDD.

2) Comparison of cross-sectional epidemiological surveys
If I’m understanding his point here (and please correct me if I’m not) Fombonne is saying that too many epidemiological studies are uniquely designed – not enough attempt to replicate a previous study – and hence:

The most convincing evidence that method factors could account for most of the variability in published prevalence estimates comes from a direct comparison of 8 recent surveys conducted in the UK and the USA. In each country, 4 surveys were conducted around the same year and with similar age groups. As there is no reason to expect huge between-area differences in prevalence, prevalence estimates should therefore be comparable within each country. However, there was a six-fold variation in prevalence for UK surveys, and a fourteen-fold variation in US figures. In each set of studies, high estimates derived from surveys where intensive population-based screening techniques were employed whereas lower prevalence proportions were obtained from studies relying on passive administrative methods for case finding. Since no passage of time was involved, the magnitude of these gradients in prevalence can only be attributed to differences in case identification methods across surveys.

3) Repeat surveys in defined geographical areas
So this is the opposite of the above – these are studies that are replicated as closely as possible. The issue here is that there are simply not _enough_ of these studies to form a definite conclusion. However, it may be worth noting that of the two studies Fombonne highlighted as being carried out in exactly the same way, in exactly the same place, on exactly the same age cohort – just at two different times – one showed no increase in prevalence, whilst the other showed no increase at 4 sites and an increase at 2 sites.

4) Successive birth cohorts
This means in very large surveys with a wide age range, if the proportion of people who have autism rises this _could_ be a rise in incidence and therefore a good hint that there is an epidemic. I say _could_ as other possible causes need to be ruled out first.

…two large French surveys [used this method]. The surveys included birth cohorts from 1972 to 1985…, and, pooling the data of both surveys, age-specific prevalence showed no upward trend.

A US survey _did_ show an upward trend but:

…the increase was not specific to autism. These analyses also showed a marked period effect that identified the early 1990s as the period where the prevalence estimates started to go up in all ages and birth cohorts, coinciding closely with the inclusion of PDDs in the federal Individual with Disabilities Educational Act (IDEA) funding and reporting mechanism in the US.

5) Incidence studies
The few incidence studies did show incidence trends rising over short periods of time. As noted in point 4) above, this _could_ be attributed to an autism epidemic. However –

…none of these studies investigations could determine the impact of changes over time in diagnostic criteria, improved awareness and service availability on the upward trend.

Contrary to what people who _want_ there to be an autism epidemic may claim, these are non-trivial reasons. It stands to reason that if (for example) Birmingham, UK – the country’s second city – goes from having zero service availability and no means of diagnosis in 1960 to having numerous types of service availability, both publicly and privately funded, and a _lot_ of means of diagnosis in 2000, there will be a _lot_ more autistic people in Birmingham. A hell of a lot. When we then consider that the diagnostic criteria have widened massively, we go from a hell of a lot more autistic people to a _whole hell_ of a lot. If we _also_ consider that people who used to carry one kind of diagnosis are now being swapped to autism, then we go from a whole hell of a lot to a descriptive term beyond my ability. This isn’t even science – it’s basic common sense. The only issue is – ‘a whole hell of a lot’ is not a very accurate measurement.

Fombonne closes by saying that – based on the available data – we still cannot really say one way or the other if there has been an autism epidemic. Remember when you read the quote below that it’s _incidence_ that gives us an epidemic.

Current evidence does not strongly support the hypothesis of a secular increase in the incidence of autism but power to detect time trends is seriously limited in existing datasets. Whilst it is clear that prevalence estimates have gone up over time, this increase most likely represents changes in the concepts, definitions, service availability and awareness of autistic-spectrum disorders in both the lay and professional public. To assess whether or not the incidence has increased, method factors that account for an important proportion of the variability in prevalence must be tightly controlled. The possibility that a true change in the underlying incidence has contributed to higher prevalence figures remains, however, to be adequately tested.

Autism, HBOT, and the new study by Rossignol et al.

21 Mar

I recently read the BMC Pediatrics article, “Hyperbaric treatment for children with autism: a multicenter, randomized, double-blind, controlled trial”1. I know this paper is attracting a lot of attention in the media, and it is certainly being ballyhooed about the internet. Hell, I’ve even received e-mail spam about this study! But I’m sorry to say, I don’t really share the excitement. In fact, I see what looks like a pretty significant error in the methodology of this study. It’s one of those types of potential errors that stand out like a strobe light or a siren – it’s really tough for me to pretend it’s not there.

Once again, I’m going to ask readers to set aside, for the moment, anything they may know about the role of hemoglobin in oxygen transport and how the minute increases (probably around 3-4%) in total blood oxygen content afforded by this kind of hyperbaric therapy, or simple O2 therapy for that matter, are probably pretty likely to be insignificant.
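For readers curious where a figure like “3-4%” comes from, it can be sketched with the standard arterial oxygen content equation. The haemoglobin level, saturation values, and simplified alveolar gas calculation below are my own assumed textbook-typical numbers, purely illustrative:

```python
# Sketch supporting the "~3-4% increase in blood oxygen content" figure.
# Uses the standard arterial O2 content equation with ASSUMED typical
# values (haemoglobin, saturation, simplified alveolar gas equation).

HB = 15.0          # haemoglobin, g/dL (assumed typical)
PH2O = 47.0        # water vapour pressure, mmHg
PACO2_TERM = 50.0  # PaCO2 / respiratory quotient (40 / 0.8), mmHg

def cao2(fio2: float, pressure_ata: float, sao2: float) -> float:
    """Arterial O2 content (mL O2 / dL blood): Hb-bound + dissolved."""
    pb = 760.0 * pressure_ata                 # barometric pressure, mmHg
    pao2 = fio2 * (pb - PH2O) - PACO2_TERM    # simplified alveolar gas eqn
    return 1.34 * HB * sao2 + 0.0031 * pao2

baseline = cao2(0.21, 1.0, 0.98)   # room air, haemoglobin ~98% saturated
chamber = cao2(0.24, 1.3, 1.00)    # 24% O2 at 1.3 ATA, fully saturated

pct_gain = 100 * (chamber - baseline) / baseline
print(f"Increase in total O2 content: {pct_gain:.1f}%")  # ~3%
```

Because haemoglobin is already nearly saturated on room air, almost all the extra oxygen ends up as a tiny amount dissolved in plasma – hence the small percentage.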

Both the paper and ClinicalTrials.gov2 list the Center for Autism Research and Education, Phoenix, Arizona, as a study location. This is a problem, because the stated treatment pressure in the study (1.3 ATM) seems highly unlikely to actually be achievable in Phoenix with the equipment that was apparently used for this study.

As described in the section titled, “Interventions”:

“These procedures included covering control switches, inflating and deflating the chambers to simulate pressure changes, and masking the sounds from the chambers.”

The use of inflatable monoplace hyperbaric chambers is a clear indication that the actual total pressures (and quite likely the results of this study) would have been affected by the ambient air pressures at the times and locations of treatment. In fact, the ambient air pressure is the largest component of the stated treatment pressure in this study (ambient pressure + added treatment pressure = total treatment pressure).

Ambient pressure

Local atmospheric pressure is typically reported as sea-level pressure3 for its utility to aviation, and the meaningful interpretation of weather maps, etc., but the actual station pressure is affected by the elevation. The expected ambient atmospheric pressure, corrected for altitude, (or station pressure) in Phoenix, Arizona4 is 28.69 in Hg (where there is a modest elevation of 1161’ AMSL). Wanting to give this paper the benefit of the doubt, and knowing that “high pressure” weather is typical of the Phoenix climate, I looked at 30-day data5 for actual station pressure in Phoenix at a station of slightly lower altitude than the Center for Autism Research and Education. The 30-day mean station pressure is 28.81 in Hg, so I’ll use that one for calculations, as it will yield results more likely to be in the study’s favor.

Added treatment pressure

The actual operating pressure of the inflatable chambers, as stated by the manufacturer, is 4 PSI. 6,7 This pressure is also indicated on the Center for Autism Research and Education’s website:

“The chambers used at care utilize a pressure of 4 psi.”8

Total treatment pressure

The total treatment pressure can be easily calculated with the following conversions:
in Hg * 0.491 = ambient PSI
ambient PSI + PSIG = total PSI
total PSI * 0.068 = ATA

For Phoenix, Arizona, this gives a calculated total treatment pressure of 1.23 ATA.

28.81 * 0.491 = 14.15 PSI
14.15 PSI + 4 PSIG = 18.15 PSI
18.15 PSI * .068 = 1.23 ATA

Damn, that’s a pretty big difference from the paper’s stated 1.3 ATM – representing an addition of only .23 ATM (instead of .30 ATM) above mean sea-level pressure of 1 ATM.

I’ve corresponded with the lead author of this study in the past, and he stated that he observes gauge pressure of 4.15 PSI. Despite the manufacturer specs, the FDA-cleared medical device premarket notification, and the Center for Autism Research and Education’s website (which all indicate operating pressure of 4 PSI), and wanting to give the benefit of the doubt, I’ll use 4.15 PSI for the next calculation, as it will be more likely to yield results in the study’s favor.

28.81 * 0.491 = 14.15 PSI
14.15 PSI + 4.15 PSIG = 18.30 PSI
18.30 PSI * .068 = 1.24 ATA
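The arithmetic above can be sketched in a few lines of Python (the constants are the standard conversions used in this post: 1 in Hg ≈ 0.491 PSI, and 1 ATA ≈ 14.7 PSI, so PSI × 0.068 ≈ ATA):

```python
# Standard conversion factors used in the post.
IN_HG_TO_PSI = 0.491   # inches of mercury -> PSI
PSI_TO_ATA = 0.068     # PSI -> atmospheres absolute (1 / ~14.7)

def total_ata(station_in_hg, gauge_psi):
    """Ambient station pressure (in Hg) plus chamber gauge pressure (PSIG),
    converted to total pressure in atmospheres absolute (ATA)."""
    ambient_psi = station_in_hg * IN_HG_TO_PSI
    return (ambient_psi + gauge_psi) * PSI_TO_ATA

# Manufacturer-specified 4 PSI chamber at Phoenix's 30-day mean station pressure:
print(round(total_ata(28.81, 4.0), 2))   # 1.23 ATA
# Lead author's reported gauge pressure of 4.15 PSIG:
print(round(total_ata(28.81, 4.15), 2))  # 1.24 ATA
```

Either way, the result rounds to well below the stated 1.3 ATM.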

It could be argued that the treatment pressure for the other study locations was properly rounded up to 1.3 ATM (even though the actual pressures were quite likely considerably lower). However, even with all the calculations purposely leaned in favor of a higher number, the study’s stated treatment pressure for Phoenix, Arizona should properly have been rounded to 1.2 ATA! This suggests an overstatement of the added treatment pressure for the Phoenix location of 50% (.3 ATM is 150% of .2 ATM). Even if given the benefit of the doubt yet again, and an exception to proper rounding were made solely for the Phoenix location in this study, the study’s likely overstatement of added treatment pressure for Phoenix is still a full 25% (.3 ATM is 125% of .24 ATM; 25% more added pressure above 1 ATM was claimed in this paper than was probably delivered).

I think this is a big enough boo-boo that the editors of BMC Pediatrics should call for detailed errata. In the interest of scientific accuracy, it would seem prudent for BMC Pediatrics to:

1. Clarify for its readership and the scientific community, that the stated pressure of 1.3 ATM in this study is rounded up, and includes the ambient air pressure, or alternatively, state the estimated pressure in terms of ATA.

2. Clarify for its readership and the scientific community, that the stated pressure of 1.3 ATM in this study is an estimated pressure, since no actual measurements of ambient station pressure for the locations, and dates/times of treatments were reported.

3. Note for its readership and the scientific community, that the stated pressure of 1.3 ATM was not likely to be uniformly achievable across all study locations due to the use of inflatable hyperbaric chambers and changes in elevation (and atmospheric pressure) across study locations, potentially confounding the results of this study.

4. Note for its readership and the scientific community, that estimated pressures in the placebo control group are affected by these same issues that affect the treatment group, potentially confounding the results of this study further.

What do you think?

1 BMC Pediatrics 2009, 9:21. doi:10.1186/1471-2431-9-21
http://www.biomedcentral.com/1471-2431/9/21/abstract

2 http://clinicaltrials.gov/ct2/show/NCT00335790

3 Federal Meteorological Handbook No. 1 – Table 11-2
http://www.nws.noaa.gov/oso/oso1/oso12/fmh1/fmh1ch11.htm

4 LAT/LON 33.5º N 112.08º W

5 http://www.wrh.noaa.gov/mesowest/getobext.php?wfo=psr&sid=KPHX&num=720

6 Medical device pre-market notification (FDA-cleared)

Click to access K001409.pdf

7 Manufacturer product sheet

Click to access vitaeris-lowres2007-8.pdf

8 http://www.center4autism.org/therapyHBOT.asp

Poling turns his back on genetics

13 Mar

It’s been a year since the concession in the Hannah Poling case was made public. I’ve been thinking that we would likely see some discussion on it again–especially since the Bailey Banks case didn’t turn into the media event that the autism-is-caused-by-vaccines groups would have liked.

OK, I’m not that good at predicting events, but I was thinking that after a year it was time to write a few posts, over a couple of weeks, about some issues from the Hannah Poling case. So I wasn’t totally surprised when Dr. Jon Poling came out with an op-ed piece in the Atlanta Journal-Constitution, “Blinders won’t reduce autism”.

When I read this last night, I thought “why blog this?” But one line in there bugged me–it’s a common misconception, but one that a doctor, heck a neurologist, should never hold: the idea that genetic conditions aren’t treatable.

Here’s the quote:

We should be investing our research dollars into discovering environmental factors that we can change, not more poorly targeted genetic studies that offer no hope of early intervention

Wow. I guess we should tell Dr. Randi Hagerman at the UC Davis MIND Institute and everyone else working on fragile-X (a genetic condition that is on the verge of demonstrating valuable interventions) to stop their work?

And, why is it that people who claim to support “gene-environment” interactions seem to have disdain for the “gene” part? How are we supposed to separate the various autism subgroups without identifying the genes? And, if we identify genes, won’t their function give us some idea of what environmental causes might be worth studying?

OK… I’ve got that out of my system….

As long as we are here, we might as well look at some other fallacies. A good place to start is the Autism Street blog, which covered the Poling op-ed. It’s well worth the read, as he covers some things I won’t.

One thing we do both cover–this statement by Dr. Poling:

Public school systems are drowning in the red ink of educating increasing numbers of special-needs students.

Autism Street has a nice graph (again, I encourage you to take a look), but here I’ll just point out that this assertion by Dr. Poling about the increasing numbers of special education students is just plain false. The percentage of the student population in Special Education has remained remarkably constant over the past 10 years or so. The cost of some of the autism therapies (ABA in particular) has likely driven costs up, but that isn’t what Dr. Poling said.

The main reason I was going to avoid discussing Dr. Poling’s op-ed is the fact that it is a rather poorly disguised attempt to air his ongoing battle with Dr. Paul Offit.

Dr. Poling discusses how Dr. Andrew Zimmerman is a hero to the cause because of a recent book he edited. He then makes Dr. Offit the villain for Autism’s False Prophets:

On the other hand, Dr. Paul Offit, the vaccine inventor whose Rotateq royalty interests recently sold for a reported $182 million, has written a novel of perceived good and evil called “Autism’s False Prophets.”

Frankly, I think Dr. Poling should have listened to that little voice in his head (which I hope was there) saying, “Don’t take the cheap shots”. By that I mean that describing Dr. Offit’s book as a novel was rather silly and just points out that this is a personal attack by Dr. Poling. It doesn’t add, it just detracts.

If you think calling that a personal attack is a stretch, here’s a bit of telling imagery:

In the story, Offit takes no prisoners, smearing characters in the vaccine-autism controversy as effortlessly as a rich cream cheese.

Actually, I thought that Dr. Offit gave people like Andrew Wakefield a lot of respect, considering the low quality of their research and their public actions.

I was struck by the “cream cheese” allusion. Anyone recall this?

Paul Offit is the Philadelphia cream cheese of the autism debate — he smears so effortlessly

–Dan Olmsted, September 13, 2008

It stuck in my mind because it was so bad. Seriously, I had some people outside of the autism world read that bit by Dan Olmsted and asked them what they thought he was trying to say. The readers didn’t come away with Mr. Olmsted’s message (that Dr. Offit smears others easily). Instead, they came away thinking Dan Olmsted was saying that it was easy to smear Paul Offit!

My guess is that Mr. Olmsted wasn’t writing for anyone other than the Age of Autism regulars who would overlook his clumsy writing for a chance to poke fun at Dr. Offit, so he probably isn’t bothered.

I guess Dr. Poling thought it was a good analogy.

But, back to my own clumsy writing. Dr. Poling makes this statement:

As both parent and doctor, I cannot fathom turning my back on a child nor science, in order to avoid inconvenient questions about vaccine safety or any other reasonable environmental factor.

For my part, I wonder how a neurologist can turn his back on considering genetic conditions worthy of intervention. I wonder how a scientist who supports the idea of gene-environment interactions can turn his back on genetics.

Dr. Poling closes with this statement:

In the end, logic and reason will prevail over politics and profits.

God, I hope so. Unfortunately, Dr. Poling seems to have allied himself with groups who have abandoned logic. Generation Rescue and David Kirby come readily to mind.