More Hot Air about HBOT

6 Apr

A few weeks ago, BMC Pediatrics published an article that purports to show that Hyperbaric Oxygen Therapy (HBOT) can produce “…significant improvements in overall functioning, receptive language, social interaction, eye contact, and sensory/cognitive awareness…” in autistic children. This study (Rossignol et al, 2009) is billed as a “…multicenter, randomized, double-blind, controlled trial.”

It’s all that and much, much less.

Let’s start by looking at the six “centers” where this research was carried out.

The Centers

The International Child Development Resource Center (ICDRC):

This imposing name is attached to a rather less imposing edifice. The ICDRC, the brainchild of Dr. Jeffrey Bradstreet, is located in a strip mall in Melbourne, Florida, where it not only carries out “cutting-edge research” but also sells a complete line of “supplements” and treats autistic children with a dizzying array of “alternative”, “biomedical” and “integrative” therapies, including HBOT.

Daniel Rossignol MD (Family Practice), Lanier Rossignol (Nurse Practitioner) and Scott Smith (Physician’s Assistant) were the authors from the ICDRC.

The Center for Autism Research and Education (CARE):

This “center” is located in Phoenix, Arizona and has – according to its website – a single practitioner, Cynthia Schneider, MD (OB/Gyn), who is also an author on this paper. One of the “integrative” therapies this “center” offers is HBOT.

One of the other authors, Sally Logerquist, is a PhD psychologist who – according to the paper – is also associated with CARE, but also appears to run social skills therapy groups for autistic children using the “Logerquist Excellent Attitude Program” (LEAP).

True Health Medical Center:

It’s rather difficult to find anything about this “center”, apart from the fact that it is located in Naperville, Illinois – in what appears to be an office complex. Anju Usman, MD (Family Practice) is the author associated with this location.

Neubrander Center:

Although not officially called a “center”, the office of James Neubrander, MD (Pathology) is apparently one of the “centers” of this study. His office is located in the Menlo Park Mall (near Macy’s) and offers – you guessed it! – HBOT as a treatment for autism.

Princess Anne Medical Associates:

A Family Practice medical group in Virginia Beach, Virginia, this “center” is the home of Eric Madren, MD (Family Practice). It’s not clear if this four-physician practice offers HBOT.

The Rimland Center for Integrative Medicine:

A small, one-physician “center” in Lynchburg, Virginia, this is the practice location of author Elizabeth Mumper, MD (Pediatrics). Not surprisingly, this “center” sells HBOT services for autistic children.

So, of the six “centers” involved in this study, five are single-physician operations. The remaining “center” has two physicians (three, if you count the naturopath).

I’m underwhelmed.

Well, what about the research itself? Maybe that’s better than the “facilities” might suggest. Let’s take a look.

The Subjects

This study initially enrolled 62 children (33 treatment; 29 control), but only 29 of the treatment group and 26 of the control group finished all 40 sessions. For reasons that pass my understanding, one treatment subject who finished only 9 sessions was included in the analysis. The authors stated that including this subject did not alter the results, which raises the question: “Why include this subject if it made no difference?”

Outcome measures

The authors used the Aberrant Behavior Checklist (ABC), the Clinical Global Impression (CGI) scale and the Autism Treatment Evaluation Checklist (ATEC) as their outcome measures. All except the ATEC are widely accepted for use in autism treatment trials.

The ABC is a 58-question checklist of – surprise! – aberrant behaviors which are each given a score from “0” (“not at all a problem”) to “3” (“severe problem”). This test has been used – and validated – in a number of disorders, including autism. It gives a global score as well as five subscales: a total of six measures.

The CGI is a generic rating scale used in a variety of clinical trials. For each parameter (e.g. “overall functioning”, “sleep pattern”), the rater gives a score between “1” (“very much improved”) and “7” (“very much worse”). The authors had both the treating physician and the parents rate the subjects on overall improvement and eighteen discrete parameters: a total of 38 measures in all (19 by the physician and 19 by the parents).

The ATEC was developed by Bernie Rimland and Stephen Edelson and has not been validated. In fact, it has only been used in two published studies – one by Rossignol et al. The ATEC has 25 questions on which the evaluator rates the subject on either a three-point (“not true”, “somewhat true”, “very true”) or four-point (“not a problem”, “minor problem”, “moderate problem”, “serious problem”) scale. It provides a total score and four subscales: a total of five measures.

In all, each subject had a total of 49 evaluation measures (CGI scores and the change in ABC and ATEC scores), of which 47 are independent. The importance of this will become apparent in the section on statistical analysis.


Statistical analysis

As I mentioned above, the decision to include one treatment subject who completed only nine sessions was curious. It is unclear why the authors included this subject but excluded the other three treatment subjects and three control subjects who also failed to complete the full course of the study. The smart thing – and the proper response – would have been to drop this subject from the analysis.

The authors’ method of analyzing the CGI scales was also curious. Rather than simply using the scores as they were provided, they took the scores and subtracted them from four (the “no change” score). There are a few problems with this.

For starters, the scores are not linear – the difference between “much improved” and “very much improved” is not necessarily the same as between “no change” and “minimally improved”. Nor is the difference between “no change” and “much improved” twice the difference between “much improved” and “very much improved”. For that reason, these types of numerical scores are often referred to as “pseudo-numbers”.

This may seem like nit-picking, but it is a serious concern. Imagine, if you will, that the numbers were replaced by colors. Is the difference between green and orange twice the difference between orange and red? If half of a population of birds are blue and the other half are yellow, is the “average” bird green? The simple fact is that it is not appropriate to treat these “scores” as though they were real numbers, to be added, subtracted and averaged.

Secondly, it appears that the authors used parametric statistics for their analysis of the CGI scores. This is a problem since – as I indicated above – it is nonsensical to do math on pseudo-numbers. I don’t have the raw numbers, so it isn’t possible for me to calculate the absolute impact of this mistake for all of the CGI subclasses, but I can figure out the raw numbers for one group, so let’s look at that one.

It took a little work, but the authors gave enough clues to tease out the raw numbers in the physician “overall functioning” CGI score. The treatment group had an “average” of 2.87 and the control group’s “average” was 3.62; using the unaltered data, a t-test [Note: not an appropriate use of the t-test] gives a p-value of 0.0006, not far from what the authors report. When a more appropriate statistical test [the Mann-Whitney U-test] is used, the p-value is 0.002, very different from the reported 0.0008. While this is still less than the threshold p-value of 0.05, see below for a discussion of multiple comparisons.

All of these statistical analyses of the CGI scores ignore the fact that these are pseudo-numbers and need to be treated as discrete groups rather than as actual numbers. In truth, even the ABC and ATEC scores should have been treated this way as well, although it is fairly common practice to treat such multi-factor scores as real numbers. A Chi-square test or Fisher Exact test would be the ideal test, but the problem with that is that the treatment group has one score of “1” (very much improved) and the control group doesn’t. Likewise, the control group has two subjects with a score of “5” (minimally worse) and the treatment group has none. This prevents a Chi-square or Fisher test from comparing each score independently.

One solution is presented by the authors themselves, although they apparently didn’t use it. In their discussion of the CGI, the authors said:

“Children who received a score of ‘very much improved’ or ‘much improved’ on the physician CGI overall functioning score were considered to be ‘good responders’ to treatment.”

If we “bin” the scores into “good responders” and “others”, we find that there were 9 (out of 30 – 30%) “good responders” in the treatment group compared to 2 (out of 26 – 8%) in the control group. Unfortunately, this is not a statistically significant difference (p = 0.08) in the (Yates) Chi-square test and barely reached significance (p = 0.05, but see below) in the Fisher Exact test.
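
For anyone who wants to check the binned comparison, a two-sided Fisher Exact test can be computed from the hypergeometric distribution with nothing but the Python standard library. This is a sketch, using the 9-of-30 vs. 2-of-26 “good responder” counts above:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher Exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1 = a + b               # size of the first group
    col1 = a + c               # total number of "successes"
    n = a + b + c + d          # total subjects
    denom = comb(n, row1)

    def p_table(x):
        # hypergeometric probability of x successes in the first group
        return comb(col1, x) * comb(n - col1, row1 - x) / denom

    p_obs = p_table(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    # include every table as extreme as (or more than) the observed one;
    # the small slack guards against floating-point ties
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# 9 of 30 treatment vs. 2 of 26 control "good responders"
p = fisher_exact_two_sided(9, 21, 2, 24)
```

Run on these counts, the result lands right around the p = 0.05 boundary quoted above – and that is before any correction for multiple comparisons.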

An even bigger problem in the statistical analysis was the failure to correct for multiple comparisons. This problem was brought up by one of the reviewers, and the authors responded by eliminating a table. They did not make the appropriate corrections.

The reason that multiple comparisons are a problem is that the analysis for statistical significance is based on probability. If the probability (the p-value) that the differences between the two groups (treatment and control) is due to random chance is equal to or less than 5%, that difference is considered to be “statistically significant” and accepted as real. That means that there is still a 5% (or less – look to the p-value) chance that the difference is due to chance and not real.

If multiple comparisons are made on the same group of subjects, the probability that one (or more) of them will be “statistically significant” by chance starts to climb. If 14 comparisons are made, the chance of an erroneous “statistical significance” is over 50%. If 47 independent comparisons are made – as in this study – the chance of an erroneous “statistical significance” is over 90%.

For this reason, it is standard procedure to apply a correction for multiple comparisons. The most well-known (and simplest) of these is the Bonferroni Correction, which changes the threshold for statistical significance by dividing it by the number of comparisons. In the case of this study, the threshold (normally p less than or equal to 0.05 or 5%) is reduced to 0.001.
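
The arithmetic is easy to verify. A minimal sketch, assuming (as the study design implies) that the 47 comparisons are independent and each is tested at the usual 5% level:

```python
# Probability of at least one spurious "statistically significant" result
# among m independent comparisons, each tested at level alpha
def family_wise_error_rate(m, alpha=0.05):
    return 1 - (1 - alpha) ** m

fwer_14 = family_wise_error_rate(14)   # just over 50%
fwer_47 = family_wise_error_rate(47)   # over 90%

# Bonferroni correction: divide the significance threshold by the
# number of comparisons
bonferroni = 0.05 / 47                 # roughly 0.001
```

With 47 comparisons, a study that finds *nothing* real should still be expected to turn up a “significant” result or two.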

Applying the appropriate correction for multiple comparisons changes the results of this study significantly. Only the physician CGI scores for overall functioning and receptive language reach significance – and these numbers are already suspect because they were improperly handled to begin with. In fact, as I have shown above, the CGI “overall functioning” p-value wouldn’t reach significance. It is possible that – if the proper statistical tests were used – the CGI score for “receptive language” would also fail to reach significance.

Another curious thing. The authors asked the parents after the study whether they thought their child was in the treatment or the control group. Rather than say that the parents’ guesses were no better than random chance (i.e. 50%), the authors stated:

“…there was no significant difference between the two groups in the ability of the parents to correctly guess the group assignment of their child.”

As I said, this was a curious way to put it. As I read it, all it says is that the two groups of parents were equally able to guess which group their child was assigned to. That could be 50% accuracy (which would be equal to chance), but 90% or 99% accuracy – if both groups were that accurate – would also fit that description.

Now, this could simply be clumsy phrasing by the authors, or it could be a way to make it sound like their blinding was successful when it actually was not.
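
To see why the phrasing matters, here is a quick sketch in Python (standard library only). The guess counts below are hypothetical – the paper does not, as far as I can tell, report the raw numbers – but they show how “no significant difference between the two groups” is perfectly compatible with completely failed blinding:

```python
from math import erf, sqrt

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# HYPOTHETICAL guess counts (not from the paper): 27 of 30 treatment-group
# parents and 23 of 26 control-group parents correctly guess their child's
# group assignment.
k1, n1 = 27, 30
k2, n2 = 23, 26

# Difference in accuracy between the two groups (two-proportion z-test):
p1, p2 = k1 / n1, k2 / n2
pooled = (k1 + k2) / (n1 + n2)
z_groups = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))

# Pooled accuracy against the 50% expected if blinding were intact:
z_chance = ((k1 + k2) - (n1 + n2) * 0.5) / sqrt((n1 + n2) * 0.25)

# two_sided_p(z_groups) is large -- "no significant difference between
# the groups" -- while two_sided_p(z_chance) is tiny: both groups beat
# a coin flip, i.e. the blinding failed.
```

In other words, the comparison the authors describe answers the wrong question; the right question is whether the pooled guesses beat chance.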


This study may have collected some useful data, but its analysis of that data rendered it useless. The CGI scores – where the only statistically significant result was (possibly) seen – were improperly manipulated and the wrong statistical analysis was used.

The other issue is that there is no discussion of why HBOT is thought to be superior to providing the same partial pressure of oxygen at room pressure. This study used 24% oxygen at 1.3 atm, which gives the same partial pressure of oxygen as 31% at sea level. This concentration of oxygen can be easily attained with an oxygen mask or simple oxygen tent – both of which are vastly less expensive than HBOT.
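
The equivalence is just Dalton’s law of partial pressures – a back-of-the-envelope sketch:

```python
# Dalton's law: partial pressure of O2 = (fraction of O2) x (total pressure)
fio2_chamber = 0.24          # 24% oxygen, as used in the study
chamber_pressure = 1.3       # atm, the "hyperbaric" condition

po2 = fio2_chamber * chamber_pressure     # 0.312 atm of oxygen

# Oxygen fraction needed to deliver the same pO2 at sea level (1.0 atm):
equivalent_fio2 = po2 / 1.0               # 0.312, i.e. about 31% oxygen
```

An ordinary oxygen mask delivers well above 31% oxygen, at a tiny fraction of the cost of a hyperbaric chamber.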

If the authors are arguing that the mild pressure of their inflatable HBOT chambers contributes to the treatment effect, they need to look at the literature on cell membrane compressibility. For those who want to do the calculations at home, the bulk modulus of water (the major component of cells) is 21,700 atm. This means that a 0.3 atm increase in pressure will reduce the cell volume by 0.0014%. The bulk modulus of the lipid bilayer in cell membranes is around 30,000 atm. This means that an increase of 0.3 atm pressure causes a 0.0010% reduction in membrane volume. These are well below the threshold for any clinical effects.
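
For those doing the calculations at home, the compression is just the pressure change divided by the bulk modulus:

```python
# Fractional volume change under added pressure: dV/V = dP / K,
# where K is the bulk modulus (here expressed in atmospheres)
def percent_compression(delta_p_atm, bulk_modulus_atm):
    return delta_p_atm / bulk_modulus_atm * 100

water_cell = percent_compression(0.3, 21_700)      # ~0.0014% (cell interior)
lipid_membrane = percent_compression(0.3, 30_000)  # ~0.0010% (membrane)
```

Changes of a thousandth of a percent in cell volume are nowhere near anything that could plausibly produce a clinical effect.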

Real pressure effects on the central nervous system are seen at pressures over 19 atm. These effects are:

postural and intention tremors
fatigue and somnolence
myoclonic jerking
stomach cramps
decreased intellectual and psychomotor performance
poor sleep with nightmares
increased slow wave and decreased fast wave activity in EEG

None of these effects could be construed as “improvements”, even in autism.

So, this study fails to answer the following questions about HBOT and autism:

[1] Does HBOT improve any feature of autism?
[2] If so, is HBOT any better than supplemental oxygen (which is much cheaper)?

The only real effect of this study was to give a cover of legitimacy to practitioners who are already using HBOT to “treat” autism.



23 Responses to “More Hot Air about HBOT”

  1. FreeSpeaker April 6, 2009 at 03:47 #

    It is manufactured evidence used for sales hype.

    Nothing new.

  2. Do'C April 6, 2009 at 08:09 #

    Hi Prometheus,

    Thank you for taking a look at the stats and exposing some very real weaknesses. While such weaknesses aren’t necessarily welcome, the truth does count.

    I have a few questions if you don’t mind.

    What is your take on the authors’ apparent plausibility of benefit resting heavily on cerebral hypoperfusion in autism?

    What is your take on the authors’ 1 cited study for inhibition of pro-inflammatory cytokine release (in vitro) with respect to any likely clinical significance in autism?

    What is your take on the authors’ reason for inclusion of the ATEC, especially in light of the large number of multiple measures?

    Is there a possibly valid reason not to make corrections (such as Bonferroni) for the multiple measures, instead of simply sweeping a table under the rug (so to speak)?

  3. Joseph April 6, 2009 at 14:33 #

    The global impression scores are probably affected by subjectivity more than the ABC or the ATEC scores. It’s not surprising that’s where you’d see the bigger gaps if indeed the blinding was less than perfect.

    One thing that caught my attention was the differences in some intake characteristics between the groups:

    90.9% vs. 75.9% male.
    48.5% vs. 34.5% on meds.
    45.5% vs. 37.9% on ABA.

    This is probably not significant, but there’s a trend. I’m wondering if similar HBOT trials have been carried out but not published.

  4. _Arthur April 6, 2009 at 15:13 #

    Were the research HBOT sessions free for all the study subjects?

  5. Prometheus April 6, 2009 at 16:45 #


    Thanks for your questions!

    [1] The issue of possible “cerebral hypoperfusion” in autism rests on the finding that perfusion – as measured by SPECT or PET scanning – increases after HBOT. This makes it an exercise in circular reasoning.

    IF HBOT actually “works” for autism (the question being explored by this study), THEN the increase in perfusion seen after HBOT could be the mechanism. Citing the increase in perfusion as evidence that HBOT “works” for autism “closes the loop” without actually answering the question of whether HBOT “works” for autism.

    All we can tell from the SPECT and PET scan results is that HBOT increases cerebral perfusion. We don’t know if this change is greater, less or different in autistic people compared to the general population. If HBOT doesn’t “work” for autism, the increase in perfusion could be either irrelevant OR it could be a sign of something worse.

    One possible explanation for the increased cerebral perfusion seen after HBOT could be reactive hyperemia – the increased blood flow seen in the brain (and other tissues) after an injury. If HBOT were causing oxidative stress or damage in the brain, you would expect to see increased perfusion afterwards.

    [2] The Granowitz et al paper you are referring to (I think) found decreased secretion of interferon gamma in human whole-blood cultures at significantly higher pressures than in the Rossignol et al study.

    Interferon gamma is a cytokine and is seen in inflammatory responses, but its primary role is as a part of the innate immune system, fighting against viruses and intracellular bacteria. While reducing interferon gamma secretion might reduce inflammation, it would also reduce the immune response to viral infections.

    [3] The ATEC was used – I suspect – because it was developed by the founder of the DAN! movement and because it is FREE. More reliable measures, such as the ADI-R and ADOS are NOT free and also require training to use and more time to administer.

    The ATEC may be a good test, but it hasn’t been validated. The developers (Drs. Rimland and Edelson) have shown that it is internally consistent, which is NOT the same as showing that it can reliably distinguish between real improvement and wishful thinking. That work is yet to be done.

    According to ARI, James Adams – a PhD Engineer – is currently working to validate the ATEC. I’m sure that he will be able to bring his engineering expertise to bear on this question.

    [4] Not correcting for multiple comparisons is a common mistake made by novice researchers (and, I’m sad to say, even a few veterans). Based on the authors’ response to the reviewer, I got the impression that they did not understand the issue and thought that the reviewer was simply objecting to the table they subsequently removed.

    I hesitate to say that there is no legitimate reason to not correct for multiple comparisons because there may be some bizarre set of circumstances where that would be the correct thing to do. Let me say just that if the authors had a legitimate reason to not correct for multiple comparisons, they should have stated so in their paper.


    The standard in medical research is that the subjects are not charged for the treatments or testing. In their acknowledgments, the authors thanked the International Hyperbarics Association (IHA) – a trade group for HBOT practitioners – for funding that covered the use of chambers and technician salaries. I cannot tell if the subjects were charged or not.

    Of note, Jeff Bradstreet (Dan Rossignol’s boss), James Neubrander and Dan Rossignol are medical advisors to the IHA.

    I think that it would be fascinating to find out if the subjects – especially those who were in the placebo arm – were charged for treatment or testing.


  6. Sullivan April 6, 2009 at 19:46 #

    Rather than simply using the scores as they were provided, they took the scores and subtracted them from four (the “no change” score).

    Any time people start manipulating the data in odd ways, it raises a flag.

  7. Prometheus April 6, 2009 at 21:22 #


    After I “deconstructed” the physician “overall functioning” CGI scores, I ran the numbers in both the “raw” and the “cooked” form. It didn’t change the statistical significance, even using the wrong tests, like the Student t-test.

    I think they “cooked” the numbers because it made the results look better – in a purely visual sense.

    Rating scales that are “upside down” – where better outcomes have lower numbers – are “bad” in an advertising sense because a “good” outcome looks lower than a “bad” outcome. This is pure Advertising 101: manipulate the numbers so that your “product” has the highest bars in a bar-graph.

    By showing their data as the difference from the “no change” score, the better outcomes have higher bars. It’s visually more pleasing and makes for better PowerPoint slides at “conferences” given to the general public.

    The general public (which includes most parents of autistic children) won’t know that you can’t add or subtract pseudo-numbers and frankly wouldn’t care if you told them. But they will recognise that the bars for the treatment group were higher than those for the control group – even if they understand nothing else.

    Let’s be frank – this study wasn’t written for doctors or scientists to read. It was written for prospective clients (the parents of autistic children desperate for a “cure”) and potential jurors. It was, in short, written to convince the general public that HBOT is a “good thing” for autism.


  8. Sullivan April 6, 2009 at 23:10 #

    Let’s be frank – this study wasn’t written for doctors or scientists to read. It was written for prospective clients (the parents of autistic children desperate for a “cure”) and potential jurors. It was, in short, written to convince the general public that HBOT is a “good thing” for autism.

    If you are including insurance companies in the term “general public”, I would agree.

  9. MJ April 7, 2009 at 01:58 #

    “Any time people start manipulating the data in odd ways, it raises a flag.”

    You are right – your attempt to manipulate the data on the study for the, what is it, third time does raise a large red flag.

    Give it a rest with the pseudo science already. This is just getting sad.

  10. Joseph April 7, 2009 at 02:22 #

    @MJ: Explain how data of the study was manipulated. That’s a vague accusation. You need to elaborate.

  11. Do'C April 7, 2009 at 04:21 #

    MJ, have you figured out that I’m not the author of this article yet?

    I didn’t work with this study’s data either. I looked at flawed methodology, potential conflicts of interest, and the expense of pseudo-hyperbaric therapy compared to simple O2 therapy.

  12. Sullivan April 7, 2009 at 05:22 #

    Give it a rest with the pseudo science already. This is just getting sad.

    Ironic, neh?

  13. Do'C April 7, 2009 at 05:23 #

    That’s interesting that the Granowitz study was cited for a “pressure alone” effect. It was in vitro, and at 2.0 ATA, so I agree, it is fairly irrelevant to this study. Additionally, it was refuted, to some degree, by a second study the next year.

    Previous investigations have shown that increased atmospheric pressure can affect such cellular functions as interferon-γ secretion [16] and apoptosis [17,18]. To assess the effects of increased atmospheric pressure on cytokine production, monocyte-macrophages were cultured in 8.75% O2, 2.1% CO2 at 2.4 ATA (increased atmospheric pressure). We used 8.75% O2 and 2.1% CO2 so that cells at 2.4 ATA would be exposed to the equivalent of 21% O2, 5% CO2 at sea level. When compared to cells cultured in normoxia at sea level, up to 12 h of increased atmospheric pressure did not affect IL-1β or TNF-α synthesis (data not shown).

    Actually, my question number two was referring to this citation.

    Shiratsuchi H, Basson MD: Differential regulation of monocyte/macrophage cytokine production by pressure. Am J Surg 2005, 190(5):757-762.

    The authors apparently cited this in support of the observed improvement effect in the “placebo” group.

    If you don’t mind, I’d still like to post the question.

    What is your take on this study with respect to any likely clinical significance in autism in either the treatment or control group?

  14. passionlessDrone April 7, 2009 at 14:27 #

    Hi Prometheus –

    I was particularly interested in reading the peer review notes you provided. In other areas (maybe here?) I’ve seen the accusation leveled that the journal in question was a pay to publish venture, which I took to mean it was not peer reviewed. Maybe, instead, it is just that it has a relatively weak peer review process?

    Could you speak towards the differences in “toughness” in the peer review process? For example, when one reviewer apparently wanted better statistics, as opposed to just removal of a table, would other journals have resubmitted the paper back to the reviewers to determine if they felt the changes were sufficient to warrant publication? The devil is in the details, I suppose.

    - pD

  15. Prometheus April 7, 2009 at 19:48 #


    The pressure change in the Shiratsuchi et al (2005) study was 20 mmHg, roughly equal to 0.03 atm, the increased pressure the control group was exposed to. This is equivalent to a 740 foot change in altitude. In a more recent study – Shiratsuchi et al (2007) – they investigated the mechanisms behind the observed increase in phagocytosis in macrophages at slightly increased pressure (20 mmHg).

    A review of the literature reveals that this degree of increased pressure (20 mmHg) – which is equivalent to the increased tissue pressure seen during inflammation – actually seems to stimulate an inflammatory response, but the data are too sparse to claim that the transient increase in pressure seen in the control group (20 mmHg) could be stimulating neuroinflammation.

    Rather than explaining any improvements seen in the control group, these studies further reinforce my concern that the increased blood flow seen after HBOT may be the result of increased neuroinflammation. At the very least, it is a weak “alibi” for not finding a greater difference between the two groups.


    BioMed Central (BMC) Pediatrics charges $1515 per article for publication. It does this so that it can remain “open-access” and still pay the bills. Instead of the readers paying the freight, the authors do.

    However, BMC journals do have peer review. This is in contrast to Medical Hypotheses – another journal that charges for publication – which does not use peer review.

    Peer review is not a panacea – it is only as good (or as “tough”) as the reviewers and the editor make it. Clearly, the editor fell down on the job in this case because a serious flaw (failure to correct for multiple comparisons) was left uncorrected.

    This is not a problem seen only in BioMed Central journals. Reviewers may not notice problems because they are reading too many articles or because they lack sufficient expertise in a certain area. Since the editor decides if the corrections made by the authors are good enough and will only send them to the reviewers if he/she has concerns, the expertise and work-load of the editor plays a role as well.

    Finally – and this isn’t discussed enough, I think – a journal that is paid by the authors is under significant pressure to fill the “pages” in order to pay the bills. It tends to make the journal more “sensitive” to the authors, possibly at the expense of good peer review. “Old style” journals – which get their revenue from subscribers and advertisers – are under different pressures, but generally tend to err on the side of keeping the journal’s reputation intact in order to attract readers (and thereby advertising revenue).

    In truth, I’ve read crap studies in “traditional” paper journals and I’ve read a lot of good studies in the open access online journals. As I’ve said before, peer review will never be infallible until we remove humans from the process.


  16. passionlessDrone April 7, 2009 at 21:30 #

    Hi Prometheus –

    How frequently are you able to view the exchanges between peer reviewers and/or authors? I’ve never seen that kind of thing available before; probably because I haven’t known where to look, but also because for whatever reason, I’d always figured this wasn’t generally available. Anyways, off tangent.

    As I’ve said before, peer review will never be infallible until we remove humans from the process.


    - pD

  17. Dr Aust April 7, 2009 at 22:33 #

    Nice critique of a predictably poor study, Prometheus. Sounds like this paper would make a good learning exercise for medical / final yr Bachelor degree / grad students… of the “how to read, deconstruct and critique a paper” variety.

    Second the comment above:

    Peer review is not a panacea – it is only as good (or as “tough”) as the reviewers and the editor make it.

    Many classic examples of dismal “peer” review can be found, needless to say, in the journals devoted to “alternative” therapies.

  18. storkdok-nos April 11, 2009 at 20:44 #

    Prometheus, as always, nicely done.

    I wonder what you think about this from the comment section at BMC:

    Normobaric oxygen vs Hyperbaric Oxygen

    Kenneth Stoller (06 April 2009) International Hyperbaric Medical Association email

    Oxygen dosing (which is modulated by pressure far more than absolute concentration) is what provides the signaling at a cellular level, including the subcellular level in the mitochondria, to facilitate the changes observed using a hyperbaric environment.

    Nowhere was this made more clear than in a 163 patient prospective study of carbon monoxide poisoning patients who were given different doses of supplemental normobaric oxygen. Six week post-treatment cognitive sequelae were unchanged – contrast that to the results when hyperbaric oxygen is used. (Weaver et al in reply to Scheinkestal et al: The role of hyperbaric oxygen in carbon monoxide poisoning. Emerg Med Australas 2004;16:394-399).

    The gas laws and hyperbaric medicine are not taught in medical school, so it is unfortunate that these questions have to come up, because there is a fundamental lack of understanding about what oxygen under pressure does. This problem is far greater than treating autistic children – this endemic lack of understanding about what hyperbaric oxygen does and when to use it has kept hyperbaric chambers out of Emergency Departments everywhere when there should be a chamber in every trauma center.

    In the case of autistic children and children with CP, the therapeutic window, or that pressure that will effect DNA signaling, appears to be very low – in many cases less than 1.3 ATA. That has been my experience with dozens of children so affected. However that is not the appropriate pressure (oxygen dose) for treating other conditions.

    Oxygen dose is poorly understood in hyperbaric medicine and even less understood outside of the field, again there is a lot of misapplication. Breathing in normobaric oxygen, regardless of the concentration is not comparable to using oxygen under pressure.

    Competing interests

    President of the International Hyperbaric Medical Association

  19. Do'C April 12, 2009 at 01:31 #

    Hi Storkdok-nos. In my opinion, Stoller would have been hard-pressed to submit a comment that pleads ignorance and irrelevancies stronger. I submitted another comment to BMC Pediatrics.

    The statements about Gas Laws not being taught/understood are ludicrous, as is this statement:

    Breathing in normobaric oxygen, regardless of the concentration is not comparable to using oxygen under pressure.

    At the FIO2 (24%), and added pressure (.23-.27 ATM) of the study, they are comparable, and I’d venture to guess that Stoller is unhappy about that being pointed out by several commenters so far.

  20. daedalus2u April 12, 2009 at 01:36 #

    Storkdok, I saw that comment and I think it is a sign the author either doesn’t know what he is talking about or is being deliberately misleading and trying to promote HBO2. It isn’t O2 pressure that is important; it is O2 partial pressure. In this study the O2 partial pressure was never as high as is attainable by breathing 31% O2 at one atmosphere. That is an O2 partial pressure of 0.31 atm.

    Carbon monoxide poisoning occurs because heme binds CO much more strongly than it does O2. You have to increase the O2 partial pressure to get the chemical potential of O2 high enough for it to displace the CO. In a review that Stoller wrote, he notes that the levels of HBO2 used for CO poisoning are 2.5 to 3 atmospheres of pure O2. That is roughly 10 times higher than the 0.31 atmospheres of O2 used in this study. I think it is misleading of him to compare less than 100% O2 given at higher atmospheric pressure to 100% O2 given at 1 atm.

    He gets the NO stuff wrong in his review: NO binds to hemes more strongly than does CO (in general). The association binding constants of O2, CO and NO to sperm whale myoglobin are 1.2, 27, and 220,000, respectively.

    HBO2 will eventually cause injury, and eventually seizures. The use of O2 at one atmosphere between episodes of HBO2 is associated with adverse outcomes in CO poisoning. So what? What does that have to do with autism?

    What exactly is HBO2 doing in autism? Without a hypothetical physiological mechanism, there is no hypothesis of how HBO2 is going to do anything to improve autism symptoms. We know that autism is not a novel form of CO poisoning. What exactly is the “therapeutic window” that he says appears to be lower in autism?
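
The oxygen-dose arithmetic in the comment above is easy to check. A minimal sketch (not from the original thread), using Dalton's law and the pressures as stated by the commenters:

```python
# Dalton's law: ppO2 = total pressure * O2 fraction.
# Pressures in atmospheres, values as stated in the thread.

study_ppo2 = 1.3 * 0.24        # study "treatment" arm: 24% O2 at 1.3 ATA
co_ppo2_high = 3.0 * 1.00      # CO poisoning therapy: 100% O2 at up to 3 ATA

print(f"study ppO2:        {study_ppo2:.2f} atm")    # 0.31 atm
print(f"CO-poisoning ppO2: {co_ppo2_high:.1f} atm")  # 3.0 atm
print(f"ratio: ~{co_ppo2_high / study_ppo2:.0f}x")   # ~10x
```

This is the basis for the "10 times higher" comparison: the oxygen dose used for CO poisoning is an order of magnitude above anything delivered in the autism study.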

  21. Prometheus April 12, 2009 at 19:56 #

    Dr. Stoller is simply exposing his lack of knowledge. Although gas laws and hyperbaric medicine may not have been taught where he went to medical school, in the US (as well as the UK and Europe) the gas laws are not only part of the required chemistry and physics prerequisites, but are also taught – again – in the sections on pulmonary physiology and blood physiology. Hyperbaric medicine is discussed in these sections as well.

    While Dr. Stoller is somewhat correct that normobaric oxygen is not as effective as hyperbaric oxygen in treating carbon monoxide poisoning, he neglects to mention (or perhaps was unaware) that these patients are given 100% oxygen at pressures of 2 – 3 atmospheres. He also neglects to mention (or, again, is unaware) that there are studies that fail to show any advantage to HBOT in carbon monoxide poisoning.

    Dr. Stoller goes on to claim:

    “In the case of autistic children and children with CP, the therapeutic window, or that pressure that will effect DNA signaling, appears to be very low – in many cases less than 1.3 ATA.”

    HBOT therapy in CP (cerebral palsy) has been – at best – “controversial”. A large multicenter (real “centers”) trial of HBOT in 111 children with CP (Collet et al, 2001) showed no difference between the treatment and control groups. The data supporting its use in autism are too flimsy to even reach the level of “controversial”.

    I assume that by “DNA signalling”, Dr. Stoller means gene expression – a testable hypothesis which has, to date, not been tested by practitioners (like Dr. Stoller) who make a living treating patients with HBOT. If Dr. Stoller has some data to support his claims, I encourage him to make it public. Otherwise, it looks as though he is simply being self-serving in his assertions.

    Dr. Stoller goes on to claim:

    “Breathing in normobaric oxygen, regardless of the concentration is not comparable to using oxygen under pressure.”

    Cellular respiration operates on the partial pressure of oxygen – the product of the total pressure and the fraction (concentration) of oxygen. In the “study” discussed above, the partial pressure of oxygen in the “treatment” group was 237 mm Hg (for those at or near sea level), while the “control” group was breathing oxygen at a partial pressure of 164 mm Hg (again, if they were at or near sea level).

    The partial pressure of oxygen experienced by the “treatment” group could have been attained – more easily and with far less expense – by having them breathe 31% oxygen at sea-level pressure. If Dr. Stoller doesn’t understand this, perhaps he should attend a refresher course on the gas laws.
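
The partial pressures quoted in this comment follow directly from Dalton's law. A short sketch (mine, not the commenter's), assuming sea-level barometric pressure of 760 mm Hg and the study conditions as described above:

```python
# ppO2 = total pressure * O2 fraction (Dalton's law), expressed in mm Hg.
SEA_LEVEL_MMHG = 760.0

def ppo2_mmhg(total_atm, fio2):
    """Partial pressure of O2 (mm Hg) given total pressure in atm and O2 fraction."""
    return total_atm * SEA_LEVEL_MMHG * fio2

treatment = ppo2_mmhg(1.3, 0.24)   # "treatment" group: 24% O2 at 1.3 ATA
control = ppo2_mmhg(1.03, 0.21)    # "control" group: 21% O2 at 1.03 ATA
equiv_fio2 = treatment / SEA_LEVEL_MMHG  # FiO2 giving the same ppO2 at 1 atm

print(f"treatment ppO2: {treatment:.0f} mm Hg")        # 237 mm Hg
print(f"control ppO2:   {control:.0f} mm Hg")          # 164 mm Hg
print(f"equivalent sea-level FiO2: {equiv_fio2:.0%}")  # 31%
```

The last line is the point of the comment: the entire "hyperbaric" dose in the study could have been replicated with a slightly oxygen-enriched mask at normal pressure.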


  22. June 3, 2009 at 02:42 #


    True Health Medical Center in Naperville Illinois is also the same location for Richard Gelband of Gelband Natural Health and Chiropractic. It is also the location of Pure Compounding Pharmacy. Look them up. They’re all the same address.

    Dr. Usman’s main scam is chelation of autistic children, but she also does HBOT, she orders vacuous tests and now sells a full range of unnecessary vitamins and useless supplements.
