Association Between Autism and Environmental Mercury Exposure Disappears Once Population Density is Controlled for

2 May


[Correction 5/4/2008: Please see this comment. The trends and conclusions don’t change. The scatter of the graphs is not affected in a way that is noticeable, but the Y ranges do change. The adjustment formula also changes. See the corrected spreadsheet for details.]

This is a critique of Palmer et al. (2008), a recent study claiming to associate the administrative prevalence of autism in Texas school districts with proximity to coal-fired power plants and with mercury emissions. Normally I would just point out the likely problems of the paper, but this time I will go further and test a key hypothesis of my critique using California data, in a way that is straightforward enough for readers to verify.

Background

Palmer et al. (2008) is not the first study of its kind. Palmer et al. (2006) claimed to document that “for each 1000 lb of environmentally released mercury, there was a 43% increase in the rate of special education services and a 61% increase in the rate of autism.” The more recent paper does not produce such remarkable estimates; it finds that “for every 1,000 pounds of release in 1998, there is a corresponding 2.6 percent increase in 2002 autism rates.”

Windham et al. (2006) is a case-control study done in the San Francisco Bay Area which claims to associate autism with emissions of Hazardous Air Pollutants (HAPs).

There is also Waldman et al. (2007), which I consider a study of the same type, except that it associates autism with precipitation (as a proxy for television exposure) instead of environmental pollution.

My primary criticism of these types of studies is that they attempt to find a cause for an epidemiological phenomenon that could very well not require an environmental explanation. That is, administrative data (special education data in particular) is not equipped to tell us whether there are real differences in the prevalence of autism from one region to the next. No screening study has ever demonstrated that substantial differences in administrative prevalence between regions are not simply diagnostic differences.

That said, the studies have been done, and they have found statistical associations. This usually means they either found a real effect or they have failed to properly control for some confound.

As I have noted repeatedly over the last couple of years, the glaring confound that most likely mediates these types of associations is urbanicity. The association between urbanicity and autism was documented even before these studies were carried out. It is plausibly explained by a greater availability of autism specialists in urban areas and by greater awareness on the part of parents who live in cities.

Palmer et al. (2008) does control for urbanicity, which might be one of several reasons why its findings are underwhelming compared to those of Palmer et al. (2006).

Is the control for urbanicity in Palmer et al. (2008) adequate?

There are two main problems with the control for urbanicity, described in the paper as follows.

Urbanicity. Eight separate demographically defined school district regions were used in the analysis as defined by the TEA: (1) Major urban districts and other central cities; (2) Major suburban districts and other central city suburbs; (5) Non-metropolitan and rural school districts. In the current analysis, dummy variables were included in the analysis coding Urban (dummy variable 1) and Suburban (dummy variable 2), contrasted with non-metro and rural districts, which were the referent group. Details and specific definitions of urbanicity categories can be obtained at the TEA website http://www.tea.state.tx.us/data.html.

1. It is too discrete. Within the set of urban districts, some districts will be more urban than others. The same is true of rural districts. Palmer et al. (2008) is effectively using a stratification method to control for urbanicity, but this method is limited, especially considering the paper looks at 1,040 school districts. A better methodology would be to use population density as a variable.

2. Modeling for distance. The paper models autism rates based on distance to coal-fired power plants. It follows that a control variable should model distance to urban areas rather than the urbanicity of each district. Granted, this would not be easy because, as noted, urbanicity is really a continuous rather than a discrete measure. But it needs to be noted as a significant limitation of the analysis. Consider school districts in areas designated as “rural” that are close to areas designated as “urban.” Such proximity would presumably provide access to a greater availability of autism specialists than would otherwise be the case.

California Analysis

This time around I thought it would be a good idea to run some actual numbers in order to test this population density confound hypothesis, which up to this point has been purely theoretical. I will use county-level data from the state of California, which was fairly easy to obtain on short notice. The data used is the following:

  • Special education autism caseload data at the county level for 2005 was obtained from a California resident who had requested it from the California Department of Education.
  • County population and density data for 2006 was obtained from counties.org.
  • Atmospheric mercury concentration data was obtained from the EPA’s 1996 National Air Toxics Assessment Exposure and Risk Data.
  • All of the raw data, intermediate data, formulas, and resulting charts can be found in this spreadsheet which I am making available for readers to verify and tweak as needed.

Population Density vs. Autism

Autism prevalence was calculated by dividing the special education autism caseload of each county by its population (Column G). This is not a precise determination, of course, but it should not materially affect the analysis: in any given California county, the population under 18 is roughly a fifth of the total population of the county.

A first attempt at modeling population density vs. autism prevalence (Chart A) suggested the relationship was logarithmic. So I modeled log(population density) vs. autism prevalence, which resulted in the clear correlation you see in Figure 1 (Chart B).
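For readers who want to reproduce the fit outside of a spreadsheet, the log model can be sketched in a few lines of Python. The numbers below are hypothetical stand-ins, not the county data; the real values are in the linked spreadsheet.

```python
import numpy as np

# Hypothetical stand-in values -- the real county figures live in the
# linked spreadsheet (population density and autism caseload columns).
density = np.array([10.0, 50.0, 200.0, 1000.0, 5000.0])  # persons / sq. mile
prevalence = np.array([1.5, 2.8, 4.1, 5.9, 7.2])         # cases per 10,000

# Chart A: prevalence vs. raw density looks logarithmic, so take logs.
x = np.log10(density)

# Chart B: a straight-line fit of prevalence against log(density).
slope, intercept = np.polyfit(x, prevalence, 1)
```

With real county data, a positive `slope` is what the correlation in Figure 1 corresponds to.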


Figure 1: Pop. Density vs. Autism Prevalence

This is as expected. You will note, however, that there is one significant outlier in the lower-right quadrant. That is San Francisco County. Presumably because of its peculiar geographic characteristics, its population density is the highest in the state. Nevertheless, San Francisco is an important data point, since it is a significant urban area which happens to have a relatively low special education prevalence of autism. Let’s leave it in and see how it affects things.

I will use a simple standardization method of adjustment for population density. Basically, I will standardize autism prevalence in each county, such that population density is no longer a factor. Think of it this way. If the population density of each county grew such that its log were now about 3.5, how would we expect autism prevalence to be affected? The following formula is what I came up with.

Adjusted(Y) = Y + 7 – 1.93 * X

where Y is the autism prevalence of a county and X is its log(population density). The slope of 1.93 comes from the linear fit, and the constant of 7 (roughly 1.93 × 3.5) re-centers every county at a log density of about 3.5.

The fact that the adjusted prevalence (Column H) is not dependent on population density can be verified graphically (Chart C). Readers can click back and forth between Chart B and Chart C to better understand the effect of the adjustment. I will come back to this adjusted prevalence.
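In code, the standardization is just a matter of subtracting the fitted density trend while re-centering every county at the common reference density. A minimal sketch, again with hypothetical stand-in numbers rather than the real county data:

```python
import numpy as np

# Hypothetical stand-in values; the real data is in the spreadsheet.
density = np.array([10.0, 50.0, 200.0, 1000.0, 5000.0, 16000.0])
prevalence = np.array([1.5, 2.8, 4.1, 5.9, 7.2, 4.0])  # last entry: an SF-like outlier

x = np.log10(density)
slope, intercept = np.polyfit(x, prevalence, 1)

# Standardize: subtract the density trend, re-centered at log density 3.5,
# so the adjusted prevalence no longer depends on density (Chart C).
adjusted = prevalence - slope * (x - 3.5)

# Sanity check: a fresh fit of adjusted prevalence vs. log density
# comes back with a slope of (numerically) zero.
resid_slope, _ = np.polyfit(x, adjusted, 1)
```

This is exactly the graphical verification described above: after adjustment, the trend line in a plot of adjusted prevalence vs. log density is flat.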

Mercury Exposure Concentration vs. Autism

I obtained atmospheric mercury exposure concentrations for each county from 1996 EPA data (Column I). More recent data would’ve been better since our population density data is from 2006, but it is not clear if newer data is available. I learned of the 1996 data because that is what Windham et al. (2006) uses. I’m working under the assumption that changes in population density in the last decade have been roughly uniform across the state.

Let’s first look at Figure 2 (Chart E), a graph of log(mercury exposure) vs. autism prevalence, without adjustment for population density.


Figure 2: Mercury Exposure vs. Autism Prevalence

There is a graphically noticeable trend in Figure 2, which is not surprising. The question is, does the trend remain after adjustment for population density?


Figure 3: Mercury Exposure vs. Autism Prevalence Adjusted For Pop. Density

Figure 3 (Chart D) is a graph of log(mercury exposure) vs. standardized autism prevalence; that is, autism prevalence adjusted for population density as previously calculated. In this figure we see there’s no longer a graphically discernible correlation between environmental mercury and autism. In fact, Excel produces a linear fit that indicates a slight inverse correlation between environmental emissions and autism prevalence.

Granted, if we were to remove San Francisco as an outlier, the trend would be pushed upwards. But then there appear to be two additional outliers in the middle upper part of the graph: Orange County and Los Angeles County. Keep in mind we have not adjusted for wealth. Regardless of how we might adjust the analysis, I fail to see how the graph could support a statistically meaningful association between mercury exposure and autism.

Further Confirmation

So far I have provided evidence that, in California, an association between environmental mercury exposure and autism disappears once we control for population density. This is clear to my satisfaction, but I thought it would be a good idea to attempt an inverse exercise as an illustration of the adjustment method. That is, let us try adjusting prevalence for mercury exposure, and see if the correlation with population density remains.

This is similar to what I did previously. A linear model is derived from the correlation between log(mercury exposure) and autism (Chart E). This is used to build an adjustment formula (Column K) whose validity can be verified graphically (Chart F). The new adjusted prevalence (Column K) is used in a new graph of log(population density) vs. autism: Figure 4 (Chart G).
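The inverse exercise follows the same residual recipe; a sketch with hypothetical stand-in numbers (the real columns are in the spreadsheet):

```python
import numpy as np

# Hypothetical stand-in values for log(density), log(mercury exposure)
# and autism prevalence per county.
log_density = np.array([1.0, 1.7, 2.3, 3.0, 3.7, 4.2])
log_mercury = np.array([-1.2, -0.8, -0.5, -0.1, 0.3, 0.9])
prevalence = np.array([1.5, 2.8, 4.1, 5.9, 7.2, 4.0])

# Fit prevalence against log(mercury) (Chart E) ...
b, c = np.polyfit(log_mercury, prevalence, 1)

# ... subtract that trend out (Column K) ...
adj_for_hg = prevalence - (b * log_mercury + c)

# ... then ask whether a density trend survives (Chart G).
density_slope = np.polyfit(log_density, adj_for_hg, 1)[0]
```

By construction, a re-fit of the adjusted prevalence against log(mercury) is flat; the interesting question is whether `density_slope` remains clearly positive on the real data, as Figure 4 shows it does.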


Figure 4: Pop. Density vs. Autism Prevalence Adjusted For Mercury Exposure

What Figure 4 (Chart G) tells us is that even after we control for mercury exposure, there is still a clear correlation between autism and population density. In other words, population density wins bigtime – I believe that is the epidemiological term.

Conclusion

An analysis of California data suggests that correlations between the administrative prevalence of autism and environmental mercury emissions are fully mediated by population density. Palmer et al. (2008) suggests there is a real effect in Texas, but its results are not convincing primarily because its control for urbanicity is limited and inconsistent with the hypothesis the paper tests.

30 Responses to “Association Between Autism and Environmental Mercury Exposure Disappears Once Population Density is Controlled for”

  1. Joseph May 2, 2008 at 22:45 #

    Orac also has a critique that lists a number of other problems with the paper.

  2. isles May 3, 2008 at 06:18 #

    Joseph, this is an elegant analysis and a joy to read! I hope you will be submitting it to the journal that published the Palmer paper.

    The fact that articles like Palmer’s get published is really starting to make me wonder whether there is any such thing as meaningful peer review these days.

  3. Matt May 3, 2008 at 06:25 #

    Joseph,

    good analysis.

    I am left with the same question that is raised (and left unanswered, not even appropriately explored) by Palmer: why is there an effect with population density?

    The most obvious answer, and the answer we must act on, is that it is not real and that we are missing a fraction of the autistic population.

    On the main point–the methodology you use is better than Palmer’s in one big respect: you are presenting actual mercury deposition rather than proximity to sources.

  4. mike stanton May 3, 2008 at 08:27 #

    Joseph,

    this is excellent. Have you seen Williams J.G., Higgins J.P.T., Brayne C.E.G. (2006) Systematic review of prevalence studies of autism spectrum disorders, Archives of Disease in Childhood, 91, 8-15? This also found a significant correlation between autism prevalence and urban/rural locations.

  5. Kev May 3, 2008 at 09:34 #

    Thanks for writing this up Joseph, it’s really good.

  6. Joseph May 3, 2008 at 13:59 #

    On the main point—the methodology you use is better than Palmer’s in one big respect: you are presenting actual mercury deposition rather than proximity to sources.

    Actually, it’s atmospheric mercury concentrations, which is presumably even better. However, I would’ve liked to have mercury emission information, since I’m testing a hypothesis about paper methodology rather than checking if there’s an effect. The mercury concentration data could be affected by proximity between counties, which might not be controlled very well by population density. Emissions of each county, on the other hand, are presumably not affected by proximity.

    Either way, the data was good enough to prove a point. It’s the same data used by Windham et al. (2006).

  7. Joseph May 3, 2008 at 14:22 #

    this is excellent. Have you seen Williams J.G., Higgins J.P.T., Brayne C.E.G. (2006) Systematic review of prevalence studies of autism spectrum disorders, Archives of Disease in Childhood, 91, 8-15? This also found a significant correlation between autism prevalence and urban/rural locations.

    Thanks Mike. The following is part of a discussion I had a while back with Dr. Windham, who requested supporting sources for the link with urbanicity:

    Lauritsen (2005) had previously found that “the risk of autism was associated with increasing degree of urbanisation of the child’s place of birth and with increasing paternal, but not maternal, age.” Hoshino (1982) found that the prevalence in urban districts was considerably higher than in rural districts in Japan. The findings of Mandell (2005) in regards to diagnosis age and income, and availability of specialists are probably significant as well.

  8. Harold L Doherty May 3, 2008 at 15:01 #

    What journal is this published in?

  9. Joseph May 3, 2008 at 15:32 #

    Your point, Harold?

  10. Harold L Doherty May 3, 2008 at 16:03 #

    It wasn’t a point Joseph, it was a question.

    If you haven’t submitted it already perhaps you could consider Health & Place?

  11. Joseph May 3, 2008 at 16:12 #

    No, I have no interest in submitting it. I have made my findings public and that is good enough for me. I’m sure the matter will be resolved in the scientific literature in due time.

  12. Harold L Doherty May 3, 2008 at 16:24 #

    Too bad.

    Maybe someone else will offer a critique similar to yours for journal publication, and the public could then be assured of a full debate by learned authorities on the issue.

  13. mayfly May 3, 2008 at 16:39 #

    San Francisco is a small county which includes only the city of San Francisco. A large part of Los Angeles County lies in the Mojave desert.

    Also San Francisco’s school system is not highly regarded. People may be moving to neighboring counties to get better services.

  14. Joseph May 3, 2008 at 16:41 #

    Yes, San Francisco is an interesting outlier in many respects, including the fact that it has the highest mercury concentration of the state, by far.

  15. Schwartz May 3, 2008 at 18:39 #

    Joseph,

    Good work and very interesting. I’ve been following your ideas around the diagnostics/prevalence issue with quite a bit of interest for a while now.

    My only comment is that you don’t really summarize the acknowledged weaknesses of this analysis.

    I recall a lot of discussion between Interverbal, Prometheus and DeSoto about the proper treatment of outliers. My thoughts are that egregious outliers really should be removed. I believe that DeSoto’s paper discusses some criteria for making that decision.

    The other unfortunate thing is that you don’t have newer air/mercury data (1996). My understanding is that CA has been moving steadily toward cleaner electricity production for some time now, and I believe the power generation layout in the state changed significantly in the late ’90s and early 2000s as they moved to natural gas power generation. However, I don’t know if that has actually affected the air/mercury levels at all (it could be coming from out of state). But I would certainly expect regional differences to occur as the picture of power generation changes.

  16. Joseph May 3, 2008 at 18:50 #

    I do discuss the San Francisco outlier quite a bit. If I remove it, the trend does move up. However, it’s not a matter of just removing it. The adjustment formulas need to change a bit too, since the slope changes slightly. The other thing is that if I remove San Francisco, can I also remove Los Angeles and Orange?

    On the one hand, it seems unfair to the mercury hypothesis to leave San Francisco in the graph. But then, isn’t the fact that it has the highest concentration of mercury in the state something that should be taken into account?

    Either way, I don’t believe the general shape of the graph changes. Whether you can fit a linear trend slightly tending upward or downward I don’t believe means a whole lot.

    There are of course a lot of other limitations in the analysis that are similar to the limitations of Palmer et al., but then I’m testing a hypothesis about the methodology of prior papers; I’m not really trying to determine precise associations.

  17. Schwartz May 3, 2008 at 19:55 #

    Joseph,

    I think that if LA and Orange meet the criteria for removal they should be removed as well.

    My basic understanding is that if you’re trying to do trend/fit analysis, you have to be very careful with outliers as they will improperly skew the results, especially if there is something special about them — even if it relates to what you’re studying. If there is something special though, I believe that must be studied on its own because grouping them in the overall analysis messes with your results.

    From DeSoto et al:

    “An outlier is defined as a score that is substantially greater or less than the values obtained from any other individual.10(p521) Outliers have an unduly large influence on the outcome of a statistical test. What actually qualifies as an outlier differs depending on the research question and the statistician analyzing the results; however, values greater than 3 standard deviations either above or below the mean generally qualify as extreme cases.11”

    10. Gravetter FJ, Wallnau LB. Essentials of Statistics for the Behavioral Sciences. 4th ed. Pacific Grove, CA: Wadsworth; 2005.

    11. Tabachnik B, Fidell LS. Using Multivariate Statistics. New York: Prentice Hall; 2006.

  18. Joseph May 3, 2008 at 20:27 #

    San Francisco is pretty extreme in mercury concentrations. In its other measures it’s not that extreme. But just because of how it shows in the density vs. autism graph, I tend to agree it would be appropriate to remove it.

    BTW, I’ve done the same analysis US-wide at the state level using mercury deposition data and 3-5 IDEA prevalence. Again, the naive correlation exists, but disappears once you adjust for population density. I also get an inverse correlation there, probably because something else, like wealth, needs to be adjusted for. There are outliers, but from what I’m seeing, they don’t seem to skew things as much as SF. I might write about that sometime. I want to see if I can attempt other correlations while I’m at it.

  19. MJ May 4, 2008 at 00:26 #

    I think I have to agree with Harold. If you think that your rebuttal is robust enough then I think you should submit it to the journal for publication. You are a big believer in the peer review process so you should submit to it as well.

    Just as an aside, not sure if you care, but the spreadsheet you posted has some of what I assume is your personal information still in it. I think M$ publishes a tool that can remove it.

  20. Joseph May 4, 2008 at 02:06 #

    To publish in a journal I’d do things differently, which would require a lot more work. The graphical adjustment is good as a way to make a point in a blog.

    Plus my area of formal training is not epidemiology. Would they consider it? Plus I’d prefer not to out myself as an autistic man, as you might understand; and so on.

  21. isles May 4, 2008 at 02:51 #

    Judging from the amount of peer review that seems to have been applied to the Palmer paper, you could probably submit it pseudonymously and they’d never bother to check.

  22. Ms. Clark May 4, 2008 at 04:02 #

    Joseph, I hope you send the link for this blog to Health & Place and to some of the autism epidemiology people, like maybe Irva Hertz-Picciotto from the MIND Institute.

  23. MJ May 4, 2008 at 15:06 #

    “To publish in a journal I’d do things differently, which would require a lot more work. The graphical adjustment is good as a way to make a point in a blog.”

    So you are saying that it is not as rigorous as the original study that you are seeking to discredit? Saying that you were attempting to make a point on a blog implies that it is not up to the same standards as the original research. And if that is the case it isn’t much of a rebuttal, is it?

    “Plus my area of formal training is not epidemiology. Would they consider it?”

    Only one way to find out.

    “Plus I’d prefer not to out myself as an autistic man, as you might understand; and so on.”

    No, I don’t understand that. If you feel strongly about it you should attempt to get it published as a rebuttal; this is an important issue.

    If you think that you lack the needed skills then attempt to find a co-author who would be willing to help with the process.

    Ms. Clark wrote:
    “Joseph, I hope you send the link for this blog to Health & Place and to some of the autism epidemiology people, like maybe Irva Hertz-Picciotto from the MIND Institute”

    I don’t think the journal would accept a submission via a link to a blog.

  24. Joseph May 4, 2008 at 16:34 #

    FYI – I’ve posted a correction to the spreadsheet here. It doesn’t actually change the conclusions or the trends in this analysis, but it is an important correction. The graphs look about the same, so I’m not going to replace them in the post, but readers should check the new spreadsheet if they want to see the corrected graphs.

    The correction is that it’s not enough to make the slope zero in order to standardize, but the average standardized prevalence should also be zero. In other words, the “standardized” prevalence is a collection of deltas (positive or negative) from the expected prevalence to the actual prevalence.

    There’s a mathematical reason for this, and I will explain, since I don’t want people to think that I’m doing hocus pocus here.

    First let me attempt a math-free explanation, which will not be as boring. What matters is how different from expected a county is. For example, San Francisco has a lower prevalence of autism than is expected given its population density. Hence, it should be expected to have a lower than normal mercury concentration. Los Angeles has a higher prevalence of autism than is expected given its density. So it should have a higher mercury concentration than normal.

    Now, this can be derived from multivariate analysis, in which prevalence would be modeled as follows:

    P = A*d + B*m + C

    where d is (log) density and m is mercury concentration. A, B and C are coefficients that need to be calculated.

    Multivariate analysis cannot be graphed in 2 dimensions. This is a problem because in a graph you can see features (like San Francisco, Los Angeles and Orange) which help give a better idea of the merit of the correlation.

    But we can do the next best thing. Let’s define P’ as follows:

    P’ = P – A*d – C

    Then clearly

    B*m = P’

    If we were to graph m vs. P’, we would see a slope given by B.

    Now, consider what would happen if B tends to be zero:

    – The graph of m vs. P’ would be a flat line with y = 0.

    – P = A*d + C, which means that P’ would tend to be zero. In other words, a graph of d vs. P’ would be a flat line with y = 0 as well.

    So we start by assuming that B tends to zero and we try to contradict ourselves. If B is zero, then A*d + C must be the trend we see in the graph of d vs. P. If we then calculate P’ = P – (A*d + C), which we call the standardized prevalence, we should expect that a graph of m vs. P’ should also be a flat line with y = 0.

    Now, the graph of m vs. P’ will rarely be an actual flat line. It can be a negative trend (like in this analysis) or a positive trend. The graph vs. P’ won’t tell you the actual coefficients but it will tell you the general direction of the corrected trend. Also, graphically you should expect to see a trend if there’s one.
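    A quick simulation makes the argument concrete. The data below is synthetic, not the county data: prevalence is generated with a real density effect and a mercury coefficient B of exactly zero, so both the full two-variable fit and the residual-graph shortcut should recover B near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
d = rng.uniform(1.0, 4.0, n)                 # log(population density)
m = 0.5 * d + rng.normal(0.0, 0.3, n)        # mercury, correlated with density
P = 2.0 * d + 1.0 + rng.normal(0.0, 0.1, n)  # true A = 2, B = 0, C = 1

# Full multivariate fit: P = A*d + B*m + C
X = np.column_stack([d, m, np.ones(n)])
A, B, C = np.linalg.lstsq(X, P, rcond=None)[0]

# The 2-D shortcut: fit P on d alone, then look at the residuals P' vs. m.
a, c = np.polyfit(d, P, 1)
P_prime = P - (a * d + c)
b_resid = np.polyfit(m, P_prime, 1)[0]
# Both B and b_resid come out near zero, as the argument predicts.
```

    When B is truly zero, the residual graph of m vs. P’ is flat (up to noise), which is what the standardized-prevalence charts are testing.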

  25. Joseph May 4, 2008 at 17:01 #

    MJ – Are you suggesting that blog posts that critique papers should be journal-quality posts? Otherwise don’t bother?

    I can think of many bloggers who would be unhappy if that were the case.

    So I did an analysis that took me several hours (I didn’t count) to rebut a paper that I imagine took a lot more time and effort to do. What is the significance of that? I could’ve written a critique without a counter-analysis in one hour, and that wouldn’t mean the critique didn’t have points that need to be addressed (specifically, the point about the control for population density is obviously unaddressed).

    I think people understand that we’re in a blog and not a journal, and that there are differences between the two. There’s no peer review here except for the review of readers who take the time. I think blogs have some good things going for them in terms of enabling scientific debate, but of course they should be taken for what they are.

  26. century May 4, 2008 at 19:39 #

    Joseph said

    “So I did an analysis that took me several hours……. What is the significance of that?”

    Money?

    and he also said

    “There’s no peer review here..”

    Exactly – so it’s not worth a diddly-squat!

  27. Kev May 4, 2008 at 21:18 #

    century, MJ: This blog is a place to debate things that its invited authors want to debate.

    You can debate the issues raised in the piece or you can fuck off somewhere where snideness is appreciated. Consider that your last, and only warning. You are both acting like a pair of silly school children and when I decided to start blogging again I also decided not to tolerate idiots annoying me.

    So, to summarise: debate issues vs fuck off. Choose now.

  28. Joseph May 4, 2008 at 22:38 #

    Is Century suggesting that I get paid to blog? I wish. Where do I sign up? Do they pay per word or post?

    I get paid by the hour, on an actual consulting job that is fortunately a little bit flexible. So not only do I not get paid to blog, it could easily be argued that I lose money by blogging. My wife certainly makes that argument.

    Actually, I hope I don’t get hooked on blogging again, although I do have another similar post planned.

  29. Catherina May 5, 2008 at 13:58 #

    Joseph,

    you may consider a “Letter to the Editor”. It would probably take a couple more hours of your time, but the potential exposure is much greater and it might get a discussion going (often, authors are asked to rebut scientific correspondence).

    Don’t even grace the “you must be paid to post here” allegations with a reply.

  30. Joseph May 5, 2008 at 17:18 #

    David Gorski has written another critique of Palmer et al. (2008) at Science-based Medicine and quotes this post.

Comments are closed.