Saturday, March 20, 2010

Ten Research Methods Articles Every Parent Of an Autistic Child Should Understand

Perhaps understandably, many parents of autistic children keep an eye on (or attempt to keep an eye on) the latest research and treatments. Parents' lists are regularly flooded with discussion of various treatment methods, medical research studies, psychological studies, and just about anything else one could imagine... most of it related to research.

At the same time, the average parent knows about as much about how research is conducted, and about what study findings really mean, as the average sixth-grader does. This combination is very much not a good thing.

Simply put, many of the conclusions parents reach when reading the literature are nowhere near accurate. Even many of the most basic aspects of research are commonly misunderstood, in truly dramatic fashion.

The ideal solution, of course, would be to sit every parent of an autistic child down and give them a series of college-level (undergraduate or postgraduate) classes on the scientific process. Unfortunately, this is pretty spectacularly unrealistic. Frankly, many parents could use classes on critical thinking skills, too, but that's every bit as unrealistic.

As such, I've compiled a "Top Ten" list of papers which cover things that most parents don't get. Of course, like any such list, there are a number of biases operating in how I've constructed this. Perhaps a reader will be able to spot some of these... and, as an exercise for my readers, I've tried to make a few of them as blatantly obvious as possible. Narrowing this down to ten papers was emphatically not easy, and there are a number of papers which almost (but did not quite) make the cut for a variety of reasons. I may blog on a few of them later.

To finish: for the purposes of this list, I've defined "research methods articles" as any peer-reviewed writing dealing primarily with the design, conducting, and interpretation of research.

Edit: Also note that there's usually more than one good article on any of these topics, and I excluded "duplicate" articles. I often had to drop very good articles which deserved to be on this list because of that.



10. Strech & Tilburt (2008). Value judgements in the analysis and synthesis of evidence.


One of the (many) reasons why conflicts of interest are so important when dealing with research is the fact that there is a lot of wiggle-room in experimental design. Scientists routinely make value judgments in designing and interpreting research, and this paper serves to highlight many of the ways in which this impacts the process of research.

Just as a quick illustration: how do my value judgments impact the content of this top 10 list?



9. Rutter (2008). Epidemiological methods to tackle causal questions.

In this paper, Michael Rutter (with whom I admittedly have issues relating to other works) discusses the problem of attempting to determine the cause of something without being able to manipulate it in a lab (or clinical trial, etc.). Of course, in the modern autism world, it's more important to understand how it's possible to establish the reverse -- that something is not the cause of something else -- without an experiment... and that is, admittedly, often far simpler.






8. Lesaffre (2008). Use and misuse of the p-value.

Statistical significance testing is one of the most ubiquitous aspects of the modern scientific process. Unfortunately, it's also the source of many of that process's problems. As scientists haven't been able to find (or settle on) a better alternative, however, it still pops up just about everywhere.

One statistic -- the p-value -- is central to this process. Unfortunately, many people (including scientists) misunderstand just what the p-value is, what it means, and what it represents. Lesaffre's paper discusses this and the issues surrounding it.
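To make the textbook definition concrete, here's a toy simulation in plain Python (the z-test helper and all the numbers are my own construction, not anything from Lesaffre's paper). Every "experiment" below compares two groups drawn from the very same distribution, so the null hypothesis is always true -- and yet roughly 5% of the experiments still come out "significant" at p < 0.05. A small p-value says the data would be surprising if there were no effect; it is not the probability that the effect is real.

```python
import random
from math import sqrt
from statistics import NormalDist

def z_test_p(a, b, sigma=1.0):
    """Two-sided p-value for a difference in means (known sigma)."""
    se = sigma * sqrt(1 / len(a) + 1 / len(b))
    z = (sum(a) / len(a) - sum(b) / len(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

rng = random.Random(42)
trials, hits = 1000, 0
for _ in range(trials):
    # "Treatment" and "control" are drawn from the SAME distribution,
    # so any significant difference is a false positive by construction.
    treatment = [rng.gauss(0, 1) for _ in range(20)]
    control = [rng.gauss(0, 1) for _ in range(20)]
    if z_test_p(treatment, control) < 0.05:
        hits += 1

print(hits / trials)  # hovers around 0.05, as the threshold guarantees
```

The only thing the 0.05 threshold controls is that long-run false-positive rate -- nothing about any individual study's finding being true.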



7. The PRISMA statement.

One of a number of "reporting standards" documents which standardize scientific reporting in the medical literature, the PRISMA statement deals with systematic reviews and meta-analyses, and specifically with which items of the review's process and methodology most need to be reported.

The other documents of this type (e.g. the CONSORT and STROBE statements) are also very important, but the PRISMA statement deals with ways to document the possibility of biases that affect the process of drawing a conclusion from the entire body of available literature. By contrast, the others deal with the conclusions of single studies.

Of course, since what's important is understanding, most interested parents should read the explanation and elaboration document, not the PRISMA statement itself.

Were I doing a longer list, the CONSORT statement, at least, would be in here. As is, however, I believe the biases covered by the PRISMA statement to be more important for parents of autistic children to understand... and, frankly, I felt that one major standards document was enough for this list.



6. Manchikanti (2008).
Evidence-based medicine, systematic reviews, and guidelines in interventional pain management, part I: Introduction and general considerations.

The concept of evidence-based medicine has revolutionized clinical practice over the past few decades. This article discusses the concept of evidence-based medicine, its history, its tools, and countless other related topics, providing a great introduction to the medical literature... and a basic foundation for understanding it.

Best of all, it's available for free.







5. Ioannidis (2008). Perfect study, poor evidence: Interpretation of biases preceding study design.

Even if a study is designed, conducted, analyzed, and reported perfectly, it can still be biased or otherwise flawed in a large number of ways. This paper reviews and discusses this phenomenon, including (but not even close to limited to) such factors as poor scientific relevance, straw man effects, and the importance of the analysis of the geometry of a research field.





4. Ioannidis (2008). Why most discovered true associations are inflated.

When something is first discovered, researchers' estimates of its importance are generally exaggerated. This article discusses this phenomenon and the reasons for it, painting an unusually frank and readable picture of just why this happens.
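The mechanism -- often called the "winner's curse" -- is easy to demonstrate with a toy simulation (my own construction in plain Python; the numbers are arbitrary, not from the paper). Many small studies estimate the same modest true effect, but only the ones that happen to cross the significance threshold get noticed... and those are precisely the ones whose estimates landed on the high side:

```python
import random
from math import sqrt
from statistics import NormalDist

rng = random.Random(1)
true_effect = 0.2          # the real (modest) effect every study estimates
n, alpha = 25, 0.05        # small studies, conventional threshold
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # roughly 1.96
se = 1 / sqrt(n)           # standard error of each study's estimate

significant = []
for _ in range(2000):
    estimate = rng.gauss(true_effect, se)  # one study's estimated effect
    if abs(estimate) / se > z_crit:        # it reached p < 0.05...
        significant.append(estimate)       # ...so it gets published/noticed

# Average effect among the "discoveries" -- noticeably larger than 0.2:
print(sum(significant) / len(significant))
```

The underpowered studies that got "lucky" enough to reach significance systematically overestimate the effect; replication with adequate power is what eventually shrinks the estimate back toward the truth.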



3. Ioannidis (2005). Why most published research findings are false.

One of the more annoying aspects of science is the fact that we know that most of our discoveries are simply wrong. The problem, however, is that we don't usually know which ones until far later. This is one of many reasons why replication is so important in the sciences and why the habit of interpreting individual studies, taken in isolation, as "definitive" is really, really problematic. "False positive" findings abound in science -- especially the social and medical sciences -- and often lead armchair scientists or doctors astray.

This paper, one of the most influential papers published in the last decade, discusses this phenomenon and the reasons for it. If you have time to read the responses and the discussion that followed the publication of this article, that is also very much worth the effort of doing. I particularly recommend Moonesinghe, Khoury, & Janssens's (2007) essay, Most published research findings are false—But a little replication goes a long way.
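The core of Ioannidis's argument is simple Bayesian arithmetic. The sketch below is my own stripped-down version of it (it ignores his bias term and multiple-team corrections): the post-study probability that a "significant" finding is actually true depends heavily on how plausible the hypothesis was before the study and on the study's power.

```python
def ppv(prior, power, alpha):
    """Probability that a 'significant' finding reflects a true effect.

    prior: pre-study probability the tested hypothesis is true
    power: probability a real effect is detected (1 - beta)
    alpha: significance threshold (false-positive rate)
    """
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# A well-powered test of a plausible hypothesis:
print(round(ppv(prior=0.5, power=0.8, alpha=0.05), 2))  # 0.94
# An exploratory field where only 1 in 10 tested hypotheses is true:
print(round(ppv(prior=0.1, power=0.8, alpha=0.05), 2))  # 0.64
# ...and the same field with typically underpowered studies:
print(round(ppv(prior=0.1, power=0.2, alpha=0.05), 2))  # 0.31
```

In that last scenario, most "discoveries" are false even before any bias enters the picture -- which is exactly the situation in many exploratory fields.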

The full text is available for free. I really love open-access scientific literature. Long live PLoS!



2. Altman (2002). Poor-quality medical research: What can journals do?

Poor-quality research is a problem in any field. Simply put, it's possible for a poorly-designed study to find anything, no matter how absurd. If I really wanted, I could easily design a study that, while looking legitimate to uninformed non-experts, would conclude that the Rocky Mountains are flatter than a random pancake from IHOP. There is even precedent for this.

This is why expertise in experimental design and research methods is so important... both for designing and interpreting studies. Critical appraisal of any research is key, and you can never just trust the author's interpretation of his own work. It's also a large part of why a number of processes (e.g. peer review) are in place and why doctors get so up in arms about irresponsible media reporting.

And this article is available for free from JAMA. Have I yet mentioned that I love open-access literature?








1. Simon (2008). Lost in translation: problems and pitfalls in translating laboratory observations to clinical utility.

Where to begin? This one article manages to cover about half of what's wrong with modern clinical autism research and with autism research funding priorities. Forget the political issues involved and the tie-ins between genetic research and prenatal testing. Forget the issue of whether a medical model is appropriate for autism or not. Forget even the normocentric bias which pervades most autism research and the question of whether or not it is appropriate to view autism as a disease.

Today, most funding for basic research into autism goes into attempts to understand the underlying biological processes that differentiate autistic and non-autistic individuals. This is a tremendously complicated task, one with countless problems which I could rant about for hours. The sheer amount of money which has already gone into this task (and which it will likely require in the future) is mindboggling... not to mention researcher time and effort, etc.

This doesn't mean that I think that the task is worthless. Basic understanding of biological factors and processes is rarely worthless. There are, however, a phenomenal number of difficulties in taking these (usually incomplete) understandings and doing anything useful with them. At the same time, other approaches (what Simon refers to as focusing on "predictive laws rather than on trying to understand the [biological] why of those laws", p. 2 of the author's manuscript, parenthetical word added) offer far better cost-effectiveness... and neatly avoid a lot of the convolutions in the process which Simon spends the rest of the paper explaining.

Edit (3/21/10): An excellent discussion of some of the factors I'm trying to talk about can be found here. A dissenting -- but still valid -- opinion regarding that specific application can be found here. Many thanks to Tyler Cowen and Michelle Dawson for highlighting these and pointing me in their directions. Additionally, to avoid a misreading of the above: focusing on predictive laws does not avoid the need for translational research; it simply makes the process thereof less convoluted. One example of this would be in finding and using valid and robust surrogate endpoints within studies... but this is a much, much longer discussion.


This paper (the author's manuscript of which is available for free here) is an excellent discussion of these problems... albeit in a different context. This context actually represents the paper's largest flaw, one which annoys the heck out of me: Simon is a cancer researcher and the paper was published in a cancer research journal. If, however, we are to understand the issues involved with taking the medical research models applied to cancer and applying them to autism (as Autism Speaks and others repeatedly insist on doing), we must first understand the issues with those models in general -- and they very much have them, even when they're used appropriately. This paper does an excellent job of highlighting those.

Edit: Corrected a spelling error.

4 comments:

  1. Excellent compilation, Alexander. :-) I will pass it along to the facebook groups I belong to.

    ReplyDelete
  2. Ditto! I was just fantasizing about a list like this just a few weeks ago!

    ReplyDelete
  3. A possible addition and one which has been very helpful to me, and possibly many psychology students and parents:

    Mercer, Jean. (2009) Critical thinking and child development concepts.

    This paper appears in 5 parts.

    Part 4, 5 and 6 have the meat.

    ReplyDelete
  4. Do you know what journal it appeared in? I don't have access to PsychINFO right now, and it's not showing up in PubMed.

    ReplyDelete