Thursday, February 12, 2009

Group vs Single Subject Research: Part 1


Researchers are usually trained in either "group" (or "between-group") research or "single-subject" research. Which of the two any individual claims as an area of expertise has much to do with (1) where he or she received graduate education, and/or (2) the preferences of his or her thesis or dissertation advisor(s).

One design is not necessarily better than the other. I am by birth a single-subject design guy, so I am biased. It took me a majority of my graduate training to be able to acknowledge the merits of group research designs. But all designs have a place. Which of the two is used depends in large part on the type of research question being asked. But I digress ...

Group research typically involves comparing Group A to Group B on some dimension(s) or variable(s). A basic design will compare Group A (who we'll call the "Control" group) to Group B (who we'll call the "Experimental" group). Group B is exposed to the treatment variable while Group A is not. For example, if we are looking at a new 3rd grade science curriculum, we will assess Group A (Control Group), which is not using the new curriculum, against Group B (Experimental Group), which is. We've given both groups a pre-test, and they performed similarly. Both groups then receive 7 weeks of science education. At the end of 7 weeks, we administer a post-test to both groups. We are looking to see whether the performance of Group A differs "significantly" from that of Group B on the post-test. In this case, Group B performed significantly better than Group A. This information allows us to make a statement about the research results: the new 3rd grade science curriculum was "correlated" with higher post-test scores.
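To make the "significantly" part concrete, here is a minimal sketch of the kind of comparison a researcher might run on the two groups' post-test scores. All the numbers are invented, and `welch_t` is my own helper name, not any particular statistics package's function:

```python
# Hypothetical post-test scores for the 3rd grade science example above.
# These numbers are made up purely for illustration.
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

control      = [68, 72, 65, 70, 66, 71, 69, 64]  # Group A: old curriculum
experimental = [78, 82, 75, 80, 77, 84, 79, 76]  # Group B: new curriculum

t = welch_t(experimental, control)
print(f"mean A = {mean(control):.1f}, mean B = {mean(experimental):.1f}, t = {t:.2f}")
# For samples this small, a |t| well above roughly 2 is what a researcher
# would report as a "significant" difference -- which, as the post argues,
# is still correlation, not proof of causation.
```

Notice that nothing in the arithmetic rules out a confounding variable; the statistic only says the two groups' averages differ by more than chance alone would comfortably explain.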

There are some things to be aware of when reviewing group design research. In general, information gathered from group designs is treated with common statistical procedures. These are referred to as inferential statistics and allow us to group, norm, average, and make overall inferences about the data. Where problems start is when "inferences" get stated as "fact" or as an indication of "causality." Group designs can have difficulty controlling for variables that can confound the data. Confounding variables are events that might affect the data that you were not aware of. Going back to our example, what if we found out that during the 7 weeks of science instruction, Group B just happened to also be watching a 7-week Discovery Channel special on the very content being taught? This is a confounding variable. Is the better performance of Group B on the post-test a result of the curriculum, or of the show? There is no way to know. Researchers are normally on the lookout for these variables.

Group designs have strong "external validity" and weak "internal validity," whereas single-subject designs have strong "internal validity" and weak "external validity." What this means is that group designs are good at telling you something about a large group of people, but not good at telling you about an individual's performance within that group.

How can you tell what type of research design was used? If you have the actual study, you can usually determine the type of group design used from the abstract. If not, it will be detailed in the "methods" section. As a last-ditch effort, the way the data are described is usually a dead giveaway. If the study talks about "average," "median," "standard deviation," "percentage," or "positive or negative correlation," then it is a group design.
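Those giveaway terms are just the standard summary numbers computed over a whole group. As a quick sketch (scores invented for illustration), this is all they amount to:

```python
# Hypothetical test scores for one group; the printed labels are exactly
# the vocabulary that flags a group design in a write-up.
from statistics import mean, median, stdev

scores = [64, 70, 71, 75, 78, 82, 85, 91]

print(f"average:            {mean(scores):.1f}")
print(f"median:             {median(scores):.1f}")
print(f"standard deviation: {stdev(scores):.1f}")
```

The key point for a skeptical reader: each of these numbers describes the group as a whole, and none of them tells you anything about how a particular child in the group performed.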

Group design researchers get themselves into trouble when they make statements of fact, or of causality, based on their data. When a researcher makes statements outside the scope of his or her data, that is a critical mistake. In our day and age of the minute-to-minute news cycle and the "sound bite," a mistake such as saying "In our study we proved ..." or "our data proves that X caused Y" can have disastrous effects. If you hear someone with a lot of initials after their name making these kinds of statements, be very leery.

I'm reminded of an incident I had the fortune to observe back in the late 90's. I was attending a Defeat Autism Now (DAN) conference in Cherry Hill, NJ. It just so happened that Victoria Beck was speaking to the audience about how Secretin "cured" her son of autism. That night the local media was devoting a lot of time to the conference and, of course, to this amazing development regarding autism. For some reason, I still remember the name of the Texas physician who was being interviewed by one of the local news reporters (amazing for me, since I can't remember where I put my pen down 5 minutes ago). Dr. Baker, when asked what he thought of the "secretin" treatment, stated, "It's a miracle cure!" I'll never forget that. I will also never forget the next 2 years, in which parents desperately tried to get their hands on vials of Secretin. The financial strain, the emotional strain, and the pure hope these parents were exposed to was, in my mind, unforgivable. Not to mention the trauma caused by holding a child with autism down to a table while he or she was either injected or given an IV of this hormone.

It took the scientific community about 2 years to conduct enough experiments to draw some discernible conclusions about the effects Secretin had on the symptoms of autism. Any guesses? Nada, zero, zilch. Secretin was found to have no effect on the symptoms of autism.

This little diversion of a summary was intended to emphasize the importance of science, and why it's so important for those educated in the scientific method to be responsible when reporting and discussing their findings. I guess also to help people understand that just because someone might have a lot of initials behind their name doesn't mean we should let our skeptical guard down.

Part II will discuss Single Subject Research Designs.
