Archive for August, 2008

Checking Sources

This post is part of a series of my notes on articles provided by Dr. Newman in his Regression workshop at Roundtable 2008 at Andrews University, summer of 2008.

O’Neill, B. (1994, March 6). The history of a hoax. The New York Times, pp. 46-49.

The full text of this article is actually posted online here too.

This little article traces the origin of comparing the top discipline problems in the 1940s and those “today”. It’s a really interesting read, and these are some of the lessons for research that I see:

  • Roots & Fruits. Last summer at Roundtable, Dr. Covrig emphasized the importance of following the “roots and fruits” of research: where did it come from and what is it based on (roots), and who else has quoted it and what other research has it inspired (fruits)? It’s pretty clear that it’s important to find out the source(s) of what you’re quoting.
  • “In a study” Credibility. Seems like we all believe something once someone says the words “research study.” I’ve been reading Neil Postman’s book Technopoly this summer. I figure if I’m so heavily involved in technology I should read a dissenting voice once in a while. In the chapter on “invisible technologies” he discusses statistics and polls. “Public opinion is a yes or no answer to an unexamined question” (p. 134). He suggests that we are quick to believe anyone who can quote a study. But do we take the time to examine the questions and answers?
  • Cautious Comparison. One of the flaws in the comparison of these two lists (if quoted as research) is that the question behind the second list, “today,” is unknown. If we don’t know what the question was, how can we compare the answers to another list? How can two lists be compared if they aren’t at least answers to the same questions?

A great article and an interesting read. It shakes you up a bit and helps you realize the importance of thinking critically about the information you consume.


Newman’s Low R-Squares Article

The next several posts are my notes on articles provided by Dr. Newman in his Regression workshop at Roundtable 2008 at Andrews University, summer of 2008.

Newman, I., & Newman, C. (2000). A discussion of low r-squares: Concerns and uses. Educational Research Quarterly, 24(2), 3-9.

This article is available in the OCLC Article First database.

Summary
The point of this article is that low R-squares shouldn’t be dismissed out of hand. Under certain circumstances they may still have value, so they deserve careful consideration before being thrown out.

What is an R Square?
So first let’s remind ourselves what an R squared is. Wikipedia has a very detailed overview if you want to read that. Basically, when you’re using linear regression, the R squared is the explained variance: the proportion of variance in the outcome that can be explained by the variables you’re examining. I.e., why do some people get a higher or lower score on your measurement than others? Your variables may explain some of that variance in scores.
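To make that concrete, here is a minimal sketch (with made-up data, not from the article) of how R squared falls out of a simple linear regression: fit a line, then compare the unexplained variance to the total variance.

```python
import numpy as np

# Hypothetical data: hours studied (predictor) and test scores (outcome).
hours = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
scores = np.array([52, 55, 61, 60, 68, 70, 71, 79], dtype=float)

# Fit a simple least-squares regression line.
slope, intercept = np.polyfit(hours, scores, 1)
predicted = slope * hours + intercept

# R squared = 1 - (unexplained variance / total variance)
ss_residual = np.sum((scores - predicted) ** 2)   # variance the line misses
ss_total = np.sum((scores - scores.mean()) ** 2)  # total variance in scores
r_squared = 1 - ss_residual / ss_total

print(round(r_squared, 3))
```

With this fake data the fit is strong, so R squared comes out high; in real social-science data, as the article discusses, it is often much lower.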

Why are R Squares low?
The article suggests some reasons why R Squares can be low.

  • They can be low (and appropriately so) in the early stages of research, i.e. not enough research has been done yet to identify all the variables that would account for the variance.
  • In social sciences, the predictor variables tend to have small effects.
  • There might be some measurement error. It is very difficult in social sciences to measure a construct such as intelligence, attitude, etc. So it’s pretty common to have some measurement error. This is where the reliability and validity scores come into the picture.
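On that last point, classical test theory gives a standard way to see how measurement error drags R squared down: an observed correlation is attenuated by the square root of the product of the two measures’ reliabilities. This sketch uses hypothetical numbers of my own choosing, not values from the article.

```python
import math

# Classical test theory attenuation:
#   r_observed = r_true * sqrt(reliability_x * reliability_y)
r_true = 0.60          # hypothetical true correlation between two constructs
reliability_x = 0.70   # e.g., reliability of an attitude scale
reliability_y = 0.80   # e.g., reliability of an achievement test

r_observed = r_true * math.sqrt(reliability_x * reliability_y)

print(round(r_true ** 2, 3))      # R squared with perfect measurement: 0.36
print(round(r_observed ** 2, 3))  # smaller R squared actually observed
```

So even if two constructs are genuinely related, imperfect instruments alone can shrink the R squared you see, which is one reason a low value shouldn’t be dismissed automatically.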

How do you know your research is any good?
From what I’ve learned about stats so far, there are a few ways we can look at our data to see what they tell us and whether the results are useful.

  • Tests of significance tell us whether the effect is likely to have happened by chance or not.
  • Effect size is another important measurement. It often went unreported in the past, but it really is a critical piece of information that helps others interpret your results.
  • Replicability is also important. In fact, Dr. Newman suggests that a measure of replicability is more useful than mere significance. Maybe it’s significant with this set of respondents, but does it hold up with another set?
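The difference between the first two bullets can be sketched with standard formulas (a t statistic for significance, Cohen’s d for effect size). The data below are hypothetical and the choice of Cohen’s d is mine, not the article’s; the point is just that the two numbers answer different questions.

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Two hypothetical groups of scores with a one-point mean difference.
group_a = [10, 12, 11, 13, 12, 11, 12, 13, 11, 12]
group_b = [11, 13, 12, 14, 13, 12, 13, 14, 12, 13]

# Effect size: Cohen's d = mean difference / pooled standard deviation.
pooled_sd = math.sqrt((sample_var(group_a) + sample_var(group_b)) / 2)
cohens_d = (mean(group_b) - mean(group_a)) / pooled_sd

# Significance: t statistic for equal-sized, equal-variance groups.
n = len(group_a)
t = cohens_d * math.sqrt(n / 2)

print(round(cohens_d, 2), round(t, 2))
```

With a huge sample, even a tiny d can produce a “significant” t, which is exactly why reporting effect size alongside significance matters.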

Under what circumstances is a low R square “ok”?
The article suggests some examples and things to consider when looking at low R squares. There are several examples in the article of places where a low effect size or R square is still helpful.

  • A drug that explains less than 1% of the variance may still impact 60,000 lives in a population of 1 million.
  • The odds at casinos may favor the house only slightly over the players, but that adds up to billions of dollars over time.
  • When looking at groups of people rather than individuals, a small R squared can still have value.
  • If the small R square is consistent and replicable, it still has value.
  • A low R square may not necessarily mean a wrong path (as suggested by McNeil, quoted in the article); it may only be a partial explanation of the variance, and further research may improve it by adding additional predictor variables.
  • It may be better to have a smaller R square that is replicable vs. a higher R square that isn’t replicable.

In summary, the point seems to be that it’s OK early in research to have a smaller R square when the goal is eventually to reach a larger one. This article seems really useful for interpreting research that results in a low R square!
