A good quote

“Ah, there’s nothing more exciting than science. You get all the fun of... sitting still, being quiet, writing down numbers, paying attention... Science has it all.” - Principal Skinner

Friday, June 12, 2015

The Other Person's Hunchback

There's an old Italian saying: "you can only see the other person's hunchback." It applies to how we see ourselves in relation to other people in so many ways, but none more telling than in recognizing our own biases. Over and over again, studies show that we see more bias in other people's decision-making than in our own. And it isn't that we're overestimating how much bias is in other people's decision-making; it's that we underestimate our own - we don't see our own hunchback.

A new study out of Carnegie Mellon (HERE) shows how pervasive this is. As one of the authors, Carey Morewedge, associate professor of marketing at Boston University, puts it: “People seem to have no idea how biased they are. Whether a good decision-maker or a bad one, everyone thinks that they are less biased than their peers. This susceptibility to the bias blind spot appears to be pervasive, and is unrelated to people’s intelligence, self-esteem, and actual ability to make unbiased judgments and decisions.”

So if people are incapable of being objective decision-makers, how can we do objective research? The answer is, maybe we can't - not with perfect detachment. The best we can do is develop controls to limit the power of our biases when it comes to interpreting our data and drawing conclusions. Peer review helps.

Monday, June 1, 2015

Social Scientists as Attention Hogs - A bad idea whose time has come?

In this NY Times article, an ugly truth is exposed: the desire for popular-press attention has led social science journals to inflate results. These are technical journals, designed to add to science's body of knowledge through rigorous study and peer-reviewed results. But the need to "make a splash" is overtaking the boring, careful nature of good science.

Journal editors aren't the only ones feeling the impulse to make a splash. Authors themselves are under pressure to "make it big," whether from their universities (which often grant tenure based on a faculty member's publication record, not teaching ability) or from grant funders who want to see a lot of "bang for the buck."

When I started grad school, the digital age of journal publishing was just beginning. In the old days, we'd have to make multiple copies of a manuscript (yes, on paper) and mail them to the editor in a big envelope. The editor would then turn around and mail copies to experts in the field for review, then wait for the comments to come back (again, on paper). It was slow.

High-speed internet, email, and digital submissions and mark-ups made everything faster. But researchers began to complain that maybe it was too fast. Was the need for speed good for science?  Was it encouraging sloppy reviewing and publication of dubious results, because everyone was moving too quickly to think hard about the content they were reviewing?

It's not an easy question to answer, and it's an uncomfortable one. I don't know the answer, except to note that the Times article points out that researchers today are complaining about how slow the publication process is - a sign that things aren't going to slow down any time soon.

Wednesday, May 20, 2015

"You can’t fatten a pig by weighing it," and Other Observations

I haven't posted to the blog as much this term as I have in the past. There just hasn't been a lot of interesting stuff out there. But two articles caught my eye. First, there's THIS ONE from the NY Times. The authors point out how much "Big Data" is out there - a lot of it coming from the internet, where companies pay a lot to know where you "click." But there's a difference between Big Data and useful data. As they point out, "The things we can measure are never exactly what we care about. Just trying to get a single, easy-to-measure number ...doesn’t actually help us make the right choice."

The authors are pointing out the limitations of a purely quantitative approach. I'm a "quant," and have been for a long time. But without context, quantitative information is "just more data." For context, you need to actually talk to people. You need to know not only WHAT the behavior is (the quantitative), but WHY it happens (the qualitative). That's why quantitative and qualitative methods exist together in social science.

The authors point out that surveys are a good way to help bridge the WHY. That's because surveys can act as an intermediary between the purely quantitative and the purely qualitative: they can provide context in a form that can still be summarized as numbers - averages and other measures of central tendency.
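To make "central tendency" a little more concrete, here's a minimal sketch in Python. The survey question and the responses are invented for illustration - they aren't from any real survey:

    # A minimal sketch: summarizing a hypothetical Likert-scale survey item.
    # The question wording and the responses below are invented, not real data.
    from statistics import mean, median, mode

    # Responses to "How useful was the program?" on a 1 (not at all) to 5 (very) scale
    responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 1]

    print(f"Mean:   {mean(responses):.2f}")  # the average response
    print(f"Median: {median(responses)}")    # the middle response
    print(f"Mode:   {mode(responses)}")      # the most common response

Each of these is a different answer to "what's the typical response?" - which one you report depends on the story the data have to tell.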

The other article I ran across is HERE. It is a good review of why we do evaluation research. Unlike other scientific endeavors, evaluation research isn't about knowledge for its own sake; it serves a purpose. Like the first article I mention above, this one talks about gathering data in context. Context in evaluation research provides a direction for changing things for the better - whether it's in an organization or in a community. As the author puts it, “‘You can’t fatten a pig by weighing it.’ In other words, you can’t get better at impact if all you do is measure.”

Wednesday, April 22, 2015

Readings for the class on Saturday

Hi folks,
Class this weekend will focus on (among other things) quasi-experimental designs and evaluation studies. These aren't really different areas - quasi-experimental designs are frequently used to evaluate programs. We'll be discussing two program evaluation studies in class: the Oregon Health Plan Standard study and the evaluation of the drug-abuse prevention program D.A.R.E.

For the DARE article, click HERE.

There are two articles about OHP Standard.
Click HERE for the first.
Click HERE for the second.


Sunday, April 12, 2015

Experimental evidence: Does thinking about God make you a daredevil?

Howdy folks.
Applied social science research reaches into lots of related fields - management theory, political science, and marketing are three big ones. CLICK HERE to read about a study from the Stanford Graduate School of Business about how thinking about God influences subsequent behavior.

In the study, one group of participants read a Wikipedia entry about God (the EXPERIMENTAL group), while another group read a Wikipedia entry about something metaphysically neutral (the CONTROL group). Then both groups were asked whether they wanted to view something benign or something a bit riskier (as in, something that might harm their eyes). The risky choice had a small reward attached.

The study shows that reading about God made people more likely to pick the risky-but-rewarding choice. Interestingly, this was independent of people's beliefs in God.
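If you're wondering how a researcher decides that a difference like this is real rather than luck, here's a minimal sketch in Python of a two-proportion z-test comparing an experimental and a control group. The counts are invented for illustration - they are not the study's actual data:

    # A minimal sketch: comparing two groups in a randomized experiment.
    # All counts below are invented for illustration, not the study's data.
    from math import sqrt

    # Hypothetical results: how many participants in each group chose the risky option
    risky_exp, n_exp = 34, 60   # experimental group (read about God)
    risky_ctl, n_ctl = 21, 60   # control group (read a neutral entry)

    p_exp = risky_exp / n_exp   # proportion choosing risk, experimental
    p_ctl = risky_ctl / n_ctl   # proportion choosing risk, control
    p_pool = (risky_exp + risky_ctl) / (n_exp + n_ctl)  # pooled proportion

    # Standard error under the null hypothesis that the groups don't differ
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_exp + 1 / n_ctl))
    z = (p_exp - p_ctl) / se

    print(f"Experimental: {p_exp:.2f}, Control: {p_ctl:.2f}, z = {z:.2f}")

By convention, a z beyond about 1.96 counts as "statistically significant" at the .05 level - the kind of benchmark that makes a two-group comparison like this persuasive.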

If you remember the list of "cognitive biases" we talked about in class, you might recognize this as related to "Knowledge Bias" - the tendency to stick with what you know rather than try new things, even if the new thing offers a greater reward. Why would thinking about God lead to riskier behavior? Hard to know off-hand. But that's the thing about science - a good study can inspire follow-up research.

This article is a great introduction to experimental and control groups, which we'll discuss on the 24th and 25th.

Thursday, April 9, 2015

Article to Discuss in Class

In our first class, we'll discuss an article from the NY Times website "The Upshot" (a link to the article is HERE). In the article, the authors discuss a controversial study that did not find a link between parental engagement with children and child well-being.

As you read the article (there's a link to the original research as well, if you want to view it), ask yourself whether the method used to measure parental engagement was a good one. Were there other methods that could have worked better? And what does this bit of controversy say about the relationship between social scientists and the press?