July 18, 2011
MarkBernstein.org

Web Science Methodology

Clare Hooper points to the value of mixing qualitative and quantitative methods in Web Science.

She points to a very intriguing CACM piece by Judy Robertson entitled “Stats: We’re Doing It Wrong”: even the minority of submitted research papers that report statistically significant results from reasonably large samples, it seems, are generally using the wrong statistical tests.

It’s worse than it appears. Robertson is especially critical of Likert-scale analyses, which begin with everyone’s favorite style of question:

The software was useful in my work:
Strongly Disagree...Disagree...Agree...Strongly Agree

These results often fall afoul of The Media Equation – the compelling desire to praise the computer we’ve been using in order not to hurt its feelings – or of a simple desire to please the investigators. Worse, though, the sort of questions we can ask here are not the questions to which we require answers. It’s nice that a program is detectably more pleasant to use, but that’s not the point: we’re building tools to let people do stuff they couldn’t do otherwise.

The software was so unpleasant to use that I’d prefer to make a terrible mistake than to use it:
Strongly Disagree...Disagree...Agree...Strongly Agree
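Robertson’s statistical point is concrete enough to sketch. Responses to questions like these are ordinal codes, not measurements: the gap between Agree and Strongly Agree is not a quantity anyone measured, so the parametric tests papers habitually reach for rest on assumptions the data cannot support. Here is a minimal illustration in Python – the ratings are invented, and the rank-based Mann-Whitney U test stands in as one standard ordinal-safe alternative, not a method prescribed by either article:

```python
# Invented Likert ratings for two versions of a tool, coded 1-4 from
# Strongly Disagree (1) to Strongly Agree (4).
from scipy import stats

version_a = [3, 4, 4, 3, 2, 4, 3, 3, 4, 2]
version_b = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2]

# The common mistake: an independent-samples t-test, which treats the
# codes as interval data with meaningful means and roughly normal noise.
t_stat, p_t = stats.ttest_ind(version_a, version_b)

# An ordinal-safe alternative: the Mann-Whitney U test, which compares
# only the rank order of the responses.
u_stat, p_u = stats.mannwhitneyu(version_a, version_b, alternative="two-sided")

print(f"t-test:       p = {p_t:.3f} (assumes interval-scale data)")
print(f"Mann-Whitney: p = {p_u:.3f} (uses ranks only)")
```

With samples this small the two tests may even agree; the point is that only one of them asks a question the data can actually answer.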

When we study a kiosk about the Holocaust, we don’t really care about ease of use or usability or graphic design. What we care about is: Never Again. Other things may be easier to measure, but that’s not a compelling argument for neglecting the goal. Nor is that knowledge beyond our ken, forever a matter of faith or a question of taste. We know a lot of stuff that works, but you can’t weigh Catch-22 or The Naked and the Dead or Cat’s Cradle or Shoah or Das Boot on a Likert scale.