31 December 2011

What do quallys know, anyway?

[Image caption: Don't worry, Robert Caro, I'm still pre-ordering your books in hardcover.]
Well, let's write a post guaranteed to alienate some friends and granting agencies! You should know upfront that I'm partly venting here and partly trying to work through some ideas about what "qualitative," "quantitative," and "multi-method" research are.

I'm reading Daniel Kahneman's Thinking, Fast and Slow, which brought to mind some latent fears about qualitative methods. In particular,
  1. What do narrative historians know, anyway?
  2. How are they sure of what they know?
  3. If narrative evidence conflicts with statistical evidence, how do we adjudicate between these accounts?
These ideas were prompted by a gushing piece about Apple's archives at Stanford University. There are lots of happy stories about how "researchers" will be able to learn more about Apple's history, and the history of Silicon Valley, by delving into these records.

And I'm sure they will! Really! But how much more will they learn from this sort of thing than they would from:

  1. The (fairly extensive) existing secondary record, including newspaper and magazine articles
  2. The (fairly extensive) existing primary record (see e.g. and e.g. #2)
  3. Something that I've never actually seen an academic or popular historian do: taking apart quarterly and annual reports and investigating them in the same way a forensic accountant would.
  4. Something that I'm really sure I've never seen an academic or popular historian do: learn how to program or read circuit diagrams in order to understand exactly what it was that, say, Woz was doing when he designed the Apple ][ that made it so efficient, or how innovative the Macintosh's handling of certain features (like rendering the mouse arrow using software instead of hardware) was. (See the sketch just after this list.)
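Since I'm demanding technical literacy of historians, here's the kind of thing I mean by "rendering the mouse arrow using software": the OS XORs a small cursor bitmap into the framebuffer to draw the pointer, then XORs it again to erase it before redrawing at the new position, no sprite hardware required. A minimal sketch in Python, with a toy framebuffer and a made-up bitmap; this illustrates one common technique of the era, not the actual Macintosh routine:

```python
# Minimal sketch of a software-rendered cursor: XOR a small bitmap
# into a framebuffer to draw it, XOR again at the same spot to erase.
# Framebuffer size and bitmap are invented for illustration; this is
# not the actual Macintosh code.

WIDTH, HEIGHT = 64, 48                      # toy framebuffer dimensions
framebuffer = [[0] * WIDTH for _ in range(HEIGHT)]

CURSOR = [                                  # a tiny 4x4 "arrow" bitmap
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [1, 0, 1, 0],
]

def xor_cursor(x, y):
    """XOR the cursor bitmap into the framebuffer at (x, y).

    XORing twice at the same spot restores the original pixels,
    so no saved-background buffer is needed.
    """
    for dy, row in enumerate(CURSOR):
        for dx, bit in enumerate(row):
            if 0 <= y + dy < HEIGHT and 0 <= x + dx < WIDTH:
                framebuffer[y + dy][x + dx] ^= bit

xor_cursor(10, 10)   # draw at the old position
xor_cursor(10, 10)   # erase: framebuffer is back to its prior state
xor_cursor(12, 11)   # draw at the new position
```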

I bring these points up, and particularly the last two, because the more country studies and biographies I read, the more I'm torn between the fact that narrative historians clearly know a heck of a lot of facts and Sartori's dictum that "he who knows only one country knows none."

[Image caption: Yes, I'm using this not to illustrate the depths of human suffering but the challenges of methodology. 30 Rock was right: Grad students are the worst people.]
The depth of knowledge that Adam Tooze or Richard Evans brings to their discussion of Nazi Germany, for instance, is pretty much matchless. But neither Evans nor Tooze could invest the same amount of effort into knowing, say, Imperial Japan or Fascist Italy; both of them would do the same thing I would do, namely buy some books by Herbert Bix or R.J.B. Bosworth, mine their bibliographies, and spend six months reading through them. Add the UK, France, or, God help them, the USSR, and they're 2.5 years away from feeling comfortable about beginning primary-source research.

And, inter alia, they'll have to learn Russian, Japanese, and Frankish, which may take some time.

And what happens after this investment of time? Once they've done this, they're still apt to be subject to both the availability heuristic, based on the ideas they've been exposed to during their research, and the narrative fallacy, because they're in the business of crafting narratives. That presupposes that there is a narrative to be found, that the narrative is meaningful, and that the narrative is not simply the product of randomness.

More to the point, as I alluded to earlier, historians do not have a specialized functional area of expertise aside from research methodology. Now, that's a lot! But it's not a field that seems to have become vastly more sophisticated over the past century compared to, say, economics, sociology, political science, or psychology.

[Image caption: I'm not insisting that historians would be better off if they read this, just rudely insinuating it.]
Some may object that I am now conflating sophistication with quantification. And to a large degree I am. Why? Because of work like this study of Nazi voting patterns by Gary King. In reading this article, I learned more about the bases of support for the Nazis than I have in several years of avocational reading about the NSDAP. Then there's work like this Larry Bartels article on the irrationality of retrospective voting, which makes a point that, I will go so far as to say, qualitative researchers in any field would be unable to prove: that voters are subject to myopic retrospective voting. And even if you think that the Bartels article, the Achen and Bartels working papers, or the Healy et al. classic on football games and incumbent vote share don't prove it, you can return to the classic "shark attack" paper on the Wilson 1916 election. Yes, qualitative researchers might be able to suggest as much, as they have since approximately Plato's time, but they are simply incapable of making the argument as persuasively or as conclusively as quantitative researchers.
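To make the myopic-voting claim concrete: the quantitative test boils down to regressing incumbent-party vote share on income growth and showing that election-year growth gets nearly all the weight while growth earlier in the term gets almost none. Here's a toy version in Python; the numbers are invented for illustration and are not Bartels's data:

```python
# Toy illustration of the myopic-retrospective-voting test: regress
# incumbent vote share on election-year income growth vs. average
# growth over the first three years of the term. All data invented.
import numpy as np

# columns: election-year growth (%), years 1-3 average growth (%)
X = np.array([
    [ 3.1, 2.0],
    [ 0.5, 3.5],
    [ 4.0, 1.0],
    [ 1.2, 2.8],
    [ 2.5, 0.4],
    [-0.8, 2.2],
])
vote_share = np.array([54.7, 49.6, 57.8, 50.3, 53.9, 46.5])

# ordinary least squares with an intercept
design = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(design, vote_share, rcond=None)
print(f"intercept:                   {beta[0]:.2f}")
print(f"election-year growth coeff.: {beta[1]:.2f}")
print(f"earlier-term growth coeff.:  {beta[2]:.2f}")
# Myopia shows up as a large election-year coefficient and a
# near-zero coefficient on growth earlier in the term.
```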

This is not an argument about whether it is "harder" or "easier" to do qual or quant work. The bad news for quallys is that it is frequently easier for quantitative researchers to write papers, certainly once they've invested in learning methodology, writing code, and putting together a dataset. The worse news for quallys is that quantitative researchers benefit from every advance in computing power vastly more than quallys do; consider recent advances in computer-assisted textual analysis, which is beginning to encroach directly on what used to be historians' core competency: analyzing giant corpora of text.
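And by "computer-assisted textual analysis" I mean, at its simplest, things like this: tokenize a corpus and compare term frequencies across documents in seconds, at a scale no human reader can match. A minimal sketch, with three placeholder "documents" standing in for a real corpus (real work would use thousands of texts and something more sophisticated, like topic models):

```python
# Minimal sketch of corpus-scale text analysis: relative term
# frequencies per document. The three "documents" are placeholders;
# the same code scales to thousands of texts.
from collections import Counter
import re

corpus = {
    "doc_a": "the computer revolution began in a garage",
    "doc_b": "the revolution in voting behavior research",
    "doc_c": "a computer can count words faster than any historian",
}

def term_frequencies(text):
    """Lowercase, tokenize on letters/apostrophes, return relative counts."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = sum(counts.values())
    return {term: n / total for term, n in counts.items()}

for name, text in corpus.items():
    top = sorted(term_frequencies(text).items(), key=lambda kv: -kv[1])[:3]
    print(name, top)
```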

Next week: What do quantoids know, anyway? The same half-baked level of rambling "analysis" deployed against number-crunchers who think that knowing how to write R code is a substitute for knowing what Watergate was.

1 comment:

  1. "Sartori's dictum that 'he who knows one country knows none'"

    Sartori no doubt said some smart things in his career (and maybe is still saying them? I'm not bothering to find out if he's still alive), but this is not one of them, imo. Too tired right now to elaborate, sorry.
