31 December 2011

What do quallys know, anyway?

Don't worry, Robert Caro, I'm still pre-ordering your books in hardcover.
Well, let's write a post guaranteed to alienate some friends and granting agencies! You should know upfront that I'm partly venting here and partly trying to work through some ideas about what "qualitative," "quantitative," and "multi-method" research are.

I'm reading Daniel Kahneman's Thinking, Fast and Slow, which brought to mind some latent fears about qualitative methods. In particular,
  1. What do narrative historians know, anyway?
  2. How are they sure of what they know?
  3. If narrative evidence conflicts with statistical evidence, how do we adjudicate between these accounts?
These ideas were prompted by a gushing piece about Apple's archives at Stanford University. There are lots of happy stories about how "researchers" will be able to learn more about Apple's history, and the history of Silicon Valley, by delving into these records.

And I'm sure they will! Really! But how much more will they learn from this sort of thing than they would from:

  1. The (fairly extensive) existing secondary record, including newspaper and magazine articles
  2. The (fairly extensive) existing primary record (see e.g. and e.g. #2)
  3. Something that I've never actually seen an academic or popular historian do: taking apart quarterly and annual reports and investigating them in the same way a forensic accountant would.
  4. Something that I'm really sure I've never seen an academic or popular historian do: learn how to program or read circuit diagrams in order to understand exactly what it was that, say, Woz was doing when he designed the Apple ][ that made it so efficient, or how innovative the Macintosh's handling of certain features (like rendering the mouse arrow in software instead of hardware) was.

I bring these points up, and particularly the last two, because the more country studies and biographies I read, the more I'm torn between the fact that narrative historians clearly know a heck of a lot of facts and Sartori's dictum that "he who knows only one country knows none."

Yes, I'm using this not to illustrate the depths of human suffering but the challenges of methodology. 30 Rock was right: Grad students are the worst people.
The depth of knowledge that Adam Tooze or Richard Evans brings to their discussion of Nazi Germany, for instance, is pretty much matchless. But neither Evans nor Tooze could invest the same amount of effort into knowing, say, Imperial Japan or Fascist Italy; both of them would do the same thing I would do, namely buy some books by Herbert Bix or R.J.B. Bosworth, mine their bibliographies, and spend six months reading through them. Add the UK, France, or God help them the USSR, and they're 2.5 years away from feeling comfortable about beginning primary source research.

And, inter alia, they'll have to learn Russian, Japanese, and Frankish, which may take some time.

And what happens after this investment of time? Once they've done this, they're still apt to be subject to both the availability heuristic, based on the ideas they've been exposed to during their research, and the narrative fallacy, because they're in the business of crafting narratives. That presupposes that there is a narrative to be found, that the narrative is meaningful, and that the narrative is not simply the product of randomness.

More to the point, as I alluded to earlier, historians do not have a specialized functional area of expertise aside from research methodology. Now, that's a lot! But it's not a field that seems to have become vastly more sophisticated over the past century compared to, say, economics, sociology, political science, or psychology.

I'm not insisting that historians would be better off if they read this, just rudely insinuating it.
Some may object that I am now conflating sophistication with quantification. And to a large degree I am. Why? Because of work like this study of Nazi voting patterns by Gary King. In reading this article, I learned more about the bases of support for the Nazis than I have in several years of avocational reading about the NSDAP. Then there's work like this Larry Bartels article on the irrationality of retrospective voting, which makes a point that, I will go so far as to say, qualitative researchers in any field would be unable to prove: that voters are subject to myopic retrospective voting. And even if you think that the Bartels article, the Achen and Bartels working papers, or the Healy et al. classic on football games and incumbent vote share don't prove it, you can turn to the classic "shark attack" paper on the Wilson 1916 election. Yes, qualitative researchers might be able to suggest that, as they have since approximately Plato's time, but they are simply incapable of making the argument as persuasively or as conclusively as quantitative researchers.

This is not an argument about whether it is "harder" or "easier" to do qual or quant work. The bad news for quallys is that it is frequently easier for quantitative researchers to write papers, certainly once they've invested in learning methodology, writing code, and putting together a dataset. The worse news for quallys is that quantitative researchers benefit from every advance in computing power vastly more than quallys do; consider recent advances in computer-assisted textual analysis, which are beginning to encroach directly on what used to be historians' core competency: analyzing giant corpora of text.

Next week: What do quantoids know, anyway? The same half-baked level of rambling "analysis" deployed against number-crunchers who think that knowing how to program R code is a substitute for knowing what Watergate was.

29 December 2011

Why do people use giant puppets in street protests?

His fur isn't the only thing that's red.
So, what's up with the puppets? When Occupiers do their thing, even in Lincoln, Neb., they always make sure to bring their puppets. They don't emerge spontaneously: the puppeteers and the costumes are coordinated in advance, at least to the same degree that a potluck is coordinated. Puppets aren't free, so somebody's funding them; at least one puppeteer draws money from a Kickstarter project. (The puppeteer in question is the notorious guy who wanted to get an MFA in puppetry and blamed the economy for his inability to get a job.)

Not being in the target audience for protesters or puppetry (except) (also except), it's unsurprising that the point is lost on me. But unlike other things that I don't get--Sex and the City, Miller Lite, Seventh-Day Adventism--I can't even see the appeal. Putting the puppets together is hard work, I'll grant; this video certainly suggests that there's planning that goes into each puppet.

But what is it supposed to mean? Why did this become the signature form of leftist protests? Even when Tea Partiers hoist their quasi-literate signs, I at least know immediately what they're for (white people) and against (non-white people). With the puppets, though, I'm not sure what the semiotics are. (Is the Statue of Liberty sincere or ironic?)

The best guess I can hazard is that the puppets are a manifestation of the self-reinforcing insularity of the protests. (This interview with a leading street protest puppeteer certainly suggests that the artists are severely out of touch with the apparent targets of their agit-prop.) Puppeteers communicate only with each other; the spectacle they produce is not the puppets, but the incomprehension of the audience.

27 December 2011

Richard Nixon Was Not Having a Gay Affair With Bebe Rebozo

Nevertheless, this article from the Daily Mail is worth a read, if for no other reason than for this paragraph:
A new biography by Don Fulsom, a veteran Washington reporter who covered the Nixon years, suggests the 37th U.S. President had a serious drink problem, beat his wife and — by the time he was inaugurated in 1969 — had links going back two decades to the Mafia, including with New Orleans godfather Carlos Marcello, then America's most powerful mobster. Yet the most extraordinary claim is that the homophobic Nixon may have been gay himself. If true, it would provide a fascinating insight into the motivation and behaviour of a notoriously secretive politician.
Indeed! I'm writing a new book right now that alleges that George W. Bush was a secret Soviet sleeper agent activated by Vladimir Putin. If true, it would provide a fascinating insight into the motivation and behavior of a notoriously blundering politician.

Similarly, a friend of mine is finishing an article that alleges that Calvin Coolidge was actually carved from an oak tree. If true, it would provide a fascinating insight into the motivation and behavior of a notoriously silent politician.

And let's not forget the conspiracy theory that William Henry Harrison never died of pneumonia but is, instead, still the living president of the United States. If true, it would provide a fascinating insight into the motivation and behavior of a notoriously deceased politician.

21 December 2011

Models, Modeling, and Social Science

This is on the first page of Google Images results for "models."
Kaiser Fung writes about the difficulties of modeling social behavior, particularly in economics; Andrew Gelman picks up the thread here.

Well, it's always encouraging to hear people much smarter than me say they find modeling behavior to be hard, since I do too. It's much more discouraging to hear people much smarter than me say that modeling is going to be well-nigh impossible. But I think that this is an overly pessimistic conversation. Yes, causal inference is hard; yes, it is probably epistemologically impossible for us to uncover the "real" drivers of human behavior; and, yes, the measurements of human behavior, and of the motivating forces behind those actions, that we routinely deploy are pretty bad.

Nevertheless, the question is not whether we should ask political science to perform as well as physics. (It won't.) The question, rather, is whether we can reasonably expect social science to outperform our intuitions and our folk wisdom, and to become more sophisticated and more certain about at least some propositions over time.

That, I think, is likely. Think of the implications just of (say) the Arrow theorem or the Hibbs bread-and-peace model for understanding American politics. Arrow takes a lot of high-school civics course pabulum off the table, while Hibbs should remind us that much of the churn of polling in the general election is irrelevant even as campaigns are determined, at least in a sense, by the fundamentals that we wish they would be determined by.

It's odd to see Rodrik cast as the defender of economic orthodoxy, and in particular of quant-led orthodoxy. (Really? That Dani Rodrik?) Yet it's equally strange to see Fung and Gelman so glum about the chances that knowledge can lead to better estimates of both the current social scene and also of the broader regularities of human behavior. Without taking too much of a swipe at social science before, say, 1975 (and I mean from ca. 10,000 BC to AD 1975), I think it's pretty clear that we've learned a lot, and that there are good grounds to think we'll learn more.

20 December 2011

When the movie is better than the book

No, this post wasn't an excuse to link to this picture. But having written this post, I wanted to link to a pic of Olivia Wilde, and so why not?
This is always a fun cocktail party game, at least until the pedantic jerk who insists "You can't compare them, because they're meant to be different" shows up and you're all like, "Yeah, we know, jackass, but they can't be completely different, because the book and the movie have the same title, and the same characters, and the same plot, mostly," and then someone quotes McLuhan and then someone else quotes Woody Allen and you realize that you're having the same conversation you had five years ago with a completely different set of people.

All of this, as a preamble to an unequivocal declaration that Cowboys and Aliens, the movie, is better than Cowboys and Aliens, the graphic novel.

I know C&A wasn't a great movie. I saw it, and I didn't regret that I'd spent the money, but I wasn't thrilled. But next to the source material it is an act of unparalleled genius, a tribute to the ability of editorial insight to find the hidden masterpiece, or at least journeyman's work, inside even the roughest chunk of marble.

What was alluded to in the movie (see, the aliens are displacing the indigenous Earthlings just like the cowboys displaced the Indians!) is blatant in the novel. What was interesting in the movie--why are the aliens here?--is rendered completely dry by some exposition from the alien captain, who not only speaks (a lot) but (a) starts a feud with some local cowboys and (b) talks like a third-rate imitation Lensman villain. Think George Lucas dialogue, but less realistic.

Oddly, at least two Philip K. Dick works ("We Can Remember It For You Wholesale" and "Second Variety") were similarly improved by their translation to the big screen. So maybe it's a genre effect.

19 December 2011

Kim Jong-Il and other people are dead

KJI's death, of course, is the most important thing to happen this month. For a great many people (at least 40 million people on the Korean peninsula), it is the most important thing to happen in 15 years. I assume it will get vastly less press than Christopher Hitchens' death. The Hitch could drink, and write, and sleep around. He was the kind of public intellectual that journalists think public intellectuals should be: glib and impressive to people who had a B average (these days, an A-) in their English major at a "good" school. Update: iOz does it better.

11 December 2011

Fun With Stata and Jogging

This morning, I ran a longer distance than I ever had before. Just an 8k, for those who jog regularly. But the prospect of running five miles--five miles--filled me with the sort of dread that I assume SEC defensive linemen have when they approach final exams in Diff EQ.

What was my time? It was very bad, by the standards of people who run: 10:29/mile. By my standards, this is very, very good--not far off from what I ran in the single mile when I was in middle school, but for much, much longer.

Anyway, I wanted to have some fun, so I grabbed the race results from a Web site and decided to do some analysis.

Table 1 displays the results of ordinary least-squares estimates of running times per mile for the participants in the Jingle All the Way 8K Race this morning. Scanty information is available, so the models are pretty spare. Nevertheless they do have some pretty strong results.



Table 1. OLS models of time per mile in seconds
for the Jingle All The Way Race, December 2011.

                    (1)          (2)            (3)
                 Overall     Men Only     Women Only

  age           1.642***     1.264***       1.955***
                (11.52)      (5.84)         (10.29)
  female        78.24***
                (25.03)
  _cons         484.3***     497.8***       552.2***
                (85.19)      (60.89)        (84.93)

  N             4726         1709           3017

t statistics in parentheses
* p < 0.05, ** p < 0.01, *** p < 0.001


As you can see, women are significantly--significantly--slower than men. For a runner of the same age, women run about 78 seconds per mile slower than men. Moreover, women slow down with age somewhat faster than men, losing nearly 2 seconds from their per-mile time for each year they get older. Compare that with men, who slow by only about 1.3 seconds per year.

The point is reinforced by the figure, which displays kernel density estimates for women's and men's time. On the other hand, there are way, way more women than men running, so it is possible that men just don't run if they're not competitive. (Please, God, I'm not trying to do causal analysis here ... )
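For the curious, the models in Table 1 can be sketched in a few lines. This is my own Python/NumPy translation, not the original Stata code, and it runs on simulated data standing in for the scraped race results (which aren't reproduced here); the simulated coefficients are chosen to resemble model (1).

```python
# A rough sketch of the Table 1 regressions on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 4726  # matches the N of model (1)
age = rng.integers(15, 70, n).astype(float)
female = rng.integers(0, 2, n).astype(float)
# Pace in seconds per mile, built to resemble the fitted model (1).
pace = 484 + 1.64 * age + 78 * female + rng.normal(0, 100, n)

# Model (1): regress pace on age and a female dummy, with an intercept.
X = np.column_stack([np.ones(n), age, female])
coef, *_ = np.linalg.lstsq(X, pace, rcond=None)
print(dict(zip(["_cons", "age", "female"], coef.round(1))))

# Models (2) and (3): the same regression fit separately by sex.
for label, mask in [("men", female == 0), ("women", female == 1)]:
    Xs = np.column_stack([np.ones(mask.sum()), age[mask]])
    b, *_ = np.linalg.lstsq(Xs, pace[mask], rcond=None)
    print(label, b.round(1))
```

With real data, the only change is loading the results file instead of simulating `age`, `female`, and `pace`.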

28 November 2011

The Irony of Simplicity

It is a truth universally acknowledged that a methodologist in possession of a Harvard chair must be in want of popular influence. But it strains credulity to assert that the ternary plot is intuitive. The sadder truth, however, is that it is still easier to understand than a table of multinomial logit results. (When not writing this blog, I occasionally write a dissertation.)

A Constitutional Perspective on Power and Our Populist Moment


The conversation between Conor and the PM has got me thinking about Hobbes and power and how to make sense of its abuses at UC-Davis and in Oakland some weeks ago.

It seems to me that the architectural design of our regime itself speaks to the point, and the gradual loss of commitment to that design explains both the popular movements of our time and the institutional responses to them.

The PM says that we are more dangerous in uniform than out of it, and he's surely right that "absolute power corrupts absolutely." The whole theory of Madison's republicanism can be said to be based on precisely this recognition of human fallibility when it comes to wielding the power of the regime. The technological means for a Third Reich or a Hiroshima were not yet imaginable to the American framers, and they would not have been able to conceive of the devastating power office holders now employ. But even before we invented the power to kill en masse, they held power to be a problem, trusting the executive especially a good deal less than Hobbes trusts his sovereign.

John Winthrop and his Puritan followers, whose influence is set deep in the foundations of the American psyche, emphasized our imperfect and imperfectable knowledge, our inability to completely master and govern ourselves, our needs, our lacks, our intellectual and moral insufficiencies. Ours, they taught, is a broken world of sin--not due to malevolence or willful evils, but the opposite--our myopia and incomprehensions would frustrate even and especially our attempts to do good.

Niebuhr makes the point in The Irony of American History that Puritanism reached its final expression not in American religious life, but in the forms of American government. We usually think of America as a secular place, where there is a “separation of church and state.” But Niebuhr understood that America’s Lockean institutions were themselves informed by a religious world-view. According to the American Constitution, nodes of power are to be separated and opposed to one another. They are to be separated at the national level by creating institutional incentives and shared responsibilities that would force the executive, judicial and legislative branches to govern each other, and the most powerful among these, the legislative, the framers divided further into two chambers. The framers start from the assumption that men are motivated by “interest,” which is to say that every person to occupy constitutional office will be fallen and broken and will be moved by pride to expand their own dominion. The offices must be made to “check and balance” each other because we can't fully "check and balance" ourselves. Lockean institutions are designed to accommodate Calvinist selves.

As fractious and divided as the nation was during the ratification debates, the psychological centrality of interest was almost universally assumed to be true. On this point, the Anti-Federalists departed from the Federalists only in harboring hope that interest could be governed by our better angels if political power were even more widely dispersed, more truly federal, local, and even democratic. They aspired more to the ancient model of democratic alternation and were wary of representation and the rule of trustees. For the Anti-Federalists, local government allowed for more opportunities for civic participation, and participating in government was the great school of citizenship, where men learned to rule and be ruled in turn.

Federalism itself is the second way in which the American framers were wary of political power. It was to be not only shared among the branches of national government, but federalized, decentralized, shared laterally amongst states and the national government. Much has recently been made of the alternative voice in American political thought -- emphasizing the polyphony of American political thought -- and rightly so. It's true enough that the Federalists sought more centralized power than the Anti-Federalists, but both were wise enough to see that centralized power could be a threat.

Ours was designed as a regime of institutions inspired by Locke and by Montesquieu, infused with a religious sensibility, all of which amounted to a rejection of the Hobbesian optimism about the sovereign. Conor is right about that. Axiom number one for Hobbes is that without an arbiter that is empowered to restrain each party, the parties will always be potentially at war, or in Conor's memorable phrase, violent conflict will always be "a lurking logical possibility." Because there is no arbiter to restrain the sovereign were he to feel threatened by the commonwealth or its members, the sovereign stands in relation to the commonwealth as the frightened, neurotic individuals stood in relation to each other before the commonwealth was instituted. The American insight is to restrain and frustrate centralized power because, frankly, the framers did not trust that it could be responsibly used.

Although Americans tend not to think in these terms any longer, they do feel that something is not quite right. So it is that a populist movement from the right, decrying government centralization in Washington (at the expense of the states) and in the White House (at the expense of the Congress and Courts) is matched by a populist movement from the left, decrying the financial centralization in a few banks. The Tea Partiers and the Occupiers, though divided on a range of social issues, both carry the flag of decentralization. The disproportionate use of force levied against the Occupiers at UC-Davis and in Oakland and elsewhere is precisely the kind of force our regime was designed to restrain.

This leads me to conclude that the moment is ripe for a mature reconsideration of, and renewal of, our commitment to the Constitution. In my view, the uses of power and populism we see around us neither warrant a progressive wish to transcend the Constitution, nor do they warrant the bizarre necromancy of some of the Constitution's biggest fans. In recognizing that the same centralization of power our regime is dedicated to dispersing is the source of both popular protest and the institutional response to that protest, we should be moved to recognize the Constitution's lasting wisdom.

- A Citizen of Rochester -


21 November 2011

The vanity of power

Following yesterday's discussion of Linda P.B. Katehi, TalkingPointsMemo reports that Katehi is staying on. Her explanation: "The University needs me." This is far from the best way for her to address the situation. The Regents and the President of the UC system should take action now.

20 November 2011

War made the state, and the state university made war

Conor comments on the situation at UC-Davis, in which police dispatched by the University's chancellor, Linda P.B. Katehi, tear-gassed peaceful demonstrators.

You can see the video here: As the authors at the Edge of the American West comment, what is most chilling is that the campus police officer "looks utterly nonchalant, for all the world as if he were hosing aphids off a rose bush. The scene bespeaks a lack of basic human empathy, an utter intolerance for dissent, or perhaps both."

Conor argues that this shows that we should not believe that institutions ameliorate violence:

None of that means, of course, that police brutality is excusable in the context of positive liberal laws like ours. We can certainly expect and demand better, even if we understand—with Hobbes—that such violence is always lurking.

Indeed, this sort of incident actually shows why Hobbes’ prescriptions don’t match his critique. Humans are just as violent when they take the reins of power. We are every bit as dangerous when wearing a police uniform as when we are outside a community of laws.

In the universe of American political thought, I would argue that there are two responses to this. The first, catalogued ably recently by Radley Balko, is the view that Conor is wrong: that we are much more dangerous when in uniform than when we are individuals. And the twentieth century supports this view quite well. Without uniforms, and the vast organizations they represent, there is no possibility of a Holocaust, of a Great Leap Forward, of a Hiroshima.

I do not imply a moral equivalence among the regimes responsible for those acts. But it is both trivially true that they could not have taken place without states (no individual or voluntary collective could carry out such mobilizations of resources) and more profoundly true that we cannot debate the moral equivalence of such acts without ascribing agency and intentionality to institutions (the "United States" launching a nuclear attack on "Japan" is more than just a shorthand for referring to the actions of certain crew members of a certain plane dropping a certain bomb on a certain town; it is also reflective of how our analytic categories constrain and permit judgment). In any case, however, it is clear that the resources of violence available to individuals within institutions are vastly, astronomically, greater than the resources for violence available to individuals outside of institutions.

The second response to Conor is to take the first response as granted and to ask why some massive institutions (Nazi Germany, Soviet Russia) are far more coercive than other massive institutions (Canada, Norway). And this is important, because the answer suggests why we find relatively modest applications of force by campus police officers against students far more shocking than (or, maybe, just as shocking as) the routine application of coercion by "real" police officers elsewhere. After all, at least part of this story has to do with something that Hobbes never considered fully, and that is the issue of legitimacy--of the right to rule in the sense that Weber defined the term.

It is not the actual fact of violence that should concern us. It is the illegitimacy of that coercion. And illegitimate use of force must be met with both moral condemnation and actual signals of disapproval from the community.

Thus, it is less productive for us to discuss the general tendencies to violence of the Hobbesian state, and far more productive for us to call for the obvious: that the Regents of the University of California, Davis campus, should fire Linda P.B. Katehi, Chief of Campus Police Annette Spicuzza, and Lieutenant John Pike of the UC-Davis Campus Police immediately, for cause, and without a severance package.

11 November 2011

Should professors have coaches?

Really, this was a terrible show. It lasted for nine seasons.
In The New Yorker, Atul Gawande suggests professionals need coaches.

Over at Statistical Modeling, Causal Inference, and Social Science, Andrew Gelman is more skeptical. He writes:

But I don’t know if this could work for statisticians (or for physicists or computer programmers or various other technical jobs). I’m sure I could benefit from advice—if I had Don Rubin or Xiao-Li Meng or Jennifer Hill on a string to answer my statistics questions at all time, I’d be in much better shape (this is not possible so I have partitioned off areas in my brain to simulate Rubin and Meng and Hill—it’s not as good as the real thing but it actually can be helpful, sort of like those old Windows emulators they used to have on Macs)—but that sort of advice and feedback seems a bit different from coaching, somehow.

Gelman misses this one. He asks whether Gawande is "getting coached on his reporting and writing," and sets this up as a test of whether Gawande is really serious about getting feedback on his "core competency" (to add to his list of business jargon).

Let me assure you that the answer is yes.

Gawande is a professional writer, but the principal difference between professional and amateur writers is that some institution has chosen to publish Gawande. That entails not merely giving him a check for the essays he writes but also imposing stringent editorial standards. And there are few magazines where that editorial intrusion is more rigorous or searching than The New Yorker.

Gelman might object that this is not quite "coaching," in the sense that it is an essential part of the process of being a magazine writer. But, actually, it is Gawande's volitional coaching--privately engaging the services of a senior surgeon to give him tips--that is the deviation from the essence of coaching. Players don't get to choose their coach, but their coach does get to choose who plays. The incentive structure is no less clear in journalism. The principal difference between the coaching he receives as a surgeon and that he receives as a writer is that in only the latter is it required. (Oddly, the field in which the professional is subject to review is not the one where his mistakes could kill.)

That underscores the difference between volitional and institutional coaching. Peers cannot criticize their peers too strongly without incurring too many costs. But a coach hired by a client who wants searching criticism can, at least in principle, say things that peers cannot. And because their relationship is based on a fee-for-service model, the coach-client relationship is, again in principle, easier to maintain than the peer-to-peer relationship. After all, you can't fire your peers, but you can fire your coach.

Churchill's secret of leadership

Churchill's wartime recollections:

All I wanted was compliance with my wishes after reasonable discussion.

Serving on University committees has made me distinctly more charitable to this point of view.

10 November 2011

You are a little blog bearing up a corpse

(Those among you who are fans of Epictetus will get the reference.)

A delayed shout-out to a great new blog dealing with Stoicism, philosophy, and modernity. Applied Stoicism ought to be more popular than it is.

It's a good day ... for data science!

Despite the lab coat, he's more of an engineer than a scientist.
Kaiser Fung recounts three hours in the day of the life of a "data scientist." The post triggers a few observations.

First, what Fung doesn't mention is that this is actually fun. Screwing around with computers is a perfect example of nonwork, in that it is labor-intensive enough to feel like you're being productive while having no actual value added. (Much like blogging!) But unlike much nonwork (in the real world, examples include answering the phone, answering emails, going to meetings, and so forth), writing code is like solving a whole bunch of logic puzzles all at once. And the frequently arbitrary (or apparently arbitrary) relationship between success and effort makes you feel like a lab rat in one of those experiments showing that random rewards are more effective at generating effort than rules-based ones.

Second, the term "data scientist" is a little misleading. Just as most mad scientists are actually mad engineers, so too are most data scientists really data engineers, at least day-by-day. (There's nothing wrong with that; engineers get things to work! The software engineers who built Google's search functions are praiseworthy!) But note what Kaiser is doing: he's moving data from X to Y. No hypothesis testing, just problem-solving.

Third, I'm again reminded of the difference in practice between the lives of quants, quals, and squishes. (In classic social science tradition, I'm breaking up the dialectic and calling this progress.) Quants spend their time wrestling with datasets, which is often way harder than quals or squishes believe. Quals spend their time wrestling with cases, which is often much harder than quants or squishes admit. And squishes spend their time figuring out the substrate of reality, which confuses quants and quals, who simply assume that problem away.

But at the end of the day, the quant approach is actually more collaborative than quals or squishes admit, and the qual/squish approach is more solitary. Because so many quant problems are engineering in nature, two (or more) heads are better than one--and once a given problem is cracked, the answer is open to everyone immediately. But squishes and quals have to rely on a lot of tacit knowledge. It's very easy for me to consult with someone on a Stata problem. It's very, very hard for me to consult even on a qual topic I know well, like the Nixon administration.

09 November 2011

Using "reshape" to generate country-year data in Stata

The other day, I observed a colleague creating a country-year dataset by hand--using Excel to type out a list of countries and then manually add years. It took her eight or ten hours.

This is a little inefficient.

So I thought I'd give a very quick tutorial in how to do this in 10 seconds.

First, open Stata and create a new file. (For convenience, I'll refer to this as "country.dta".)

Create one new variable, called "country."

Populate this with some arbitrary number of country names--"Belgium", "France", "Germany", whatever. Since this is an example, four or five will be fine.

Next, create some number of years, like so:

gen year1960=1960
gen year1961=1961
gen year1962=1962


You should now have four variables--"country", "year1960", "year1961", and "year1962"--and each of the latter three should be constant across observations. To see your data, type

browse

Now, type

reshape long year, i(country)
drop _j


Once again, type

browse

to see your data.

You'll see that you now have your data arrayed in country-year format.

This is a toy example, but it's got obvious advantages. For more on the tools that went into this, see the UCLA computing site or type

help reshape

from the Stata command line.
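For readers who work outside Stata, here is a rough equivalent of the same country-year construction in Python with pandas. This is my own sketch, not part of the tutorial above, and note that `merge(..., how="cross")` requires pandas 1.2 or later:

```python
import pandas as pd

# A handful of example countries, as in the Stata walkthrough.
countries = pd.DataFrame({"country": ["Belgium", "France", "Germany"]})
years = pd.DataFrame({"year": list(range(1960, 1963))})

# A cross join pairs every country with every year--the same
# country-year panel that the reshape trick produces.
panel = countries.merge(years, how="cross")
panel = panel.sort_values(["country", "year"]).reset_index(drop=True)
print(panel)
```

The design is the same either way: build the country list and the year list once, and let the software construct the cross-product, rather than typing out every country-year row by hand.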

08 September 2011

An Awe

Among the traits characteristic of the historical line of research begun during the 1930s by the Annales school, reference to statistical objectifications has been significant. From this point of view quantitative history has inherited, via Simiand, Halbwachs, and Labrousse, elements of the Durkheimian school and, even closer to the source, of the mode of thinking centered on averages engendered by Quetelet, who opposed macrosocial regularities to the random, unpredictable, and always different accidents of particular events. It sought, by this technique, to overcome individual or factual contingencies in order to construct more general things, characterizing social groups or the long run, depending on the case. This attempt to give form to the chaos of countless singular observations involves having recourse to previous sources or to specific encodings, of which historians ask two questions: Are they available? Are they reliable? In this perspective, the question of the reality and consistency of objects is assimilated to the question of the reliability of their measurements. The cognitive tools of generalization are presumed to have been acquired and firmly constituted. All that matters is the controlled collection and the technical treatment--eventually automated--of the data.
--The Politics of Large Numbers: A History of Statistical Reasoning, Alain Desrosieres, trans. Camille Naish, Harvard UP, 1998, p. 323.

12 August 2011

Number One for 12 August 2011

The FCC won't let me be me:

05 August 2011

03 August 2011

Debt and blockbusters

Summer is a time for blockbuster movies--sprawling, spectacular, and illogical monuments to excess. This summer, though, the most successful disaster film hasn't been a Hollywood creation; it's been the Countdown to the Default.

The imaginary apocalypses now playing at the multiplex pale next to the consequences of America's impending defaults, downgrades, or some combination of the two. (Sure, evil transforming robots might wreck Chicago, but a default would have wrecked my retirement portfolio.)

Ironically, the national debt played a supporting role in one of the summer's biggest movies, Captain America. The movie shows the journey of Steve Rogers, a good-hearted Boy Scout type who wants to defend the world against the bad guys, from 98-pound weakling to world-saving superhero.

But Captain America's first assignment isn't punching Hitler in the jaw. It's traveling across the country selling war bonds. In other words, in today's debate over the national debt, Captain America would be foursquare for spending more money than the government earns.

What's more, he would be part of a proud pro-debt tradition. Contrary to Tea Party slogans, the debt has been a central tool for uniting Americans for most of U.S. history. Buying debt has even been seen as a patriotic duty.

Forget the wonk-centered debate over the extent to which American policymakers have abandoned--or ever adopted--Keynesian ideas. The national debt has been an essential part of the American experience ever since 1790, when Alexander Hamilton made a deal with Thomas Jefferson and James Madison to swap the location of the national capital for the federal government's assumption of state debts (what we would today call a "bailout").

The resulting bargain ensured both that the capital would be located in the South instead of the commercial, cosmopolitan, and (the South feared) abolitionist North and that the federal government would play a major role in the nation's financial markets.

The creation of the debt thus ensured the future of the Union by binding both Southerners and Northerners to the federal government. Subsequent uses have been just as calculated: the Civil War was paid for by a mix of taxation, inflation, and debt sales, as was American involvement in both World Wars and the defense buildup that helped the United States win the Cold War.

In every case, policymakers calculated that taking on massive amounts of debt was in the national interest--and voters agreed. Voters not only returned the politicians who voted for that spending to office, they turned out in droves to buy bonds in order to support deficit spending.

Captain America's bond salesmanship was fictional, but real-life bond drives featured Bette Davis and Rita Hayworth. Irving Berlin even wrote a theme song for bond sales performed by Bugs Bunny ("Here comes the freedom man/Asking you to buy a share of freedom today").

For most of their history, in other words, Americans have managed to stop worrying and love the debt. That's not a part of the nation's history that John Boehner, Eric Cantor, or the Tea Party's members care to remember. They instead claim that the debt is the result of out-of-control Washington spending.

And they're right. Unlike earlier generations' deficit spending, the contemporary national debt wasn't incurred to beat the Nazis or save the Union. It's the result of decisions taken in President George W. Bush's first term to slash taxes on the wealthy without cutting spending elsewhere--and then to embark on costly nation-building projects without raising taxes to pay for them.

For all their righteous anger about the debt, then, the pro-default crowd misses the point. The problem isn't the size of the national debt. Instead, it's the purposes for which it was spent and the ability of the government to raise the revenue to pay for the money that it has already spent.

A default on the debt would signal the end of an unbroken American tradition of faithful repayment. Doing the right thing now requires a steadfast mix of spending cuts and tax increases. That would be a dull ending to the high-stakes negotiations, but real life isn't a movie, and sometimes boring endings are the best kind.

27 July 2011

Everyone hates adultery

John Sides and Matt Yglesias delve into the General Social Survey to see why people's attitudes toward adultery have changed.
Hester Prynne by Flickr user Billhd


The GSS is great; it may be my favorite dataset of all time. I remember that the first time I sat down and started playing with it, I immediately gravitated to the variables about social behavior--extramarital sex, premarital sex, pornography, gay sex, and so on. Political scientists hardly ever get to play with these sorts of variables, and guys who do International Relations absolutely never have time to wonder what causes adultery. And Sides, like Douthat and Yglesias, is right to be shocked at the levels of support for extramarital sex--it is striking, the first time you plop these data into Stata, to see that your parents' or grandparents' generation enjoyed a bit on the side.

Sides observes that more-educated Americans have gotten less accepting of adultery, and speculates that the availability of divorce has something to do with it. Yglesias notes that an alternative theory is that female empowerment has made women less reliant on breadwinners, and so the "marriage market" may have allowed them to find more suitable mates and enforce better behavior by holding out an exit option.

I think these are both plausible theories worth investigating. But I want to add some data and some thoughts to this debate. What follows uses only descriptive statistics (for now), so no fancy interpretation is needed, just some careful thought. But I want to drive home one main point: tolerance of adultery used to be fairly widespread, yet everyone now hates adultery more than they used to, and this shift has affected practically all subgroups over the past forty years.

First, let's take a look at some additional data. The two charts below show shifts in opposition to adultery by sex and by race over the period 1972 to 2010.



First, a couple of notes. I follow Sides in defining opposition to adultery (and, yes, I am using the shorter and more judgmental term) by using the GSS question XMARSEX. (By the way, this is how we refer to GSS variables at my department--"XMARSEX", "PORNLAW", "HOMOSEX", etc. Way cooler than standard variable names like "V16".) My coding is conservative: opposition to adultery includes only those who say adultery is always wrong.
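To make that coding rule concrete, here is a minimal sketch in Python with pandas. The rows are made-up stand-ins, not real GSS data (the real analysis would load the GSS cumulative file), and I am assuming the standard XMARSEX coding of 1 ("always wrong") through 4 ("not wrong at all"):

```python
import pandas as pd

# Hypothetical stand-in rows, NOT real GSS observations.
df = pd.DataFrame({
    "year":    [1972, 1972, 1972, 2010, 2010, 2010],
    "sex":     ["male", "female", "male", "male", "female", "female"],
    "xmarsex": [2, 1, 4, 1, 1, 2],
})

# Conservative coding: a respondent counts as "opposed" only if
# they say extramarital sex is always wrong (code 1).
df["opposed"] = (df["xmarsex"] == 1).astype(float)

# Share opposed, by survey year and subgroup--the quantity
# plotted in the charts below.
shares = df.groupby(["year", "sex"])["opposed"].mean()
print(shares)
```

Swapping "sex" for race, religion, party identification, or marital status gives each of the breakdowns discussed in the rest of the post.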

Second, we should be aware that the GSS responses constrain our ability to answer these questions. What we want to know is under which circumstances people believe adultery is acceptable, but we merely know whether they think it is "always wrong," "almost always wrong," "wrong only sometimes," or "not wrong at all." It is also possible that the categories we're measuring have changed--that the person who thought in 1975 that extramarital sex was almost always wrong would believe, given today's divorce laws, that it is now simply always wrong to engage in affairs. There are strategies to cope with challenges such as these, but I'm not going to get into them now.

Third, we should always be aware that America has changed a lot in the past 40 years. In 1972, only 11 percent of respondents had a college degree; in 2010, 29 percent of respondents did. Similarly, in 1972, about 84 percent of respondents were white; in 2010, only about 76 percent of respondents were. (That's one reason why I just use "White" and "Black" as categories in the race chart--there aren't enough observations to support finer-grained categories across such a long time period.)

Getting back to the charts with these caveats in mind, though, we can see that the picture John sketched broadly holds: even groups (like men) that used to be relatively accepting of adultery are no longer. The same pattern can be seen by looking at religious affiliation, which I present in two ways. The first looks at Protestant versus Catholic attitudes, and the second compares all respondents who view themselves as religious with those who have no religious preference (e.g., atheists and agnostics). (Again, data limitations prevent me from displaying anything more fine-grained.)

Here we begin to see an interesting pattern. First, the shift in societal mores has affected even atheists and agnostics, contrary to what we might expect if we got all of our news from Glenn Beck. Second, although nonreligious respondents are more tolerant of deviations from marital fidelity, the gap is narrowing quickly; nonreligious respondents are now almost as willing to condemn adultery as Catholic respondents were in 1972.

Just for fun, I also looked at a partisan breakdown.

Although there is a widening split between the parties on this issue (and others), nevertheless Democrats and Republicans alike have grown less tolerant of adultery, at least at a mass level.

Given the relationship of adultery to marriage (long recognized by playwrights among others), we might want to investigate whether those who are married have different ideas than the population as a whole. A plot of opposition to adultery by marital status reveals a surprising pattern. In this case, we see that although married respondents have always been intolerant of adultery, divorced/separated respondents and never-married respondents have quickly caught up.

Note that fewer than half of single respondents in 1972 thought extramarital sex was "always wrong," which explains Mad Men. Very interestingly, in the early 1970s divorced respondents were fairly accepting of extramarital sex, but over the past few years their attitudes have converged on those of married respondents a generation ago. (One wonders if there isn't a natural ceiling to how strongly divorced respondents will condemn adultery, since some of them, at least, are presumably divorced because of an affair.)

Finally, I looked at whether levels of workforce participation by women have tracked with changes in perceptions on adultery. The first chart takes women who are working either full or part-time and compares their attitudes with women who are not (e.g., laid off, never working, retired, and so forth).

Here, we can see that there is support for a modified version of the Yglesias hypothesis. If the effect of women getting jobs is to make them more likely to demand equality and fidelity within marriage, then that effect is recent. In the 1970s and 1980s, women who worked approved of adultery at nearly the same rate as men did.


 However, there does seem to be some support for the idea that male attitudes have changed. The chart immediately above compares the attitudes of men whose wives work full- or part-time with those of men who are unmarried or whose wives do not work. Whereas there was practically no difference between those two groups throughout the 1970s or even the George H.W. Bush administration, a wide gap opened up during the Clinton-Bush administrations. (It remains to be seen whether the steep decline in 2010 is a blip.) This is consistent with Yglesias' (girlfriend's) hypothesis: male attitudes have shifted faster and more consistently than women's.

Finally, I want to point out that the GSS is sponsored by the National Science Foundation. This is what federal support of social science underwrites. It's critical to have these data in uninterrupted time-series, and it's essential that guys like Tom Coburn not be allowed to take away funding for such projects.

13 July 2011

Number One for 13 July 2011

There's only millions that lose their jobs and homes and sometimes accents / There's only millions that die in their bloody wars, it's all right
"Ping Pong," Stereolab

11 July 2011

What do professors do?

The question, from the Hon. Mr. (Matthew) Cameron [Yglesias blog understudy]: How can we raise the teaching productivity of the university?

The answer: The quickest route is to sacrifice research.

I think that there's a widespread misunderstanding of what professors actually do and what universities are actually for. Teaching is secondary. This is not true by biomass--the bulk of tertiary education institutions are, indeed, devoted to teaching--but in terms of societal value, research is the primary goal and justification of the academic enterprise.

Let's unpack the fallacy. Cameron argues a point that a lot of people find intuitively appealing: Universities are soooooo inefficient, because they use large in-person lecture courses to teach! We should just put those online and use the savings to drive down tuition costs to help the Struggling Middle Class and make college more accessible for the Honest Working Classes. (I think all of this is important, so I'm not mocking the SMC and the HWC themselves, but rather pundits'--often Ivy alumni who frankly don't know many authentic SMC and HWC members--invocation of these mythical creatures.)

Consider how Cameron phrases the difficulties:
Universities have little reason to cut costs because their reputations directly benefit from higher per student academic spending. So even if a school achieves cost savings without sacrificing quality – say, by replacing large, intro-level lectures with online courses – it will be regarded as less prestigious by many ranking methodologies. ... Finally, university faculty view online technology as a threat to their role at the heart of the higher education system. This was evident during an exchange between George Mason University Prof. Tyler Cowen and Stanford University Prof. Tim Bresnahan at yesterday’s conference. When Cowen raised the possibility of universities employing fewer professors once online courses are widespread, Bresnahan responded defensively by asserting that he does more than just teach.
Cameron then lays out what he sees faculty as doing:
Obviously, Bresnahan has a point – the specialized expertise and personalized guidance that professors can convey to students in higher-level college courses truly is indispensible. For entry-level lectures with hundreds of students, however, faculty members often don’t do much more than teach. They don’t grade papers, they don’t meet their students and they aren’t able to delve into the finer details of the subjects they teach. These classes aren’t just a waste of students’ money, however; they’re also a drain on professors’ time. If they were freed from their obligation to teach such classes, professors would be able to devote more effort toward their niche in the higher education system – stimulating students’ intellectual curiosity through personal interactions and engaging learning experiences.
Here we see the thinking behind the critique of the large lecture course from the point of view of those whose only engagement with academia has been as a student:
  • Professors are primarily teachers;
  • teaching is best carried out in face-to-face interactions;
  • large lecture courses make it hard for professors to have face time with students;
  • ergo, large lecture courses are a waste of time.
There is a corollary argument for replacing these courses with online courses:
  • The Internet is cheap, and allows for on-demand scheduling;
  • Students from the HWCs and SMCs are motivated and disciplined;
  • Online courses suffer from all of the disadvantages of large lecture courses (no face time, no instructor engagement, etc) but at least they are cheaper to provide;
  • Universities will pass on all cost savings from eliminating large lecture courses to students;
  • Universities will be unharmed by losing those tuition dollars;
  • ergo, let's move large lecture courses online.
These arguments collapse at several points because their assumptions are not valid. The argument for online courses is staggeringly weak. In the first place, the track record of the University of Phoenix belies pundit optimism about distance education: students in online courses have higher drop-out rates than their in-person equivalents. And universities have no reason to pass on cost savings to students; indeed, if they did, then the adjunctification of higher ed would have been accompanied by plummeting tuition costs.

But I am less interested in those reasons than in the wildly naive view of what it is professors do and how that view conditions this debate. When my non-grad school friends and my family ask what I plan to do after finishing my doctorate, they invariably ask if I plan to teach. I invariably respond that I would, in fact, very much like to be a professor. But I never use the verb "teach" in describing that position.

I love teaching. I think I am good at it. I work very hard at doing well at it. But given a choice between being a mediocre teacher and a great researcher, or a great teacher and a mediocre researcher, I would take the former every time. Every time. Because if I just wanted to teach, there are many more lucrative and less demanding ways to do so--most of them involving majoring in education as an undergraduate and getting a better-paying job in a nice suburban school system or a good private school. (Yes, the BLS shows that political science "teachers" earn more than secondary school teachers, but that does not account for wages foregone--which can be substantial if you take five or six years to finish your degree--nor for the survivorship bias in university salaries: all of the washouts have already washed out of that pool, because for university instructors tenure is a privilege, not an entitlement.)

I don't think that the choice is that stark. And I think that, frankly, teaching is not really as hard as it seems. Yes, being the bestest teacher in the whole wide world is very tough, but only in the sense that giving a Steve Jobs-level presentation is vastly more demanding than giving a merely very good one. Acquiring a relatively high level of competence seems to be more a matter of discipline and minimal training than of massive investment (certainly less than, say, learning game theory). (Veteran professors are welcome to disabuse me of my naivete.)

Yet that is almost irrelevant. Granted that universities--all of them, even Harvard--need some faculty to work hard at teaching, the real value-added from universities does not come from teaching. It can't, by definition. The best and the worst pedagogue in the world alike are limited by how good their subject matter is. There can be no calculus teachers without calculus, no physics teachers without physics, no political science teachers without political science research. Who cares how well we are teaching if what we are teaching is wrong?

And, by definition, only research can be progressive and expand our capabilities and understanding. But the value of humanity's understanding of a subject is bounded by the most perceptive, not the mean. When the British astrophysicist Arthur Eddington was asked how many people understood his theory of the expanding universe, he thought for a moment before replying, "Perhaps seven." Professors are often criticized or mocked because their research is so obscure, but it is precisely that obscurity which often leads to breakthroughs with startlingly practical applications.

This is at the heart of Professor Bresnahan's response to Cowen that he does more than just teach, and it is why Cameron has so badly misunderstood what the professoriate does. It is not a question of "the specialized expertise and personalized guidance that professors can convey to students in higher-level college courses." It is the fact that that expertise did not spring forth fully-formed from Zeus' head. Rather, it was hard-won, mostly by professors who were probably notoriously bad teachers.

And that is why weakening the edifice of the university by forcing its scholars to become pedagogues first and researchers second, if at all, is so frightening. Research will continue, with or without the university. But research outside of academia is proprietary and narrow, and whole disciplines--especially the social sciences--will vanish or become so transformed as to be unrecognizable. (Think of the niche field of "retail anthropology.") Universities exist to provide public goods, and piecemeal "reforms" such as blindly sacrificing large lecture courses and the cross-subsidization they provide both to research and to those much-ballyhooed higher level courses will profoundly alter the production and transmission of knowledge.