There has been a lot of great discussion, research, and reporting on the promise and pitfalls of algorithmic decisionmaking in the past few years. As Cathy O’Neil nicely shows in her Weapons of Math Destruction (and associated columns), algorithmic decisionmaking has become increasingly important in domains as diverse as credit, insurance, education, and criminal justice. The algorithms O’Neil studies are characterized by their opacity, their scale, and their capacity to damage. Much of the public debate has focused on a class of algorithms employed in criminal justice, especially in sentencing and parole decisions. As scholars like Bernard Harcourt and Jonathan Simon have noted, criminal justice has been a testing ground for algorithmic decisionmaking since the early 20th century. But most of these early efforts had limited reach (low scale), and they were often published in scholarly venues (low opacity). Modern algorithms are proprietary, and are increasingly employed to make sentencing or parole decisions for entire states.
“Code of Silence”, Rebecca Wexler’s new piece in Washington Monthly, explores one such influential algorithm: COMPAS (also the subject of an extensive, if contested, ProPublica report). Like O’Neil, Wexler focuses on the problem of opacity. The COMPAS algorithm is owned by a for-profit company, Northpointe, and the details of the algorithm are protected by trade secret law. The problems here are both obvious and massive, as Wexler documents.
A weekly link round-up of sociological work – work by sociologists, referencing sociologists, or just of interest to sociologists. This scatterplot feature is co-produced with Mike Bader.
This week, we mourn all sorts of things, not the least of which is US leadership (or at least, followership) on combating global climate change. Please post your favorite discussions of the events, especially by social scientists, in the comments.
I realize all the cool kids have switched to R, but if you still work with Stata, you may be interested in some routines I worked up to generate color and line pattern palettes and customize graphs fairly easily with macros and loops. This is useful to me because I am generating line graphs showing the trends for 17 different offense groups. Some preliminary tricks, then the code. Continue reading “stata: roll your own palettes”
Last night, Republican Greg Gianforte won a special election for Montana’s sole Congressional seat. How do we interpret this event? Here’s how the NYT approaches the question:
Voters here shrugged off the episode and handed Republicans a convincing victory. Mr. Gianforte took slightly more than 50 percent of the vote to about 44 percent for Mr. Quist. (President Trump won Montana by about 20 percentage points.) Mr. Gianforte’s success underscored the limitations of the Democrats’ strategy of highlighting the House’s health insurance overhaul and relying on liberal anger toward the president, at least in red-leaning states.
I believe this interpretation is incredibly misleading and reflects a larger problem with how we make sense of binary outcomes in the presence of more information.
As the NYT notes, in 2016, Trump carried Montana 56-36. The House race in 2016 was a similar 56-40. Gianforte here won 50-44. That’s a 10-point swing relative to the 2016 House race: a 16-point Republican margin cut to 6. In a special election. In Montana. And with something like 70% of votes cast before the assault that brought national attention. Turnout was about as high in the special election as in the 2014 general. That’s wild. Yes, Gianforte’s awful, and yes, it’s depressing that he will be a congressman. But framing this outcome as having “underscored the limitations of the Democrats’ strategy,” or as a big loss for Democrats, strikes me as absurd. If you are a GOP rep who won by, say, 10 in 2016 (55-45), this result should terrify you. And if you’re a Democrat looking at an even marginally competitive district, this should embolden you.
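The margin arithmetic here is simple enough to check in a few lines. A minimal sketch, using the vote shares quoted above (the race labels and variable names are mine, for illustration):

```python
# Vote shares from the post, in percentage points: (Republican, Democrat).
races = {
    "2016 presidential": (56, 36),
    "2016 House": (56, 40),
    "2017 special": (50, 44),
}

# Margin = Republican share minus Democratic share.
margins = {name: gop - dem for name, (gop, dem) in races.items()}
for name, margin in margins.items():
    print(f"{name}: R +{margin}")

# Swing toward Democrats, comparing the special election
# to the most similar prior race (the 2016 House contest).
swing = margins["2016 House"] - margins["2017 special"]
print(f"Swing toward Democrats: {swing} points")  # 16 - 6 = 10
```

The point of the comparison is that the binary outcome (Gianforte won) throws away the continuous information (the margin moved 10 points), which is exactly the interpretive problem described above.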
That’s most of what I wanted to say; the rest of this post is an aside about learning, probabilities, continuous information, and contract bridge.