The Economist: Democracy continues its disturbing retreat
The US has been classified as a “flawed democracy” by the Economist Intelligence Unit (EIU)’s Democracy Index for the second year in a row: first for 2016 (study released 2017), and now for 2017 (released 2018). I’m going to go by the year under study and call the first the 2016 report and the second the 2017 report.
The global picture and a bonus briefer on our shortcomings:
Almost one-half (49.3%) of the world’s population lives in a democracy of some sort, although only 4.5% reside in a “full democracy”, down from 8.9% in 2015 as a result of the US being demoted from a “full democracy” to a “flawed democracy” in 2016 (see Democracy Index 2017 by regime type, page 2). Around one-third of the world’s population lives under authoritarian rule, with a large share being in China.
Here’s the report (pdf), methodology at the bottom. The Economist Intelligence Unit’s Democracy Index scores countries from 0 to 10 based on five factors: electoral process and pluralism; civil liberties; the functioning of government; political participation; and political culture.
You might wonder how one measures any of that, and you would be right to, and I have nothing to offer you but confusion. Apparently, the EIU uses both “public opinion polls” and “experts.” Since I cannot find any information on the specific experts – the website gives me all analysts, the editor of the report, and no further information – I’m guessing this means “some suits + World Values Survey.” No, I am not joking:
A crucial, differentiating aspect of our measure is that, in addition to experts’ assessments, we use, where available, public-opinion surveys—mainly the World Values Survey.
Here’s how countries are scored:
10-8 = Full democracy.
8-6 = Flawed democracy.
6-4 = Hybrid regime.
4-rekt = Authoritarian.
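The whole banding scheme is just a threshold lookup, which is worth seeing in the flesh, because it makes the .02 business below concrete: a two-hundredths move across a line flips the label. A minimal sketch; the cutoffs follow the report’s published bands, and treating exactly 8.0 as the floor of “full democracy” is my assumption about boundary handling:

```python
def regime_label(score: float) -> str:
    """Map an overall 0-10 Democracy Index score to the EIU's bands.

    Cutoffs per the report's published bands; whether a score of
    exactly 8.0 counts as "full" is an assumption here.
    """
    if score > 8.0:
        return "full democracy"
    elif score > 6.0:
        return "flawed democracy"
    elif score > 4.0:
        return "hybrid regime"
    else:
        return "authoritarian"


# The US: 8.05 in 2015, 7.98 as of the 2016 downgrade.
print(regime_label(8.05))  # full democracy
print(regime_label(7.98))  # flawed democracy
```

The entire headline, in other words, lives inside that one `if` statement.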
In 2016 the US dropped into the “flawed democracy” category, and this trend continues. You might wonder “dropped how” and the answer is .02 democracies, because as of 2017 we’re a 7.98. No, I have no idea what that means either. According to the report, we’d been teetering on the edge for a while due to low confidence in institutions, and that finally pushed us over.
This is presumably a worrying trend or something, although I have no idea what it’s a trend of. Still, it’s a relatively minor fall with many causes and we should probably keep our heads cool. It’s worth noting that if we were downgraded due to trust in the system, inaccurately reporting that the US government is no longer democratic probably isn’t the best way to rectify that.
And, finally, from Democracy Dies in Darkness itself:
So now we all know a thing that was predetermined by an arbitrary system. Go into the articles themselves and they’ll be careful to point out three things:
1) This is not about Trump – according to the report, he benefited from the decline rather than causing it. I assume this is why Trump features prominently in the ledes.
2) Democracy is falling on a global scale, so this isn’t just about the Anglos. That’s a lie, intentional or not, and it’s why the 2016 America-is-downgraded report objectively-and-globally opens thus:
Caveat, buried deep in the report itself: “Maybe Brexit is a sign of a strong democracy.” Who knows either way? (Everyone who looks at the front page knows it is not.)
3) The US, 7.98, is still ranked 21st in the world by whatever units they’re using to rank. Note that none of this matters because the measure doesn’t matter, only the phrase matters, and the phrase’s use is arbitrary: “Flawed democracy” might be 9 or 7.4 or 6.3, how would anyone know? It’s not a natural category, you can’t test it against anything outside of the court of public opinion. Failing the court of public opinion, you can rely on the experts.
The fairest presentation of the report is going to lead to issues, because the report itself is a mess. They get expert opinion on a series of questions, weigh those against public data, and release none of the results except a final tally. For instance: “confidence in institutions” is clearly important, lacking it hurts you. But also important:
Question 32. The preparedness of population to take part in lawful demonstrations. 1: High. 0.5: Moderate. 0: Low.
If available, from World Values Survey % of people who have taken part in or would consider attending lawful demonstrations.
1 if over 40%. 0.5 if 30-40%. 0 if less than 30%
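As quoted, the World Values Survey override is another threshold lookup. A sketch of that one question’s scoring; how the boundaries at exactly 30% and 40% resolve is my assumption, since the report doesn’t say:

```python
def q32_score(pct_demonstrate: float) -> float:
    """Question 32: convert WVS '% who have taken part in or would
    consider attending lawful demonstrations' into a 1 / 0.5 / 0 score,
    per the thresholds quoted above. Behavior at exactly 30% and 40%
    is an assumption."""
    if pct_demonstrate > 40:
        return 1.0
    elif pct_demonstrate >= 30:
        return 0.5
    else:
        return 0.0


print(q32_score(45))  # 1.0
print(q32_score(35))  # 0.5
print(q32_score(10))  # 0.0
```

Note what the cutoffs bury: a country at 41% and a country at 95% are identical, and a country at 29.9% loses the full point.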
There is, presumably, some correlation between “lost confidence/discontent” and “% who have taken part or would consider taking part in protests,” yet they earn opposite scores. At the very least, one might imagine that one leads to the other, e.g. the United States for the past five decades. Who knows how the opposing points are weighted.
You might also wonder what a flawed democracy is. Read on and continue to wonder:
Flawed democracies: These countries also have free and fair elections and even if there are problems (such as infringements on media freedom), basic civil liberties will be respected. However, there are significant weaknesses in other aspects of democracy, including problems in governance, an underdeveloped political culture and low levels of political participation.
To be fair, sounds like home, which it’s supposed to, because all of those are meaningless sounds you can apply to anything at any time. It’s political astrology. “There are problems, basic civil liberties will be respected.” Basic is doing all the work, I have no idea what a non-basic civil liberty is, I assume it’s when you turn the Bill of Rights to 11.
Of course, everyone already knows what a flawed democracy is, the same way they know what “underdeveloped political culture” means. It’s that dude down the street, the guy he voted for, and why he voted.
First off: this is how you invent a world we all have to live in while pretending you didn’t. Ok? Ok. Next:
There are two major problems here, the first social and the second academic. Since the first feeds the second, let’s address it up front.
I hope that none of my readers is naive enough to think that all of this would be the same had Hillary been elected. It would still be bad, just bad in a different way. Breitbart is nightmare bullshit, but so is everyone. “Fake news,” data point: this entire report.
The 2016 report got a lot more play than the 2017 report, for obvious reasons. Trump gets elected, we get downgraded, what a scoop. The conspiratorial take is this: it isn’t too hard to wiggle that .02 for all the headlines, especially considering that there’s no way for anyone to check. I have no idea if this is true, my own position is “had to look up defamation laws before writing this,” thankfully that doesn’t matter anyway.
First, there are dozens of democracy watchers, all with conflicting scores, at least one will give you the headline you need. Second, all you need is the headline. The primary way that people learn about these things isn’t the newspaper’s writeup, it’s names, ledes, friends, facebook, whatever. “America no longer a full democracy, I knew he was an orange fascist.” At best some politico will namecheck the report at a rally: “America is [good things] but we’ve lost our way. An important democratic watchdog has recently warned that our democracy can no longer be called a democracy. It’s surely because of the […]. Vote for me to reverse it.” Please clap.
Even the fairest presentation of the report is going to lead to issues, but the presentations are not fair. Despite the Hail Mary at objectivity, check the title, which no article can resist pointing to: “Democracy Index 2017: Free speech under attack.” One will inevitably conclude that the US is among those under attack, but the US receives the highest possible score for free speech (10/10), and none of the articles mention it. It’s Europe that’s losing its media freedoms. The only threat to America is, apparently, college students:
Governments, in democratic as well as authoritarian countries, are deploying defamation laws, prevention of terrorism laws, blasphemy and “hate speech” laws to curb freedom of expression and stymie media freedom. Non-state actors, including militant Islamists, criminal gangs and vested interests also pose a growing threat to free speech, using intimidation, threats, violence and murder. Freedom of expression is also under threat from those who claim the right not to be offended. This is leading to growing calls for “safe spaces”, “trigger warnings”, “hate speech” laws, no-platforming, tabloid newspaper bans and the policing of the internet to cleanse it of “offensive” content.
Agree or not, this isn’t quite what you’d gather from the write-ups.
That these are used to sell papers rather than [anything else] is obvious beyond belief, but I do want to show it. The report itself looks a whole lot more “right wing” inasmuch as it calls trigger warnings a threat to democracy, but don’t be fooled. Note that Breitbart didn’t blast the news through the roof. It certainly seems like they should – shouldn’t they be salivating over “Democracy Index downgrades us because of snowflakes”? But Breitbart understood that this was partisan rhetoric masked as a report, which is why they deployed their own: Economist Ranks Israel Among World’s Most Democratic Nations. I know, shocking.
I’m not defending Trump, nor do I think “the media is out to get him.” I’m saying the media is out for itself, and if it can make it look as though the state is gunning for them, that makes sharing their articles much more important. Trump as a real human and political actor rather than as TRUMP is irrelevant to the equation. He’s a useful tool here, not a material reality. By presenting itself as the target of political assault, Democracy Dies in Darkness makes its own proliferation a political act, i.e. “They don’t want you to see this, but – ” This is for the same reason that Twitter banning accounts is a great marketing move. Everyone who remains now has some stake in it, there’s a hint of danger, “This must be important or else the system wouldn’t attack it.”
Since it is a fact that the insinuation of these articles – whether by indolence or cognitive dissonance – will be “Trump has undemocratized the US” consider what that means for the democracy dies in darkness set. They did nothing, and thus do not have to reconsider their actions. Now everything that goes wrong is not your fault, how could the people be wrong if we’re a flawed democracy? Special interests or some shit. Yeah, yeah, “blaming the outgroup” but who cares about that? Far more interesting is this: the definition of “democracy” is unproblematic, it’s been quantified, the Scientists Are On It, it requires no action from the demos. “Do we live in a democracy or not? Better get the elite’s assurance.”
“Who cares about the definition?” Everyone, all of it, that’s what this is, that’s what the report reported at you.
“It’s been quantified. Science is on it.”
About 15% of my readers cringed and blanked out at that point. I don’t blame you. Mocking quantification has an uncanny correlation with writing the philosophical equivalent of a barcode neck tattoo. “They’re turning us into numbers, man.”
Everything can be quantified, but not all quantifications are made equal. Or: everything would be roses if we could turn humans into numbers, but we do not yet know how. Our brains do not allow for that, which is a problem if you’re studying things that exist only in the brain. “Democracy” does not have an objective definition, there is no natural phenomenon to control for, we lack the ability to correct regime change against red-shifts. Empiricism is great if you’re studying empirical phenomena, but if you mistake qualitative attitudes for numerical expressions, you’re in for a world of hurt.
That does not make it unimportant. At least as a word, it brings up pleasant associations and makes us feel we should Act In a Way. In other words, how much money would you pay to live in a country .02 democramperes more democratic vs. how much would you pay for your country to move from “flawed democracy” to “full democracy”? Here I assume you aren’t Mencius Moldbug, but in case you are just flip the questions and you’ll get it.
This report is defining it while pretending to study it. You might think the issue is merely the labeling – too qualitative, arbitrary categorization, just report the numbers – and that would be a compelling point. “Politics is a spectrum disorder” is certainly a fine definition of the object under study. That’s just another trick, but I’ll grant that it’s a clever one.
1) There’s no way to report that data non-qualitatively. Even lacking categories (“full” or “flawed” or “hybrid regime”), it will be presented qualitatively, e.g. “Top 10” and “Top 50” are qualitative evaluations with numerical masks. To understand why, consider the massive difference between Top 10% of x and Top 11%. “Meaningless.” Not to the admissions committee. The media knows this, see: Hotel Concierge’s piece on happiness. They’re making a different point than me, but the reasons are the same.
2) There’s no way for an individual to interpret that data non-qualitatively, because outside of, say, “voting registration/populace” you’re not talking about a naturally numerical phenomenon. When I say “Moldova is a 5.94” that gets translated into pictures in your brain. What are you conceptualizing for the number? What about “Worse than Albania, better than Guatemala”? “Midway between Croatia and Uganda”? Anyone who claims that they’re capable of just thinking the number is lying. The measure itself is based on qualitative phenomena, which means the number-as-number is meaningless.
3) This will get ignored, but still must be said: I don’t discount the individual studies that go into an index. Voter registration, reported confidence in institutions, number of journalists in prison, etc. are all valuable in themselves.
Still, point out which one of those is democracy. The criteria are going to be qualitative by nature. That runs from which variables are assigned (e.g. is “loss of confidence in government” a sign of democracy or not?) to how those are measured (do I assign “sense of political commitment” in Serbia a .5 or a 1?).
It’s (3) that’s the big one. No matter how many ways you cut the data, you’re trying to assign objective measurements by subjective criteria. Somewhat ironically, I can empirically prove that. Seva Gunitsky, discussing post-Soviet States, charts a bunch of different democracy measures against one another, and finds out that there is… well, see for yourself:
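The comparison Gunitsky runs amounts to correlating the indices country by country and seeing how badly they diverge. A toy sketch of the mechanics with invented scores (the real exercise uses the actual indices; these numbers are made up for illustration):

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation (simplified: assumes no tied values).
    1.0 means two indices rank every country identically; -1.0 means
    they rank them in exactly opposite order."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))


# Hypothetical scores for five countries from two different indices.
index_a = [9.1, 7.98, 6.4, 3.2, 5.9]
index_b = [8.7, 8.2, 5.5, 2.9, 6.3]
print(spearman_rho(index_a, index_b))  # 0.9
```

A 0.9 sounds reassuring until you remember that the countries near the category boundaries – the only ones anyone writes headlines about – are exactly where the indices swap ranks.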
Two theories. They aren’t all of it, but they’re worth considering. They also aren’t mutually exclusive, and I assume that both are happening:
A) Sometimes Person A makes a claim about [concept], and Person B disputes that claim on the grounds that Person A doesn’t have a very clear definition of [concept]. Person A responds that simple, common sense definitions of [concept] are obvious, and Person B is making nonsense semantic arguments, “like a philosopher,” except this is 2018, so “like a postmodernist.” What’s going on above is the proof of that. “Obvious” things like democracy aren’t agreed upon. “It’s just a semantic argument, we should look at the data,” but the data is predicated on the semantics. Misunderstand that and you create full-blown postmodern hyperreality.
B) Since the point of the studies is the measurement of democracy itself, each new one is incentivized to be at least slightly different. No point making a new measurement that conforms to all the old ones, after all. This means relying on even more qualitative arguments, because the reliable data is already shared. It also means forcing those arguments into quantitative form to change the measurement. “Oh.”
Someone made the decision to define “Flawed Democracy” as “8.0 to 6.0 Democramperes.” The reason the index will never drop arbitrary categories like “hybrid regime” is that those categories are its raison d’être.
It’s not that the EIU report has good or bad information; plenty of its variables are measurable, but none of those are the report. All of them are public: Pew, Gallup, World Values, etc. Its works cited looks like the NYT bestseller list: Lilla, Murray, Fukuyama, Putnam. I’m not saying any of those are bad books, who cares if they are, there’s no particular expertise required to read them, nor are they paywalled by anyone but Amazon. The only thing this report adds is the categorization, backed by [something], carefully curated by “a panel of experts.”
I’m banging on the EIU because it pounced on me from the shadows of google, but don’t get confused: each and every measurement has the same problem. Freedom House, Polity IV, all of them.
The second a report is classified as A Report, an entire ecosystem of knowledge blooms out. Part of that is public information: “We live in a flawed democracy” passes from headline to reader to watercooler to common knowledge. A whole lot of it is subsequent work, which informs itself, which then informs later work.
For example: The 2017 EIU points to a Pew poll that reports on attitudes towards democracy in authoritarian vs. democratic societies. In what I’m sure is a coincidence, that Pew report used the Democracy Index’s 2016 report as its measure. Nobody bothers to mention this, but something something snakes and swallowing and tails. At some point, you’re left holding a pile of data sets all impossibly tangled up in earlier incarnations of the same data set. A Social Science rat-king, if that rat-king had the power to loosen the IMF’s wallet. “What-” Who do you think relies on these things?
That sounds “conspiratorial” except it isn’t. This is A Report, what else are you going to use? Go on google scholar and search “Democracy Index” or “Freedom House” or “Polity IV.” To really get your blood boiling, observe that all of these are cited in two ways:
1) By one or two careful Political Science papers pointing out methodological flaws with more respect than I am. Credit where credit is due, I’m not trying to slander the social sciences in total, most of them get the problem. But.
2) By thousands of other papers unquestioningly repeating the data, half of those papers used by other papers, and…
For all the talk of the replication crisis – and for all that there’s less talk than there should be – the actual extent of it almost never reaches the surface. This isn’t properly a replication problem (different measurements by design), but it has the same underlying issue: We pretend that bad studies are discrete. “This one failed to replicate, toss it out,” but none of them are self-contained. How many subsequent papers turn to it for evidence? How many of those papers are cited as crucial evidence? How many become underlying knowledge for the next generation of scientists? A frantic undergrad could tell you that the strength of a paper lives or dies with its citations. Would that we had more frantic undergrads in high office!
Last: what gives these reports their devastating import – what makes them not “Jeff’s Words and Numberthoughts” but The Official Report – is the expertise assigned to those behind them. That expertise is granted by citations, media exposure, position as “Editor of [report].” Or: why do they get to assign the facts? They’re the experts. Why are they experts? Because they’ve assigned the facts. See how that works?
Don’t like it? “Well, where’s your data?”
My position is totally banal: “mixed methodology is good.” Someone will still find a way to attack it. Everyone believes in mixed-method social sciences until their preferred method doesn’t work, and then the knives come out. I guarantee this essay is taken as evidence of “qualitative bias,” which is insane, because I’m trying to point out bad qualitative arguments infecting empirical studies. I’m not against empiricism, I’m pro-empiricism, which is the issue. The only bias I have is the following, and I need you to think real hard about it: none of this allows us to throw the baby out with the bathwater. That we cannot perfectly measure “democracy” – that we shouldn’t even try – does not make it a meaningless concept. It merely means it must be addressed differently.
Since I talk about “values” a whole bunch, and I occasionally attack data, people might get the impression that I’m saying something like the following: “Quantification of [nice concept] is bad because it damages the human spirit, it ignores Important Ethical Concerns and Real Human Values.” Fair enough, I know that exact argument, it’s a bad one. I’m saying the opposite. Misapplying quantification does not evade or ignore values, it encodes them. It takes unexamined assumptions and pretends that they’re dispassionate measurements. What happens subsequently is the defense of them as “empirical science” by people who don’t know what either of those words mean. Of course, it isn’t a defense of science, it’s a defense of your own preferred values, e.g. look at the reporting above.
Quite a few of the most important aspects of human life are going to be very hard to study using empirical methods. They cannot be easily quantified (if they can be at all), and feigning objectivity with “the data” does not tell me that you’re a serious thinker who’s looked at the facts. It tells me precisely the opposite – that your mind was made up, and you hunted out whatever suited you. Repeating these only damages [everything]; I 100% believe that American democracy is decaying, should I use the EIU as my source? Do I need a source to get the point across? Serious question, ask it to your soul. What kind of argument do you accept for what, and what does that preference incentivize?
The issue here is not “quantitative vs. qualitative.” It’s bad qualitative vs. good qualitative. At some point we decided that “objectivity” means “numbers” and “that guy with the numbers” translates into “expert opinion.” To be taken seriously, “experts” began frantically hiding opinion behind “data” and, seemingly, forgot that they’d done so. In other words: there are opportunity costs to consider, how much researcher time goes into what, how much training goes into what. Observe the EIU report: “flawed democracy” is qualitative, determined by qualitative judgments of qualitatively-assigned variables, all of it based on a “quantitative” scale, itself qualitatively defined.
The easiest way to show that this is “bad” is to point out that the data sets all disagree. That’s not actually why these studies are bad. It’s a pretend problem, it still assumes that there’s a reason they “should” agree. But why? If they agreed would that be better or worse? What would you have proven?
The point of these papers is to say something about “democracy,” but even if every graph were identical they would not be talking about democracy. Given that democracy is subjective, that there is no objective measurement, you’d merely have established consistency in the way that experts define democracy. This is for exactly the same reason that “loss of confidence in public institutions” is not causal for the US becoming a flawed democracy, it’s causal for the Economist Intelligence Unit labeling the US a flawed democracy. Everyone dutifully reiterates this in one sentence, forgets it in the next.
The only useful information that could be extracted would require an examination of their qualitative account, because that’s going to tell you: a) where the definition comes from and, b) why they’re applying it here. The numbers aren’t numbers, they’re opinions with the gall to arithmetize themselves. Democracy indices are an example, but they’re far from the only one. See: [almost any abstract concept we care about]. Mathematizing them is a) impossible and b) far less desirable than producing good arguments. Unfortunately, it breeds bad ones. If “flawed democracy” was the EIU’s desired outcome, then not enough time went into their argument. It’s tossed off in a few sentences and given a stamp. Instead, all of that time went into seeing which measure on the predetermined scale could be messed with to produce the desired result. Go back to that definition of “flawed democracy.” Would that fly as a rigorous account anywhere but here? “It has to fit a lot of numbers.”
Were the reports presented as what they are, i.e. philosophical arguments, opinion pieces, occasionally employing empirical data but not limited to it, the authors would be required to show their work. If they did it badly, then you’d know not to trust them, they would not be experts. Here, you know to trust them as experts because they are presented as such and because the presentation lacks their argumentation. I read the op-eds and I take my priors off stun; here, I just have to assume they more or less know what they’re talking about. Consider just how perverse this is.
In other words: demanding “data” instead of opinion means you do not even get a good opinion. You get nothing, numbers on a page, each one encoding an assumption and evaluation that you are not presented with. And all of that changes “democracy” in the social sphere. That those reports use reliable, genuinely empirical evidence – voter turnout, etc. – says nothing about their judgment of “democracy,” and that is the only concept that they can change. The closest they have to arguments are justifications for changing ratings, which are casually reported as causal factors in “democratic decline.” Note that these are a couple of paragraphs per region. “Sounds like expertise to me.”
Now: picture – and I mean really silver screen it – an argument between Person with Numbers and Charts vs. Person with Opinions. You and I both know how America views that debate, and if not America then “reasonable, evidence-minded people”=the kind of people who get Wowed by the Economist. It is a fact that these are not facts, but everyone wants the Facts anyway, there must be a chart, we are compelled by eutaxy, the rest is “merely philosophical speculation.” But that “merely philosophical speculation” is, in fact, the evidence. When you discount it in favor of “data,” you have not gone straight for the hard truths. You’ve buried them.
Keep in mind, by [argument above] it’s irrelevant what those numbers and charts are. They’re opining in units, a sciencey flair and nothing else, and yet we do not treat these as equally qualitative arguments. “Uh, we do.” You’re blind. Sorry, no data for that one, either.
We applied incorrect tools to a problem and incentivized the worst use of them. Now we sagely nod about the problem while hoping it disappears. But the problem won’t disappear.
The only justification of a democracy measurement is its precision, e.g. [event] occurs and then +.01, wowza, let’s [event] a few more. Hence, the measurements’ use up and down the entire “objective” “data-driven” sector; pseudocolumns to datajournos to the IMF, World Bank, and USAID. At least USAID recently admitted it didn’t work, but note that this does not matter. All aid – whether by banks or the Fed, lack or surplus – is going to get run through the public wringer. How do We the People determine its effectiveness? What lets us know the thing to do, all the while casually determining the economic course of [nation] from our couches? Answer: Democracy is Under Assault and it Seems We’re Not Helping Any. Thanks, I didn’t know that.
The studies are already here, they’re already proliferating, and our response has hilariously been to keep creating new and different democracy indices, forever ignoring the underlying problems. To repeat for the zillionth time: those concepts determine our entire debate. Like it or not, there are no objective scales for most of human society. The scale is meaningless by design, you’re either thinking about a qualitative concept or nothing, pretend it’s an easy common-sense concept and crash into [everything above].
Maybe you don’t care about politics, but this isn’t just about politics. “Democracy,” sure. What else does it apply to?
still from Babette’s Feast by Gabriel Axel