You’ve got to be very careful if you don’t know where you are going, because you might not get there.
The previous essay was inescapably dense: the territory was by nature abstract, and the subject matter happened to be the entire past year of this blog. I think the technical language was necessary there, but it’s a good habit to try to parse things more carefully, and a few readers have asked me to do so. Moving slowly and picking at specific claims:
We care more about “love” than we care about “the number of desks in Montana with gum stuck to the bottom.” Here’s your question, it’s so basic you want to avoid it, but avoiding it is the entire problem: why?
“Words don’t have set meanings” isn’t anything close to an original thought, but it’s very easy to forget. If you bother to read [anything], you’ll find a shocking number of people coming to terms with that fact. I won’t be able to write anything better than SSC’s The Categories Were Made For Man, so I’ll just link that. Keeping that in mind: Properly speaking, you mean one specific thing called love, it may have a material correlate, may have an action associated, whatever. All that means that whatever is “valuable” there isn’t the word itself, nor another group’s definition of the word, but [this specific definition of love]. The question is this: how do you study that?
I generally dislike the phrase “presumption of objectivity,” because a) the stronger form applies to 16-year-olds and almost no one else, and b) the weaker form isn’t a very interesting observation. Overconfidence is bad, sure, but certain types of knowledge are more or less accurate, and some might even be truer. All of the really interesting parts of epistemology come from trying to figure out how that is possible without objectivity, not blankly stating that objectivity don’t real. I think of this as the equivalent of yelling about hypocrisy without ever putting forth a positive platform, but your mileage might vary with that metaphor.
Still. Presumption of objectivity is a problem, but it’s a problem in a particular way. Everyone knows that presuming an end will color the entire study, and about 5% of academics even try to avoid that. Fewer people recognize that assuming a certain method of studying something does the same thing. One reason I prefer the language of “paradigms” for philosophy is that paradigms imply a methodology, and methodology is where this sets in. Different philosophical schools have different standards of evidence – falsifiability for the Popperians, apodicticity for the Aristotelians – that imply what you can know about a thing. In turn, this is going to change the meaning of the concept being investigated. You can’t investigate the same “God” under Popper that you can under Aquinas, but the God of Aquinas is much more important to people.
Show, don’t tell, let me give you a basic example.
Say you’re a neuroscientist testing for “love,” and all you have is one working definition – imprecise, drawn from your particular interpretation of a particular culture. Perhaps: “love is a feeling that comes from observing a beloved person.” So you replicate this “love” as best you can – plop a few spouses in front of each other – and scan the loving person’s brain. It lights up – wow! – that must be what love looks like.
How would you know? You could only test against the vague concept of love you’d applied. You assumed what the behavior “loving” looked like beforehand, found its blinky correlate, declared that to be love, but all of that depends on the previous determination of the behavior. Or: the problem is not the scanned brain you got. The problem is in how you got it, in this study, assuming that “love” is the same as staring at the “beloved” object. “But that is what love is.” Says who? Your scan?
What happens if someone comes in the room and starts laughing, “That’s not love, that’s an approximation of it. Love is the action, not the effect.” Do you tautologically test against the same readout? Imagine you’ve solved plasticity and put an identical map on two million other brains, placed them in the same stare-at-the-beloved situation, and everything matched. “It replicated, must be real.”
But no one was doubting that you’d found something real. They doubted that it was love.
Last time, I made a point about certain ideas not being subject to empiricism. I meant that relatively strictly, and the above is what I mean. Moving on.
I’m not saying love is anything in particular, I’m saying that it’s a philosophical dispute and you’ve already decided it. I’d endorse the intuition that some definitions of “love” have more weight than others. And? The doubters weren’t denying your scan; they doubted that you were addressing something with the same import, which is the exact problem. You can pretend that “the common sense meaning of love” is good enough, but that’s a wimpy way to dodge the question. Whose common sense?
If you want to deny all knowledge, feel free to default to, “Well, it’s semantics, it’s subjective, it’s not a meaningful thing to investigate.” Then why were you investigating it? Probably more to the point: why were you investigating it in that particular way?
I’m not actively trying to make anyone’s life harder. If anything, I’m trying to make it easier, some groundwork in philosophy helps to avoid stupid problems:
1) How you study something determines what it is, which means you’re making a strong claim well before you begin investigating. If you assume that “love” can be studied with neural imaging, then you’re assuming it’s the kind of thing tied to a brain state. That might be justifiable or not, but the method requires you to make an ontological claim well prior to the investigation. “Love” is not a natural kind, and plenty of people don’t even believe that “love” is one thing. The Greeks had at least four words that translate to it – eros, philia, storge, agape – but none of them meant our “love.” Were they wrong? How?
2) Human beings are interested in certain things more than others. “Love” is only something we study because we have some investment in it as a concept, it’s not a neutral interest, but it’s also not an interest in the word “love.” We’re invested in a very particular conception of what that word means. “Love is a thing tied to a brain-state,” is not necessarily the one all of us are interested in. So what did we just study? The dispute isn’t necessarily over love, nor over neuroscience transmuted into a dispute over love, nor over empiricism, nor over [anything else], and assuming those are separate is already an issue.
I think “method determines object of study” (1) is relatively easy to grasp, but it’s only an important observation in conjunction with “human beings are interested in specific things” (2). This becomes doubly true where knowledge itself is concerned. The value of a fact is going to be determined by your standards of evidence, and the standards of evidence (“what a fact is”) will also change what kinds of objects you can study. If you can’t tell, one of the things that makes it so hard to write about this in plain language is that all of that happens at once.
For instance: “Well, at least one of those can be measured. It may not be the exact thing we all mean by love, but once it’s in quantifiable form, then it’s useful for [doing stuff].” Think about what you just said. You’ve made a whole bunch of strong commitments there, all of them philosophically fragile, and all of them at the same time. A certain kind of knowledge (empirical, quantifiable) is valuable, whereas another (observational, descriptive) is not. This is predicated on a certain standard of evidence (empiricism), it requires you to make ontological claims in order to be gathered (love is this feeling we can brain-scan). But what if the “love” this requires you to study is uninteresting? Why would the information you gathered about it be interesting itself?
Note: “exact” is pretty loose, do you have any sense of when it would be inexact enough to stop mattering? This is mostly important because if you don’t understand the limits of your own paradigm, then applying it at random is the worst possible instinct.
Quoting Marginal Revolution quoting an abstract because I only found an earlier version of the paper:
We replicate nine key results from the happiness literature: the Easterlin Paradox, the ‘U-shaped’ relation between happiness and age, the happiness trade-off between inflation and unemployment, cross-country comparisons of happiness, the impact of the Moving to Opportunity program on happiness, the impact of marriage and children on happiness, the ‘paradox’ of declining female happiness, and the effect of disability on happiness. We show that none of the findings can be obtained relying only on nonparametric identification. The findings in the literature are highly dependent on one’s beliefs about the underlying distribution of happiness in society, or the social welfare function one chooses to adopt. Furthermore, any conclusions reached from these parametric approaches rely on the assumption that all individuals report their happiness in the same way. When the data permit, we test for equal reporting functions, conditional on the existence of a common cardinalization from the normal family. We reject this assumption in all cases in which we test it.
Here’s the earlier (?) version (pdf) of the paper, with a different abstract but hopefully the same content.
From the opening of the paper:
The basic argument is as follows. There are a large (possibly infinite) number of states of happiness which are strictly ranked. In order to calculate a group’s ‘mean’ happiness, these states must be cardinalized, but there are an infinite number of arbitrary cardinalizations, each producing a different set of means. The ranking of the means remains the same for all cardinalizations only if the distribution of happiness states for one group first order stochastically dominates that for the other. But, we do not observe the actual distribution of states. We instead observe their distribution in a small number of discrete categories, essentially a few intervals of their cumulative distribution functions.
Without additional assumptions we cannot rank the average happiness of two groups if each has responses in the highest and lowest category.
Happiness research is self-reported, and it relies on ordinal scales. These merely rank – first, second, third, etc. – without giving any sense of distance between those points. Consider two ordinal scales for height, both ranked smallest to largest:
a) a chair (first), a one-story house (second), a two-story house (third).
b) a chair (first), a one-story house (second), the Empire State Building (third).
If you only want a ranking of each object relative to the others, then these are fine. But if you want to, say, determine the mean height per group, then these will be vastly different.
Most research will need the mean, median, literally anything besides “this thing is bigger than that thing,” meaning you need some way to turn the ordinal scale (rank) into a cardinal one (distance between ranks). This is particularly true if you need to compare different groups – for instance: happiness in one country to happiness in another country.
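A minimal sketch of the problem, with heights I’ve made up for illustration (the ranks are from the example above; the specific numbers are my own guesses):

```python
from statistics import mean

# Both scales share the same ordinal ranking: 1st < 2nd < 3rd.
# Illustrative cardinal heights in meters (assumed for the example).
scale_a = [1.0, 4.0, 8.0]    # chair, one-story house, two-story house
scale_b = [1.0, 4.0, 443.0]  # chair, one-story house, Empire State Building

print(mean(scale_a))  # ≈ 4.33
print(mean(scale_b))  # ≈ 149.33
```

Identical ordinal information, wildly different means: the mean lives entirely in the cardinalization you chose, not in the ranking you observed.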
You can assume that there’s some quantifiable thing underlying the scale, that you’ve found it, and that it corresponds to certain arbitrary parameters. This paper points out that happiness research has done this. Normally, happiness research assumes (implicitly or not) that happiness has a normal distribution. This is a terrible assumption, and the examples (a) and (b) should explain why. The Empire State Building is taller than a two-story house, but its ordinal rank does not betray that – both are merely “third.” It’s extremely unclear what the bounds are supposed to be, unless you simply decide one for reasons.
Put a different way: “at least we can measure it this way” is a bad gut reaction. The paper’s conclusion (really, right before the Conclusion) is quite damning:
Thus if researchers wanted to draw any conclusions from these data, they would have to eschew rank order identification. In other words, they would have to argue that it is appropriate to inform policy based on one arbitrary cardinalization of happiness but not another, or equivalently that some cardinalizations are “less arbitrary” than others. It is unclear from where such an argument would come, or why we should apply a different standard for happiness research than other branches of economics.
Even if someone were to make this case, we cannot see how such a standard would say that distributions that resemble objective economic variables would be implausible. In the Online Empirical Appendix we further show that nearly every result can be reversed by a lognormal transformation that is no more skewed than the wealth distribution of the United States. Even within this class of distributional assumptions, we cannot draw conclusions stronger than “Nigeria is somewhere between the happiest and least happy country in the world,” or “the effect of the unemployment rate on average happiness is somewhere between very positive and very negative.” To be clear, we are not proposing that satisfying this minimal criterion would make a result convincingly robust.
As bluntly as possible: The way you define “happiness” predetermines the results that you’ll get.
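A toy version of the reversal the authors describe, with invented survey responses, and an exponential cardinalization standing in for their lognormal transformation (both are assumptions of mine, not figures from the paper):

```python
from math import exp
from statistics import mean

# Invented 1-10 survey responses for two groups.
group_a = [5, 5, 5, 5]   # uniformly middling
group_b = [1, 1, 1, 10]  # mostly miserable, one ecstatic outlier

# Linear cardinalization: treat the labels as the quantities.
print(mean(group_a) > mean(group_b))  # True: A looks happier

# Convex cardinalization: exp() is monotone, so every individual
# ordinal comparison is preserved, yet the group means flip.
print(mean(map(exp, group_a)) > mean(map(exp, group_b)))  # False: B looks happier
```

Nothing about the underlying rankings changed between the two comparisons; only the arbitrary choice of distances between the ranks did.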
They wanted to find a kind of happiness their method did not allow for, their method did not allow for this, they did not understand their own method. Mix-and-match paradigms, you can’t empiricism at a subjective experience without some underlying rule, phenomenological reportage runs into philosophical issues that must be addressed pre-empirically. Or:
1) How you study something changes what it is. “Happiness is distributed normally” is not neutral. It may be a methodological necessity, but it’s also an ontological assumption. “Happiness is the-kind-of-thing that’s distributed normally.” That might be justifiable, but when it isn’t, it’s not a good idea to even try.
2) “Happiness” is important to us, but what makes you think it’s this kind of happiness? Pretend that this research wasn’t awful: To make happiness “useful” for their purposes, researchers forced it “be” a certain thing. Here, that means explicable on an obvious scale of 1-10. What they did not realize is that they’re no longer talking about “happiness” the phenomenon we all have a vague sense of. They’re talking about “happiness, which we define as something with this mean and functioning in this way for this particular study of this particular group.” Which is fine, because you can define anything to mean anything to prove your point, but you can’t use it for anything else. Why would you assume that has any value? More to the point: why does that kind of knowledge have any value?
Note that this is the same criticism I made in Everything is Going According to Plan (VI). Admittedly, I was too optimistic right before that section, and while criticizing I allowed for the idea that we might have some vague knowledge of happiness’s correlates. Mea culpa, I’ll be more careful in the future – I retain the sentiment but retract any certainty that happiness is known to be tied to anything:
If you forget that it’s experience, rather than whatever it is you’re measuring (and thus serves as proxy for the experience), certain tradeoffs look… well, they look different, because the experience might have qualitative differences that are not grasped by a quantitative measurement. For instance: let’s pretend we actually have decent ordinal scales for happiness. What is a 5? A 6? Is 6 merely “the experience of 5happies plus a bit more?” What would that mean?
And I know the exact response, because I got it then: “At least this can be measured, it’s useful.” It may be useful, who knows, but useful for what? “It’s the most rigorous way of looking at happiness,” you mean the most quantifiable way of looking at happiness, but no, still wrong. It doesn’t tell us about “happiness,” it tells us about ordered lists of different subjective interpretations of happiness. You’re being confused by words. “Are you saying we can’t know about happiness?” Not at all, you’re presupposing that the only valuable knowledge is quantifiable knowledge. Who told you that?
I don’t want to overstate my hand, so I’ll add that the authors do recognize value in the studies, but it’s not clear exactly what that means, nor what to compare it to. It’s possible to compare, perhaps, one individual’s self-reportage, but you’d need to ask whether they got happier or changed their interpretation of happiness. To compare that to anything, you’d still need to make some strong claim about what happiness “is” such that forms of it may be numerically related:
This does not discount the actual self-reports, themselves. If we are only concerned about the number of people who subjectively consider their emotional state “not too happy,” we can estimate effects using conventional binary response models. But, it is important to recognize that such an interpretation is much narrower than proponents of the use of average happiness measures currently claim for them.
This isn’t unique to me, and I’m not claiming some special insight. This should have been obvious to anyone whose brain was not melted by physics envy. I’m saying that I’m not trying to make life hard for researchers. The opposite: understanding the issue should have made everyone stop before they bothered. The world is complex, knowledge is hard, own up to it or produce shit research.
Sure, it was also a brief “philosophical” argument, it’s not even vaguely unique to me, a thousand people have made the same point, none of it mattered to happiness researchers. Why?
Repeating from above:
I think “method determines object of study” is relatively easy to grasp, but it’s only an important observation in conjunction with “human beings are interested in specific things.” This becomes doubly true where knowledge itself is concerned. The value of a fact is going to be determined by your standards of evidence, and by the same process, the standards of evidence (“what a fact is”) will change what kinds of objects you can study.
We can put it more broadly: “love” is not capable of being decided in the ways we keep trying. “Happiness” is not either. I’ll throw out “good” and “true” and a host of other deep-set values. If you want to pretend that none of that is important, then feel free, but I’m not going to trust you for a second. Even an aromantic anhedonic nihilist will recognize that the preferences of other people are important for determining policy, if not merely to keep the herd from stampeding.
This sounds like a fact/value distinction, but it’s a whole lot more basic than that: what counts as a fact is not the same, not all facts can be gathered in the same way, not all methods of gathering will gather the same things. That doesn’t make epistemology “totally undecidable,” you get that empiricism itself is a metaphysical commitment, right? Everyone makes a choice, explicitly or implicitly, as to how they will investigate the world. None of the methods can investigate everything; choose wisely, or you’ll get tricked into thinking your method’s facts are “important” without asking why they were facts to begin with.
Question: I assume that some things matter to you. Why? What’s a valuable thing to know? Was someone fascinated by the U-Shaped Curve since childhood, or were they more broadly interested in “happiness”?
I’m not attacking economics, just like I’m not attacking neuroscience, just like I’m not attacking empiricism. That’s missing the point so profoundly I don’t even know how to respond. “I read plenty and have somewhere between contented normie and apocalyptic street-preacher opinions, I am extremely intelligent, don’t @ me.” We didn’t choose empiricism at random, it’s a non-obvious philosophical school that requires epistemological work. Sometimes it breaks down, sometimes its method makes it talk about things that bear no relation to the things we were interested in in the first place.
This particular paper has had (and will have) a predictable response: “Well, let’s just abandon that line of research. We can’t determine anything about it.” You should be going nuclear, and that makes me infinitely more concerned. Economics only mattered to anyone inasmuch as it sought to improve human well-being. Presumably, “overall happiness” is somehow important to that. That may not be true for every economist, but those people only have jobs on the coattails of an older tradition. We don’t give a shit about “utils,” that’s just another word, we gave a shit about “utils via utilitarianism, i.e. happiness.” Feel free to prove me wrong on “happiness,” kind of the point. There’s an implicit understanding of the value of the knowledge itself, it is 100% philosophical in origin, I want it brought to the forefront.
“Well, truths are valuable in themselves, and we should reject uncertain knowledge.”
Wait, you think truth and certainty escape this?
top: Curator’s Corner with Irving Finkel
next: Plato, again
edit: noted that individual reports may still be useful, hesitant about their purpose. Original is here for transparency purposes.
16 thoughts on “Love and Happiness”
ie “the economic bias in every field”.
So what should Happiness Researchers do? It seems like valuable work for the things we want. Should they just preface each paper with “Reminder: we measure things this way because it’s convenient but it won’t be the whole picture” ? Or “Reminder: you should also ask philosophers about it” ?
In my view, the latter is better. Empiricism is an epistemological commitment, and it has limits that we need to be more serious about. (Same with quantitative research in general.) I don’t see a way for us to avoid some pre-empirical definition of “happiness,” i.e. philosophy. This is merely bad philosophy, and I’d rather it be good philosophy. Whether that allows for this kind of research is a separate question, and I’m certainly open to the possibility, just not how we’ve been doing it.
Funnily enough, the study’s authors wonder that themselves. They bring up philosophy of social sciences as a way to address the issue, but they’re pretty hesitant about what that would mean re: cardinalization and the broader philosophical sense of “happiness.” For good reason: tough question.
you will have to bump into this eventually, so why not, eh, “accelerate the process”, right?
Ha, yeah, I’ll get into utilitarianism some time. Needs careful tempo. I’m open to the idea that I should go into it sooner. I know a few people want me to get into metaethics proper and fast.
Also, did I just see Land argue for virtue ethics as accelerationist praxis? Shame he covered it up at the critical moment, his argument is eudaimonia>hedone, not intelligence>happiness.
baby don’t hurt me, no more
You’re not wrong, but the main virtue of pretend-empiricism is that you can show it to other people and use it to justify your actions. It’s a dumb justification, sure, but it will probably hold up in court. When you have real money and real consequences at stake, you have to optimize for the ability to justify your actions to other people*. Anything else ends in lawsuits.
Yes, this probably does mean we’re screwed. C’est la vie.
*People who are too dumb to get out of jury duty, even.
Very interesting post. A long-standing pet peeve of mine is the prevalence of pseudo-quantitative decision making where one rates each of the options on a bunch of categories/metrics (implicitly using ordinals) and then combining the points to make a decision on the highest scoring one (sneakily converting them to cardinals during the act of addition). Eg: the process by which US News ranks colleges. This process is absolutely useless, because human tendency will be to keep mucking around with the ordinalization till you get the ranking you intuitively expect.
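A minimal sketch of that weight-fiddling, with invented colleges and scores (nothing here is from the US News methodology):

```python
# Invented 1-10 category scores for two hypothetical colleges.
scores = {
    "College X": {"teaching": 9, "research": 4},
    "College Y": {"teaching": 4, "research": 9},
}

def top_ranked(weights):
    """Return the name with the highest weighted total score."""
    return max(scores, key=lambda name: sum(
        weights[c] * scores[name][c] for c in weights
    ))

print(top_ranked({"teaching": 0.6, "research": 0.4}))  # College X
print(top_ranked({"teaching": 0.4, "research": 0.6}))  # College Y
```

A small nudge to the weights, and the “winner” flips, which is exactly why the mucking-around converges on whatever ranking you expected going in.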
The fact that cardinalization (number systems can be extended all the way from natural numbers to complex numbers and beyond) happens to be very useful in the physical sciences is a very surprising fact that is not at all obvious a priori. Eugene Wigner’s consideration of this question is a classic which has not been improved upon, to my knowledge — The unreasonable effectiveness of mathematics in the natural sciences (1960): https://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html
This blog post has reminded me to go and give Wigner’s article another thorough reading 🙂
This is great. Fan-blogged about it here: https://flightfromperfection.com/on-measuring-happiness.html
I think of love as analogous to social investment. It can’t be coincidence that Hollywood calls love the most powerful force in the universe while economists call compound-interest the most powerful force in the universe.
“A certain kind of knowledge (empirical, quantifiable) is valuable, whereas another (observational, descriptive) is not.”
definition of empirical: “based on, concerned with, or verifiable by observation or experience rather than theory or pure logic”
quantifiable means I can put a number on it, which is very much a kind of description. So I have some trouble parsing the point made.
Just after II I don’t understand if I’m a neuroscientist working with a ‘series of definitions’ (so multiple and possibly from different cultures) or just one (like in the next sentence).
The ‘mean’ most people mean, when they say ‘mean’ is the ‘median’ not the ‘arithmetic mean’ which to avoid confusion is usually called ‘average’, meaning the sum of all measurement values divided by number of measurements. The ‘median’ is the middle object, when all measurements are put on an ordinal scale. Under that most common definition, in both groups the mean is the same (a one story house).
I don’t understand what you mean by presumption of objectivity and how the weak form is an uninteresting observation. How is a presumption an observation? What is the weak and the strong version? You mean only a 16 year old is deluded enough to think that he’s seeing things with complete objectivity?
Looking forward to the next installment. We’ve hit the negation, so to be logically consistent, the affirmation should come next, bound “by bonds of adamantine” or however you’d be pleased to argue it.
Incidentally, I find you to be by far at your strongest when you’re in your poetical style. I’m certain the clarifications, like here, are helpful, but the reader learns best when the author lets loose a little. Always enjoy the essays that fall more in that vein.