I wrote a post that was near-unanimously decried as obscure. My first instinct was to ignore that. The post was intended as a rough introduction to a couple of posts after it, so it doesn’t need to stand on its own. Also, I hate being wrong.
I tried to write a simplistic breakdown of the post to show that it was making pretty clear points, at which point I realized that it genuinely was way, way too dense. I then realized why it was so dense. What I’d written was basically a mission statement for this blog. I’m not sure if I would call it “leaping to conclusions” or even “asking you to take a leap with me.” It’s probably closer to “already having leapt, standing across a chasm, and screaming back that there is no chasm and there is no leap, we’re next to each other and you are insane.” That’s pretty bad.
This is an attempt to rewrite that, with the old post as a guide, which is kind of helpful. I know I’m not the best writer, and I also know that what I’m trying to talk about is really obscure, and one of those has to change for me to make intelligible points. Since the category “things that are obscure” covers everything important to me, it’s reasonable to instead try to be a better writer.
I’d wanted to write an introduction post, both to avoid the “you’re hiding the end game” criticism and to provide some structure. This isn’t exactly that, but what I wrote accidentally came close. It was just so dense that it obscured the end game and the mid game and definitely the starting kick. Classical Mesoamerican cultures are famous for both their ball games and their astronomy, and I’m famous for my dense and extended metaphors, so we’re going to say that last post was a total eclipse over the field, and I’m going to choose to interpret it as a propitious sign. As a sacrifice, I offer dignity and ironic detachment. This is as earnest as I’m going to get until someone really breaks my will.
This is still pretty dense and pretty abstract, but it’s a lot more coherent. This isn’t really an introduction (I have to talk about the Book of Samuel for that), and later posts will be way more specific. This is just an attempt to state a few things plainly.
I debated rewriting the post, but I think it’s a bad habit to delete mistakes. It also incentivizes a more cavalier attitude, because anything that falls flat can just be hidden. It’s sort of unbearably meta to write commentary on my own stuff, but it’s also the best way I know to get across what was going on and where I’m trying to go.
When I tried to write a simplification, this is how I described the first part of the essay:
Person A says, “people do [optimal activity] because it’s the reasonable thing to do” and Person B responds, “But people are irrational and impulsive, so they can’t be doing the reasonable thing.” I think that’s mostly confusing behavior with thought.
I don’t really know how common this is outside of philosophy. It’s super common in the humanities, which makes a lot of sense. People who spend all their time thinking about thinking are probably going to assume that thinking is the most important thing.
This is Schopenhauer’s stone, and it’s only just occurred to me that this is mostly an argument with a younger version of myself. That younger version was way too deep into tearing down some assumed combination of scientism, rationality, and utility that I just called “reason.” I don’t know, I guess I’d read too much post-structuralism. There was also a pretty clear political angle. Get rid of human rationality, and all the [outgroup] social science arguments are weaker. Of course, the problem is that no matter how hard one tries to talk about human irrationality (passions, poetry, whatever), people behave pretty reasonably. What’s weirder is that they do so even without the ability to articulate quite why they do a given thing.
The real thing going on is mostly a process of selection. Even if people have no idea what they’re doing, harmful behaviors get weeded out over time. That’s either because the person dies or [other bad event], or for the far more obvious reason: people learn. But this means that there’s a disconnect between the conscious reasons one does something and the actual reason that action is effective, and so arguing against the conscious reasons says nothing about whether the behavior is reasonable. We can try and figure out the genuinely-true reasons that something is effective, but it’s not necessary.
Neither “people think stupid things” nor “people do stupid things” says anything about aggregate human behavior. Those behaviors will be selected against (or already have been), and the resultant society will behave more-or-less reasonably.
The word “selection” is important there. This is the mechanism behind cultural evolution. Just as genes don’t need a conscious selector, helpful habits/behaviors/customs/etc. are retained and modified over time. Local traditions are a prime example of “actions that have bizarre conscious motivations” that work anyway. (Also why this relates to James C. Scott and Polanyi.)
The “unconscious” aspect to biological evolution confuses a whole lot of people when they first study it. We’re so used to thinking of intentions and motivations that it’s difficult to picture these longer, unconscious processes. We practically demand teleology. As bad as that tendency is when talking about the “natural” world, I’m pretty sure it’s ten times worse when talking about human societies. We can sort of admit that, sure, over time shaggier dogs will survive colder climates and so pass on the shaggy gene. We have a hard time accepting that humans will pass on a behavior or a tradition unconsciously, because any group that didn’t do that thing was outcompeted by whoever did.
Then again, we experience this (roughly) in our daily lives: Imagine a person who’s done a job for thirty years, and compare that person to someone who was only hired yesterday. That old-timer can’t necessarily explain why they’re doing things a certain way, but there is a reason behind it. A lot of failure is also behind it. Extend that same cultivation of skill to a tribe that’s lived in the same place for five hundred years, and you’ll probably see why this is such an effective way to adapt culture.
As a highly speculative side-note: people are more likely to pass on customs that they like and enjoy. I have a strong suspicion that this might be part of the reason that local customs are bathed in myth. People want teleology and purpose, and attaching that to an actually useful behavior is a really good way to pass on the behavior while providing a kind of spiritual/existential/whatever benefit. It’s also a whole lot more parsimonious than having both “useful, pragmatic fact-stories” and “psychologically satisfying” stories. Natural selection favors nothing more than efficiency.
In theory, because of those selection effects, you could try and back-track “optimal” behavior for a given situation. If pressures have been stable for long enough, that behavior will have been selected for. People will know how to farm their region, or how to hunt local prey, etc.
This is more-or-less what we do with genetic traits. We assume a certain end-goal for an organism (“survive” and “pass on that gene”). We then look at which traits show up in which region and try to explain how those particular traits aided “survival” and “procreation.” We’re not necessarily good at this. While evolutionary psychology has gotten a lot better recently, it’s pretty much Ground Zero for terrible adaptive arguments. Still, the intellectual backdrop makes sense: if some trait is really common, then it probably aided survival or mating or [whatever].
I’m talking about culture and human behavior, of course, but I think it works analogously. If you want to learn the best practices for [activity], then look at people who have done it for a really long time. That means the thirty-year-veteran of a job, and it also means “tribe that has fished for this particular shark in this particular region for centuries.” Their explanation may sound strange, but you can’t argue with the results.
All of that is pretty much just James C. Scott and the rest of the crew. It makes a lot of sense that they would focus on things like agriculture, because there are extremely clear end-goals and pressures. If you fail, you die.
But, of course, humans do a whole lot more than subsistence farming. Our margins aren’t nearly that tight, because we’re actually really good at survival. Even “subsistence farmers” weren’t 100% desperate all the time – they had holidays and festivals and time to tell myths and stories with the group. This is where things get impossibly complex.
Part of those “best practices” are about human relationships. We have enough time to do that, and most of our “threats” aren’t from predators. They’re from interaction with other people. Cultural evolution isn’t trying to understand farming habits, it’s trying to understand how far more complex social institutions arose. Pseudoerasmus has a very good, critical overview of this. He’s talking about institutions, and I’m talking about the far vaguer “culture and technology” (which institutions are a part of), but one thing is clear: human politics are also a process of selection. They grow and develop and change based on needs and circumstance.
I actually think this gets even more complex, because people aren’t just selecting based on “what works” in the sense of “one specific end-goal that we have to optimize for.” Survival is underlying a lot of it, and you have to optimize certain institutions towards “making people kill each other less,” but those requirements aren’t everything. We also choose things for more nebulous reasons: we like them, or believe in them, or desire them. In other words, culture is under selection pressures that come from human psychology: happiness, dominance drives, self-actualization.
I think the easiest way to understand this is with the thrive/survive distinction, where “thrive” is “things we like and enjoy and want more of” and “survive” is closer to “whatever stops us from killing each other and also being killed by another political group.” But that’s a little bit too neat. For one: “thrive” looks very conscious, inasmuch as “things we like” certainly sounds like things we consciously enjoy. I don’t doubt that that’s part of it, but I do doubt that’s all of it. Thrive-over-time probably tends more toward “meaning” or “wholeness and stability” (positives for society) rather than “quick pleasure” (probably neutral in the long run). Again, just a generalization. Two: All of this is happening on many different levels: friend groups, group membership, local politics, national politics, etc. and it’s insanely hard to determine whether we choose those groups for thrive or survive. It’s certainly a mix of both, but the proportions are incredibly obscure.
We’re unsure whether a group keeps obeying some local tradition because it’s keeping some violent human nature in check, or because they simply enjoy doing it, or because it’s a fairy-tale that resulted from survivor bias, or because it’s actually connected to some far-deeper-aspect-of-human-nature that actually optimizes for “thrive” but not in any conscious way. I suspect that the final one is much more common than we assume. By way of example: I’m not positive, but everything I’ve read about marriage sounds like this. Arguing for marriage as a “conscious point towards thrive” is actually really hard without religion. Why not just have partners, or sleep with whoever you want? But divorce has almost uniformly negative effects, and marriage appears to have extremely positive economic and psychological effects.
That’s the cultural conservative argument for why keeping tradition is so important, right? We often argue for changing parts of a culture based on “thrive”, and our arguments are normally “this will make me happy.” But not everything is based on thrive, and things that are might be hiding it. I’m absolutely in favor of “keep divorce legal,” so please don’t misunderstand me. But it’s a good example. Divorce is far more common now than it was pretty much ever, and its net effects may not tend towards “thrive,” even if all of my arguments in favor of it are based on “human choice and self-actualization and making people happy.” Changing small things in a culture can have these gigantic, exaggerated effects that no one will be able to determine.
The following sounds obvious, but it’s quite hard to express in a simple, coherent way.
When I say that human culture and sociality is a process of selection, I mean that in two ways. One is the way above, namely: we choose institutions/friends/politics that we like (thrive) and that work with human nature to fit an environment (survive). The second meaning is much more important: those same institutions and traditions and cultures then start selecting for people. They change behaviors by providing different incentives and punishments, and they ingrain different habits and instincts. I’d probably say that they change the underlying psychology (for instance!), although it’s important to take that with a grain of salt. Humans aren’t blank slates, and so any changes are mediated by biological factors, etc. A milder version would be to say that they disproportionately benefit certain psychologies, and that these are different.
It’s not that hard to see quite why this should be the case. If, over time, behaviors optimize for an environment, and most of our environment now is other people, then behavior will optimize for that. Add in the fact that pretty much all selection now is based on sociality, rather than, say, “who can best fight a cavebear,” and it makes some sense that almost all pressures are operating on political instincts.
Notice that this is a feedback loop (kind of). This is probably where my last post got so dense and obscure, because it’s really, really weird to try and describe this. But: let’s assume that people opted for a certain institution. It worked best with whatever their natural proclivities are, they liked it the most, etc. As that institution starts selecting the people under it, their likes and wants and possibilities are going to change. Over time, when culture and institutions start changing again, the possibilities for new institutions will have changed. Let’s imagine that you have some early Greek settlers. One group chooses Athenian institutions, and one group chooses to become Sparta. Over time, Sparta starts making Spartans, and Athens starts making Athenians, and both of those are going to be different from the original settlers. In turn, both of those societies have different paths of development. The tendencies and possibilities for maintaining Spartans require certain institutions, and the same goes for Athenians. It’s something like a tree branch: all the branches are new and different, but they come from that first one.
This is, admittedly, a pretty mechanistic way of viewing human society. It’s also an extreme caricature. Still.
If you’re looking at a society with an established tradition, there’s probably a reason for it. If they’ve been doing one specific thing for several hundred years, then it’s likely that it was selected for. This is especially true given the fact that selection pressures have been pretty constant for most people in most places. I don’t know where this originated, but I hear historians repeat it all the time to drive that point home: if you were to drop a Classical Roman farmer down in a Medieval village, they’d still basically understand the tools and technology and agrarian practices. In other words, selection pressures have been incredibly stable. Even explicit beliefs are pretty stable: local deities might now be called saints or martyrs, but their worship goes back a really long time. (This will be a point of controversy, and what I said was too general.)
Compare that to modernity. The rate of technological change is obvious, but cultural changes are also unimaginably rapid, and institutions are all over the place. What this suggests to me is that there’s been basically zero time for us to have figured out “what works” and “what doesn’t.” The environment is so new that I doubt we even know what the pressures are, much less how to respond to them. This means selecting for “survive” is extremely unclear at the moment.
On the other hand, we have “thrive.” At least right now, explicitly choosing things based on “thrive” seems predominant. This is more or less liberalism, and I’m in favor of that. I’m just saying that even thrive is unclear, because thrive was always unclear. The most non-obvious aspects of traditional culture all relate to thrive. This is non-obvious to us, of course, because we don’t think that happiness comes from older traditions. We think happiness is essentially “whatever our thoughts tell us we enjoy”, and then we try to quantify that by a billion competing metrics, and half of those rely on what people say. I know that it’s gauche to do this, but I’m really unsure about that. I don’t mean to claim that people have “false consciousness” or [other such term], but “knowing what makes you happy” is a genuinely impressive achievement. Knowing “how to do the thing that results in what makes you happy” is even more impressive. Being able to consciously respond to a quick survey with all of that is practically heroic.
So survive and thrive are both extremely vague, but these institutions are still selecting for human behavior somehow. Incentives might be the easiest way to understand a society, and it’s really unclear to me what we’re actually incentivizing. Whatever we are incentivizing doesn’t come out in the ways we expect, and for some reason we keep assuming that it will. There’s also no universal law that says that societies have to be feedback loops. Maybe whatever is being selected for looks a whole lot less democratic than before or much more authoritarian or… well, maybe it’s good, but I’m kind of pessimistic. It’s a lot easier to make things worse than to make things better. It may have been chosen for good reasons, but things have consequences.
One thing that’s true is that they’re changing at a basically uncontrollable rate. This is theoretically good, inasmuch as it means that social engineering experiments will have a much harder time getting off the ground, although that doesn’t mean that we won’t try. But it also means that whatever is going on underneath them is even harder to figure out.
I have basically zero desire to social engineer. I also don’t think that it works very well, because humans can’t be bettered as easily as we think. But if “social engineer” means “institutions that change human behavior in totally new ways” then I’m pretty sure that it’s already happening, because it’s always been happening. We just don’t really know quite what to select for, or how to get there, and most of our methods of determining that strike me as extremely short-sighted.
This is pretty much as far as we can go with the metaphor of “selection”, because the only thing I can see set against institutional selection is what we call values. Whatever local traditions kept people from needing really extensive, top-down instruction before are likely the only way to combat that now. The problem is that we hear “value” and automatically assume a certain moralism behind it. But I don’t think this is really a moral argument. I think the real argument is in favor of better-thriving, and I’m pretty sure that that can’t come from a state. I also don’t think it’s the same as pleasure. Finally, it’s not going to come from older traditions. As conservative as this argument sounds, those aren’t going to help us. We can learn from them, but they’re gone.
In other words: the enemy is nihilism, but it needs to be made explicit what that means.
The rest of this blog is then going to try to make that explicit.