My Hopefully Thorough And Charitable Critique of Rationalists and Effective Altruists: Inattentive Insularity and Incidental Inconceivability

Eager Question
9 min read · May 25, 2021

--

I have gotten the impression that there aren’t many good critiques of the Effective Altruism community or of (aspiring) rationalists generally, because the people most vocal about critiquing those populations are often either inarticulate or unable to frame their concerns in a way that is clear enough for Rationalist/Effective Altruist (henceforth R/EA) community members to engage with usefully.

So… Here is one.

Some time ago, there was a post on the Effective Altruism subreddit about how “white” the community is, which was (expectedly) met with indignant annoyance from members of the community at the suggestion that such a thing would a.) matter in the first place, and b.) be a problem. While I believe the answer to both of those is “kind of, yeah”, those questions are also completely secondary to the point of complaining that something is “too white”.

When a racially marginalized person who wishes to participate in a specific community complains that it is “too white”, what they usually mean is something along the lines of:

“I want to participate in this community, but I’m kinda scared that you’ll be racist at me if I do, whether overtly, because you’re a community that has little need to police such behaviour when there are very few people in it to be racist towards, or incidentally, by virtue of being really ignorant in ways that will probably be exhausting for me to deal with”.

Complaints about inclusion or insufficient diversity in online communities (which have very little reason to be racially segregated) are generally complaints about how welcoming a space is, how comfortable it is to interact with, or conversely how effortful and tiresome it is. As far as I am aware, the people who make those complaints have not actually polled the populations, and cannot possibly know the racial makeup of the community in the first place. What they can know is that a.) they personally are a racially marginalized person, and b.) they do not feel welcome in that space, in a way that is similar to feeling unwelcome due to racism. The rest is extrapolation.

The charge of being “too white”, then, is a charge of being insular in ways that make the speaker feel excluded, in ways that remind them of when they feel excluded on the basis of race IRL.

If a community of supposedly rational people, one whose core tenet is that thinking in a specific way is better than not thinking in that way and which seeks to spread that mode of thinking broadly, gets told “you’re too insular” and responds with “nu-uh!” and “shut up” instead of any kind of reform to make itself more welcoming to people who feel unwelcome… It is failing at its goal of spreading that way of thinking.

(Aside: In that same “too white” incident, one of them even told me that half of EA is “SJWs” and half of it is anti-SJW/non-SJW, and therefore issues of the community BEING BAD AT OUTREACH were basically left undiscussed because neither side could persuade the other. If these motherfuckers are so rational and open-minded and rigorous and so on, why can’t they reach a consensus there? It’s almost as if having a fairly homogeneous population makes the utility of “Social Justice” practices non-obvious (or makes them appear purely performative) to a substantial portion of the population, in exactly the ways that made the person complaining feel unwelcome, or something.)

I believe (potentially falsely) that the idea of “too white” being code for “potentially unwelcoming to other people like me personally, because I personally feel unwelcome in a way I find reminiscent of when I have been racialized IRL” would honestly surprise a lot of rationalists and effective altruists. Framed that way, they might say “well, just say that, then, and we may assuage your fears of being mistreated within our community instead!”, or blame the speaker for their poor communication skills. However, I think that would be a failure on their part to listen attentively and to reframe other people’s claims in their own language.

It seems to me that a lot of rationalists get into the habit of communicating clearly, unambiguously, and in ways where status games are secondary to information exchange. This is a good thing! But then… They get really, really bad at understanding what other people are saying when those people aren’t doing that, and at putting themselves in the position of someone who lacks their newfound thinking habits.

Examples:

I saw a post on Less Wrong some time ago. A person was asking for a different way to say “toolboxer” because their friend found it “offensive”. After some back and forth in the comments, it became incredibly clear to me that this person’s friend did not actually have a problem with the word “toolboxer” but with the way in which he was being dismissed by the speaker. The friend believed that emotional intelligence and non-emotion-related problem-solving abilities are orthogonal, and the poster believed that you could use non-emotion-related problem-solving ability and Bayesianism to solve emotional problems.

Irrespective of who was right in that discussion, the poster was using “well, you’re just a toolboxer” to avoid engaging their friend in that conversation, by declaring their philosophy of emotion simply incompatible with his. That was the problem. Not the word “toolboxer”. The poster literally told a commenter that labelling their friend a toolboxer “helps withdraw from the conversation”, and yet did not realize as they were typing that fucking sentence that the very act of labelling somebody in order to stop engaging with their arguments is insulting, irrespective of what the label is.

My second and third examples will both be of the wonderful, brilliant, and beloved Julia Galef (whomst I stan).

She was very puzzled in one of her podcast episodes (as was her guest! Who wrote the book on this phenomenon!) by the fact that people “use Google as a confessional”. The idea of spontaneously googling “I am angry” or “I am angry about X” seemed bizarre and confusing to them both, and they floated a variety of hypotheses, which went mostly nowhere, about what the reason might be, before just sorta moving on. It seems very bizarre to me that neither of them really explored the question of what happens when you google “I am angry about X”. Usually, you find things like blog posts and forum discussions dedicated to being angry about X, which then helps validate and legitimize the anger of the person who googled it, by connecting them to like-minded people who have already articulated the reasons behind their anger.

Googling your feelings is a mechanism for finding other people who have written online about having those feelings, and therefore validating them in a vulnerable emotional moment, but that idea seemed completely outside their episteme!

In her podcast episode with Dr. Sandel, she argued about “human dignity” and seemed to have a very hard time understanding what he was trying to say. When someone says something like “[markets] degrade human dignity”, it is my understanding that what they usually mean is “the payoff-maximization logic of markets is unconcerned with principles about personal agency, bodily autonomy, and any other forms of utility that are for some reason excluded from the calculus or unavailable to the choice-maker”. You can see this in the examples Dr. Sandel uses! “Tattooing an advertisement on your forehead undermines your dignity”. Why? Because it is a bunch of permanent disutility with vast consequences outside of the market exchange of skin-for-money: it will affect the way you move through the world, and therefore your ability to do things (agency), in ways that were impossible to account for during the transaction, given the information available at the time.

Julia Galef is much smarter than I am. I believe she could have done the little “translation” work I just did. She knows about Goodhart’s Law; she could have recognized that “markets degrade human dignity” is another way of saying that markets tend to undermine moral values because those values are very susceptible to becoming collateral damage in a scenario aimed at maximizing a single variable. But Dr. Sandel phrased things sufficiently differently that the language game became the struggle there, so she didn’t. And the whole podcast episode was worse for it, because she kept getting stuck.

I had my own experience with this in an Effective Altruism forum some time back. I proposed the idea that Effective Altruists should really care about language extinction (a massive problem!!!) and I was met mostly with lukewarm responses of “well, write a thing on it, and if it’s actually a good idea, that will be recognized by the community at large, because we’re all really smart people who recognize good ideas”. Which gets into one of those weird efficient-market situations of “if you’re all so smart and good at recognizing ideas, why is this not already on your list of priorities?” / “because we’re so smart and good at this, it being outside of our current list means it’s probably not a good idea”.

Many days after a frustrating argument with the only person willing to engage me in conversation on that topic (who still exhibited this failure to understand what I was saying unless it was properly framed, a very detrimental form of acquired cognitive inflexibility in my opinion), I realized how to make my case. Every language has a built-in episteme that includes wisdom that has stood some tests of time.

If you believe

  • philosophical research is in any way worthwhile and the pursuit of forms of “wisdom”/understanding is useful for anything

A N D

  • that doing things that are less effort for X is better than doing things that are more effort for the same X, where X can be “the pursuit of wisdom/understanding/etc”,

THEN you should believe that

  • [having a bunch of grad students slave away at using the current frameworks that they have in order to come up with new ones] is A WORSE IDEA than [outsourcing new frameworks to people who lived thousands of years ago and figured it out already].

Figuring out notions of being in which you eliminate the self shouldn’t be something British and American philosophers try to come up with from scratch when there are already hundreds of years of Buddhism to work with. Every time a language dies, those opportunities to outsource the development of new ways of thinking to past people die with it. I think that’s a reasonable argument. I also think that my interlocutor in that interaction could have reverse-engineered it from what I was saying, and failed to because of my phrasing.

All of these are instances of people who are supposed to be really good at thinking finding ordinary human behaviours and beliefs incomprehensible: the rational thinking habits are there and working, but the capacity to translate things that were not created using those habits into something compatible with them is underdeveloped.

I don’t believe a community in which communicating insufficiently rigorously gets your ideas broadly dismissed is a community that will be good at adopting new good ideas quickly. Nor do I believe that a community in which communicating insufficiently rigorously means everyone around you genuinely fails to understand you, when less rigorous people might understand you perfectly well, will be good at adopting new good ideas quickly. I believe that rationalists and effective altruists have collectively neglected the epistemological virtue of cognitive flexibility: the capacity to engage with something using different modes of thinking. They’ve put all their eggs in the same brain-basket. A brain-basket that features the doubting game, but not the believing game, of critical thinking.

Eventually, a community with those problems will “catch up”. Probably. But it will push forward new ideas at a much lower speed than it theoretically could, in a way that seems to me like it’s just shooting itself in the foot. As is obvious whenever a rationalist “discovers”/“invents” some idea that existed in the field of philosophy, under a different name, literally hundreds of years earlier.

The R/EA community creates a much higher barrier to entry for participation than is necessary, because not only does one have to be able to communicate using the community’s rigorous standards, one must also be able to make arguments that work within its members’ personal biases, biases they fail to recognize as biases. Languages are important for a vast range of things, and language conservation could have benefits for mental health, for scientific research, even for engineering! And even if it didn’t, it’s a GOOD THING TO DO. But I could only come up with that argument about outsourcing wisdom to dead people because I saw that “philosophy research” was on 80,000 Hours’ list of careers.

That higher barrier to entry makes them insular, and then they argue about whether it matters that they’re insular instead of trying to broaden their appeal in pursuit of their supposed mission of TEACHING PEOPLE TO THINK BETTER.

This is why I stopped interacting regularly with these communities, despite agreeing with not just a great many of their premises but also a great many of their conclusions. After years of intermittently putting up with their bullshit, it just kind of seems less and less worth it as I find myself more and more isolated from them, more and more sick and tired of playing their little games, and less interested in being insulted and dismissed by people who take pride in how canyon-like they have managed to make their minds. Deep, sure, but also narrow.
