Assuming Positive Intent
Context: Multiple friends of mine have recently (independently) reported to me that they feel like they’re under conversational attack. Multiple friends have also independently told me that they are starting to doubt that their conversation partners are well-intentioned. I’m not particularly concerned about the specific conflicts that sparked this, as I generally expect tensions like these to come and go in waves. However, it has caused me to do some thinking about discourse norms. This post is nominally addressed to those friends, though I believe it also contains ideas that are useful in general.
In I can tolerate anything except the outgroup, Scott Alexander writes about how rivalry and feuding are much more likely to be found between people who agree on almost everything than between people who agree on almost nothing. The same observation is made in a joke written by Emo Phillips, and separately by Sigmund Freud and Ernest Crawley.
I think that I might be starting to see how the outgroup-your-neighbor behavior gets a toehold in modern groups of people. I think it often stems from people losing the ability to believe that their conversation partners are acting in good faith.
First, a few words on what I mean by “good faith”. Imagine you’re having a conversation with someone. Sometimes, your conversation partner is acting with virtuous conversational motives: maybe they’re curious, maybe they’re trying really hard to understand your arguments, maybe they think you’re wrong about something important and they’re struggling to get you to understand what they’re saying. Other times, your conversation partner may be acting with malicious conversational motives: they might be explicitly focused on embarrassing you in front of people whose favor they are trying to win, or they might be trying explicitly to cause other listeners to associate you with something distasteful (a la the worst argument in the world), or they might be explicitly attempting to manipulate your actions. We can characterize your conversation partner along this axis, which ranges roughly from “well-intentioned” to “ill-intentioned”.
This is of course a messy category, and there are many conversations where this categorization doesn’t apply. Please note that I’m not trying to talk about how heated the discussion is. There’s a separate axis, distinct from “heatedness”, which is about how much you feel like you’re in an adversarial context versus how much you feel like your conversation partner is fundamentally on your side (even if they’re currently frustrated and raising their voice). It’s that “adversarial” vs “on the same side” feeling that I’m pointing at when I talk about “intention”.
What sorts of thing cause people to model their conversation partners as ill-intentioned? Well, this probably happens frequently in scenarios where their conversation partner is in fact ill-intentioned. However, my hypothesis is that one way that the outgroup-your-neighbor phenomenon takes hold is that well-intentioned people start believing wrongly that their conversation partners are ill-intentioned.
How could that happen? Here are a few ways that I could see it happening to me:
- When I was younger, I'd regularly see people taking actions that I would never take unless I was acting maliciously. I would automatically, on a gut level, assume that the other person must be malicious. Only later did my models of other people become sufficiently diverse to allow me to imagine well-intentioned people taking actions that I would only take if I were being malicious, via differences in ways of modeling the world, choosing actions, or coping with feelings of defensiveness / insecurity / frustration / etc. that stem from benign motives. I’m probably still prone to occasionally believing someone is malicious when they’re merely different than me, especially in cases where they act similarly to me most of the time (thereby fooling my gut-level person-modeler into modeling them too much like I model myself). I suspect that this failure mode is related to the typical mind fallacy and therefore difficult to beat in general.
- Related: I care a lot about having accurate beliefs. In pursuit of those, I often develop in myself an allergic reaction to mental motions that I want to avoid (such as searching for ways to draw the conclusion I want to draw, or flinching away from seeing which direction the evidence points). I have definitely encountered situations where I observe someone taking actions which, if I took them, would require making a mental motion that I'm allergic to; this commonly triggers defensiveness or frustration in me, in a manner that's not unlike gaining a visceral sense that the other person is "bad".
- I’m trying to do big things. I think the stakes are high. I often value resources in a very subtle way (simplified example: I want to prevent person X from getting exposed to distractions, but this is high priority only in certain narrow and specific situations, so if you occasionally drop by the MIRI offices and interrupt X, you might have a hard time figuring out which sort of interruptions will bother me). In cases where I place very high value on a delicate resource with a subtle boundary, it’s very easy for other people to inadvertently trample all over it, and very easy for me to have a strong reflexive feeling that they’re acting adversarially. (This is doubly true if they appear to be gaining status, prestige, or power by stomping on things that I think are important.)
- If I ever feel intellectually under siege — especially if the conversation is moving so fast that it runs away from me, and especially especially when my words are regularly misinterpreted and my beliefs regularly mischaracterized — it becomes very difficult for me to believe (on a gut level) that my conversation partners are acting with good intent, even if I know intellectually that they’re just getting excited (or something). This is doubly true if I’m feeling stressed out or defensive, or if I’m under time pressure.
This list is not exhaustive, and I expect that there are reasonable mechanisms that I don't understand by which one might begin to doubt the intentions of their conversation partner even if everyone’s intentions are good.
It’s often reasonable and understandable to reflexively start doubting the intentions of your conversation partner. I don’t intend to shame that response. However, I make two notes. First, while I believe that the above responses are reasonable, I also believe that, for modal readers of this blog (myself included), just about all of our conversation partners are in fact well-intentioned just about all of the time. For example, given almost any specific conflict between effective altruists and/or rationalists, I expect that I can converse with any given individual in the conflict, understand their intentions, and summarize them in a way that that individual endorses, and that an impartial observer would agree that these are likely the individual's intentions, and that the intentions are laudable. I am willing to bet on this, though the stakes and the bid-ask spread will need to be fairly high in order for it to be worth my effort.
Second, I believe that the ability to expect that conversation partners are well-intentioned by default is a public good. An extremely valuable public good. When criticism turns to attacking the intentions of others, I perceive that to be burning the commons. Communities often have to deal with actors who in fact have ill intentions, and in those cases the damage is often worth it, because it prevents even greater exploitation by malicious actors. But damage is damage in either case, and I suspect that young communities are prone to destroying this particular commons based on false premises.
To be clear, I am not claiming that well-intentioned actions tend to have good consequences. The road to hell is paved with good intentions. Whether or not someone's actions have good consequences is an entirely separate issue. I am only claiming that, in the particular case of small high-trust communities, I believe almost everyone is almost always attempting to do good by their own lights. I believe that propagating doubt about that fact is nearly always a bad idea.
I also want to explicitly disclaim arguments of the form "person X is gaining [status|power|prestige] through their actions, therefore they are untrustworthy and have bad intentions". My models of human psychology allow for people to possess good intentions while executing adaptations that increase their status, influence, or popularity. My models also don’t deem people poor allies merely on account of their having instinctual motivations to achieve status, power, or prestige, any more than I deem people poor allies if they care about things like money, art, or good food. If your models predict that people who find any of those things motivating are ipso facto untrustworthy, or ipso facto unable to effectively pursue genuinely altruistic aims, then we have a factual dispute, and I'd appreciate you discussing with me (or people who share my view, since my time is pretty limited these days) before doing something that I think burns down a valuable public good.
One more clarification: some of my friends have insinuated (but not said outright as far as I know) that the execution of actions with bad consequences is just as bad as having ill intentions, and we should treat the two similarly. I think this is very wrong: eroding trust in the judgement or discernment of an individual is very different from eroding trust in whether or not they are pursuing the common good. If I believe your reasoning is mistaken in some particular domain, we can have a reasonable discussion in which we search for the source of our disagreement and attempt to mutually move closer to the truth. But if one of us starts believing that the other is acting adversarially, the whole framework of discourse breaks down, and we frequently can't get anywhere. In my experience, that sort of trust breakdown is often irreparable. Again, if you disagree, we have a factual dispute, and I'd appreciate you discussing with neutral parties before doing something that I believe burns down the commons.
With regard to the recent tensions that have cropped up between some of my rationalist/EA friends, if you share my worries in this domain, or if I’ve earned enough intellectual respect from you that you're willing to humor me on these matters for a time, then I recommend the following:
- If you notice yourself doubting that someone in the EA or rationality sphere has good intentions, second-guess those doubts. My models of the people involved strongly predict that they are all well-intentioned. Indeed, my models strongly predict that good intentions are nearly universal across the relevant demographics. (In my experience, ill-intentioned people often blatantly admit ill intent, or at least selfish intent, when asked. In fact, many people who admit selfish intent turn out to actually have good intent, but that's a separate issue.)
- In particular, if you're doubting someone’s intentions, I recommend getting curious, and asking your conversation partner to describe the world in which their actions seem justified. If you're still in doubt, you're welcome to contact me, relate your observations, and ask me for my model of your conversation partner's intentions. (And if you think you’d be good at offering this service too, I encourage you to offer it to your friends.) Public accusations of ill intent are expensive and, according to my models, usually false; I recommend putting in way more effort than you might naively think is necessary to understand why the other person thinks their actions are justified and reasonable before making that sort of accusation. (This advice applies only to criticisms of intent, not to criticisms of judgement, belief, or action.)
- Develop a model of what causes people to start believing that you have ill intent. (Parts of my models are listed above; you could start there if any of those points rang true to you.) Then, work to telegraph your intentions and avoid (e.g.) making other people feel like their basic goodness as a human being is under attack. Be particularly sensitive to whether you've caused someone to doubt your sincerity, and if you have, see whether you can figure out what triggered the loss-of-faith. (I don't recommend attempting to fix it immediately -- in my experience, that often comes across as defensive and can make the problem worse. Instead, I again recommend getting curious and soliciting a better understanding of the other person's world-model.)
I think it is extremely easy for humans to forget everything they have in common, and start bitter feuds over minor differences. If you and I ever have a disagreement, no matter how bitter, then I want you to know that I regularly remind myself of all the things we do agree on, in attempts to put our differences into perspective.
Above all, I think it's important to remember that, no matter our differences in belief or method, we're on the same team. I know I’ve said the following things before, but this seems like a fine opportunity to repeat them:
If you're actively working hard to make the world a better place, then we're on the same team. If you're committed to letting evidence and reason guide your actions, then I consider you friends, comrades in arms, and kin. No matter how much we disagree about how to make the world a better place or what exactly that looks like; no matter how different our beliefs are about the world we live in; if you're putting substantial effort or resources towards making this universe an awesome place, then I am thankful for your presence and hold you in high esteem.
Thanks to Rob Bensinger for helping me edit this post.