<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[decision-theory - Minding our way]]></title><description><![CDATA[to the heavens]]></description><link>https://mindingourway.com/</link><image><url>https://mindingourway.com/favicon.png</url><title>decision-theory - Minding our way</title><link>https://mindingourway.com/</link></image><generator>Ghost 4.46</generator><lastBuildDate>Sat, 11 Apr 2026 14:04:49 GMT</lastBuildDate><atom:link href="https://mindingourway.com/tag/decision-theory/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Newcomblike problems are the norm]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p style="font-style: italic; opacity: 0.5">[Note: This post is part of a set of three where thoughts related to my job leaked into the blog. They don&apos;t really fit with the surrounding posts; you may want to skip them.]</p>
<h1 id="1">1</h1>
<p><a href="https://mindingourway.com/intro-to-newcomblike-problems/">Last time</a> we looked at Newcomblike problems, which cause trouble for <a href="https://mindingourway.com/intro-to-newcomblike-problems/">Causal Decision</a></p>]]></description><link>https://mindingourway.com/newcomblike-problems-are-the-norm/</link><guid isPermaLink="false">5f94cfbaca8899827ef2a26d</guid><category><![CDATA[decision-theory]]></category><dc:creator><![CDATA[Nate Soares]]></dc:creator><pubDate>Tue, 23 Sep 2014 19:13:37 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p style="font-style: italic; opacity: 0.5">[Note: This post is part of a set of three where thoughts related to my job leaked into the blog. They don&apos;t really fit with the surrounding posts; you may want to skip them.]</p>
<h1 id="1">1</h1>
<p><a href="https://mindingourway.com/intro-to-newcomblike-problems/">Last time</a> we looked at Newcomblike problems, which cause trouble for <a href="https://mindingourway.com/intro-to-newcomblike-problems/">Causal Decision Theory (CDT)</a>, the standard decision theory used in economics, statistics, narrow AI, and many other academic fields.</p>
<p>These Newcomblike problems may seem like strange edge case scenarios. In the Token Trade, a deterministic agent faces a perfect copy of themself, guaranteed to take the same action as they do. In Newcomb&apos;s original problem there is a perfect predictor &#x3A9; which knows exactly what the agent will do.</p>
<p>Both of these examples involve some form of &quot;mind-reading&quot; and assume that the agent can be perfectly copied or perfectly predicted. In a chaotic universe, these scenarios may seem unrealistic and even downright crazy. What does it matter that CDT fails when there are perfect mind-readers? There aren&apos;t perfect mind-readers. Why do we care?</p>
<p>The reason that we care is this: <em>Newcomblike problems are the norm.</em> Most problems that humans face in real life are &quot;Newcomblike&quot;.</p>
<p>These problems aren&apos;t limited to the domain of perfect mind-readers; rather, problems with perfect mind-readers are the domain where these problems are easiest to see. However, they arise naturally whenever an agent is in a situation where others have knowledge about its decision process via some mechanism that is not under its direct control.</p>
<h1 id="2">2</h1>
<p>Consider a CDT agent in a mirror token trade.</p>
<p><img src="https://mindingourway.com/content/images/2014/Sep/token-mirror.png" alt loading="lazy"></p>
<p>It knows that it and the opponent are generated from the same template, but it also knows that the opponent is causally distinct from it by the time it makes its choice. So it argues:</p>
<blockquote>
<p>Either agents spawned from my template give their tokens away, or they keep their tokens. If agents spawned from my template give their tokens away, then I better keep mine so that I can take advantage of the opponent. If, instead, agents spawned from my template keep their tokens, then I had better keep mine, or otherwise I won&apos;t win any money at all.</p>
</blockquote>
<p>It has failed, here, to notice that it can&apos;t choose separately from &quot;agents spawned from my template&quot; because it <em>is</em> spawned from its template. (That&apos;s not to say that it doesn&apos;t get to choose what to do. Rather, it has to be able to reason about the fact that whatever it chooses, so will its opponent choose.)</p>
<p>The reasoning flaw here is an inability to reason as if <em>past information</em> has given others <em>veridical knowledge</em> about what the agent <em>will</em> choose. This failure is particularly vivid in the mirror token trade, where the opponent is guaranteed to do <em>exactly</em> the same thing as the agent. However, the failure occurs even if the veridical knowledge is partial or imperfect.</p>
<h1 id="3">3</h1>
<p>Humans trade partial, veridical, uncontrollable information about their decision procedures <em>all the time</em>.</p>
<p>Humans automatically form <a href="http://en.wikipedia.org/wiki/First_impression_(psychology)">first impressions</a> of other humans at first sight, almost instantaneously (sometimes before the person speaks, and possibly just from still images).</p>
<p>We read each other&apos;s <a href="http://en.wikipedia.org/wiki/Microexpression">microexpressions</a>, which are generally uncontrollable sources of information about our emotions.</p>
<p>As humans, we have an impressive array of social machinery available to us that gives us gut-level, subconscious impressions of how trustworthy other people are.</p>
<p><img src="https://mindingourway.com/content/images/2014/Sep/real-newcomblike.png" alt loading="lazy"></p>
<p>Many social situations follow this pattern, and this pattern is a Newcomblike one.</p>
<p>All these tools can be fooled, of course. First impressions are often wrong. Con-men often seem trustworthy, and honest shy people can seem unworthy of trust. However, all of this social data is at least <em>correlated</em> with the truth, and that&apos;s all we need to give CDT trouble. Remember, CDT assumes that all nodes which are <em>causally</em> disconnected from it are <em>logically</em> disconnected from it: but if someone else gained information that correlates with how you <em>actually are</em> going to act in the future, then your interactions with them may be Newcomblike.</p>
<p>In fact, humans have a natural tendency to avoid &quot;non-Newcomblike&quot; scenarios. Human social structures use complex reputation systems. Humans seldom make big choices among themselves (who to hire, whether to become roommates, whether to make a business deal) before &quot;getting to know each other&quot;. We automatically build complex social models detailing how we think our friends, family, and co-workers make decisions.</p>
<p>When I worked at Google, I&apos;d occasionally need to convince half a dozen team leads to sign off on a given project. In order to do this, I&apos;d meet with each of them in person and pitch the project slightly differently, according to my model of what parts of the project most appealed to them. I was basing my actions off of how I expected them to make decisions: I was putting them in Newcomblike scenarios.</p>
<p>We constantly leak information about how we make decisions, and others constantly use this information. Human decision situations are Newcomblike <em>by default!</em> It&apos;s the <em>non</em>-Newcomblike problems that are simplifications and edge cases.</p>
<p>Newcomblike problems occur whenever knowledge about what decision you <em>will</em> make leaks into the environment. The knowledge doesn&apos;t have to be 100% accurate, it just has to be correlated with your eventual actual action (in such a way that if you were going to take a different action, then you would have leaked different information). When this information is available, and others use it to make their decisions, others put you into a Newcomblike scenario.</p>
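<p>A quick back-of-the-envelope check makes this concrete. The payoffs below are Newcomb&apos;s, the accuracy figures are just assumptions for illustration, and the comparison treats your action as evidence about the prediction, which is exactly the kind of reasoning CDT refuses to do:</p>
<pre><code># Newcomb's payoffs with an imperfect predictor. "accuracy" is the assumed
# probability that the predictor guesses your action correctly.
def expected_payoff(action, accuracy):
    if action == "onebox":
        # The red box is filled only when the predictor foresaw one-boxing.
        return 1_000_000 * accuracy
    # Two-boxing: the red box is filled only when the predictor got you wrong.
    return 1_000_000 * (1 - accuracy) + 1_000

for accuracy in (0.99, 0.75, 0.51):
    print(accuracy, expected_payoff("onebox", accuracy),
          expected_payoff("twobox", accuracy))
# Even at 51% accuracy, one-boxing is ahead in expectation; the prediction
# only needs to be correlated with your choice, not perfect.
</code></pre>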
<p>Information about what we&apos;re going to do is frequently leaking into the environment, via <a href="http://www.overcomingbias.com/2014/06/how-deep-the-rabbit-hole.html">unconscious signaling</a> and uncontrolled facial expressions or even just by habit &#x2014; anyone following a simple routine is likely to act predictably.</p>
<h1 id="4">4</h1>
<p>Most real decisions that humans face are Newcomblike whenever other humans are involved. People are automatically reading unconscious or unintentional signals and using these to build models of how you make choices, and they&apos;re using those models to make <em>their</em> choices. These are precisely the sorts of scenarios that CDT cannot represent.</p>
<p>Of course, that&apos;s not to say that humans fail drastically on these problems. We don&apos;t: we repeatedly do well in these scenarios.</p>
<p>Some real-life Newcomblike scenarios simply aren&apos;t games where CDT has trouble: there are many situations where others in the environment have knowledge about how you make decisions and are using it, but in a way that does not affect your payoffs enough to matter.</p>
<p>Many more Newcomblike scenarios simply don&apos;t feel like decision problems: people present ideas to us in specific ways (depending upon their model of how we make choices) and most of us don&apos;t fret about how others would have presented us with different opportunities if we had acted in different ways.</p>
<p>And in Newcomblike scenarios that <em>do</em> feel like decision problems, humans use a wide array of other tools in order to succeed.</p>
<p>Roughly speaking, CDT fails when it gets stuck in the trap of &quot;no matter what I signaled I should do [something mean]&quot;, which results in CDT sending off a &quot;mean&quot; signal and missing opportunities for higher payoffs. By contrast, humans tend to avoid this trap via other means: we place value on things like &quot;niceness&quot; for reputational reasons, we have intrinsic senses of &quot;honor&quot; and &quot;fairness&quot; which alter the payoffs of the game, and so on.</p>
<p>This machinery was not necessarily &quot;designed&quot; for Newcomblike situations. Reputation systems and senses of honor are commonly attributed to humans facing repeated scenarios (thanks to living in small tribes) in the ancestral environment, and it&apos;s possible to argue that CDT handles repeated Newcomblike situations well enough. (I disagree somewhat, but this is an argument for another day.)</p>
<p>Nevertheless, the machinery that allows us to handle repeated Newcomblike problems often seems to work in one-shot Newcomblike problems. Regardless of where the machinery came from, it still allows us to succeed in Newcomblike scenarios that we face in day-to-day life.</p>
<p>The fact that humans easily succeed, often via tools developed for repeated situations, doesn&apos;t change the fact that many of our day-to-day interactions have Newcomblike characteristics. Whenever a person leaks information about their decision procedure on a communication channel that they do not control (facial microexpressions, posture, cadence of voice, etc.), they are inviting others to put them in Newcomblike settings.</p>
<h1 id="5">5</h1>
<p>Most of the time, humans are pretty good at handling naturally arising Newcomblike problems. Sometimes, though, the fact that you&apos;re in a Newcomblike scenario <em>does</em> matter.</p>
<p>The games of Poker and Diplomacy are both centered around people controlling information channels that humans can&apos;t normally control. These games give particularly crisp examples of humans wrestling with situations where the environment contains leaked information about their decision-making procedure.</p>
<p>These are only games, yes, but I&apos;m sure that any highly ranked Poker player will tell you that the lessons of Poker extend far beyond the game board. Similarly, I expect that highly ranked Diplomacy players will tell you that Diplomacy teaches you many lessons about how people broadcast the decisions that they&apos;re going to make, and that these lessons are invaluable in everyday life.</p>
<p>I am not a professional negotiator, but I further imagine that top-tier negotiators expend significant effort exploring how their mindsets are tied to their unconscious signals.</p>
<p>On a more personal scale, some very simple scenarios (like whether you can get let into a farmhouse on a rainy night after your car breaks down) are somewhat &quot;Newcomblike&quot;.</p>
<p>I know at least two people who are unreliable and untrustworthy, and who blame the fact that they can&apos;t hold down jobs (and that nobody cuts them any slack) on bad luck rather than on their own demeanors. Both consistently believe that they are taking the best available action whenever they act unreliable and untrustworthy. Both brush off the idea of &quot;becoming a sucker&quot;. Neither of them is capable of <em>acting</em> unreliable while <em>signaling</em> reliability. Both of them would benefit from <em>actually becoming trustworthy</em>.</p>
<p>Now, of course, people can&apos;t suddenly &quot;become reliable&quot;, and <a href="http://en.wikipedia.org/wiki/Akrasia">akrasia</a> is a formidable enemy to people stuck in these negative feedback loops. But nevertheless, you can see how this problem has a hint of Newcomblikeness to it.</p>
<p>In fact, recommendations of this form &#x2014; &quot;You can&apos;t signal trustworthiness unless you&apos;re trustworthy&quot; &#x2014; are common. As an extremely simple example, let&apos;s consider a shy candidate going in to a job interview. The candidate&apos;s demeanor (<code>confident</code> or <code>shy</code>) will determine the interviewer&apos;s predisposition <code>towards</code> or <code>against</code> the candidate. During the interview, the candidate may act either <code>bold</code> or <code>timid</code>. Then the interviewer decides whether or not to hire the candidate.</p>
<p><img src="https://mindingourway.com/content/images/2014/Sep/newcomblike-interview.png" alt loading="lazy"></p>
<p>If the candidate is confident, then they will get the job (worth $100,000) regardless of whether they are bold or timid. If they are shy and timid, then they will not get the job ($0). If, however, they are shy and bold, then they will get laughed at, which is worth -$10. Finally, though, <em>a person who knows they are going to be timid will have a shy demeanor, whereas a person who knows they are going to be bold will have a confident demeanor</em>.</p>
<p>It may seem at first glance that it is better to be timid than to be bold, because timidness only affects the outcome if the interviewer is predisposed against the candidate, in which case it is better to be timid (and avoid being laughed at). However, if the candidate <em>knows</em> that they will reason like this (in the interview) then they will be shy <em>before</em> the interview, which will predispose the interviewer against them. By contrast, if the candidate precommits to being bold (in this simple setting) then they will get the job.</p>
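<p>Here is a toy Python sketch of that payoff structure, using just the numbers from the example above, with the demeanor fixed by the policy the candidate has already settled on (the function names are only for illustration):</p>
<pre><code># Payoffs from the interview example above.
def payoff(demeanor, action):
    if demeanor == "confident":
        return 100_000                        # hired, regardless of bold or timid
    return -10 if action == "bold" else 0     # shy demeanor: laughed at, or just no job

# The catch: demeanor is determined by what the candidate already knows they will do.
def play(policy):
    demeanor = "confident" if policy == "bold" else "shy"
    return payoff(demeanor, policy)

print(play("bold"))    # 100000: committing to boldness gets the job
print(play("timid"))   # 0: the "safe" in-the-interview reasoning loses the job
</code></pre>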
<p>Someone reasoning using CDT might reason as follows when they&apos;re in the interview:</p>
<blockquote>
<p>I can&apos;t tell whether they like me or not, and I don&apos;t want to be laughed at, so I&apos;ll just act timid.</p>
</blockquote>
<p>To people who reason like this, we suggest <em>avoiding causal reasoning</em> during the interview.</p>
<p>And, in fact, there are truckloads of self-help books dishing out similar advice. You can&apos;t reliably signal trustworthiness without <em>actually being</em> trustworthy. You can&apos;t reliably be charismatic without <em>actually caring</em> about people. You can&apos;t easily signal confidence without <em>becoming confident</em>. Someone who <em>cannot represent</em> these arguments may find that many of the benefits of trustworthiness, charisma, and confidence are unavailable to them.</p>
<p>Compare the advice above to our analysis of CDT in the mirror token trade, where we say &quot;You can&apos;t keep your token while the opponent gives theirs away&quot;. CDT, which can&apos;t represent this argument, finds that the high payoff is unavailable to it. The analogy is exact: CDT fails to represent precisely this sort of reasoning, and yet this sort of reasoning is common and useful among humans.</p>
<h1 id="6">6</h1>
<p>That&apos;s not to say that CDT can&apos;t address these problems. A CDT agent that knows it&apos;s going to face the above interview would precommit to being bold &#x2014; but this would involve using something <em>besides</em> causal counterfactual reasoning during the actual interview. And, in fact, this is precisely one of the arguments that I&apos;m going to make in future posts: a sufficiently intelligent artificial system using CDT to reason about its choices would self-modify to stop using CDT to reason about its choices.</p>
<p>We&apos;ve been talking about Newcomblike problems in a very human-centric setting for this post. Next post, we&apos;ll dive into the arguments about why an <em>artificial</em> agent (that doesn&apos;t share our vast suite of social signaling tools, and which lacks our shared humanity) may <em>also</em> expect to face Newcomblike problems and would therefore self-modify to stop using CDT.</p>
<p>This will lead us to more interesting questions, such as &quot;what <em>would</em> it use?&quot; (answer: we don&apos;t quite know yet) and &quot;would it self-modify to fix all of CDT&apos;s flaws?&quot; (answer: no).</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[An introduction to Newcomblike problems]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p style="font-style: italic; opacity: 0.5">[Note: This post is part of a set of three where thoughts related to my job leaked into the blog. They don&apos;t really fit with the surrounding posts; you may want to skip them.]</p>
<p><a href="https://mindingourway.com/causal-reasoning-is-unsatisfactory/">Last time</a> I introduced causal decision theory (CDT) and showed how it has unsatisfactory</p>]]></description><link>https://mindingourway.com/intro-to-newcomblike-problems/</link><guid isPermaLink="false">5f94cfbaca8899827ef2a26c</guid><category><![CDATA[decision-theory]]></category><dc:creator><![CDATA[Nate Soares]]></dc:creator><pubDate>Fri, 12 Sep 2014 12:07:20 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p style="font-style: italic; opacity: 0.5">[Note: This post is part of a set of three where thoughts related to my job leaked into the blog. They don&apos;t really fit with the surrounding posts; you may want to skip them.]</p>
<p><a href="https://mindingourway.com/causal-reasoning-is-unsatisfactory/">Last time</a> I introduced causal decision theory (CDT) and showed how it has unsatisfactory behavior on &quot;Newcomblike problems&quot;. Today, we&apos;ll explore Newcomblike problems in a bit more depth, starting with William Newcomb&apos;s original problem.</p>
<h2 id="theproblem">The Problem</h2>
<p>Once upon a time there was a strange alien named &#x3A9; who is very very good at predicting humans. There is this one game that &#x3A9; likes to play with humans, and &#x3A9; has played it thousands of times without ever making a mistake. The game works as follows:</p>
<p>First, &#x3A9; observes the human for a while and collects lots of information about the human. Then, &#x3A9; makes a decision based on how &#x3A9; predicts the human will react in the upcoming game. Finally, &#x3A9; presents the human with two boxes.</p>
<p>The first box is blue, transparent, and contains $1000. The second box is red and opaque.</p>
<blockquote>
<p>You may take either the red box alone, or both boxes,</p>
</blockquote>
<p>&#x3A9; informs the human. (These are magical boxes where if you decide to take only the red one then the blue one, and the $1000 within, will disappear.)</p>
<blockquote>
<p>If I predicted that you would take only the red box, then I filled it with $1,000,000. Otherwise, I left it empty. I have already made my choice,</p>
</blockquote>
<p>&#x3A9; concludes, before turning around and walking away.</p>
<p>You may take either only the red box, or both boxes. (If you try something clever, like taking the red box while a friend takes a blue box, then the red box is filled with hornets. Lots and lots of hornets.) What do you do?</p>
<h2 id="thedilemma">The Dilemma</h2>
<p>Should you take one box or two boxes?</p>
<p>If you take only the red box, then you&apos;re <em>necessarily</em> leaving $1,000 on the table.</p>
<p>But if you take both boxes, then &#x3A9; predicted that, and you miss out on a chance at $1,000,000.</p>
<h2 id="recap">Recap</h2>
<p>Now, as you&apos;ll remember, we&apos;re discussing <em>decision theories</em>, algorithms that prescribe actions in games like these. Our motivation for studying decision theories is manifold. For one thing, people who <em>don&apos;t</em> have tools for making good decisions can often become confused (remember the people who <a href="http://www.thedailybeast.com/articles/2013/07/12/your-future-is-in-the-palm-of-your-surgeon-s-hand.html">got palm surgery to change their fortunes</a>).</p>
<p>This is not just a problem for irrational or undereducated people: it&apos;s easy to trust yourself to make the right choices most of the time, because human heuristics and intuitions are usually pretty good. But what happens in the strange edge-case scenarios? What do you do when you encounter problems where your intuitions conflict? In these cases, it&apos;s important to know <em>what it means</em> to make a good choice.</p>
<p><em>My</em> motivation for studying decision theory is that in order to construct an artificial intelligence you need a pretty dang good understanding of what sort of decision algorithms perform well.</p>
<p>Last post, we explored the standard decision theory used by modern philosophers to encode rational decision making. I&apos;m <em>eventually</em> going to use this series of posts to explain why our current knowledge of decision theory (and CDT in particular) is completely inadequate for use in any sufficiently intelligent self-modifying agent.</p>
<p>But before we go there, let&apos;s see how causal decision theory reacts to Newcomb&apos;s problem.</p>
<h2 id="thechoice">The Choice</h2>
<p>Let&apos;s analyze this problem using the causal decision theory introduced in the last post. Roughly, the causal decision theorist reasons as follows:</p>
<blockquote>
<p>&#x3A9; offers me the boxes after making its decision. Now, either &#x3A9; has filled the red box or &#x3A9; has not filled the red box. The decision has <em>already been made</em>. If &#x3A9; filled the red box, then I had better take both boxes (and get a thousand and a million dollars). If &#x3A9; didn&apos;t fill the red box, then I had better take both boxes so that I at least get a thousand dollars. <em>No matter what</em> &#x3A9; chose, I had better take both boxes.</p>
</blockquote>
<p>And so, someone reasoning using causal decision theory takes both boxes (and, because &#x3A9; is a very good predictor, they walk away with $1000).</p>
<p>Let&apos;s walk through that reasoning in slow-mo, using causal graphs. The causal graph for this problem looks like this:</p>
<p><img src="https://mindingourway.com/content/images/2014/Sep/newcomb-problem.png" alt loading="lazy"></p>
<p>With the nodes defined as follows:</p>
<ul>
<li><code>You (yesterday)</code> is the algorithm implementing you yesterday. In this simplified setting, we assume that its value determines the contents of <code>You (today)</code>.</li>
<li><code>&#x3A9;</code> is a function that observes you yesterday and decides whether to put $1,000,000 into the red box. Its value is either <em>filled</em> or <em>empty</em>.</li>
<li><code>You (today)</code> is your decision algorithm. It must output either <em>onebox</em> or <em>twobox</em>.</li>
<li><code>$</code> is $1,000,000 if <code>&#x3A9;</code>=<em>filled</em> plus $1,000 if <code>You (today)</code>=<em>twobox</em>.</li>
</ul>
<p>We must decide whether to output <em>onebox</em> (take only the red box) or <em>twobox</em> (take both boxes). Given some expectation p that <code>&#x3A9;</code>=<em>filled</em>, causal decision theory reasons as follows:</p>
<ol>
<li>The decision node is <code>You (today)</code>.</li>
<li>The available actions are <em>onebox</em> and <em>twobox</em>.</li>
<li>The utility node is <code>$</code>.</li>
<li>Set <code>You (today)</code>=<code>const onebox</code></li>
</ol>
<ul>
<li><code>$</code> = 1,000,000p</li>
</ul>
<ol start="5">
<li>Set <code>You (today)</code>=<code>const twobox</code></li>
</ol>
<ul>
<li><code>$</code> = 1,000,000p + 1,000</li>
</ul>
<p>Thus, <em>no matter what probability p is</em>, CDT takes both boxes, because the action <em>twobox</em> results in an extra $1000.</p>
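<p>Here is the same calculation as a small Python sketch. Note that p is held fixed across both counterfactuals, exactly as CDT prescribes:</p>
<pre><code># CDT's counterfactual expectations in Newcomb's problem, as computed above.
# p is the agent's credence that the red box is filled; CDT's surgery leaves
# it untouched when a different action is swapped in.
def cdt_value(action, p):
    box = 1_000_000 * p                        # expected contents of the red box
    bonus = 1_000 if action == "twobox" else 0
    return box + bonus

for p in (0.0, 0.5, 0.99):
    print(p, cdt_value("onebox", p), cdt_value("twobox", p))
# twobox beats onebox by exactly $1,000 at every p, so CDT two-boxes.
</code></pre>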
<h2 id="theexcuse">The Excuse</h2>
<p>This is, of course, the wrong answer.</p>
<p>Because &#x3A9; is a really good predictor, and because &#x3A9; only gives a million dollars to people who take one box, if you want a million dollars then you had better only take one box.</p>
<p>Where did CDT go wrong?</p>
<p>Causal decision theorists sometimes argue that <em>nothing</em> went wrong. It is &quot;rational&quot; to take both boxes, because otherwise you&apos;re leaving $1,000 on the table. Reasoning about how you have to take one box so that that box will be filled is nonsense, because by the time you&apos;re making your decision, &#x3A9; has already left. (This argument can be found at <a href="http://books.google.com/books?id=LYTMhPzCUxYC&amp;q=rachel#v=onepage&amp;q&amp;f=false">Foundations of Causal Decision Theory, p152</a>.)</p>
<p>Of course, these people will agree that the causal decision theorists walk away with less money. But they&apos;ll tell you that this is not their fault: they are making the &quot;rational&quot; choice, and &#x3A9; has decided to punish people who are acting &quot;rationally&quot;.</p>
<h2 id="thecounter">The Counter</h2>
<p>There is some merit to this complaint. <em>No matter how you make decisions</em>, there is at least one decision problem where you do poorly and everybody else does well. To illustrate, consider the following game: every player submits a computer program that has to choose whether to press the green button or the blue button (by outputting either <code>green</code> or <code>blue</code>). Then I check which button your program is going to press. Then I give $10 to everyone whose program presses the opposite button (the one that yours does not press).</p>
<p>Clearly, many people can win $10 in this game. Also clearly, you cannot. In this game, no matter how you choose, you lose. I&apos;m punishing <em>your decision algorithm</em> specifically.</p>
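<p>For concreteness, here is a sketch of the host&apos;s procedure (the player names and the submitted programs are made up for illustration):</p>
<pre><code># The bluegreen game described above: the host inspects what *your* program
# outputs, then pays everyone whose program outputs the other color.
def run_bluegreen(your_program, other_programs):
    your_color = your_program()
    winning_color = "blue" if your_color == "green" else "green"
    payouts = {"you": 0}                 # by construction, you never press the opposite of yourself
    for name, program in other_programs.items():
        payouts[name] = 10 if program() == winning_color else 0
    return payouts

print(run_bluegreen(lambda: "green",
                    {"alice": lambda: "blue", "bob": lambda: "green"}))
# {'you': 0, 'alice': 10, 'bob': 0}: the game punishes your algorithm specifically.
</code></pre>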
<p>I&apos;m sympathetic to the claim that there are games where the host punishes you unfairly. However, Newcomb&apos;s problem is <em>not</em> unfair in this sense.</p>
<p>&#x3A9; is not punishing <em>people who act like you</em>; &#x3A9; is punishing <em>people who take two boxes</em>. (Well, I mean, &#x3A9; is giving $1,000 to those people, but &#x3A9; is withholding the $1,000,000 that it gives to oneboxers, so I&apos;ll keep calling it &quot;punishment&quot;.)</p>
<p>You <em>can&apos;t</em> win at the bluegreen game above. You <em>can</em> win on Newcomb&apos;s problem. All you have to do is <em>only take one box</em>.</p>
<p>Causal decision theorists claim that it&apos;s not <em>rational</em> to take one box, because that leaves $1,000 on the table even though your choice can no longer affect the outcome. They claim that &#x3A9; is punishing people who act reasonably. They can&apos;t just <em>decide</em> to stop being reasonable just because &#x3A9; is rewarding unreasonable behavior in this case.</p>
<p>To which I say: fine. But pretend I wasn&apos;t motivated by adherence to some archaic definition of &quot;reasonableness&quot; which clearly fails systematically in this scenario, but that I was <em>instead</em> motivated by a desire to succeed. <em>Then</em> how should I make decisions? Because in these scenarios, I clearly should not use causal decision theory.</p>
<h2 id="thefailure">The Failure</h2>
<p>In fact, I can go further. I can <em>point</em> to the place where CDT goes wrong. CDT goes wrong when it reasons about the outcome of its action &quot;no matter what the probability p that &#x3A9; fills the box&quot;.</p>
<p><em>&#x3A9;&apos;s choice to fill the box depends upon your decision algorithm.</em></p>
<p>If your algorithm chooses to take one box, then the probability that <code>&#x3A9;</code>=<em>filled</em> is high. If your algorithm chooses to take two boxes, then the probability that <code>&#x3A9;</code>=<em>filled</em> is low.</p>
<p>CDT&apos;s reasoning neglects this fact, because <code>&#x3A9;</code> is causally disconnected from <code>You (today)</code>. CDT reasons that its choice can no longer affect &#x3A9;, but then <em>mistakenly</em> assumes that this means the probability of <code>&#x3A9;</code>=<em>filled</em> is independent from <code>You (today)</code>.</p>
<p>&#x3A9; makes its decision based on knowledge of what <em>the algorithm</em> inside <code>You (today)</code> looks like. When CDT reasons about what would happen if it took one box, it considers what the world would look like if (counterfactually) <code>You (today)</code> was filled with an algorithm that, <em>instead of being CDT</em>, always returned <code>onebox</code>. CDT changes the contents of <code>You (today)</code> and nothing else, and sees what would happen. But this is a bad counterfactual! If, instead of implementing CDT, you implemented an algorithm that always took one box, <em>then &#x3A9; would act differently</em>.</p>
<p>CDT assumes it can change <code>You (today)</code> without affecting <code>&#x3A9;</code>. But because &#x3A9;&apos;s choice depends upon the <em>contents</em> of <code>You (today)</code>, CDT&apos;s method of constructing counterfactuals destroys some of the structure of the scenario (namely, the connection between &#x3A9;&apos;s choice and your algorithm).</p>
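<p>A tiny model of the graph makes the mistake visible. The encoding below (the template as a function, &#x3A9; as a function of that template) is just one way to cash out the graph above:</p>
<pre><code># Omega looks at the agent's algorithm (the template), not at the runtime action node.
def omega(template):
    return "filled" if template() == "onebox" else "empty"

def payoff(box, action):
    return (1_000_000 if box == "filled" else 0) + (1_000 if action == "twobox" else 0)

template = lambda: "twobox"          # suppose the agent's algorithm in fact two-boxes

# CDT's counterfactual: swap the action at You (today), hold Omega's node fixed.
cdt_onebox = payoff(omega(template), "onebox")            # 0: the box stays empty
# What would actually happen if the template itself were a one-boxer:
real_onebox = payoff(omega(lambda: "onebox"), "onebox")   # 1000000

print(cdt_onebox, real_onebox)
</code></pre>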
<p>In fact, take a look at the graph for Newcomb&apos;s problem and compare it to the graph for the Mirror Token Trade from last post:</p>
<p><img src="https://mindingourway.com/content/images/2014/Sep/token-mirror.png" alt loading="lazy"></p>
<p>These are the same graph, and CDT is making the same error. It&apos;s assuming that everything which is <em>causally disconnected</em> from it is <em>independent</em> of it.</p>
<p>In both Newcomb&apos;s original problem and in the Mirror Token Trade, this assumption is violated. &#x3A9;&apos;s choice is <em>logically</em> connected to your choice, even though your choice is causally disconnected from &#x3A9;&apos;s.</p>
<p>In any scenario where there are logical non-causal connections between your action node and other nodes, CDT&apos;s method of counterfactual reasoning can fail.</p>
<h2 id="thesolution">The Solution</h2>
<p>The solution, of course, is to respect the logical connection between &#x3A9;&apos;s choice and your own. I would take one box, because in this game, &#x3A9; gives $1,000,000 to people who use the strategy &quot;take one box&quot; and $1,000 to people who use the strategy &quot;take two boxes&quot;.</p>
<p>There are numerous intuition pumps by which you can arrive at this solution. For example, you could notice that you would like to precommit now to take only one box, and then &quot;implement&quot; that precommitment (by becoming, now, the type of person who only takes one box).</p>
<p>Alternatively, you can imagine that &#x3A9; gets its impeccable predictions by simulating people. Then, when you find yourself facing down &#x3A9;, you can&apos;t be sure whether you&apos;re in the simulation or reality.</p>
<p>It doesn&apos;t matter which intuition pump you use. It matters <em>which choice you take</em>: and CDT takes the wrong one.</p>
<p>We do, in fact, have alternative decision theories that perform well in Newcomb&apos;s problem (some better than others), but we&apos;ll get to that later. For now, we&apos;re going to keep examining CDT.</p>
<h2 id="whyicare">Why I care</h2>
<p>Both the Mirror Token Trade and Newcomb&apos;s Problem share a similar structure: there is another agent in the environment that knows how you reason.</p>
<p>We can generalize this to a whole <em>class</em> of problems, known as &quot;Newcomblike problems&quot;.</p>
<p>In the Mirror Token Trade, the other agent is a copy of you who acts the same way you do. If you ever find yourself playing a token trade against a copy of yourself, you had better trade your token, even if you&apos;re completely selfish.</p>
<p>In Newcomb&apos;s original problem, &#x3A9; knows how you act, and uses this to decide whether or not to give you a million dollars. If you want the million, you had better take one box.</p>
<p>These problems may seem like weird edge cases. After all, in the Mirror Token Trade we assume that you are perfectly deterministic and that you can be copied. And in Newcomb&apos;s problem, we presume the existence of a powerful alien capable of perfectly predicting your action.</p>
<p>It&apos;s a chaotic world, and perfect prediction of a human is somewhere between &quot;pretty dang hard&quot; and &quot;downright impossible&quot;.</p>
<p>So fine: CDT fails systematically on Newcomblike problems. But is that so bad? We&apos;re pretty unlikely to meet &#x3A9; anytime soon. Failure on Newcomblike problems may be a flaw, but if CDT works everywhere <em>except</em> on crazy scenarios like Newcomb&apos;s problem then it&apos;s hardly a fatal flaw.</p>
<p>But while these two example problems are simple scenarios where the other agents are &quot;perfect&quot; copies or &quot;perfect&quot; predictors, there are many more feasible Newcomblike scenarios.</p>
<p><em>Any</em> scenario where another agent has knowledge about your decision algorithm (even if that knowledge is imperfect, even if they lack the capability to simulate you) is a Newcomblike problem.</p>
<p>In fact, in the next post (seriously, I&apos;m going to get to my original point soon, I promise) I&apos;ll argue that <em>Newcomblike problems are the norm</em>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Causal decision theory is unsatisfactory]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p style="font-style: italic; opacity: 0.5">[Note: This post is part of a set of three where thoughts related to my job leaked into the blog. They don&apos;t really fit with the surrounding posts; you may want to skip them.]</p>
<h1 id="1">1</h1>
<p>Choice is a crucial component of reasoning. Given a set of available actions,</p>]]></description><link>https://mindingourway.com/causal-reasoning-is-unsatisfactory/</link><guid isPermaLink="false">5f94cfbaca8899827ef2a26b</guid><category><![CDATA[decision-theory]]></category><dc:creator><![CDATA[Nate Soares]]></dc:creator><pubDate>Sun, 07 Sep 2014 00:15:29 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p style="font-style: italic; opacity: 0.5">[Note: This post is part of a set of three where thoughts related to my job leaked into the blog. They don&apos;t really fit with the surrounding posts; you may want to skip them.]</p>
<h1 id="1">1</h1>
<p>Choice is a crucial component of reasoning. Given a set of available actions, which action do you take? Do you go out to the movies or stay in with a book? Do you capture the bishop or fork the king? Somehow, we must reason about our options and choose the best one.</p>
<p>Of course, we humans don&apos;t consciously weigh all of our actions. Many of our choices are made subconsciously. (Which letter will I type next? When will I get a drink of water?) Yet even if the choices are made by subconscious heuristics, they must be made somehow.</p>
<p>In practice, decisions are often made on autopilot. We don&apos;t weigh every available alternative when it&apos;s time to prepare for work in the morning, we just pattern-match the situation and carry out some routine. This is a shortcut that saves time and cognitive energy. Yet, no matter how much we stick to routines, we still spend <em>some</em> of our time making hard choices, weighing alternatives, and predicting which available action will serve us best.</p>
<p>The study of how to make these sorts of decisions is known as <em>Decision Theory</em>. This field of research is closely intertwined with Economics, Philosophy, Mathematics, and (of course) Game Theory. It will be the subject of today&apos;s post.</p>
<h1 id="2">2</h1>
<p>Decisions about what action to choose necessarily involve <em>counterfactual reasoning</em>, in the sense that we reason about what <em>would</em> happen if we took actions which we (in fact) will not take.</p>
<p>We all have some way of performing this counterfactual reasoning. Most of us can visualize what would happen if we did something that we aren&apos;t going to do. For example, consider shouting &quot;PUPPIES!&quot; at the top of your lungs right now. I bet you won&apos;t do it, but I <em>also</em> bet that you can picture the results.</p>
<p>One of the major goals of decision theory is to formalize this counterfactual reasoning: if we had unlimited resources then how would we compute alternatives so as to ensure that we always pick the best possible action? This question is harder than it looks, for reasons explored below: counterfactual reasoning can encounter many pitfalls.</p>
<p>A second major goal of decision theory is this: human counterfactual reasoning sometimes runs afoul of those pitfalls, and a formal understanding of decision theory can help humans make better decisions. It&apos;s no coincidence that Game Theory was developed during the cold war!</p>
<p>(<em>My</em> major goal in decision theory is to understand it as part of the process of learning how to construct a machine intelligence that reliably reasons well. This provides the motivation to nitpick existing decision theories. If they&apos;re broken then we had better learn that sooner rather than later.)</p>
<h1 id="3">3</h1>
<p>Sometimes, it&apos;s easy to choose the best available action. You consider each action in turn, and predict the outcome, and then pick the action that leads to the best outcome. This can be difficult when accurate predictions are unavailable, but that&apos;s not the problem that we address with decision theory. The problem we address is that sometimes <em>it is difficult to reason about what would happen if you took a given action</em>.</p>
<p>For example, imagine that you know of a fortune teller who can reliably read palms to divine destinies. Most people who get a good fortune wind up happy, while most people who get a bad fortune wind up sad. It&apos;s been experimentally verified that she can use information on palms to reliably make inferences about the palm-owner&apos;s destiny.</p>
<p>So... should you <a href="http://www.thedailybeast.com/articles/2013/07/12/your-future-is-in-the-palm-of-your-surgeon-s-hand.html">get palm surgery to change your fate</a>?</p>
<p>If you&apos;re bad at reasoning about counterfactuals, you might reason as follows:</p>
<blockquote>
<p>Nine out of ten people who get a good fortune do well in life. I had better use the palm surgery to ensure a good fortune!</p>
</blockquote>
<p>Now admittedly, if palm reading is shown to work, the first thing you should do is check whether you can alter destiny by altering your palms. However, <em>assuming</em> that changing your palm doesn&apos;t drastically affect your fate, this sort of reasoning is quite silly.</p>
<p>The above reasoning process conflates <em>correlation</em> with <em>causal control</em>. The above reasoner gets palm surgery because they want a good destiny. But while your palm may give <em>information</em> about your destiny, it does not <em>control</em> your fate.</p>
<p>If we find out that we&apos;ve been using this sort of reasoning, we can usually do better by considering actions only on the basis of what they <em>cause</em>.</p>
<h1 id="4">4</h1>
<p>This idea leads us to <em>causal decision theory (CDT)</em>, which demands that we consider actions based only upon the causal effects of those actions.</p>
<p>Actions are considered using <em>causal counterfactual reasoning</em>.  Though causal counterfactual reasoning can be formalized in many ways, we will consider graphical models specifically. Roughly, a <em>causal graph</em> is a graph where the world model is divided into a series of nodes, with arrows signifying the causal connections between the nodes. For a more formal introduction, you&apos;ll need to consult a <a href="http://www.amazon.com/Causality-Reasoning-Inference-Judea-Pearl/dp/0521773628">textbook</a>. As an example, here&apos;s a causal graph for the palm-reading scenario above:</p>
<p><img src="https://mindingourway.com/content/images/2014/Sep/palm-surgery.png" alt loading="lazy"></p>
<p>The choice is denoted by the dotted <code>Surgery?</code> node. Your payoff is the <code>$</code> diamond. Each node is specified as a function of all nodes causing it.</p>
<p>For example, in a very simple deterministic version of the palm-reading scenario, the nodes could be specified as follows:</p>
<ol>
<li><code>Surgery?</code> is a program implementing the agent, and must output either <em>yes</em> or <em>no</em>.</li>
<li><code>Destiny</code> is either <em>good</em> or <em>bad</em>.</li>
<li><code>Fortune</code> is always <em>good</em> if <code>Surgery?</code> is <em>yes</em>, and is the same as <code>Destiny</code> otherwise.</li>
<li><code>$</code> is $100 if <code>Destiny</code> is <em>good</em> and $10 otherwise, minus $10 if <code>Surgery?</code> is <em>yes</em>. Surgery is expensive!</li>
</ol>
<p>Now let&apos;s say that you expect even odds on whether your destiny is good or bad, i.e., the probability that <code>Destiny</code>=<em>good</em> is 50%.</p>
<p>If the <code>Surgery?</code> node is a program that implements causal decision theory, then that program will choose between <em>yes</em> and <em>no</em> using the following reasoning:</p>
<ul>
<li>The action node is <code>Surgery?</code></li>
<li>The available actions are <em>yes</em> and <em>no</em></li>
<li>The payoff node is <code>$</code></li>
<li>Consider the action <em>yes</em>
<ul>
<li>Replace the value of <code>Surgery?</code> with a function that always returns <em>yes</em></li>
<li>Calculate the value of <code>$</code></li>
<li>We would get $90 if <code>Destiny</code>=<em>good</em></li>
<li>We would get $0 if <code>Destiny</code>=<em>bad</em></li>
<li>This is $45 in expectation.</li>
</ul>
</li>
<li>Consider the action <em>no</em>
<ul>
<li>Replace the value of <code>Surgery?</code> with a function that always returns <em>no</em></li>
<li>Calculate the value of <code>$</code></li>
<li>We would get $100 if <code>Destiny</code>=<em>good</em></li>
<li>We would get $10 if <code>Destiny</code>=<em>bad</em></li>
<li>This is $55 in expectation.</li>
</ul>
</li>
<li>Return <em>no</em>, as that yields the higher value of <code>$</code>.</li>
</ul>
<p>More generally, the CDT reasoning procedure works as follows:</p>
<ol>
<li>Identify your action node <strong>A</strong></li>
<li>Identify your available actions <em>Acts</em>.</li>
<li>Identify your payoff node <strong>U</strong>.</li>
<li>For each action <em>a</em>
<ul>
<li>Set the action node <strong>A</strong> to <em>a</em> by replacing the value of <strong>A</strong> with a function that ignores its input and returns <em>a</em></li>
<li>Evaluate the expectation of <strong>U</strong> given that <strong>A</strong>=a</li>
</ul>
</li>
<li>Take the <em>a</em> with the highest associated expectation of <strong>U</strong>.</li>
</ol>
<p>Notice how CDT evaluates counterfactuals by setting the value of its action node in a causal graph, and then calculating its payoff accordingly. Done correctly, this allows a reasoner to figure out the causal implications of taking a specific action, without getting confused by nodes like <code>Destiny</code>.</p>
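<p>Here is a minimal Python sketch of that procedure, applied to the palm-reading graph above. The encoding (plain functions for the inner nodes, a probability table for <code>Destiny</code>) is just one convenient choice, not part of the theory:</p>
<pre><code># The palm-reading graph: Destiny is exogenous, the other nodes are functions
# of their parents, as specified above.
P_DESTINY = {"good": 0.5, "bad": 0.5}

def fortune(surgery, destiny):
    # Fortune carries information but does not feed into the payoff node.
    return "good" if surgery == "yes" else destiny

def dollars(surgery, destiny):
    base = 100 if destiny == "good" else 10
    return base - (10 if surgery == "yes" else 0)   # surgery costs $10

def cdt_expected_payoff(action):
    # Step 4: overwrite the action node with a constant, then take the
    # expectation of the payoff node over the remaining uncertainty.
    return sum(prob * dollars(action, destiny)
               for destiny, prob in P_DESTINY.items())

values = {a: cdt_expected_payoff(a) for a in ("yes", "no")}
print(values, max(values, key=values.get))
# {'yes': 45.0, 'no': 55.0} no: CDT correctly declines the surgery.
</code></pre>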
<p>CDT is the academic standard decision theory. Economics, statistics, and philosophy all assume (or, indeed, <em>define</em>) that rational reasoners use causal decision theory to choose between available actions.</p>
<p>Furthermore, narrow AI systems which consider their options using this sort of causal counterfactual reasoning are implicitly acting like they use causal decision theory.</p>
<p>Unfortunately, causal decision theory is broken.</p>
<h1 id="5">5</h1>
<p>Before we dive into the problems with CDT, let&apos;s flesh it out a bit more. Game Theorists often talk about scenarios in terms of tables that list the payoffs associated with each action. This might seem a little bit like cheating, because it often takes a lot of hard work to determine what the payoff of any given action is. However, these tables will allow us to explore some simple examples of how causal reasoning works.</p>
<p>I will describe a variant of the classic <a href="http://en.wikipedia.org/wiki/Prisoner&apos;s_dilemma">Prisoner&apos;s Dilemma</a> which I refer to as the <em>token trade</em>. There are two players in two separate rooms, one red and one green. The red player starts with the green token, and vice versa. Each must decide (in isolation, without communication) whether or not to give their token to me, in which case I will give it to the other player.</p>
<p>Afterwards, they may cash their token out. The red player gets $200 for cashing out the red token and $100 for the green token, and vice versa. The payoff table looks like this:</p>
<table>
	<tr>
    	<td></td>
    	<td style="color:red; text-align: center; border-right: 1px solid black; border-left: 1px solid black;">Give</td>
        <td style="color:red; text-align: center; border-right: 1px solid black;">Keep</td>
    </tr>
    <tr>
    	<td style="color:green; text-align: center; border-top: 1px solid black;">Give</td>
        <td style="border: 1px solid black; text-align: center;">(
        	<span style="color:green">$200</span>,
            <span style="color:red">$200</span>
            )
        </td>
        <td style="border: 1px solid black; text-align: center;">(
        	<span style="color:green">$0</span>,
            <span style="color:red">$300</span>
            )
        </td>
    </tr>
    <tr>
        <td style="color:green; text-align: center; border-top: 1px solid black; border-bottom: 1px solid black;">Keep</td>
        <td style="border: 1px solid black; text-align: center;">(
        	<span style="color:green">$300</span>,
            <span style="color:red">$0</span>
            )
        </td>
        <td style="border: 1px solid black; text-align: center;">(
        	<span style="color:green">$100</span>,
            <span style="color:red">$100</span>
            )
        </td>
    </tr>
</table>
<p>For example, if the green player gives the red token away, and the red player keeps the green token, then the red player gets $300 while the green player gets nothing.</p>
<p>Now imagine a causal decision theorist facing this scenario. Their causal graph might look something like this:</p>
<p><img src="https://mindingourway.com/content/images/2014/Sep/token-trade.png" alt loading="lazy"></p>
<p>Let&apos;s evaluate this using CDT. The action node is <code>Give?</code>, the payoff node is <code>$</code>. We must evaluate the expectation of <code>$</code> given <code>Give?</code>=<em>yes</em> and <code>Give?</code>=<em>no</em>. This, of course, depends upon the expected value of <code>TheirDecision</code>.</p>
<p>In Game Theory, we usually assume that the opponent is also reasoning using something like causal decision theory. Then we can reason about <code>TheirDecision</code> given that they are doing similar reasoning about our decision and so on. This threatens to lead to infinite regress, but in fact there are some tricks you can use to guarantee at least one equilibrium. (These are the famous <a href="http://en.wikipedia.org/wiki/Nash_equilibrium">Nash equilibria</a>.) This sort of reasoning requires both agents to use a modified version of the CDT procedure which we&apos;re going to ignore today. Because while <em>most</em> scenarios with multiple agents require more complicated reasoning, the token trade is an especially simple scenario that allows us to ignore these complications.</p>
<p>In the token trade, the expected value of <code>TheirDecision</code> doesn&apos;t matter to a CDT agent. No matter what the probability p of <code>TheirDecision</code>=<em>give</em> happens to be, the CDT agent will do the following reasoning:</p>
<ul>
<li>Change <code>Give?</code> to be a constant function returning <em>yes</em>
<ul>
<li>If <code>TheirDecision</code>=<em>give</em> then we get $200</li>
<li>If <code>TheirDecision</code>=<em>keep</em> then we get $0</li>
<li>We get 200p dollars in expectation.</li>
</ul>
</li>
<li>Change <code>Give?</code> to be a constant function returning <em>no</em>
<ul>
<li>If <code>TheirDecision</code>=<em>give</em> then we get $300</li>
<li>If <code>TheirDecision</code>=<em>keep</em> then we get $100</li>
<li>We get 300p + 100(1-p) dollars in expectation.</li>
</ul>
</li>
</ul>
<p>Obviously, 300p+100(1-p) will be larger than 200p, <em>no matter what probability p is</em>.</p>
<p>A CDT agent in the token trade must have an expectation about <code>TheirDecision</code> captured by a probability p that they will give their token, and we have just shown that no matter what that p is, the CDT agent will keep their token.</p>
<p>When something like this occurs (where <code>Give?</code>=<em>no</em> is better regardless of the value of <code>TheirDecision</code>) we say that <code>Give?</code>=<em>no</em> is a &quot;dominant strategy&quot;. CDT executes this dominant strategy, and keeps its token.</p>
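<p>The dominance argument is short enough to check directly; here is a small sketch of the two expectations computed above:</p>
<pre><code># CDT's two counterfactual expectations in the token trade, for any p.
def cdt_value(my_action, p):
    if my_action == "give":
        return 200 * p                  # $200 if they give, $0 if they keep
    return 300 * p + 100 * (1 - p)      # $300 if they give, $100 if they keep

for p in (0.0, 0.5, 1.0):
    print(p, cdt_value("give", p), cdt_value("keep", p))
# "keep" wins by exactly $100 at every p, so CDT keeps its token.
</code></pre>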
<h1 id="6">6</h1>
<p>Of course, this means that each player will get $100, when they could have both received $200. This may seem unsatisfactory. Both players would agree that they could do better by trading tokens. Why don&apos;t they coordinate?</p>
<p>The classic response is that the token trade (better known as the Prisoner&apos;s Dilemma) is a game that explicitly disallows coordination. If players do have an opportunity to coordinate (or even if they expect to play the game multiple times) then they can (and do!) do better than this.</p>
<p>I won&apos;t object much here, except to note that this answer is still unsatisfactory. CDT agents fail to cooperate on a one-shot Prisoner&apos;s Dilemma. That&apos;s a bullet that causal decision theorists willingly bite, but don&apos;t forget that it&apos;s still a bullet.</p>
<h1 id="7">7</h1>
<p>Failure to cooperate on the one-shot Prisoner&apos;s Dilemma is not necessarily a problem. Indeed, if you ever find yourself playing a token trade against an opponent using CDT, then you had better hold on to your token, because they surely aren&apos;t going to give you theirs.</p>
<p>However, CDT <em>does</em> fail on a very similar problem where it seems insane to fail. CDT fails at the token trade <em>even when it knows it is playing against a perfect copy of itself.</em></p>
<p>I call this the &quot;mirror token trade&quot;, and it works as follows: first, I clone you. Then, I make you play a token trade against yourself.</p>
<p>In this case, your opponent is guaranteed to pick exactly the same action that you pick. (Well, mostly: the game isn&apos;t completely symmetric. If you want to nitpick, consider that instead of playing against a copy of yourself, you must write a red/green colorblind deterministic computer program which will play against a copy of itself.)</p>
<p>The causal graph for this game looks like this:</p>
<p><img src="https://mindingourway.com/content/images/2014/Sep/token-mirror.png" alt loading="lazy"></p>
<p>Because I&apos;ve swept questions of determinism and asymmetry under the rug, both decisions will be identical. The red copy should trade its token, because that&apos;s guaranteed to get it the red token (and it&apos;s the only way to do so).</p>
<p>Yet CDT would have you evaluate an action by considering what happens if you replace the node <code>Give?</code> with a function that always returns that action. But this intervention does not affect the opponent, which reasons the same way! Just as before, a CDT agent treats <code>TheirDecision</code> as if it has some probability of being <em>give</em> that is independent from the agent&apos;s action, and reasons that &quot;I always keep my token while they act independently&quot; dominates &quot;I always give my token while they act independently&quot;.</p>
<p>Do you see the problem here? CDT is evaluating its action by changing the value of its action node <code>Give?</code>, assuming that this only affects things that are <em>caused</em> by <code>Give?</code>. The agent reasons counterfactually by considering &quot;what if <code>Give?</code> were a constant function that always returned <em>yes</em>?&quot; while failing to note that overwriting <code>Give?</code> in this way neglects the fact that <code>Give?</code> and <code>TheirDecision</code> are necessarily equal.</p>
<p>Or, to put it another way, CDT evaluates counterfactuals <em>assuming</em> that all nodes uncaused by its action are independent of its action. It thinks it can change its action and only look at the downstream effects. This can break down when there are acausal connections between the nodes.</p>
<p>After the red agent has been created from the template, its decision no longer <em>causally</em> affects the decision of the green agent. But both agents will do the same thing! There is a <em>logical</em> connection, even though there is no causal connection. It is these logical connections that are ignored by causal counterfactual reasoning.</p>
<p>This is a subtle point, but an important one: the values of <code>Give?</code> and <code>TheirDecision</code> are logically connected, but CDT&apos;s method of reasoning about counterfactuals neglects this connection.</p>
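<p>A small sketch, using the payoffs from the table above, makes the connection explicit: when both action nodes are filled in by the same deterministic program, only the diagonal of the payoff table is ever reachable.</p>
<pre><code># Mirror token trade: both players run the same deterministic program, so only
# the diagonal outcomes of the payoff table are reachable.
PAYOFFS = {("give", "give"): 200, ("give", "keep"): 0,
           ("keep", "give"): 300, ("keep", "keep"): 100}

def mirror_outcome(program):
    my_action = program()
    their_action = program()            # same template, same output
    return PAYOFFS[(my_action, their_action)]

print(mirror_outcome(lambda: "give"))   # 200
print(mirror_outcome(lambda: "keep"))   # 100
# CDT's surgery on Give? alone imagines the off-diagonal cells ($0 and $300),
# but no choice of program ever reaches them.
</code></pre>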
<h1 id="8">8</h1>
<p>This is a known failure mode for causal decision theory. The mirror token trade is an example of what&apos;s known as a &quot;Newcomblike problem&quot;.</p>
<p>Decision theorists occasionally dismiss Newcomblike problems as edge cases, or as scenarios specifically designed to punish agents for being &quot;rational&quot;. I disagree.</p>
<p>And finally, eight sections in, I&apos;m ready to articulate the original point: Newcomblike problems aren&apos;t a special case. <em>They&apos;re the norm.</em></p>
<p>But this post has already run on for far too long, so that discussion will have to wait until next time.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>