<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[productivity - Minding our way]]></title><description><![CDATA[to the heavens]]></description><link>https://mindingourway.com/</link><image><url>https://mindingourway.com/favicon.png</url><title>productivity - Minding our way</title><link>https://mindingourway.com/</link></image><generator>Ghost 4.46</generator><lastBuildDate>Sat, 04 Apr 2026 10:12:25 GMT</lastBuildDate><atom:link href="https://mindingourway.com/tag/productivity/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Dark Arts of Rationality]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><span style="opacity: .3">[Note: backported from <a href="http://lesswrong.com/lw/jhs/dark_arts_of_rationality/">LessWrong</a>]</span></p>
<p><strong>Note: the author now disclaims this post, and asserts that his past self was insufficiently skilled in the art of rationality to &quot;take the good and discard the bad&quot; even when you don&apos;t yet know how to justify it. You can, of</strong></p>]]></description><link>https://mindingourway.com/dark-arts-of-rationality/</link><guid isPermaLink="false">5f94cfbaca8899827ef2a267</guid><category><![CDATA[backport]]></category><category><![CDATA[productivity]]></category><dc:creator><![CDATA[Nate Soares]]></dc:creator><pubDate>Sun, 19 Jan 2014 02:47:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><span style="opacity: .3">[Note: backported from <a href="http://lesswrong.com/lw/jhs/dark_arts_of_rationality/">LessWrong</a>]</span></p>
<p><strong>Note: the author now disclaims this post, and asserts that his past self was insufficiently skilled in the art of rationality to &quot;take the good and discard the bad&quot; even when you don&apos;t yet know how to justify it. You can, of course, get all the benefits described below, without once compromising your epistemics.</strong></p>
<hr>
<p>Today, we&apos;re going to talk about Dark rationalist techniques: productivity tools which seem incoherent, mad, and downright irrational. These techniques include:</p>
<ol>
<li>Willful Inconsistency</li>
<li>Intentional Compartmentalization</li>
<li>Modifying Terminal Goals</li>
</ol>
<p>I expect many of you are already up in arms. It seems obvious that consistency is a virtue, that compartmentalization is a flaw, and that one should <em>never</em> modify their terminal goals.</p>
<p>I claim that these &apos;obvious&apos; objections are incorrect, and that all three of these techniques can be instrumentally rational.</p>
<p>In this article, I&apos;ll promote the strategic cultivation of false beliefs and condone mindhacking on the values you hold most dear. Truly, these are Dark Arts. I aim to convince you that sometimes, the benefits are worth the price.</p>
<p><a id="more"></a></p>
<h1 id="changingyourterminalgoals">Changing your Terminal Goals</h1>
<p>In many games there is no &quot;absolutely optimal&quot; strategy. Consider the <a href="http://wiki.lesswrong.com/wiki/Prisoner">Prisoner&apos;s Dilemma</a>. The optimal strategy depends entirely upon the strategies of the other players. <em>Entirely.</em></p>
<p>Intuitively, you may believe that there are some fixed &quot;rational&quot; strategies. Perhaps you think that even though complex behavior is dependent upon other players, there are still <em>some</em> constants, like &quot;Never cooperate with DefectBot&quot;. DefectBot always defects against you, so you should never cooperate with it. Cooperating with DefectBot would be insane. Right?</p>
<p>Wrong. If you find yourself on a playing field where everyone else is a <a href="http://intelligence.org/files/RobustCooperation.pdf">TrollBot</a> (players who cooperate with you if and only if you cooperate with DefectBot) then you should cooperate with DefectBots and defect against TrollBots.</p>
<p>Consider that. There are playing fields where you should <em>cooperate with DefectBot</em>, even though that looks completely insane from a na&#xEF;ve viewpoint. Optimality is not a feature of the strategy, it is a relationship between the strategy and the playing field.</p>
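<p>The arithmetic behind this claim is easy to check. Here is a minimal Python sketch (the payoff values T=5, R=3, P=1, S=0 are standard Prisoner's Dilemma numbers assumed for illustration, not taken from the linked paper):</p>

```python
# Toy model: you play one round against DefectBot (who always defects)
# and one round against each of n TrollBots. TrollBots cooperate with
# you iff you cooperated with DefectBot; you always defect against them.
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker (assumed values)

def my_payoff(my_move, their_move):
    if my_move == "C":
        return R if their_move == "C" else S
    return T if their_move == "C" else P

def total_payoff(cooperate_with_defectbot, n_trollbots=10):
    # Round against DefectBot, who defects no matter what.
    my_move_vs_db = "C" if cooperate_with_defectbot else "D"
    total = my_payoff(my_move_vs_db, "D")
    # Rounds against the TrollBots, whose move depends only on
    # what you did against DefectBot.
    troll_move = "C" if cooperate_with_defectbot else "D"
    total += n_trollbots * my_payoff("D", troll_move)
    return total

# Cooperating with DefectBot costs one point up front (S=0 instead of P=1),
# but earns T=5 instead of P=1 against every TrollBot on the field.
```

<p>With ten TrollBots on the field, cooperating with DefectBot scores 50 against 11 for the &quot;sane&quot; strategy: the local loss buys a much larger global gain.</p>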
<p>Take this lesson to heart: in certain games, there are strange playing fields where the optimal move looks <em>completely irrational</em>.</p>
<p>I&apos;m here to convince you that <em>life</em> is one of those games, and that you occupy a strange playing field&#xA0;<em>right now</em>.</p>
<hr>
<p>Here&apos;s a toy example of a strange playing field, which illustrates the fact that even your terminal goals are not sacred:</p>
<p>Imagine that you are completely self-consistent and have a utility function. For the sake of the thought experiment, pretend that your terminal goals are distinct, exclusive, orthogonal, and clearly labeled. You value your goals being achieved, but you have no preferences about <em>how</em> they are achieved or what happens afterwards (unless the goal explicitly mentions the past/future, in which case achieving the goal puts limits on the past/future). You possess at least two terminal goals, one of which we will call <code>A</code>.</p>
<p><a href="http://wiki.lesswrong.com/wiki/Omega">Omega</a> descends from on high and makes you an offer. Omega will cause your terminal goal <code>A</code> to become achieved over a certain span of time, without any expenditure of resources. As a price of taking the offer, you must switch out terminal goal <code>A</code> for terminal goal <code>B</code>. Omega guarantees that <code>B</code> is orthogonal to <code>A</code> and all your other terminal goals. Omega further guarantees that you will achieve <code>B</code> using less time and resources than you would have spent on <code>A</code>. Any other concerns you have are addressed via similar guarantees.</p>
<p>Clearly, you should take the offer. One of your terminal goals will be achieved, and while you&apos;ll be pursuing a new terminal goal that you (before the offer) don&apos;t care about, you&apos;ll come out ahead in terms of time and resources which can be spent achieving your other goals.</p>
<p>So the optimal move, in this scenario, is to change your terminal goals.</p>
<p><em>There are times when the optimal move of a rational agent is to hack its own terminal goals.</em></p>
<p>You may find this counter-intuitive. It helps to remember that &quot;optimality&quot; depends as much upon the playing field as upon the strategy.</p>
<p>Next, I claim that such scenarios are not restricted to toy games where Omega messes with your head. Humans encounter similar situations on a day-to-day basis.</p>
<hr>
<p>Humans often find themselves in a position where they should modify their terminal goals, and the reason is simple: our thoughts do not have direct control over our motivation.</p>
<p>Unfortunately for us, our &quot;motivation circuits&quot; can distinguish between terminal and instrumental goals. It is often easier to put in effort, experience inspiration, and work tirelessly when pursuing a terminal goal as opposed to an instrumental goal. It would be nice if this were not the case, but it&apos;s a <em>fact of our hardware</em>: we&apos;re going to do X more if we want to do X for its own sake as opposed to when we force X upon ourselves.</p>
<p>Consider, for example, a young woman who wants to be a rockstar. She wants the fame, the money, and the lifestyle: these are her &quot;terminal goals&quot;. She lives in some strange world where rockstardom is wholly dependent upon merit (rather than social luck and network effects), and decides that in order to become a rockstar she has to produce really good music.</p>
<p>But here&apos;s the problem: She&apos;s a human. Her conscious decisions don&apos;t directly affect her motivation.</p>
<p>In her case, it turns out that she can make better music when &quot;Make Good Music&quot; is a terminal goal as opposed to an instrumental goal.</p>
<p>When &quot;Make Good Music&quot; is an instrumental goal, she schedules practice time on a sitar and grinds out the hours. But she doesn&apos;t really <em>like</em> it, so she cuts corners whenever akrasia comes knocking. She lacks inspiration and spends her spare hours dreaming of stardom. Her songs are shallow and trite.</p>
<p>When &quot;Make Good Music&quot; is a terminal goal, music pours forth, and she spends every spare hour playing her sitar: not because she knows that she &quot;should&quot; practice, but because you couldn&apos;t pry her sitar from her cold dead fingers. She&apos;s not &quot;practicing&quot;, she&apos;s pouring out her soul, and no power in the &apos;verse can stop her. Her songs are emotional, deep, and moving.</p>
<p>It&apos;s obvious that she should adopt a new terminal goal.</p>
<p>Ideally, we would be just as motivated to carry out instrumental goals as we are to carry out terminal goals. In reality, this is not the case. As a human, your motivation system <em>does</em> discriminate between the goals that you feel obligated to achieve and the goals that you pursue as ends unto themselves.</p>
<p>As such, it is sometimes in your best interest to modify your terminal goals.</p>
<hr>
<p>Mind the terminology, here. When I speak of &quot;terminal goals&quot; I mean actions that feel like ends unto themselves. I am speaking of the stuff you wish you were doing when you&apos;re doing boring stuff, the things you do in your free time just because they are&#xA0;<em>fun</em>, the actions you don&apos;t need to justify.</p>
<p>This seems like the obvious meaning of &quot;terminal goals&quot; to me, but some of you may think of &quot;terminal goals&quot; more akin to self-endorsed morally sound end-values in some consistent utility function. I&apos;m not talking about those. I&apos;m not even convinced I have any.</p>
<p>Both types of &quot;terminal goal&quot; are susceptible to strange playing fields in which the optimal move is to change your goals, but it is only the former type of goal &#x2014; the actions that are simply&#xA0;<em>fun</em>, that need no justification &#x2014; which I&apos;m suggesting you tweak for instrumental reasons.</p>
<hr>
<p>I&apos;ve largely refrained from goal-hacking, personally. I bring it up for a few reasons:</p>
<ol>
<li>It&apos;s the easiest Dark Side technique to justify. It helps break people out of the mindset where they think optimal actions are the ones that look rational in a vacuum. Remember, optimality is a feature of the playing field. Sometimes cooperating with DefectBot is the best strategy!</li>
<li>Goal hacking segues nicely into the other Dark Side techniques which I use frequently, as you will see shortly.</li>
<li>I have met many people who would benefit from a solid bout of goal-hacking.</li>
</ol>
<p>I&apos;ve crossed paths with many a confused person who (without any explicit thought on their part) had really silly terminal goals. We&apos;ve all met people who are acting as if &quot;Acquire Money&quot; is a terminal goal, never noticing that money is almost entirely instrumental in nature. When you ask them &quot;but what would you do if money was no issue and you had a lot of time&quot;, all you get is a blank stare.</p>
<p>Even the <a href="http://wiki.lesswrong.com/wiki/Terminal_value">LessWrong Wiki entry</a> on terminal values describes a college student for whom university is instrumental, and getting a job is terminal. This seems like a clear-cut case of a <a href="http://lesswrong.com/lw/le/lost_purposes/">Lost Purpose</a>: a job seems clearly instrumental. And yet, we&apos;ve all met people who act as if &quot;Have a Job&quot; is a terminal value, and who then seem aimless and undirected after finding employment.</p>
<p>These people could use some goal hacking. You can argue that Acquire Money and Have a Job aren&apos;t &quot;really&quot; terminal goals, to which I counter that many people don&apos;t know their ass from their elbow when it comes to their own goals. Goal hacking is an important part of becoming a rationalist and/or improving mental health.</p>
<p>Goal-hacking in the name of consistency isn&apos;t really a Dark Side power. This power is only Dark when you use it like the musician in our example, when you adopt terminal goals for instrumental reasons. This form of goal hacking is less common, but can be very effective.</p>
<p>I recently had a personal conversation with <a href="http://lesswrong.com/user/Alexei/">Alexei</a>, who is earning to give. He noted that he was not entirely satisfied with his day-to-day work, and mused that perhaps goal-hacking (making &quot;Do Well at Work&quot; an end unto itself) could make him more effective, generally happier, and more productive in the long run.</p>
<p>Goal-hacking can be a powerful technique, when correctly applied. Remember, you&apos;re not in direct control of your motivation circuits. Sometimes, strange though it seems, the optimal action involves fooling <em>yourself</em>.</p>
<p>You don&apos;t get good at programming by sitting down and forcing yourself to practice for three hours a day. I mean, I suppose you <em>could</em> get good at programming that way. But it&apos;s much easier to get good at programming by <em>loving programming</em>, by being the type of person who spends every spare hour tinkering on a project. Because then it doesn&apos;t feel like practice, it feels like fun.</p>
<p>This is the power that you can harness, if you&apos;re willing to tamper with your terminal goals for instrumental reasons. As rationalists, we would prefer to dedicate to instrumental goals the same vigor that is reserved for terminal goals. Unfortunately, we find ourselves on a strange playing field where goals that feel justified in their own right win the lion&apos;s share of our attention.</p>
<p>Given this strange playing field, goal-hacking can be optimal.</p>
<p>You don&apos;t have to completely mangle your goal system. Our aspiring musician from earlier doesn&apos;t need to destroy her &quot;Become a Rockstar&quot; goal in order to adopt the &quot;Make Good Music&quot; goal. If you can successfully convince yourself to believe that something instrumental is an end unto itself (i.e., terminal), <em>while still believing that it is instrumental</em>, then more power to you.</p>
<p>This is, of course, an instance of Intentional Compartmentalization.</p>
<h1 id="intentionalcompartmentalization">Intentional Compartmentalization</h1>
<p>As soon as you endorse modifying your own terminal goals, Intentional Compartmentalization starts looking like a pretty good idea. If Omega offers to achieve <code>A</code> at the price of dropping <code>A</code> and adopting <code>B</code>, the ideal move is to take the offer after finding a way to not <em>actually</em> care about <code>B</code>.</p>
<p>A consistent agent cannot do this, but I have good news for you: You&apos;re a human. You&apos;re not consistent. In fact, you&apos;re <em>great</em> at being inconsistent!</p>
<p>You might expect it to be difficult to add a new terminal goal while still believing that it&apos;s instrumental. You may also run into strange situations where holding an instrumental goal as terminal <em>directly contradicts</em> other terminal goals.</p>
<p>For example, our aspiring musician might find that she makes even <em>better</em> music if &quot;Become a Rockstar&quot; is <em>not</em> among her terminal goals.</p>
<p>This means she&apos;s in trouble: She either has to drop &quot;Become a Rockstar&quot; and have a better chance at <em>actually becoming a rockstar</em>, or she has to settle for a decreased chance that she&apos;ll become a rockstar.</p>
<p>Or, rather, she would have to settle for one of these choices &#x2014; if she wasn&apos;t human.</p>
<p>I have good news! Humans are <em>really really</em> good at being inconsistent, and you can leverage this to your advantage. <a href="http://wiki.lesswrong.com/wiki/Compartmentalization">Compartmentalize</a>! Maintain goals that are &quot;terminal&quot; in one compartment, but which you know are &quot;instrumental&quot; in another, then simply never let those compartments touch!</p>
<p>This may sound completely crazy and irrational, but remember: <a href="http://prettyrational.com/61/">you aren&apos;t actually in control of your motivation system</a>. You find yourself on a strange playing field, and the optimal move may in fact require mental contortions that make epistemic rationalists shudder.</p>
<p>Hopefully you never run into this particular problem (holding contradictory goals in &quot;terminal&quot; positions), but this illustrates that there are scenarios where compartmentalization works in your favor. Of course we&apos;d <em>prefer</em>&#xA0;to have direct control of our motivation systems, but <em>given that we don&apos;t</em>, compartmentalization is a huge asset.</p>
<p>Take a moment and let this sink in before moving on.</p>
<p>Once you realize that compartmentalization is OK, you are ready to practice my second Dark Side technique: Intentional Compartmentalization. It has many uses outside the realm of goal-hacking.</p>
<p>See, motivation is a fickle beast. And, as you&apos;ll remember, your conscious choices are not directly attached to your motivation levels. You can&apos;t just <em>decide</em> to be more motivated.</p>
<p>At least, not directly.</p>
<p>I&apos;ve found that certain beliefs &#x2014; beliefs which I <em>know are wrong</em> &#x2014; can make me more productive. (On a related note, remember that <a href="http://lesswrong.com/lw/5t/can_humanism_match_religions_output/">religious organizations are generally more coordinated than rationalist groups</a>.)</p>
<p>It turns out that, under these false beliefs, I can tap into motivational reserves that are otherwise unavailable. The only problem is, I know that these beliefs are downright false.</p>
<p>I&apos;m just kidding, that&apos;s not actually a problem. Compartmentalization to the rescue!</p>
<p>Here&apos;s a couple example beliefs that I keep locked away in my mental compartments, bound up in chains. Every so often, when I need to be extra productive, I don my protective gear and enter these compartments. I never fully believe these things &#x2014; not globally, at least &#x2014; but I&apos;m capable of attaining &quot;local belief&quot;, of acting as if I hold these beliefs. This, it turns out, is enough.</p>
<h2 id="nothingisbeyondmygrasp">Nothing is Beyond My Grasp</h2>
<p>We&apos;ll start off with a tame belief, something that is soundly rooted in evidence outside of its little compartment.</p>
<p>I have a global belief, outside all my compartments, that nothing is beyond my grasp.</p>
<p>Others may understand things more easily or quickly than I do. People smarter than myself grok concepts with less effort than I. It may take me <em>years</em>&#xA0;to wrap my head around things that other people find trivial.&#xA0;However, there is no idea that a human has ever had that I cannot, <em>in principle</em>, grok.</p>
<p>I believe this with moderately high probability, just based on my own general intelligence and the fact that brains are so tightly clustered in mind-space. It may take me a hundred times the effort to understand something, but I can still understand it eventually. Even things that are beyond the grasp of a meager human mind, I will one day be able to grasp after I upgrade my brain. Even if there are limits imposed by reality, I could <em>in principle</em> overcome them if I had enough computing power. Given any finite idea, I could in theory become powerful enough to understand it.</p>
<p>This belief, itself, is not compartmentalized. What is compartmentalized is the <em>certainty</em>.</p>
<p>Inside the compartment, I believe that Nothing is Beyond My Grasp with 100% confidence. Note that this is ridiculous: there&apos;s no such thing as 100% confidence. At least, not in my global beliefs. But inside the compartments, while we&apos;re in la-la land, it helps to treat Nothing is Beyond My Grasp as raw, immutable <em>fact</em>.</p>
<p>You might think that it&apos;s sufficient to believe Nothing is Beyond My Grasp with very high probability. If that&apos;s the case, you haven&apos;t been listening: I <em>don&apos;t</em> actually believe Nothing is Beyond My Grasp with an extraordinarily high probability. I believe it with moderate probability, and then I&#xA0;<em>have a compartment</em> in which it&apos;s a certainty.</p>
<p>It would be <em>nice</em> if I never needed to use the compartment, if I could face down technical problems and incomprehensible lingo and being really out of my depth with a relatively high confidence that I&apos;m going to be able to make sense of it all. However, I&apos;m not in direct control of my motivation. And it turns out that, through some quirk in my psychology, it&apos;s easier to face down the oppressive feeling of being in <em>way over my head</em> if I have this rock-solid &quot;belief&quot;&#xA0;that Nothing is Beyond My Grasp.</p>
<p>This is what the compartments are good for: I don&apos;t actually believe the things inside them, but I can still <em>act as if I do</em>. That ability allows me to face down challenges that would be difficult to face down otherwise.</p>
<p>This compartment was largely constructed with the help of <a href="http://en.wikipedia.org/wiki/The_Phantom_Tollbooth">The Phantom Tollbooth</a>: it taught me that there are certain impossible tasks you can do if you think they&apos;re possible. It&apos;s not always enough to know that if I believe I can do a thing, then I have a higher probability of being able to do it. I get an extra boost from believing I can do&#xA0;<em>anything</em>.</p>
<p>You might be surprised about how much you can do when you have a mental compartment in which you are <em>unstoppable</em>.</p>
<h2 id="mywillpowerdoesnotdeplete">My Willpower Does Not Deplete</h2>
<p>Here&apos;s another: My Willpower Does Not Deplete.</p>
<p>Ok, so my willpower actually does deplete. I&apos;ve been writing about how it does, and discussing methods that I use to avoid depletion. <em>Right now</em>, I&apos;m writing about how I&apos;ve acknowledged the fact that my willpower <em>does deplete</em>.</p>
<p>But I have this compartment where it doesn&apos;t.</p>
<p>Ego depletion is a funny thing. If you don&apos;t believe in ego depletion, you suffer <a href="http://pss.sagepub.com/content/early/2010/09/28/0956797610384745">less ego depletion</a>. This <a href="http://www.sciencedirect.com/science/article/pii/S0022103112000509">does not eliminate ego depletion</a>.</p>
<p>Knowing this, I have a compartment in which My Willpower Does Not Deplete. I go there often, when I&apos;m studying. It&apos;s easy, I think, for one to begin to feel tired, and say &quot;oh, this must be ego depletion, I can&apos;t work anymore.&quot; Whenever my brain tries to go there, I wheel this bad boy out of his cage. &quot;Nope&quot;, I respond, &quot;My Willpower Does Not Deplete&quot;.</p>
<p>Surprisingly, this often works. I won&apos;t force myself to keep working, but I&apos;m pretty good at preventing mental escape attempts via &quot;phantom akrasia&quot;. I don&apos;t allow myself to invoke ego depletion or akrasia to stop being productive, because My Willpower Does Not Deplete. I have to <em>actually be tired out</em>, in a way that doesn&apos;t trigger the My Willpower Does Not Deplete safeguards. This doesn&apos;t let me keep going forever, but it prevents a lot of false alarms.</p>
<p>In my experience, the strong version (My Willpower Does Not Deplete) is much more effective than the weak version (My Willpower is Not Depleted Yet), even though it&apos;s more wrong. This probably says something about my personality. Your mileage may vary. Keep in mind, though, that the effectiveness of your mental compartments may depend more on the motivational content than on degree of falsehood.</p>
<h2 id="anythingisaplacebo">Anything is a Placebo</h2>
<p>Placebos work <a href="http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0015591">even when you know they are placebos</a>.</p>
<p>This is the sort of madness I&apos;m talking about, when I say things like &quot;you&apos;re on a strange playing field&quot;.</p>
<p>Knowing this, you can easily activate the placebo effect manually. Feeling sick? Here&apos;s a freebie: drink more water. It will make you feel better.</p>
<p>No? It&apos;s just a placebo, you say? Doesn&apos;t matter. Tell yourself that water makes it better. Put that in a nice little compartment, save it for later. It doesn&apos;t matter that you know what you&apos;re doing: your brain is easily fooled.</p>
<p>Want to be more productive, be healthier, and exercise more effectively? Try using Anything is a Placebo! Pick something trivial and non-harmful and tell yourself that it helps you perform better. Put the belief in a compartment in which you <em>act as if</em> you believe the thing. Cognitive dissonance doesn&apos;t matter! Your brain is <em>great</em> at ignoring cognitive dissonance. You can &quot;know&quot; you&apos;re wrong in the global case, while &quot;believing&quot; you&apos;re right locally.</p>
<p>For bonus points, try combining objectives. Are you constantly underhydrated? Try believing that drinking more water makes you more alert!</p>
<p>Brains are weird.</p>
<hr>
<p>Truly, these are the Dark Arts of instrumental rationality. Epistemic rationalists recoil in horror as I advocate <em>intentionally cultivating false beliefs.</em> It goes without saying that you should use this technique with care. Remember to always audit your compartmentalized beliefs through the lens of your actual beliefs, and be very careful not to let incorrect beliefs leak out of their compartments.</p>
<p>If you think you can achieve similar benefits without &quot;fooling yourself&quot;, then by all means, do so. I haven&apos;t been able to find effective alternatives. Brains have been honing compartmentalization techniques for <em>eons</em>, so I figure I might as well re-use the hardware.</p>
<p>It&apos;s important to reiterate that these techniques are necessary because&#xA0;<em>you&apos;re not actually in control of your own motivation</em>. Sometimes, incorrect beliefs make you more motivated. Intentionally cultivating incorrect beliefs is surely a path to the Dark Side: compartmentalization only mitigates the damage. If you make sure you segregate the bad beliefs and acknowledge them for what they are then you can get much of the benefit without paying the cost, but there is still a cost, and the currency is cognitive dissonance.</p>
<p>At this point, you should be mildly uncomfortable. After all, I&apos;m advocating something which is completely epistemically irrational. We&apos;re not done yet, though.</p>
<p>I have one more Dark Side technique, and it&apos;s worse.</p>
<h1 id="willfulinconsistency">Willful Inconsistency</h1>
<p>I use Intentional Compartmentalization to &quot;locally believe&quot; things that I don&apos;t &quot;globally believe&quot;, in cases where the local belief makes me more productive. In this case, the beliefs in the compartments are things that I tell myself. They&apos;re like mantras that I repeat in my head, at the System 2 level. System 1 is fragmented and compartmentalized, and happily obliges.</p>
<p>Willful Inconsistency is the grown-up, scary version of Intentional Compartmentalization. It involves convincing System 1 wholly and entirely of something that System 2 does not actually believe. There&apos;s no compartmentalization and no fragmentation. There&apos;s nowhere to shove the incorrect belief when you&apos;re done with it. It&apos;s taken over the intuition, and it&apos;s always on. Willful Inconsistency is about having gut-level intuitive beliefs that you explicitly disavow.</p>
<p>Your intuitions run the show whenever you&apos;re not paying attention, so if you&apos;re willfully inconsistent then you&apos;re going to actually <em>act as if</em> these incorrect beliefs are true in your day-to-day life, unless you forcibly override your default actions. Ego depletion and distraction make you vulnerable <em>to yourself</em>.</p>
<p>Use this technique with caution.</p>
<p>This may seem insane even to those of you who took the previous suggestions in stride. That you must sometimes alter your terminal goals is a feature of the playing field, not the agent. The fact that you are not in direct control of your motivation system readily implies that tricking yourself is useful, and compartmentalization is an obvious way to mitigate the damage.</p>
<p>But why would anyone ever try to convince themselves, deep down at the core, of something that they don&apos;t actually believe?</p>
<p>The answer is simple: specialization.</p>
<p>To illustrate, let me explain how I use willful inconsistency.</p>
<p>I have invoked Willful Inconsistency on only two occasions, and they were similar in nature. Only one instance of Willful Inconsistency is currently active, and it works like this:</p>
<p>I have completely and totally convinced my intuitions that unfriendly AI is a problem. A big problem. System 1 operates under the assumption that UFAI will come to pass in the next twenty years with very high probability.</p>
<p>You can imagine how this is somewhat motivating.</p>
<p>On the conscious level, within System 2, I&apos;m much less certain. I solidly believe that UFAI is a big problem, and that it&apos;s the problem that I should be focusing my efforts on. However, my error bars are <em>far</em> wider and my timespan is much broader. I acknowledge a decent probability of soft takeoff. I assign moderate probabilities to a number of other existential threats. I think there are a large number of unknown unknowns, and there&apos;s a non-zero chance that the status quo continues until I die (and that I can&apos;t later be brought back). All this I know.</p>
<p>But, <em>right now</em>, as I type this, my intuition is screaming at me that the above is all wrong, that my error bars are narrow, and that I don&apos;t <em>actually</em> expect the status quo to continue for even thirty years.</p>
<p>This is just how I like things.</p>
<p>See, I <em>am</em> convinced that building a friendly AI is the most important problem for me to be working on, <em>even though</em> there is a very real chance that MIRI&apos;s research won&apos;t turn out to be crucial. Perhaps other existential risks will get to us first. Perhaps we&apos;ll get brain uploads and Robin Hanson&apos;s emulation economy. Perhaps it&apos;s going to take far longer than expected to crack general intelligence. However, after much reflection I have concluded that despite the uncertainty, this is where I should focus my efforts.</p>
<p>The problem is, it&apos;s hard to translate that decision down to System 1.</p>
<p>Consider a toy scenario, where there are ten problems in the world. Imagine that, in the face of uncertainty and diminishing returns from research effort, I have concluded that the world should allocate 30% of resources to problem A, 25% to problem B, 10% to problem C, and 5% to each of the remaining problems.</p>
<p>Because specialization leads to massive benefits, it&apos;s much more effective to dedicate 30% of researchers to working on problem A rather than having all researchers dedicate 30% of their time to problem A. So presume that, in light of these conclusions, I decide to dedicate myself to problem A.</p>
<p>Here we have a problem: I&apos;m supposed to specialize in problem A, but at the intuitive level problem A isn&apos;t <em>that</em> big a deal. It&apos;s only 30% of the problem space, after all, and it&apos;s not really that much worse than problem B.</p>
<p>This would be no issue if I were in control of my own motivation system: I could put the blinders on and focus on problem A, crank the motivation knob to maximum, and trust everyone else to focus on the other problems and do their part.</p>
<p>But I&apos;m not in control of my motivation system. If my intuitions know that there are a number of other similarly worthy problems that I&apos;m ignoring, if they are distracted by other issues of similar scope, then I&apos;m tempted to work on everything at once. This is bad, because output is maximized if we all specialize.</p>
<p>Things get especially bad when problem A is highly uncertain and unlikely to affect people for decades if not centuries. It&apos;s very hard to convince the monkey brain to care about far-future vagaries, <em>even if</em> I&apos;ve rationally concluded that those are where I should dedicate my resources.</p>
<p>I find myself on a strange playing field, where the optimal move is to lie to System 1.</p>
<p>Allow me to make that more concrete:</p>
<p>I&apos;m <em>much</em> more motivated to do FAI research when I&apos;m intuitively convinced that we have a hard 15 year timer until UFAI.</p>
<p>Explicitly, I believe UFAI is one possibility among many and that the timeframe should be measured in decades rather than years. I&apos;ve concluded that it is my most pressing concern, but I don&apos;t <em>actually</em> believe we have a hard 15 year countdown.</p>
<p>That said, it&apos;s hard to overstate how useful it is to have a gut-level feeling that there&apos;s a short, hard timeline. This &quot;knowledge&quot; pushes the monkey brain to go all out, no holds barred. In other words, this is the method by which I convince myself to <em>actually</em> specialize.</p>
<p>This is how I convince myself to deploy every available resource, to attack the problem as if the stakes were incredibly high. Because the stakes <em>are</em> incredibly high, and I <em>do</em> need to deploy every available resource, even if we don&apos;t have a hard 15 year timer.</p>
<p>In other words, Willful Inconsistency is the technique I use to force my intuition to <em>feel as if</em> the stakes are as high as I&apos;ve calculated them to be, given that my monkey brain is bad at responding to uncertain vague future problems. Willful Inconsistency is my counter to <a href="http://lesswrong.com/lw/hw/scope_insensitivity/">Scope Insensitivity</a>: my intuition has difficulty believing the results when I <a href="http://wiki.lesswrong.com/wiki/Shut_up_and_multiply">do the multiplication</a>, so I lie to it until it acts with appropriate vigor.</p>
<p>This is the final secret weapon in my motivational arsenal.</p>
<p>I don&apos;t personally recommend that you try this technique. It can have harsh side effects, including feelings of guilt, intense stress, and massive amounts of cognitive dissonance. I&apos;m able to do this in large part because I&apos;m in a very good headspace. I went into this with full knowledge of what I was doing, and I am confident that I can back out (and actually correct my intuitions) if the need arises.</p>
<p>That said, I&apos;ve found that cultivating a gut-level feeling that what you&apos;re doing <em>must</em> be done, and must be done <em>quickly</em>, is an extraordinarily good motivator. It&apos;s such a strong motivator that I seldom explicitly acknowledge it. I don&apos;t need to mentally invoke &quot;we have to study or the world ends&quot;. Rather, this knowledge lingers in the background. It&apos;s not a mantra, it&apos;s not something that I repeat and wear thin. Instead, it&apos;s this gut-level drive that sits underneath it all, that makes me strive to go faster unless I explicitly try to slow down.</p>
<p>This monkey-brain tunnel vision, combined with a long habit of productivity, is what keeps me <a href="http://lesswrong.com/lw/jh0/deregulating_distraction_moving_towards_the_goal/">Moving Towards the Goal</a>.</p>
<hr>
<p>Those are my Dark Side techniques: Willful Inconsistency, Intentional Compartmentalization, and Terminal Goal Modification.</p>
<p>I expect that these techniques will be rather controversial. If I may be so bold, I recommend that discussion focus on goal-hacking and intentional compartmentalization. I acknowledge that willful inconsistency is unhealthy and I don&apos;t generally recommend that others try it. By contrast, both goal-hacking and intentional compartmentalization are quite sane and, indeed, instrumentally rational.</p>
<p>These are certainly not techniques that I would recommend CFAR teach to newcomers, and I remind you that &quot;it is dangerous to be half a rationalist&quot;. You can royally screw yourself over if you&apos;re still figuring out your beliefs as you attempt to compartmentalize false beliefs. I recommend only using them when you&apos;re sure of what your goals are and confident about the borders between your actual beliefs and your intentionally false &quot;beliefs&quot;.</p>
<p>It may be surprising that changing terminal goals can be an optimal strategy, and that humans should consider adopting incorrect beliefs strategically. At the least, I encourage you to remember that there are no absolutely rational actions.</p>
<p>Modifying your own goals and cultivating false beliefs are useful because we live in strange, hampered control systems. Your brain was optimized with <a href="http://lesswrong.com/lw/l3/thou_art_godshatter/">no concern for truth</a>, and optimal performance may require <a href="http://lesswrong.com/lw/cn/instrumental_vs_epistemic_a_bardic_perspective/">self deception</a>. I remind the uncomfortable that instrumental rationality is not about being the most consistent or the most correct, it&apos;s about <em>winning.</em> There are games where the optimal move requires adopting false beliefs, and if you find yourself playing one of those games, then you should adopt false beliefs. Instrumental rationality and epistemic rationality can be pitted against each other.</p>
<p>We are fortunate, as humans, to be skilled at compartmentalization: this helps us work around our mental handicaps without sacrificing epistemic rationality. Of course, we&apos;d rather not have the mental handicaps in the first place: but you have to work with what you&apos;re given.</p>
<p>We <em>are</em> weird agents without full control of our own minds. We lack direct control over important aspects of ourselves. For that reason, it&apos;s often necessary to take actions that may seem contradictory, crazy, or downright irrational.</p>
<p>Just remember this, before you condemn these techniques: optimality is as much an aspect of the playing field as of the strategy, and humans occupy a strange playing field indeed.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Deregulating Distraction, Moving Towards the Goal, and Level Hopping]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><span style="opacity: .3">[Note: backported from <a href="http://lesswrong.com/lw/jg3/the_mechanics_of_my_recent_productivity/">LessWrong</a>]</span></p>
<p><em>This is the third post in a series discussing my recent <a href="https://mindingourway.com/the-mechanics-of-my-recent-productivity/">bout of productivity</a>. Within, I discuss two techniques I use to avoid akrasia and one technique I use to be especially productive.</em></p>
<h1 id="deregulatingdistraction">Deregulating Distraction</h1>
<p>I like to pretend that I have higher-than-normal willpower, because my</p>]]></description><link>https://mindingourway.com/deregulating-distraction-moving-towards-the-goal-and-level-hopping/</link><guid isPermaLink="false">5f94cfbaca8899827ef2a266</guid><category><![CDATA[backport]]></category><category><![CDATA[productivity]]></category><dc:creator><![CDATA[Nate Soares]]></dc:creator><pubDate>Sun, 12 Jan 2014 03:21:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><span style="opacity: .3">[Note: backported from <a href="http://lesswrong.com/lw/jg3/the_mechanics_of_my_recent_productivity/">LessWrong</a>]</span></p>
<p><em>This is the third post in a series discussing my recent <a href="https://mindingourway.com/the-mechanics-of-my-recent-productivity/">bout of productivity</a>. Within, I discuss two techniques I use to avoid akrasia and one technique I use to be especially productive.</em></p>
<h1 id="deregulatingdistraction">Deregulating Distraction</h1>
<p>I like to pretend that I have higher-than-normal willpower, because my ability to Get Things Done seems to be somewhat above average. In fact, this is not the case. I&apos;m not good at fighting akrasia. I merely have a knack for avoiding it.</p>
<p>When I was young, my parents were very good at convincing me to manage my money. They gave me an allowance, perhaps a dollar a week. When we would go to the store, I&apos;d get excited about some trite toy and ask my parents whether I could buy it.</p>
<p>Their answers were similar. My mother would crouch down, put a hand on my shoulder, and say &quot;Of course you can. But before you do, think carefully about how much you will enjoy this after you&apos;ve bought it, and what other things you would be able to buy if instead you saved up.&quot;</p>
<p>My father was a bit more direct. He&apos;d just shrug and say &quot;It&apos;s your money&quot;, with the barest hint of derision.</p>
<p>I rarely spent my allowance.</p>
<p>I now use a similar technique when dealing with distractions.</p>
<p>(It&apos;s worth noting that it&apos;s always been very easy to put me into far mode, perhaps in part because I decided at a very young age that I wasn&apos;t going to die.)</p>
<p>As <a href="http://lesswrong.com/lw/jgh/habitual_productivity/abpa">Kaj Sotala</a> and a few others noted, assigning guilt to non-productive tasks is not especially healthy. Nor is it, in my experience, sustainable. In a few different cases, I experienced scenarios where I wanted to do something but couldn&apos;t will myself to do it. I suffered ego depletion and hit a vicious cycle of unproductivity and depression. I never fell completely into the self-hate death spiral, but I flirted around at the edges. It became clear that I needed a new strategy.</p>
<p>To break the cycle, I decided to stop fighting myself.</p>
<p><a id="more"></a></p>
<p>The world is full of distractions, and I have plenty of vices. I am just as susceptible as anyone to binging on TV shows or video games or book series. Instead of trying (and often failing) to stop myself from indulging, I decided to allow myself to indulge whenever I really wanted to.</p>
<p>&quot;It&apos;s your time&quot;, I told myself.</p>
<p>This changed the game entirely. I no longer willed myself to avoid temptation: I weighed temptations alongside my other options, took their pros and cons into account, and made an informed decision. Did I <em>need</em> to distract myself? Sometimes, the answer was yes.</p>
<p>Knowing that I could no longer trust myself to bail me out if I got addicted to new media, I took special care in removing as many distractions as I could from my environment. Because I&apos;d resolved not to spend willpower to cancel addictions, I became much more cautious at the point of entry. These days, I ignore recommendations about new TV shows and books, preferring not even to learn the premises, thus dodging the temptation entirely.</p>
<p>By allowing distractions a place in my mental calculus I allowed myself to choose between them with more care: I am able to watch movies instead of TV shows, to read standalone books instead of entire series.</p>
<p>I know full well that my resolution against spending willpower against myself means that once I get addicted to something, it has to run its full course before I can be productive again. This is a nuclear option: because I know that I <em>won&apos;t</em> stop, I am <em>very</em> leery of lengthy media. I avoid open-ended addictions (ongoing online games, chemical addictions, etc.) like the plague.</p>
<p>I refer to this strategy as &quot;playing chicken against myself&quot;: because I know that I&apos;ll let long addictions run their course, I seldom have to.</p>
<p>From another perspective, you could say that I deregulated a black market on distractions: By lifting the mental ban on entertainment, I was able to price it accurately and weigh the tradeoffs. If there is a new book I want to read, the answer is not an outright and unenforceable &quot;No&quot;. Rather, it&apos;s &quot;can we afford to be underproductive for the next few days?&quot;. And when the answer is negative, it&apos;s significantly easier for me to postpone gratification than to resist the temptation entirely. The end result is that I have much more control over when I indulge in escapism.</p>
<p>Finally, I&apos;ve found that this <em>feels</em> a lot better than feeling guilty about being unproductive. It&apos;s a healthier state of mind, and it&apos;s led to a general increase in happiness.</p>
<h1 id="movingtowardsthegoal">Moving Towards the Goal</h1>
<p>My teachers used to tell my parents that I have two modes of operation: I either put in the minimum possible effort or I blow expectations completely out of the water. They claimed I have no middle ground.</p>
<p>This isn&apos;t quite accurate. The truth is, I <em>always</em> put in the minimum effort. Anything else would be wasted motion. The discrepancy they observed was not due to some whim of passion, it was an artifact of how our incentives were misaligned.</p>
<p>In school I was incentivized to ace classes with minimal work. I was very good at obeying the letter of the law while blatantly flouting the spirit, and I had a knack for knowing <em>exactly</em> how far I could push my luck. My teachers had&#x2026; polarized opinions of me, to say the least. I was an arrogant kid.</p>
<p>Yet when my schoolwork happened to align with some personal goal &#x2014; mastering a new technique, figuring out new secrets of the universe &#x2014; then I was relentless, shattering expectations with apparent ease. A number of my teachers took it upon themselves to press upon me just how much I could do if I actually <em>applied</em> myself. I didn&apos;t bother correcting them. If they weren&apos;t going to invent a grade higher than &apos;A&apos;, why should I waste my efforts in the classroom? I had better things to do.</p>
<p>Like I said, I was an arrogant kid.</p>
<p>This experience in school had two important repercussions. First, it taught me to seek out the gap between the <em>intended</em> rules and the <em>actual</em> rules. I developed a knack for it, and this has served me well in many walks of life. Noticing the space between what you meant and what you said is a fundamental skill for programmers. Math is a tool designed to narrow such gaps. Logical incompleteness theorems are statements about the gap between what logic <em>can</em> say and what mathematicians <em>want</em> to say.</p>
<p>Secondly, and more relevant to this post, school helped me make explicit the virtue of putting in the minimum possible effort. Authority figures parroted the value of hard work, but that&apos;s only half the story. You should <em>always</em> be putting forth the least amount of effort that it takes to achieve your goals. That&apos;s not to say that you should never do hard work: in many situations, the easiest way to achieve your goals is to do things right the first time. I&apos;m not condoning shoddy work, either: if quality is part of your goal then you&apos;d best do things correctly. If you&apos;re trying to signal competence, then by all means, put in extra effort. But you should <em>never</em> expend extra effort just for effort&apos;s sake.</p>
<p>This leads us to my second trick for avoiding akrasia: I am not Trying Really Hard. People who are Trying Really Hard give themselves rewards for progress or punishments for failure. They incentivize the behavior that they want to have. They keep on deciding to continue doing what they&apos;re doing, and they engage in valiant battle against akrasia. I don&apos;t do any of that.</p>
<p>Instead, I simply Move Towards the Goal.</p>
<p>I don&apos;t will myself to study. It is not a chore, it is not something I force myself to do. That&apos;s not to say I enjoy studying, per se: it&apos;s hard work, and the reward structure is pathetic compared to programming. If I had to force or convince myself to study lots of math continuously, I don&apos;t think I&apos;d get very far.</p>
<p>That&apos;s not how I operate. I don&apos;t Try Really Hard. I simply Move Towards the Goal.</p>
<p>This is where the previous post ties in. I&apos;ve mostly eliminated the guilt I feel while unproductive, but I&apos;ve maintained two very important things from that era of my life:</p>
<ol>
<li>In my head, long-term satisfaction is linked to productivity.</li>
<li>I have maintained habitual productivity for years.</li>
</ol>
<p>Between these two points, I know that once I&apos;ve settled on a goal, I&apos;m going to move towards it.</p>
<p>This is, internally, an immutable fact, made so both by habit and by crude Pavlovian training. None of this is explicit, mind you, it&apos;s just the <em>nature of goals</em>. I can change the goal and I can drop the goal, but I can&apos;t hold the goal and <em>not pursue</em> it.</p>
<p>I never <em>decided</em> to study really hard. You can &quot;decide&quot; not to watch the next episode of that TV show only to sternly berate yourself three episodes later. My decision to study hard was made on a lower level, it&apos;s been internalized. Acting on goals is the thing that System 1 does regardless of what System 2 &quot;decides&quot;.</p>
<p>System 2 controls things by <em>picking</em> the goals. It was a long and arduous process to internalize my most recent set of goals, the ones that have driven me to study hard and become a research associate and so on. It took a few months and a bit of mindhacking, and that&apos;s a story for another day. But once the goal was <em>chosen</em>, marching towards it was out of my hands.</p>
<p>System 2 isn&apos;t in control of <em>whether</em> I move towards the goal. Instead, it spends its time doing something it&apos;s very good at: finding the most efficient path. Minimizing effort.</p>
<p>I don&apos;t actively force myself to study hard. Rather, the structure of the environment is such that the shortest path to the goal requires hard studying. I merely follow that path.</p>
<p>Moving Towards the Goal might look a lot like Trying Really Hard from the outside. Superficially, the two are similar. On the inside, though, they feel very different. I&apos;ve Tried Really Hard before, and I&apos;m not good at it. It requires exertion of willpower and results in depletion of ego.</p>
<p>When I&apos;m Moving Towards the Goal, I don&apos;t worry about whether things will be done. I&apos;ve outsourced that concern to habit. Instead, mental effort is spent looking for the shortest path, the easiest route. Difficult paths do not require additional willpower, because the internal narrative is not one of expending effort. If anything, a difficult path is worth extra points, because it means I&apos;m pursuing admirable goals. Internally, I&apos;m not Struggling Against Akrasia. I&apos;m Finding an Efficient Route.</p>
<p>Don&apos;t get me wrong, studying math at high speed for five months was hard. However, I have built myself a headspace where hardness is not an obstacle to overcome but a <em>feature of the terrain</em>. I am going to march on regardless. System 2 doesn&apos;t have to spend effort convincing System 1 to move forward, because System 1 is going to move forward come hell or high water. Thus, System 2 spends its time making sure that the march is as easy as possible.</p>
<p>This leaves me free to try new techniques to achieve my goals more effectively, and that leads us to our final trick for the day.</p>
<h1 id="levelhopping">Level Hopping</h1>
<p>I started doing NaNoWriMo in 2011, and I noticed something interesting: a vast majority of winners <em>barely</em> made it to 50,000 words. The goal of NaNoWriMo is to write 50k words in a month, so I wasn&apos;t particularly surprised. However, from my interactions with others I found that most of these winners <em>felt</em> like they were pushing themselves to the limit, even though many of them were probably psychologically anchored below their actual limits. After all, in my experience, the hardest part of NaNoWriMo is <em>writing every day</em>: the most difficult part of being productive is switching contexts, once you get rolling it&apos;s not difficult to keep rolling.</p>
<p>It seemed clear that if the goal had been 60k, many of the same people would have eked out a victory with similar margins and the same narrative of butting against their limits. The natural conclusion was that I can&apos;t trust myself to feel out my own limits.</p>
<p>This is when I decided to start hopping to higher levels of productivity. These days, I occasionally throw wrenches into my study plans when I think I&apos;m growing complacent.</p>
<p>&quot;Those set theory and category theory books were easy&quot;, I&apos;ll say, &quot;Let&apos;s try skipping introductory logic and going <a href="http://nanowrimo.org/participants/chasejyd/novels/the-accidental-inquisitor/stats">straight to model theory</a>&quot;.</p>
<p>Or, &quot;All this studying is great, but I bet I could keep it up and also do a NaNoWriMo for 75k words&quot;.</p>
<p>Often, this fails spectacularly. Sometimes, I <em>am</em> at or near my limits, and skipping an intro logic textbook to dive straight into Model Theory is a <em>really bad idea</em>. Other times, I find out that I actually was just hovering around an anchor point, seduced by a narrative of linear improvement.</p>
<p>This is not an original idea, by any means. In fact, there&apos;s a relevant Bruce Lee quote:</p>
<blockquote>
<p>There are no limits. There are plateaus, but you must not stay there, you must go beyond them. If it kills you, it kills you. A man must constantly exceed his level.</p>
</blockquote>
<p>-&#xA0;<a href="http://zenpencils.com/comics/2012-04-11-bruce-lee-2.jpg">Bruce Lee</a></p>
<p>My point, more broadly, is that this is the type of thing that occupies my mental narrative. I&apos;m not wondering whether I will be able to convince myself to study each day. Instead, I&apos;m gauging whether I&apos;m reading the most effective material. I&apos;m noticing that it won&apos;t be enough for me to just <em>learn</em> the material, I also have to <em>signal</em> that I&apos;ve learned the material (and that I should start doing book reviews). I&apos;m monitoring to see when I&apos;ve grown complacent and looking for ways to keep me on my toes. This process is doubly useful: It helps me sidestep akrasia and it also helps me become more effective.</p>
<hr>
<p>These are my three Light Side tools:</p>
<ol>
<li>I&apos;ve constructed an environment in which productivity is habitual. In the absence of distractions, I trust myself to get things done.</li>
<li>I&apos;ve lifted my mental ban on distractions, and trust myself to use them wisely.</li>
<li>My mental narrative is one of expending minimal effort, not one of trying to succeed: instead of worrying about whether I can continue, I worry about how to perform better.</li>
</ol>
<p>Most of these tricks are likely familiar: I do not claim originality; this is merely an account of the methods that I use, the things that work for me. Consider this to be evidence that these techniques work for people who share my personality (which I&apos;ve tried to illustrate along the way).</p>
<p>You now have a broad sketch of how I maintain productivity, but it may seem somewhat unstable, difficult to maintain indefinitely. The next post will detail my Dark Side tactics: tricks I use to remain unrelenting and sustain my vigorous pace, but which may make rationalists uncomfortable.</p>
<p>After that, I&apos;ll tell the story of a kid who decided he would save the world for reasons completely unrelated to existential risk, and how he came to align himself with MIRI&apos;s mission. This will help you understand the source of my passion, and will conclude the series.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Habitual Productivity]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><span style="opacity: .3">[Note: backported from <a href="http://lesswrong.com/lw/jgh/habitual_productivity/">LessWrong</a>]</span></p>
<p><em>I was able to maintain <a href="https://mindingourway.com/the-mechanics-of-my-recent-productivity/">high productivity</a> for extended periods of time and achieve some difficult goals. In this and the following posts I will discuss some personality quirks and techniques that helped me do this. This post is fairly self-expository. I claim no originality, this</em></p>]]></description><link>https://mindingourway.com/habitual-productivity/</link><guid isPermaLink="false">5f94cfbaca8899827ef2a265</guid><category><![CDATA[backport]]></category><category><![CDATA[productivity]]></category><dc:creator><![CDATA[Nate Soares]]></dc:creator><pubDate>Thu, 09 Jan 2014 06:44:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><span style="opacity: .3">[Note: backported from <a href="http://lesswrong.com/lw/jgh/habitual_productivity/">LessWrong</a>]</span></p>
<p><em>I was able to maintain <a href="https://mindingourway.com/the-mechanics-of-my-recent-productivity/">high productivity</a> for extended periods of time and achieve some difficult goals. In this and the following posts I will discuss some personality quirks and techniques that helped me do this. This post is fairly self-expository. I claim no originality, this is simply an account of how I operate.</em></p>
<p>Secret number one: Productivity is a habit of mine. As I mentioned in the previous post, I&apos;ve been following a similar schedule for years: two days doing social things, five days doing something constructive. Before I turned my efforts towards FAI research, this mainly consisted of programming, writing, and self-education.</p>
<p>This habit was not sufficient to get the high productivity I attained in the last few months, but it was definitely necessary.</p>
<p>I understand that this is not helpful advice: &quot;I&apos;m habitually productive&quot; just passes the buck. &quot;Ah&quot;, you ask, &quot;but how did you turn productivity into a habit?&quot; For that, I have an ace up my sleeve:</p>
<p>I <em>deplore</em> fun.</p>
<p><a id="more"></a></p>
<p>Ok, not really. However, I do have a strong aversion to activities that I find unproductive. This aversion is partly innate and partly developed. It first became explicit at the age of nine or ten, when I read <em>The Phantom Tollbooth</em>:</p>
<blockquote>
<p>&quot;KILLING TIME!&quot; roared the dog&#x2014;so furiously that his alarm went off. &quot;It&apos;s bad enough wasting time without killing it.&quot; And he shuddered at the thought.</p>
</blockquote>
<ul>
<li>Norton Juster, <a href="http://books.google.com/books?id=r3jFlrbsACMC&amp;pg=PT32&amp;lpg=PT32&amp;dq=killing+time+phantom+tollbooth&amp;source=bl&amp;ots=k1o2aZYICZ&amp;sig=_m76RAD0PKbhQdXQWwKVSpanKFU&amp;hl=en&amp;sa=X&amp;ei=s-3MUvSlOsLgoATtroGwBQ&amp;ved=0CGoQ6AEwCA#v=onepage&amp;q&amp;f=false">The Phantom Tollbooth</a></li>
</ul>
<p>This quote stuck with me. Time is scarce, and I certainly didn&apos;t want to <em>kill</em> any.</p>
<p>I developed an explicit distaste for boredom, and went out of my way to avoid it. I kept books near me at all times. I invented stories and thought up new plots when drifting off to sleep. I invented mental puzzles to keep me entertained during class, including a stint in my teens where I worked out the base 12 multiplication tables. Later, I put spare mental cycles towards considering my code, probing edge cases or considering alternative designs (a practice that is no doubt familiar to all programmers).</p>
<p>This distaste broadened as I aged. I grew to realize that I didn&apos;t just want to be doing things, I wanted to be doing <em>useful</em> things. My disdain started spreading towards other activities, ones that didn&apos;t forward my long-term goals. The memories are hazy, and I&apos;m not sure whether this caused or was caused by my na&#xEF;ve resolution to save the world (or a whole tangle of other factors), but I know the two were linked.</p>
<p>Before long, I began to view escapism as a guilty pleasure: fun and addictive, but unsatisfying. Things like hiking and going to parties became almost a chore: I superficially enjoyed them, sure, but I yearned to be elsewhere, doing something <em>permanent</em>. Even reading fiction took on a pang of guilt. I valued things that moved me forward, that honed my skills or moved me closer to my terminal goals. I wanted to be <em>building</em> things, <em>improving</em> things.</p>
<p>This is my first secret weapon: I lost the ability to be satisfied by unproductive activity.</p>
<p>This was not particularly pleasant.</p>
<p>As I got older, I struggled to balance social activities that were supposed to be fun with all of the things that I wanted to learn and build. All forms of entertainment were weighed against their opportunity cost. This wasn&apos;t an elegant phase of my life: I was still a teenager, and I yearned for social validation, strong friendships, and adventures just as much as my peers. Trouble was, I was caught in a catch-22: when I squirreled away in my room being &quot;productive&quot; I felt like I was missing out, and when I went outside to have &quot;adventures&quot; I only wanted to be elsewhere. I vacillated wildly for a few years before coming to terms with myself.</p>
<p>These days, I aim to spend about two evenings a week (one on weekdays, one on weekends) doing something that&apos;s traditionally fun. I spend the rest of my time doing things that sate my neverending desire to march towards my goals.</p>
<p>It&apos;s interesting to note that, in the end, there wasn&apos;t really a compromise. The productivity side just flat-out won: I eventually realized that human interaction is necessary for mental health and that a solid social network is invaluable. I don&apos;t mean to imply that I engage in social interaction because I&apos;ve calculated that it&apos;s necessary: I <em>really do</em> enjoy social interaction, and I <em>really want</em> to be able to enjoy it without guilt. Rather, it&apos;s more like I&apos;ve found an excuse that allows me to both enjoy myself and sate the thirst. That said, it&apos;s still difficult for me to disengage sometimes.</p>
<hr>
<p>This is also not the most helpful advice, I realize: I&apos;m good at being productive in part because I&apos;m bad at being satisfied unless my current task forwards my active goals. This isn&apos;t exactly something you can practice.</p>
<p>Unless you&apos;re into mind hacking, I suppose. (Note: At this point in the post, set your &quot;humor&quot; dials to &quot;dry&quot;.)</p>
<p>When I was quite young, one of the guests at our house refused to eat processed food. I remember that I offered her some fritos and she refused. I was fairly astonished, and young enough to be socially inept. I asked, incredulous, how someone could <em>not like</em> fritos. To my surprise, she didn&apos;t brush me off or feed me banal lines about how different people have different tastes. She gave me the answer of someone who had recently stopped liking fritos through an act of will. Her answer went something like this: &quot;Just start noticing how greasy they are, and how the grease gets all over your fingers and coats the inside of the bag. Notice that you don&apos;t want to eat things soaked in that much grease. Become repulsed by it, and then you won&apos;t like them either.&quot;</p>
<p>Now, I was a stubborn and contrary child, so her ploy failed. But to this day, I still notice the grease. This woman&apos;s technique stuck with me. She picked out a <em>very specific</em> property of a thing she wanted to stop enjoying and convinced herself that it repulsed her.</p>
<p>If I were <em>trying</em> to start hating fun (and I remind you that I&apos;m not trying, because I already do, and that you shouldn&apos;t try, because it&apos;s no fun) then this is the route I would recommend: Recognize those little discomforts that underlie your escapism, latch on to them, and blow them <em>completely</em>&#xA0;out of proportion. (Disclaimer: I am not a mindwizard; I&apos;ve no doubt there are better ways to change your affections if you&apos;re into mindhacking.)</p>
<p>Note that such mindhacking is a Dark Art which you should not pursue.&#xA0;Side effects may include:</p>
<ul>
<li>Experiencing guilt when you should be having a grand old time.</li>
<li>Attempting to complete hikes as fast as possible so you can get back to what you were working on.</li>
<li>A propensity to get more tense when you&apos;re supposed to be relaxing.</li>
<li>A tendency to bring books to live concerts so that you can multitask.</li>
</ul>
<p>Furthermore, I imagine that this can backfire <em>reaaaly</em> hard: if you manage to develop a strong revulsion for unproductive activities but <em>still</em> can&apos;t force yourself to stop browsing reddit (or whatever your vice) then you run a big risk of hitting a willpower-draining death spiral.</p>
<p>So I&apos;m <em>really</em> not recommending that you try this mindhack. But if&#xA0;you <em>already</em> have spikes of guilt after bouts of escapism, or if you house an arrogant disdain for wasting your time on TV shows, here are a few mantras you can latch on to to help yourself develop a solid hatred of fun (I warn you that these are calibrated for a 14 year old mind and may be somewhat stale):</p>
<ul>
<li>When skiing, partying, or generally having a good time, try remembering that this is exactly the type of thing people should have an opportunity to do <em>after</em> we stop everyone from dying.</li>
<li>When doing something transient like watching TV or playing video games, reflect upon how it&apos;s not building any skills that are going to make the world a better place, nor really having a lasting impact on the world.</li>
<li>Notice that if the world is to be saved then it <em>really does</em> need to be you who saves it, because everybody else is busy skiing, partying, reading fantasy, or dying in third world countries.</li>
</ul>
<p>It also helps if you&apos;re extraordinarily arrogant and you house a deep-seated belief in civilizational inadequacy.</p>
<p>(You may now disengage your humor shielding.)</p>
<hr>
<p>I strongly recommend finding a different and preferably healthier route to habitual productivity. The point of this exposition is that <em>for me</em>, a quirk of my psychology led me to a schedule where I spend my days doing things that lead towards my goals.</p>
<p>My distaste for other activities is not the thing that is driving&#xA0;me, per se: it has merely pushed me towards a certain lifestyle, it has helped me develop a certain habit. That habit is the foundation for my recent achievements.</p>
<p>If you can structure your life such that productive things are the things that you do&#xA0;<em>by default</em>, the things that you do in your free time when you have nothing else on your plate, then you will be in good shape. When &quot;do something that forwards your goals&quot; is the <em>fallback</em> plan then it becomes much easier to scale your efforts up.</p>
<p>The way that I built such structure into my own life was pretty personalized and likely unhealthy, but I&apos;m quite content with the end result. So that&apos;s my advice for the day: if you can, try to make your default actions useful. Find a way to make productivity habitual.</p>
<p>When forming habits, repetition is very important. If you&apos;re trying to be highly productive, consider starting by being a little productive with high <em>regularity</em>.&#xA0;Humans are very habitual creatures, and establishing a habit of completing easier tasks may pay off in the long run.</p>
<p>Even if you start with the easier tasks, though, you&apos;re going to need a good chunk of motivation to successfully form a habit of doing things that require effort. In these waters swims Akrasia, a most ancient enemy. I meant to delve more into the sources of my motivation and some tricks I use to avoid akrasia, but I&apos;ve run out of time. Further posts will follow.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[The mechanics of my recent productivity]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><span style="opacity: .3">[Note: backported from <a href="http://lesswrong.com/lw/jg3/the_mechanics_of_my_recent_productivity/">LessWrong</a>]</span></p>
<p>A decade ago, I decided to save the world. I was fourteen, and the world certainly wasn&apos;t going to save itself.</p>
<p>I fumbled around for nine years; it&apos;s surprising how long one can fumble around. I somehow managed to miss the whole</p>]]></description><link>https://mindingourway.com/the-mechanics-of-my-recent-productivity/</link><guid isPermaLink="false">5f94cfbaca8899827ef2a264</guid><category><![CDATA[backport]]></category><category><![CDATA[productivity]]></category><dc:creator><![CDATA[Nate Soares]]></dc:creator><pubDate>Thu, 09 Jan 2014 02:30:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><span style="opacity: .3">[Note: backported from <a href="http://lesswrong.com/lw/jg3/the_mechanics_of_my_recent_productivity/">LessWrong</a>]</span></p>
<p>A decade ago, I decided to save the world. I was fourteen, and the world certainly wasn&apos;t going to save itself.</p>
<p>I fumbled around for nine years; it&apos;s surprising how long one can fumble around. I somehow managed to miss the whole idea of existential risk and the whole concept of an intelligence explosion. I had plenty of other ideas in my head, and while I spent a lot of time honing them, I wasn&apos;t particularly looking for new ones.</p>
<p>A year ago, I finally read the LessWrong sequences. My road here was roundabout, almost comical. It took me a while to come to terms with the implications of what I&apos;d read.</p>
<p>Five months ago, after resolving a few internal crises, I started donating to MIRI and studying math.</p>
<p>Three weeks ago, I attended the December MIRI workshop on logic, probability, and reflection. I was invited to visit for the first two days and stay longer if things went well. They did: I was able to make some meaningful contributions.</p>
<p>On Saturday I was invited to become a MIRI research associate.</p>
<p>[Edit to add: about a month later, I became a full-time MIRI research fellow, and fourteen months after that, I became the executive director of MIRI.]</p>
<p>It&apos;s been an exciting year, to say the least.</p>
<p>To commemorate the occasion &#x2014; and because a few people have expressed interest in my efforts &#x2014; I&apos;ll be writing a series of posts about my experience, about what I did and how I did it. This is the first post in the series.</p>
<hr>
<p>First and foremost, know that I am not done with my aggressive autodidacting. I have a long way to go yet before I&apos;m anywhere near as productive as others who do research with MIRI. I find myself at a checkpoint of sorts, collecting my thoughts in the wake of my first workshop, but next week I will be back to business.</p>
<p>One goal of this post is to give you a feel for how much effort is required to become good at MIRI-relevant mathematics in a short time, and perhaps inspire others to follow my path. It was difficult, but not as difficult as you might think.</p>
<p>Another goal is to provide data for fellow autodidacts. At the least I can provide you with an anchor point, a single datum about how much effort is required to learn at this pace. As always, remember that I am only one person and that what worked for me may not work for you.</p>
<p>In order to understand what I achieved it&apos;s important to know where I started from. Thus, allow me to briefly discuss my relevant prior experience.</p>
<h1 id="background">Background</h1>
<p>I was born in 1989. I hold Bachelor of Science degrees in both computer science and economics. I started programming TI-83 calculators in late 2002. I&apos;ve been programming professionally since 2008. I currently work for Google and live in Seattle.</p>
<p>In high school I had a knack for math. I was placed two years ahead of my classmates. I aced some AP tests, I won some regional math competitions, nothing much came of it. I explicitly decided not to pursue mathematics: I reasoned that in order to save the world I would need charisma, knowledge of how the world economy works, and a reliable source of cash. This (and my love of programming) drove my choice of majors.</p>
<p>During college I soaked up computer science like a sponge. (Economics, too, but that&apos;s not as relevant here.) I came out of college with a strong understanding of the foundations of computing: algorithms, data structures, discrete math, etcetera. I cultivated a love for information theory. Outside of the computer science department I took two math classes: multivariable calculus and real analysis.</p>
<p>I was careful not to let schooling get in the way of my education. On my own time I learned Haskell in 2008 and started flirting with type theory and category theory. I read <em>G&#xF6;del, Escher, Bach</em> early in 2011.</p>
<p>This should paint a rough picture of my background: I never explicitly studied mathematical logic, but my interests never strayed too far from it. While I didn&apos;t have much formal training in this particular subject area, I certainly wasn&apos;t starting from a blank slate.</p>
<h1 id="accomplishments">Accomplishments</h1>
<p>In broad strokes, I&apos;m writing this because I was able to learn a lot very quickly. In the space of eighteen weeks I went from being a professional programmer to helping Benja discover <a href="https://intelligence.org/wp-content/uploads/2013/12/fallensteins-monster.pdf">Fallenstein&apos;s Monster</a>, a result concerning tiling agents (in the field of mathematical logic).</p>
<p>I studied math at a fervent pace from August 11th to December 12th and gained enough knowledge to contribute at a MIRI workshop. In that timeframe I read seven textbooks, five of which I finished:</p>
<ol>
<li><a href="http://lesswrong.com/lw/ii0/book_review_heuristics_and_biases_miri_course_list/">Heuristics and Biases</a></li>
<li><a href="http://lesswrong.com/lw/il1/book_review_cognitive_science_miri_course_list/">Cognitive Science</a></li>
<li><a href="http://lesswrong.com/r/lesswrong/lw/ioo/book_review_basic_category_theory_for_computer/">Basic Category Theory for Computer Scientists</a></li>
<li><a href="http://lesswrong.com/r/lesswrong/lw/ir6/book_review_na%C3%AFve_set_theory_miri_course_list/">Na&#xEF;ve Set Theory</a></li>
<li><a href="http://lesswrong.com/r/lesswrong/lw/ix5/mental_context_for_model_theory/">Model</a> <a href="http://lesswrong.com/r/lesswrong/lw/ixn/very_basic_model_theory/">Theory</a> (first half)</li>
<li><a href="http://lesswrong.com/lw/j4r/book_review_computability_and_logic/">Computability and Logic</a></li>
<li>The Logic of Provability (first half, unreviewed)</li>
</ol>
<p>In retrospect, the first two were not particularly relevant to MIRI&apos;s current research. Regardless, <em>Heuristics and Biases</em> was quite useful on a personal level.</p>
<p>I also studied a number of MIRI research papers, two of which I summarized:</p>
<ul>
<li>The <a href="http://lesswrong.com/r/lesswrong/lw/jbe/walkthrough_of_definability_of_truth_in/">Probabilistic Logic</a> paper</li>
<li>The <a href="http://lesswrong.com/lw/jca/walkthrough_of_the_tiling_agents_for/">Tiling Agents</a> paper</li>
</ul>
<p>I made use of a number of other minor resources as well, mostly papers found via web search. I successfully signaled my competence and my drive to the right people. While this played a part in my success, it is not the focus of this post.</p>
<p>I estimate my total study time to be slightly less than 500 hours. I achieved high retention and validated my understanding against other participants of the December workshop. I did this without seriously impacting my job or my social life. I retained enough spare time to <a href="http://nanowrimo.org/participants/so8res/novels/lucky-number-eight/stats">participate in NaNoWriMo</a> during November.</p>
<p>In sum, I achieved a high level of productivity for an extended period. In the remainder of this post I&apos;ll discuss the mechanics of how I did this: my study schedule, my study techniques, and so on. The psychological aspects &#x2014; where I found my drive, how I avoid akrasia &#x2014; will be covered in later posts.</p>
<h1 id="schedule">Schedule</h1>
<p>I estimate I studied 30-40 hours per week except in November, when I studied 5-15 hours per week. On average, I studied six days a week.</p>
<p>On the normal weekday I studied for an hour and a half in the morning, a half hour during lunch, and three to four hours in the evening. On the average weekend day I studied 8 to 12 hours on and off throughout the day.</p>
<p>Believe it or not, I didn&apos;t have to alter my schedule much to achieve this pace. I&apos;ve been following roughly the same schedule for a number of years: I aim to spend one evening per workweek and one day per weekend on social endeavors and the rest of my time toying with something interesting. This is a loose target; I don&apos;t sweat deviations.</p>
<p>There were some changes to my routines, but they were minimal:</p>
<ul>
<li>I have many side projects, most of which were dropped as studying took precedence.</li>
<li>The number of weeknights I took off per week fell from a little more than one to a little less than one.</li>
<li>Before this endeavor I traveled for leisure about once every two months. In the past five months I traveled for leisure once.</li>
</ul>
<p>While my studying did not affect my schedule much, it <em>definitely</em> affected my pacing. Don&apos;t get me wrong; this sprint was not easy. I suspended many other projects and drastically increased my intensity and my pace. I spent roughly the same amount of time per day studying as I used to spend on side projects, but there is a <em>vast</em> difference between spending three hours casually tinkering on open source code and spending three hours learning logic as fast as possible.</p>
<p>The point here is that aggressive autodidacting certainly takes quite a bit of time and effort, but it need not be all consuming: you can do this sort of thing and maintain a social life.</p>
<h1 id="studytechnique">Study Technique</h1>
<p>My methods were simple: read textbooks, do exercises, rephrase and write down the hard parts.</p>
<p>I had a number of techniques for handling difficult exercises. First, I&apos;d put them aside and come back to them later. If that failed, I&apos;d restate the problem (and all relevant material) in my own words. If this didn&apos;t work, it at least helped me identify the point of confusion, which set me up to ask a question on math.stackexchange.com.</p>
<p>I wasn&apos;t above skipping exercises when I was convinced that the exercise was tedious and that I knew the underlying material.</p>
<p>This sounds cleaner than it was: I made a lot of stupid mistakes and experienced my fair share of frustration. For more details on my study methods refer to <a href="http://lesswrong.com/lw/j10/on_learning_difficult_things/">On Learning Difficult Things</a>, a post I wrote while in the midst of my struggles.</p>
<p>Upon finishing a book, I would immediately start the next one. Concurrently, I would start writing a review of the book I&apos;d finished. I generally wrote the first draft of my book reviews on the Sunday after completing the book, alternating between studying the new and summarizing the old. On subsequent weekdays I&apos;d edit in the morning and study in the evening until I was ready to post my review.</p>
<p>It&apos;s worth noting that summarizing content, especially the research papers, went a long way towards solidifying my knowledge and ensuring that I wasn&apos;t glossing over anything.</p>
<h1 id="impactonsociallife">Impact on Social Life</h1>
<p>The impact on my social life was minimal. I decreased contact with some peripheral friend groups but maintained healthy relationships within my core circles. That I was able to do this is due in part to my circumstances:</p>
<ul>
<li>I live with two close friends. This meant that social contact was never out of reach. Even when spending an entire day sequestered in my room poring over a textbook I was able to maintain a small amount of social interaction. If ever I had a spare hour and a thirst for company, I found it readily available.</li>
<li>My primary partner was, up until early 2014, going to school full time while holding down a full time job. Thus, her schedule was more restrictive than my own and we had been working around it for some time. Our relationship was not further constrained by my efforts.</li>
<li>My core friend groups knew and respected what I was doing. I was more tense and exhausted than usual, but I had warned my friends to expect this and no friendships suffered as a result.</li>
</ul>
<h1 id="impactonworklife">Impact on Work Life</h1>
<p>The additional cognitive load did have an impact on my day job. I had less focus and willpower to dedicate to work. Fortunately, I was exceeding expectations before this endeavor. During this sprint, with my cognitive reserves significantly depleted, I had to settle for merely meeting expectations. My performance at work was not poor, by any means: rather, it fell from &quot;exemplary&quot; to &quot;good&quot;.</p>
<p>I&apos;d rather not settle for merely good performance at work for any extended period of time. Going forward, I&apos;ll be reducing my pace somewhat, in large part to ensure that I can dedicate appropriate resources to my day job.</p>
<h1 id="mentalhealth">Mental Health</h1>
<p>It&apos;s not like I was working from dawn till dusk every day. There was ample time for other activities: I had a few hours of downtime on the average day to read books or surf the web. I participated in a biweekly <a href="http://paizo.com/pathfinderRPG">Pathfinder</a> campaign and spent the occasional Sunday playing <a href="http://www.fantasyflightgames.com/edge_minisite.asp?eidm=21">Twilight Imperium</a>. In September I went camping in the Olympic mountain range. I spent four days in October visiting friends in Cape Cod. I spent a day in December hiking to some hot springs. I entertained guests, went to birthday parties, and so on. There were ample opportunities to get away from math textbooks.</p>
<p>Most important of all, I had friends I could call on when I needed a mental health day. I could rely on them to find time where we could just sit around, play with LEGO bricks, and shoot the breeze. This went a long way towards keeping me sane.</p>
<p>All that said, this stint was rough. I experienced far more stress than my norm. I lost a little weight and twice caught myself grinding my teeth in my sleep (a new experience). There were days that I became mentally exhausted, growing obstinate and stubborn as if sleep- or food-deprived. This tended to happen immediately before planned breaks in the routine, as if my mind was rebelling when it thought it could get away with it.</p>
<p>The stress was manageable, but built up over time. It&apos;s hard to tell whether the stress was cumulative or whether the increase was due to circumstance. Doing NaNoWriMo in November while continuing to study didn&apos;t particularly help matters. The weeks leading up to the workshop were particularly stressful due to a lack of information: I worried that I would not know nearly enough to be useful, that I would make a fool of myself, and so on. So while the stress surely mounted as time wore on, I can&apos;t tell how much of that was cumulative versus circumstantial.</p>
<p>I tentatively believe that someone could sustain my pace for significantly longer than I did, so long as they were willing to live with the strain. I don&apos;t plan to test this myself: I&apos;ll be slowing down both to improve performance at work and to reduce my general stress levels. Five months of fervent studying is no walk in the park.</p>
<h1 id="advice">Advice</h1>
<p>So you want to follow in my footsteps? Awesome. I commend your enthusiasm. My next post will delve into my mindset and a few of the quirks of my behavior that helped me be productive. For now, I will leave you with this advice:</p>
<ul>
<li>There is no magic to it. If you study the right material, do the exercises, and write what you&apos;ve learned in your own words, then you can indeed learn MIRI-relevant math in a reasonable amount of time.</li>
<li>Learning fast does not need to dominate your life. There can be time for social activities and even significant side projects. You will have to work really hard, but that work does not have to consume your life.</li>
<li>If you&apos;re going to do something like this, let people know what you&apos;re doing. This is much easier if you have people you can turn to for support who don&apos;t mind you being extra snappy, people who can drag you away for a day every week or two. Also, stating your goals publicly helps to stop you from giving up.</li>
</ul>
<p>The difficult part is making a commitment and sticking to it. Akrasia is a formidable enemy, here. If you can avoid it, the actual autodidacting is not overly difficult.</p>
<p>As for specific advice, if your background is similar to mine then I recommend reading <em>Na&#xEF;ve Set Theory</em>, <em>Computability and Logic</em>, and the first two chapters of <em>Model Theory</em>, in that order; these will get you off to a good start. Feel free to message me if you get stuck or if you want more recommendations.</p>
<p>Following posts will cover the other sides of my experience: how I got interested in this field, where I draw my motivation from, and the dark arts that I use to maintain productivity. In the meantime, questions are welcome.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[On learning difficult things]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><span style="opacity: .3">[Note: backported from <a href="http://lesswrong.com/lw/j10/on_learning_difficult_things/">LessWrong</a>]</span></p>
<p>I have been autodidacting quite a bit lately. You may have seen my <a href="http://lesswrong.com/lw/ixn/very_basic_model_theory/">reviews</a> of books on the <a href="http://intelligence.org/courses">MIRI course list</a>. I&apos;ve been going for about ten weeks now. This post contains my notes about the experience thus far.</p>
<p>Much of this may seem</p>]]></description><link>https://mindingourway.com/on-learning-difficult-things/</link><guid isPermaLink="false">5f94cfbaca8899827ef2a263</guid><category><![CDATA[backport]]></category><category><![CDATA[productivity]]></category><dc:creator><![CDATA[Nate Soares]]></dc:creator><pubDate>Mon, 11 Nov 2013 23:35:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><span style="opacity: .3">[Note: backported from <a href="http://lesswrong.com/lw/j10/on_learning_difficult_things/">LessWrong</a>]</span></p>
<p>I have been autodidacting quite a bit lately. You may have seen my <a href="http://lesswrong.com/lw/ixn/very_basic_model_theory/">reviews</a> of books on the <a href="http://intelligence.org/courses">MIRI course list</a>. I&apos;ve been going for about ten weeks now. This post contains my notes about the experience thus far.</p>
<p>Much of this may seem obvious, and would have seemed obvious if somebody had told me in advance. But nobody told me in advance. As such, this is a collection of things that were somewhat surprising at the time.</p>
<p>Part of the reason I&apos;m posting this is because I don&apos;t know a lot of autodidacts, and I&apos;m not sure how normal any of my experiences are. (Though on average, I&apos;d guess they&apos;re about average.) As always, keep in mind that I am only one person and that your mileage may vary.</p>
<h2 id="pairup">Pair up</h2>
<p>When I began my quest for more knowledge, I figured that in this modern era, a well-written textbook and an account on <a href="https://math.stackexchange.com">math.stackexchange</a> would be enough to get me through anything. And I was right&#x2026; sort of.</p>
<p>But not really.</p>
<p>The problem is, most of the time that I get stuck, I get stuck on something incredibly stupid. I&apos;ve either misread something somewhere or misremembered a concept from earlier in the book. Usually, someone looking over my shoulder could correct me in ten seconds with three words.</p>
<p>&quot;Dude. Disjunction. <em>Dis</em>junction.&quot;</p>
<p>These are the things that eat my days.</p>
<p>In principle, places like stackexchange can get me unstuck, but they&apos;re an awkward tool for the job. First of all, my stupid mistakes are heavily contextualized. A full context dump is necessary before I can even ask my question, and this takes time. Furthermore, I feel dumb asking stupid questions on stackexchange-type sites. My questions are usually things that I can figure out with a close re-read (except, I&apos;m not sure which part needs a re-read). I usually opt for a close re-read of everything rather than asking for help. This is even more time consuming.</p>
<p>The infuriating thing is that answering these questions usually doesn&apos;t require someone who already knows the answers: it just requires someone who didn&apos;t make exactly the same mistakes as me. I lose hours on little mistakes that could have been fixed within seconds if I was doing this with someone else.</p>
<p>That&apos;s why my number one piece of advice for other people attempting to learn on their own is <em>do it with a friend</em>. They don&apos;t need to be more knowledgeable than you to answer most of the questions that come up. They just need to make <em>different</em> misunderstandings, and you&apos;ll be able to correct each other as you go along.</p>
<p>The thing I miss most about college is tight feedback loops while learning. When autodidacting, the feedback loop can be long.</p>
<p>I still haven&apos;t managed to follow my own advice here. I&apos;m writing this advice in part because it should motivate me to actually pair up. Unfortunately, there is nobody in my immediate circle who has the time or patience to read along with me, but there are a number of resources I have not yet explored (the LessWrong study hall, for example, or reaching out to actual mathematicians). It&apos;s on my list of things to do.</p>
<h2 id="readrereadrereread">Read, reread, rereread</h2>
<p>Reading <em>Model Theory</em> was one of the hardest things I&apos;ve done. Not necessarily because the content was hard, but because it was the first time I actually learned something that was way outside my comfort zone.</p>
<p>The short version is that <em>Basic Category Theory</em> and <em>Na&#xEF;ve Set Theory</em> left me somewhat overconfident, and that I should have read a formal logic textbook before diving in. I had basic familiarity with logic, but no practice. Turns out practice is important.</p>
<p>Anyway, it&apos;s not like <em>Model Theory</em> was impossible just because I skipped my logic exercises. It was just <em>hard</em>. There are a number of little misconceptions you have when you&apos;re familiar with something but you&apos;ve never applied it, and I found myself having to clean those out just to understand what <em>Model Theory</em> was trying to say to me.</p>
<p>In retrospect, this was an efficient way to strengthen my understanding of mathematical logic and learn <em>Model Theory</em> at the same time. (I&apos;ve moved on to a logic textbook, and it&apos;s been a cakewalk.) That said, I wouldn&apos;t wish the experience on others.</p>
<p>In the process, I learned how to learn things that are way outside my comfort zone. In the past, all the stuff I&apos;ve learned has been either easy, or an extension of things that I was already interested in and experienced with. Reading <em>Model Theory</em> was the first time in my life where I read a chapter of a textbook and it made <em>absolutely no sense</em>. In fact, it took about three passes per chapter before they made sense.</p>
<ol>
<li>The first pass was barely sufficient to understand all the words and symbols. I constantly had to go research a topic. I followed proofs one step at a time, able to verify the validity of each step but not really understand what was going on. I came out the other end believing the results, but not knowing them.</li>
<li>Another pass was required to figure out what the book was actually trying to say to me. Once all the words made sense and I was comfortable with their usage, the second pass allowed me to see what the theorems and proofs were actually saying. This was nice, but it still wasn&apos;t sufficient: I understood the theorems, but they seemed like a random walk through theorem-space. I couldn&apos;t yet understand why anyone would say those particular things on purpose.</li>
<li>The third pass was necessary to understand the greater theory. I&apos;ve never been particularly good at memorizing things, and it&apos;s not sufficient for me to believe and memorize a theorem. If it&apos;s going to stick, I have to understand why it&apos;s important. I have to understand why this theorem in particular is being stated, rather than another. I have to understand the problem that&apos;s being solved. A third pass was necessary to figure out the context in which the text made sense.</li>
</ol>
<p>After a third pass of any given chapter, the next chapter didn&apos;t seem quite so random. When the upcoming content started feeling like a natural progression instead of a random walk, I knew I was making progress.</p>
<p>I note this because this is the first time that I had to read a math text more than once to understand what was going on. I&apos;m not talking about individual sentences or paragraphs, I&apos;m talking about finishing a chapter, feeling like &quot;wat&quot;, and then starting the whole chapter over. Twice.</p>
<p>I&apos;m not sure if I&apos;m being na&#xEF;ve (for never having needed to do this before) or slow (for having to do this for <em>Model Theory</em>), but I did not anticipate requiring three passes. Mostly, I didn&apos;t anticipate gaining as much as I did from a re-read; I would have guessed that something opaque on the first pass would remain opaque on a second pass.</p>
<p>This, I&apos;m pretty sure, was na&#xEF;vety.</p>
<p>So take note: if you stumble upon something that feels very hard, it might be more useful than anticipated to re-read it.</p>
<h2 id="cognitiveexchangerates">Cognitive exchange rates</h2>
<p>When reading Model Theory, I was only able to convert 30-50% of my allotted &quot;study time&quot; into actual study.</p>
<p>This is somewhat surprising, as I had no such troubles with <em>Basic Category Theory</em> or <em>Na&#xEF;ve Set Theory</em>.</p>
<p>(I often have the <em>opposite</em> problem when writing code; this is probably due to the different reward structure.)</p>
<p>I was somewhat frustrated with my inability to study as much as I would have liked. My usual time-into-studying conversion rate is much higher (I&apos;d guess 80%ish, though I haven&apos;t been measuring).</p>
<p>I&apos;m not sure what factor made it harder for me to study model theory. I don&apos;t think it was the difficulty directly, as I often tend to work harder in the face of a challenge. I&apos;d guess that it was either the slower rate of rewards (caused by a slower pace of learning) or actual cognitive exhaustion.</p>
<p>In the vein of cognitive exhaustion, there were a few times while reading <em>Model Theory</em> when I seem to have become cognitively exhausted before becoming physically exhausted. This was a first for me. I&apos;m not referring to those times when you&apos;ve done a lot of mental work and you shy away from doing anything difficult; that&apos;s happened to me plenty. Rather, in this case, I felt fully awake and ready to keep reading. And I did keep reading. It just&#x2026; didn&apos;t work. I&apos;d have trouble following simple proofs. I&apos;d fail at parsing sentences that were quite clear after resting.</p>
<p>I&apos;m still not sure what to make of this, and I don&apos;t have sufficient data to draw conclusions. However, it seems like there are mental states where I feel awake and able to continue, but my mind is just not capable of doing the heavy lifting.</p>
<p>Again, the fact that I&apos;m only just realizing this now is probably na&#xEF;vety, but it&apos;s something to remember before getting frustrated with yourself.</p>
<h2 id="explainittosomeone">Explain it to someone</h2>
<p>As I&apos;ve said before, one of the best ways to learn something is to do the problem sets. For <em>Model Theory</em>, though, there were times when I finished reading through a chapter and was not capable of doing the problems.</p>
<p>Re-reading helped, as mentioned above. Another thing that helped was explaining the concepts.</p>
<p>I explained model theory pretty extensively to a text file on my computer. I sketched the proofs in my own words and stated their significance. I explained the syntax being used. I tried to motivate each idea. (The notes are still lying around somewhere; I haven&apos;t posted them because they&apos;re pretty much a derivative work at this point.)</p>
<p>I found that this went a <em>long</em> way towards helping me track down places where I&apos;d thought I learned something, but actually hadn&apos;t. If you&apos;re having trouble, go explain the concept to somebody (or to a text file). This can bridge the gap between &quot;I read it&quot; and &quot;I can do the problems&quot; quite well. For me, this technique often took problems from &quot;unapproachable&quot; to &quot;easy&quot; in one fell swoop.</p>
<h2 id="dontbookyourselfsolid">Don&apos;t book yourself solid</h2>
<p>I&apos;m pretty good at avoiding stress. I have the (apparently rare) ability to drop all work-related concerns at the door when I leave. I don&apos;t even know <em>how</em> to get stressed by bad luck, especially if I made good choices given the information I had at the time. I get tense in stressful situations with time constraints, as anyone does, but I&apos;m adept at avoiding the permastress that I&apos;ve seen plague friends and family&#xA0;<span style="font-size: 10px;">&#x2014;</span>&#xA0;unless I&apos;ve booked myself solid.</p>
<p>I&apos;ve had a packed schedule these past few weeks. I try to move the needle on at least two projects a day (more on weekends). Even if it&apos;s entirely reasonable to fit all these things into my schedule, I have not yet found a way to avoid the stress.</p>
<p>Even when I know that, if I push myself, I can read this much and write that much and code this feature all in one day, I haven&apos;t found a good way to push myself without pressure-stress.</p>
<p>I&apos;m still hoping that I&apos;ll learn how to move quickly without stress as I learn my capabilities, but I&apos;m not sure I&apos;ve been adequately accounting for the <a href="http://scholar.google.com/scholar?q=adverse+effects+of+stress&amp;hl=en&amp;as_sdt=0&amp;as_vis=1&amp;oi=scholart&amp;sa=X&amp;ei=KE6AUuGTEqerigLmyoHADw&amp;ved=0CCwQgQMwAA">cost of stress</a>.</p>
<p>It&apos;s worth remembering that doing less than you&apos;re capable of <em>on purpose</em> might be a good strategy for maximizing long-term output.</p>
<hr>
<p>There you go. Those are my notes gathered from trying to learn lots of things very quickly (and trying to learn one hard thing in particular). Comments are encouraged; I am by no means an expert.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>