How To Bridge the Insight/Action Gap (Part 2)


Reasons why it’s easier to understand than to act, and what to do about them.

In the first part of this post, I introduced the insight/action gap and offered one macro reason why it’s so prevalent: changing things takes effort, and as long as we’re alive, the evolutionary imperative says “it’s all good”. I also suggested that we should think of this gap less as a singular chasm that mysteriously doesn’t get spanned, and more as the output of a million micro-forces that don’t happen to push/pull in quite the right directions.

In this second part, I’ll cover the first two of the major structural reasons why the gap can come about and then hang about, and make suggestions for how to counter them.

1. Interpretation is easy and satisfying.

The reason: Humans are interpretation addicts. Even just within the cognitive sphere, our daily lives are made up of constant interpretive activities attaching to everything from patterns of light and shadow to football scores to the ever so slight arc of your mother’s eyebrow when you tell her about your travel plans. And all this incomprehensibly intricate activity attaches to language more readily than to anything else. 

Language turns continuous, analogue sounds into discrete segments: the letters of the alphabet that we combine and recombine into words, and the words we amalgamate into sentences.* The digital nature of language makes it extremely easy to copy and recombine and do complex things with it that feel satisfying—like write blog posts about it being helpfully digital, or concoct arguments for why moving house probably isn’t worth it. So there’s a bias to making everything semantic; you can think of language as a funnel that drags everything into itself. 

Talk therapies are a great example of the language funnel in action: Let’s take all the complexities of my existence and turn them into words, often narratively structured and retrospective (here’s the story of how this current life situation came about). All stories are satisfying, and stories about ourselves are even more satisfying than all the others, and neat stories that tie up all the loose ends are the most satisfying of all—even when the satisfaction comes laced with masochism. Therapies that privilege the verbal interaction itself aren’t directly designed to help anyone change behaviours. They’re predicated on the belief that the language funnel is the way to get the problem solved: that if we can just do the funnelling really well, then the insight will follow from that, and then the insight will lead to the actions that change the life in the desired ways. But this assumption is often not borne out in reality. In fact, the insight/action gap is being assumed not to exist. Which is fairly stupid.

[*I owe this insight—beautifully obvious once you see it—to my mother, Sue Blackmore. It’s from a chapter draft for a book I hope she’ll complete and publish, with the working title God’s Memes.]

The response: Make a habit of taking the next step whenever you come to a conclusion in words, whether that’s when journaling or in conversation with someone or just following a train of thought. For example, you conclude that self-esteem issues are part of what makes you keep not asking for that promotion. OK, so you don’t stop there. Instead you decide on a single action you can take that may even slightly reduce the effects you’ve identified, e.g. get yourself a self-help book on self-esteem, try out two exercises from it, and see whether they make any difference. 

Think of the verbal conclusion as a hypothesis to be tested, not a truth to sit back into. If the action you try out does nothing, maybe your conclusion was off-base. If it does something unexpected, ditto. The point is not to get invested in your own right-or-wrongness, but to open up avenues for change that will bear some relation to the original hypothesis, and that will be coming about precisely because you dared to treat it as a starting point not a stopping place.

2. Intellectual change is overvalued relative to practical change.

The reason: The language funnel is propped up by the way that as societies we often privilege verbalized intellectual outputs (exam scripts, books, TED talks, even fluent conversational insights into one’s own problems) over practical skills and transformations that may sound trivial when articulated in words (e.g. learning to eat in a way that suits your body and life, having a romantic relationship that feels wonderful). This persists even though the practical things typically have much more direct and/or profound consequences for quality of life. 

In many areas of life, changing practical actions ought to be seen as urgent and important, but often isn’t, in part because the “you need to work out what the root causes are first” line dominates—e.g. what’s really the origin of these relationship problems you’re having? Behaviour change can lead to mindset change just as much as the other way round (I’d argue much more readily), but the societal bias means that the action-to-attitude half of the causal loop is often neglected. This is deeply unfortunate, given there are many factors that make it easier to get any process of change kicked off with a tweak to what you do rather than attempted alteration to how you think. Telling yourself to feel differently about something you’ve been feeling a particular way about for a long time (e.g. feeling less angry towards your partner when they do that thing you can’t stand) is typically harder than doing something relevantly different in your day (e.g. asking them how their day went). And of course, focusing on thinking—even if we  do successfully do it differently—can very easily just pile up more and more thinking that never translates: feeling less anger may or may not translate into your relationship really blossoming again.

The response: A simple tactic to uncover and counteract the default valuation biases we often have as individuals is firstly to rate the perceived significance of an intellectual conclusion (e.g. this habit that’s obviously creating problems for my relationship—maybe something about flirting with other people, say—represents a part of me that isn’t getting expressed any other way) immediately after reaching it, and then set a calendar alert to do so again a month later. Then, second, do something similar with a practical change that relates to this conclusion and that you also try out (e.g. finding some other way to express this part of you, or something that may contribute to laying it to rest—maybe making time for something else you find exciting, with or without your partner): how significant does the change feel a) when you first decide on it, b) just after you’ve first done it, and c) a month after you first did it? When you’ve got both your 1-month ratings you can look back at both sets, make some notes on what patterns you notice, and draw some conclusions about expected versus actual significance.

3. Thoughts cost less than actions.

The reason: As I pointed out in Part 1, actions come with resource costs. Of course, thoughts do too, but they come a lot cheaper. So if you can get away with just doing some thinking, inferencing, pattern-matching, then you get the interpretive satisfaction and perceived significance outlined in points 1 and 2 without the costs of actually doing anything.

The trouble is that cognitive dissonance naturally arises when you reach a convincing conclusion you don’t act on: You’ve now set up a conflicted situation in which your knowledge and actions are at odds with each other, e.g. your belief that rewarding jobs are good things for people to have versus your demonstrably not acting to get yourself one. And cognitive dissonance is unpleasant. So to lessen it to a tolerable level, what you need is not just to do the low-cost thinking, but also to give yourself good enough reasons for not changing anything in response. And this is where cognitive dissonance reduction kicks in.

Dissonance reduction is a highly honed human skill, and it can work wonders, as long as the miracle you want is to reduce discomfort at the cost of preventing problem-solving. This is the tradeoff (and it’s one of the most brutal there is) because, by definition, you’re persuading yourself that the problem isn’t as serious as it might seem, or that the costs of action are too high or its probability of success too low, or whatever, in order to make yourself feel better about inaction. If you convince yourself that it was always inevitable that you would end up in this kind of dull and low-paying job and that now after 20 years it’s clear that you’ll always have jobs like this, for example, or if you tell yourself that having a job you really look forward to getting back to on a Sunday evening sounds idyllic but that no one really loves their work that much, do they, then that extra little bit of cognitive activity saves you the effort of actually trying to solve the problem.

The response: I guess there are three main options here: 1) improve the (perceived) cost/benefit ratio for taking action, 2) worsen it for thinking, or 3) block the cognitive dissonance reduction. So you might 1) devise very small specific problem-solving actions to take (e.g. 30 minutes’ online research of vacancies in your area), or 2) tell someone else your cognitive conclusions so it’s not just you who knows you’re not doing anything about them, or 3) identify a common dissonance reduction tactic you tend to employ (e.g. trivializing or distracting yourself from the costs of the conflict) and invent a way to impede it (e.g. starting some daily journaling where you make a point of giving yourself space to think about what your professional dissatisfaction is actually doing to your life and your family’s).

4. Thoughts can be one-offs; actions usually can’t. 

There’s a particular kind of structural mismatch between insight and the corresponding actions that contributes to making the former easier. With insights, although they may take time and effort to formulate, once you’ve achieved one, it often has a “one shot and you’re done” feel to it. I might conclude, say, that the ways I keep spending my weekends, which make little sense given my explicit life priorities, reflect a profound need I hadn’t previously acknowledged to myself. Maybe wearing myself out with marathon training despite saying I want to start a business reflects a need to give myself excuses for not achieving as much as other people seem to think I could. Once I’ve reached that conclusion and it has a certain minimum satisfaction rating (i.e. it makes enough sense of enough things without making too many implausible assumptions), I’m done; I’ve got my payoff. Conversely, there are very few one-shot actions that get you high satisfaction in one go. Truly conclusive actions are rare, and many pivotal actions (leaving your partner or your job, starting a business or getting married) are merely the starting point for the multiple repetitions with variation and progression that are needed to get you what you want (e.g. a great career or relationship).

The response: Seek out actions that have as many of the benefits as possible of one-shot solutions. One way to do this is to optimize a new action to have immediate payoffs. So you might make a change—however small—to your weekend routines with or without your family that instantly feels better and more aligned with your priorities or values than the previous norm. A related option is to design the new action to bring about cascades of other changes. For example, stopping watching TV every evening after dinner:

–> helps you do more varied things with your partner, which 

–> makes you both feel like you’re investing more into the relationship and getting more out of it, which 

–> makes you feel more able to express little things that are on your mind before they turn into enormous problems, which 

–> makes you both more relaxed knowing you’ve got a solid communicative foundation, which 

–> helps sex happen more often and feel better, which

–> improves everything! (and is a great substitute for telly)

5. Insights tend to apply at the wrong level of detail.

The reason: This is one of the biggest of the big. Verbalized intellectual insights tend to operate at a high level of generality, e.g. my employment situation revolves around my lack of self-esteem, or, slightly less vaguely, around my unwillingness to demand more than the merely tolerable for myself, while tiring myself out trying to help others live better. This type of insight might generate something that feels like a plan of action: I need to bolster my self-esteem / start being more assertive / get a new job / start listening to my body’s needs / eat more / run less. But even the last few of these, which sound the most practical, aren’t particularly useful as direct guides to action. I’m struck by how often you can think you’re making a plan when actually what you’re doing is sketching out a hazy aspiration. What you really need to make it likely that you’ll take and sustain meaningful action is something more like “I’m going to try out 15-minute Mon/Wed/Fri evening yoga sessions for 2 weeks after getting changed from work and before making dinner; then use questions x, y, and z to assess how well that routine is working for me relative to going for a run in the early morning; then decide whether to keep them going or adjust or try something completely different”.

The response: In any life domain where you’ve identified that stasis is the default (e.g. physical wellbeing, relationship, career), practise developing your insights into guides to action at a useful level of specificity. Usually—because our minds resist specifics at every turn—this means the more the better, including contingency planning (e.g. if I miss a weekday evening I’ll do a weekend morning instead, or I’ll do nothing different and just get back on track on the next scheduled day) as well as “how will I know whether I’ve succeeded?” criteria.

6. High-level fatalism is infectious; “this isn’t too bad really” is endemic.

The reason: Believing certain things about the structure of the universe and your place in it makes effective action more or less likely. If, for example, you believe that there’s a divine plan and your suffering is happening for a reason, you may be less likely to do anything about it. Ditto if you believe everyone else’s suffering matters more than your own, a belief that often comes in “I don’t deserve anything better than this” clothing. Even if you don’t have any obvious high-level beliefs blocking your inclinations to act, the belief prerequisites for action may be missing: Seeking and taking satisfaction in intellectual insights requires you to believe only that the universe makes some sense; taking action, meanwhile, requires you also to believe yourself capable of change and deserving of it (or at least not undeserving, or alternatively indifferent to the entire concept). As an onlooker to another individual’s suffering, one of the greatest frustrations of all is often the “this isn’t bad enough to take seriously” response. It’s the easiest thing in the world to find examples of other people who are worse off than we are, and then to use them to justify our inaction. And maximum bystander frustration may correlate with low-to-medium suffering, since there the downplaying is easy but the solutions probably are too. After a chat with someone the other week, I jotted down “‘I can’t complain’ probably means you should!”, and maybe that sums it up.

The response: Changing high-level beliefs is one of the most effortful cognitive things we can do, because they tend to be inculcated early on by a thousand powerful sociocultural forces, often in the form of religious packaging that’s had thousands of years to evolve (Blackmore, 1999, Ch. 15). It can be done, though, and recognising that it’s necessary for achieving something else personally meaningful (e.g. a professional or relationship change) can be a catalyst to a liberating loss of constricting ideology, even if the path there is often traumatic. If what’s needed is nothing quite as formalized as apostasy, other ways of switching up high-level priors can work too, like a changed physical and social environment or mind-altering substances and practices of one kind or another.

7. The gap makes itself look wider than it is. 

The reason: Alongside deferring action, another thing humans are good at is over-generalization, particularly with a negative slant. As I pointed out in my own resignation story in Part 1, quite likely your “years of doing nothing” aren’t actually that, when you observe more carefully and assess more fairly.

And maybe you have made major attempts in the past either to shake things up or to leave the relationship or both. Maybe those efforts have almost worked. And in between them, maybe you’ve made quite a few smaller forays into altering the details. In this sense, “doing nothing” is normally not literally doing nothing. In reality it’s probably simply “not enough”, not nothing. If you consider any of your “really change something” efforts followed by the lapses back into the status quo that came after—maybe that month or two when you tried a trial separation or had those couples counselling sessions—the near-miss structure can often be particularly clear. Think of all the little or not-so-little things that contributed: things like precisely how much work stress you were also under at the time; or just how well the counsellor’s approach seemed to gel with you versus your partner; or precisely how open you’d been with your partner about what was really bothering you right then; or exactly how much your children seemed to be noticing your arguments; or how often you were able to talk to your close friend about things during that phase; or how long and how enjoyable that work trip was that you took at the critical time; or where you were in your cycle on that critical weekend where everything blew up; or less readily detectable details like how your mood had been affected by the season or your diet… If just enough of these thousand details had been just enough different, that effort would have been the one that worked.

The response: When you pretend you haven’t done anything at all, you prevent yourself from learning from what you have done. Instead, you could do an audit of your work in this domain so far, mapping out rough dates and durations of things you tried and then asking questions like “what went right that time?”, “what single difference could have helped it keep going right?”, “what does that tell me for this time?”. Thus, by acknowledging that the gap has never been as enormous a gulf as you might otherwise have pretended, you make it much easier for yourself to bridge it for real this time.

Chasm-jumping by Jolyon Troscianko.

8. The magic trick is just that.

The reason: Despite appearances, it’s crucial to remember that there is no mysterious abyss between understanding and doing. There are just many weighted probabilities for or against that first small action today and its repetition today and the day after and its variation next month. Insight is a product and change is a process, and the product needs to be treated as valuable primarily through its capacity to unleash the process.

Ultimately, actions are self-correcting and thoughts aren’t. You can labour under a cognitive misconception for a lifetime quite blissfully ignorant—but if you really take the corresponding actions, the world is much likelier to correct you. A prime example that springs to mind here is weight loss. The conclusion that not having achieved this is the source of one’s problems and that achieving it would solve them is among the most widespread delusions currently in circulation. Of course, one sneaky survival trick this stupid meme has is that even if you do act on it, even for many weeks or months, lasting weight loss is usually not achieved, which keeps you thinking until your deathbed that your life would be way better if it were. But the belief now also comes equipped with a convenient plethora of practical and socially validated avenues for trying to act on it, and the mere believing has become so ubiquitous as to be almost invisible: of course everyone wants to weigh less, so let’s not even think of that as a question, let’s all just fixate on what the best gimmick is for making it happen.

So, bridging the gap is just as important for a false belief as for a true one. But after that, so is pausing, looking around you, and asking how life on this side of the chasm (or the puddle) compares to life on the other. If it’s worse, or not better, we can stride on into the further reaches of the territory where insight and action pose less and less as polar opposites. 

The response: Practise anti-magical thinking and doing whenever you can, by honouring in your everyday life the truths that there are always options, that you always learn by trying something different—and that the insight that comes from action is (when you really pay attention to it) the kind that really counts.

How To Bridge the Insight/Action Gap (Part 1)


Part 1: An introduction to knowing and not doing

How is it that you can know exactly what to do and yet you still don’t do it? 

Years can slip by in the state of knowing and not doing. Decades can. Lifetimes can.

Knowing and not doing is a state humans often dwell in. It can take the form of career or relationship stagnation, as well as arising at the everyday micro level of clearing out those drawers or uninstalling Instagram. Humans are great at deferring things—and the amount of insight we have into why we shouldn’t often seems not to make a lot of difference.

Evolved to do nothing

The insight/action gap—a specific variant on the general category of procrastination—might seem like a grand mystery about human nature: How can creatures like us so often know so much and do so very little? But if you want a grand answer, you probably don’t have to go much beyond the simple evolutionary pressure that says: I’m alive, it’s all good, change nothing. Changing things costs resources, and any use of resources could turn out to be a waste. Changing things often involves increased risk, or just less easily estimable risk. And so the thought/action gap naturally opens up.

Thinking and doing don’t cost the same. Minds get easily sucked into hyperactive patterns of imagining possible actions, and that hyperactivity tendency exists because it can be helpful for keeping us alive, without costing too much. Thinking “too much” often pays off, especially when the thinking involves mindreading about the possible actions other people (e.g. mates, competitors) might be about to take. Stopping before actual action is a useful default imperative in a context where resources are few and survival is precarious.

Obviously, that’s not us now. Excess resource availability at the individual level arguably now causes more problems than scarcity in post-industrial societies, and mortality isn’t mostly the problem, misery is. Risk is more amorphous now (because we care about more things that aren’t just life versus death), and safety nets may be slightly more prevalent than in our evolutionary past. So it makes sense that this long-evolved prior, “change nothing”, wouldn’t be serving us very well anymore.

So that’s a grand answer for you. What about less grand answers? Beyond the basics of that energy-saving “do nothing” default, I think the micro-answer, or rather the aggregate of them, is really the point here. In general, action or no action comes down to finely weighted probabilities. 

The knife edge of pros and cons

Let’s take the example of a difficult relationship. You have years’ worth of accumulated insight into the problems. Maybe the insights have been multiplied and refined by therapy, individually and/or together. Over many weeks and months and years, you let things stay more or less as they were because you were prioritizing other things, or didn’t know quite where to start, or felt scared to. Just think of the millions of micro-weightings that have contributed, over the years, to helping inaction and no-change win out, even just infinitesimally, over the actions that could have genuinely either meaningfully improved the relationship or liberatingly ended it.

This kind of pattern doesn’t instil itself only in our personal lives; careers can be full of it too. Last summer I resigned from my university job designing and running a writing programme for grad students and postdocs, after three years of work that was radically underpaid and undervalued. I should have resigned sooner; probably I should have realized at the very beginning that the contractual terms were inappropriate. 

I didn’t do nothing all those three years. Around the end of year 2, I applied to have the post regraded to a higher salary band, and was refused with some empty words about how much my work was in fact valued. That was deeply frustrating, but obviously I still spent a year not-resigning. I also spent that year not doing other things that would have been sensible for paving the way to what came after. 

The things that kept me there doing almost nothing to change the situation weren’t total idiocy on my part; the brakes to change were effective precisely because they were partly good reasons for keeping on doing the same thing. How much I was learning from doing the work was a case in point. It was great to get the chance to design a whole writing programme from scratch and to refine it by witnessing how it helped and didn’t help the students and researchers it served. But it would also have been appropriate to do maybe a year’s learning under those contractual conditions, and then take my learning somewhere it would be more meaningfully valued by an employer or institutional client. What made sense initially stopped making sense long before I did anything to reflect that fact.

Another misleadingly good reason that kept me not taking action was the knowledge of being helpful to a large number of other people—the students and postdocs who without this programme would have had as little formal support for building structures and confidence for their academic writing as their predecessors. It was really hard suspecting (rightly, as it turned out) that the post wouldn’t be refilled and that a lot of what I was doing would just not happen if I left—and that therefore a couple of hundred people per year would be worse off, if I decided that my time was worth more than this. A very common structure for inaction is convincing ourselves that other people matter more than we do—even though when we’re getting exploited, that probably means other people are too (e.g. the next person who does this job). This instinct is closely related to the one that says “my problems aren’t bad enough to take seriously” (which I discuss in the eating disorder context here) and “here’s good enough” (explored via the concept of optimizing for what we care about but getting trapped in deceptive “locally optimal solutions” here).

Coming back to the evolutionary bias against risk—the inaction wasn’t really about risk in any tangible sense. The pay was so low that even a tiny amount of extra coaching or other work would easily fill the shortfall, and the CV/prestige points had been earned long ago. But I suppose there was still a vague lurking sense of “but I don’t know what else changing this will change”—a defaulting to “at least I know how life is with this in it”.

Bridging the gap

So, what finally made me do it? Actually, a major spur was listening to my mother and stepfather talking about upping the hourly rate for their personal assistant, cleaner, and gardener, and realizing that all of them were already earning more than I was. Maybe that doesn’t reflect brilliantly on me, but the realization that 10 years of higher education and another 10 of postdoctoral research plus academic training experience had—if I let this situation continue—basically increased my earning capacity by zero was a serious moment of “OK, this can’t carry on”. 

In the end, everything crystallized around the little phrase “opportunity cost”. I had a sharper and sharper awareness of all the things I couldn’t do—things that earned more money than this, and things that were not at all about the money—because of the time and energy I was giving to this. It was less and less ignorable how many hours a week (all tracked on my timesheet) were sunk into this, not for no reward, but for personal (both emotional and material) rewards that felt increasingly out of whack with the degree of value that was being derived by the people I was helping, and by the material recompense it was translated into. This was the institution’s fault, and essentially I spent three years trying to persuade a venerable university that writing support for humanities scholars needs taking seriously. I failed, as most people (though not all) have failed at most things (though not all) they’ve tried to change in this university. It was time to cut loose, well before that third summer—but by then at the very latest.

And so I sent my resignation email to my manager, and there was sadness in that ending. I felt angry that I had to stop doing something that was so useful to so many students just because the pay and prospects were so poor. I felt a little tearful when I ran the final writing event and was unable to tell the participants what would be happening after I’d left. I felt fairly cynical about the prospects of this being a protest resignation that actually does something—though I put a lot of work into that email, and I think it got through a little bit.

There was also great relief, and a sense of power, in doing this thing that had for so long needed doing—but had needed doing urgently only for my own sake. My last day at work happened to be my mother’s 70th birthday, so it was nice to echo her threshold crossing with my own version.

It took me a few more months to really start seizing the opportunities that a tiring, complex, though personally rewarding job had been robbing me of, but in the end I began to. 

Do I wish I’d done all this a year sooner? Maybe. I think I really needed to become certain that this was the only sensible option—and it’s frightening how long it can often take us to conclude we really are sure enough.

It’s not about not-knowing

Anyway, the point is, there are all kinds of factors that keep us accepting things we shouldn’t—and ignorance is often the least of them. In this case, certainly, I can’t blame ignorance. I signed that contract in my right mind (though distorted by many years of being on similarly bad university salaries); I submitted the timesheets and the payment totals every month; I knew (roughly) what the resignation process was. The basic structure of knowing what to do and why and not doing it was in place throughout. 

So, there was no magic to any of this, and no mystery about it at the time, really. There were just all the crucial prosaic everyday details that feed into answering the question: Does this thing that isn’t working very well get dislodged now, or does it survive another month, year, or decade?

In the second half of this post, I’ll offer you eight structural reasons why the insight/action gap comes into being. And, to stave off the irony of merely understanding the insight/action gap better, for each I’ll also give a suggestion for how to actually bridge it.

You can read on here. Or if you’re pausing here, I invite you to pay gentle attention to when and how you’re getting sucked into thinking and talking versus translating this into acting.

All-Inclusive Resorts and Dietary Self-Regulation (Part 3)


7: Rendering your limits irrelevant

In this series so far, we’ve used the all-inclusive vacation model to illuminate recovery from a restrictive eating disorder: from adhering to a blanket rule of “as little as possible” to incentivizing “more than necessary.”

We’ve reviewed evidence suggesting that “dietary restraint” is counterproductive, atrophying the skill of eating without top-down rules, and often resulting in the opposite of what’s intended (eating lots once any eat-less rule has been broken).

I’ve suggested that the difference between applying rigid rules and “self-regulating” amounts to whether the feedback is meaningfully incorporated into the decision-making system: whether our actions adjust in response to other relevant stuff that’s happening in ourselves or our environments. In this penultimate part, I offer some pointers for how to tell whether you’re operating with meaningful feedback or not and how to start if you’re not.

In a way, the “self” prefix in self-regulation is a misnomer. There is no little homunculus me sitting in my skull, pulling the strings. “Me” includes innumerable inbuilt signaling mechanisms that evolved to guide human eating and movement, plus all kinds of evolutionarily newer factors like social conformity cues stretching far beyond the individual organism.

And, of course, there’s no hard line at the cellular level between these implicit “rules” and the explicit numerical ones we might also cognitively generate—it’s all just stuff arising from the activity of our neurons and all our other cells.

Someone with anorexia knows better than most people—someone in recovery from anorexia, all the more so—just how automatized the “top-down” rules can get, how seamlessly integrated into the self they can become. They come to pass themselves off ever-so-nearly convincingly as “what I really feel like,” e.g., I’m not hungry, I don’t even like ice cream, I feel better this way, ugh, bacon fat is so disgusting. 

Operationally, though, there is that key difference between the rules that work and those that don’t: the former incorporate feedback, the latter don’t. And if that structural difference is hard to spot (because we’re all experts at self-deception), there’s a simple difference you can glean from the outcomes.

If you’re in the ambiguous not-very-ill-but-probably-not-entirely-fine zone (aka quasi-recovery) and want to tell which method you’re operating by, the million-dollar question to ask is: Do I ever eat “too much” or exercise “too little?” More precisely: Do you ever get to the point where all the signals are straightforwardly saying “I don’t want to eat anymore” or “I really want to move just for the sake of it” because you’ve ignored a subset of the signals that were saying you ought to stop eating or start exercising much earlier on?

If the answer is yes and you feel fine when that happens, great, you probably don’t have a problem. If the answer is no, or yes, and you feel awful when it happens, you probably do.

In the dietary restraint experiment described in Part 2, this is presumably the point the high-restraint (“dieter”) participants got to at the end, when they were genuinely full of ice cream. Or maybe they just stopped because their rule-breaking was freaking them out enough that it felt better to stop than carry on. Or maybe some of them actually got to the end of the tub and would have kept eating if they hadn’t.

If these people started making a consistent habit of eating “too much,” that would be a route for them to switch high dietary restraint for low. Contravening the signals that constituted the blanket priority of high restraint would allow them to start letting other measurements take up the slack. 

How do you actually make a habit of rule-breaking, though? When the rules are as pathologically powerful as anorexia’s, you need serious encouragement to transgress. There are plenty of kinds of encouragement, but the simplest and best is often to change the explicit “do not transgress” threshold itself.

Do the thought experiment: If you have anorexia and you tell yourself, “OK, I now have no limits, I can eat as much ice cream as I like,” what happens? Possibly, you buy yourself a carton, pick up a spoon, and never look back. Probably, you do nothing.

Having no limit is meaningless because you have no idea how to operate without a limit to butt up against: with hunger, with desire, with the self-satisfied sense of being superior to all those people who eat when they’re hungry and stop when they’re not. Your number is how you know when to stop eating, and it is therefore what gives you the confidence to start.

Given this is how your eating operates—given you know exactly what to do with upper limits—it’s far more sensible to make your first step towards losing all the limits by just upping the limit. That, you can work with. 

Whether you’re acutely ill or semi-recovered, if you’re using numerical limits you could do worse than simply increasing your current ones by some non-negligible margin and seeing what happens. (Or the opposite with an exercise compulsion: gradually reduce the minimum per day or per session.)

What happens, if you’re patient and determined, is that in the end you’ll reach the point where you’ve raised the limit high enough that you can’t actually get there (or, with exercise, where it’s reduced to zero)—in other words, all the other ways of deciding when to stop (or start) have taken over.

In other words, you’ve booked yourself an all-inclusive vacation with limitless food and drink and no need to do anything much. All the other mechanisms that show you how to stop and start have been allowed to start doing their job again because the “300 calorie max.” or “30 minutes min.” (or whatever other) rules have been stopped from pre-empting all of them. 
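For the structurally minded, the raise-the-limit process can be caricatured in a few lines of code. Everything here is invented for illustration (the calorie numbers, the weekly step, the satiety range, the function names); the only point is the structure: keep raising the cap until satiety, not the cap, is what ends the day’s eating.

```python
import random

def day_intake(cap, satiety_point):
    """Eating stops at whichever comes first: the rule's cap or genuine satiety."""
    return min(cap, satiety_point)

def weeks_until_cap_irrelevant(start_cap, weekly_step, satiety_range=(1800, 2600),
                               max_weeks=52, seed=0):
    """Raise the cap every week; return the first week in which the cap never
    decided anything (satiety always kicked in first), plus the cap's value then."""
    rng = random.Random(seed)
    cap = start_cap
    for week in range(1, max_weeks + 1):
        cap_bound = any(day_intake(cap, rng.uniform(*satiety_range)) == cap
                        for _ in range(7))
        if not cap_bound:
            return week, cap  # the rule has become structurally irrelevant
        cap += weekly_step
    return None, cap
```

Calling, say, `weeks_until_cap_irrelevant(1200, 100)` traces how the gradual raising eventually leaves the cap with nothing to do: the day is ended by the body, every day, and the number has quietly stopped mattering.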

An alternative or complementary strategy is to turn your maximum (for food) into a minimum or your minimum (for exercise) into a maximum. This gives you an additional explicit imperative to act differently and a different route to weakening the previous rule by ignoring it.

Finally, here’s another way of defining the difference between self-regulating with feedback or not: Do I have only one method for deciding when to start/stop eating (or exercising), or do I have many? The more methods you have, the less you’re probably aware of “having” any at all because they’re all kicking in as and when appropriate: hunger/satiety, the appeal of this specific food, social context, today and tomorrow’s activity levels, how much time you have, what happens to be in the fridge, etc. 

If you try out any of these structural encouragements to develop self-regulation, remember that you cannot set your numerical open-loop limit “too high,” because the whole point is to stop operating by such rules altogether.

You won’t get “unnecessarily” fat by upping your limit too high because “too high” is the structural prerequisite for the rule’s irrelevance—which is where you can really start living. 


8: Beyond eating disorders

In this series, we’ve strayed quite a long way from the Playa del Carmen resort we started at, via milkshakes and open-loop versus closed-loop systems. In the penultimate part we circled right back to it: back to how raising numerical limits lets you rediscover what it’s like to self-regulate in a way that works and feels great. Here are the key takeaways from this series on “self-regulation”, or your body knowing what it’s doing:

  • Self-regulation can’t happen without feedback. The system has to be closed-loop, not open-loop.
  • Self-regulation can’t happen in the presence of a strong external regulator that overrides feedback (e.g., a rigid, exception-free rule) and makes the system open-loop.
  • Open-loop regulation in eating disorders is not only ineffective by all meaningful metrics (health, happiness, etc.), it also misses out on even the benefit of not needing to measure, because measurement is going on all the time, just not of anything useful, and not leading to meaningful adjustments.
  • An external open-loop regulator (a rigid rule), once habitual, can’t typically be removed just by declaring it no longer exists or applies.
  • Instead, we need to devise a process that makes the external regulator unable to operate. For instance, if its job is to impose a numerical limit (e.g., on calorie consumption), we up the limit so high that it becomes meaningless. Alternatively, if you’re so rule-bound that quantifiably increasing your freedom of movement results in no new movement, you can force the change by converting an upper limit into a lower one.
  • Once the limit is high or low enough that other regulators (recalibrated satiety, fatigue, or any of the other richly complex signals that constitute “(not) feeling like it”) can kick back in, the external one will be rendered superfluous—or rather, its historical superfluity will be exposed.

This is the core of the story this series tells about eating disorders. Then, there are some interesting wider speculations that these structural principles around dietary restraint and self-regulation could lead us to. They take us out into eating and exercise habits, weight control, and health and happiness beyond the clinical realm.

It’s easy to argue that the evolved systems for hunger/satiety and bodyweight regulation that used to serve humans well no longer do. The familiar argument is that because there’s now so much more readily available fat and sugar on offer and so little need for most people in post-industrial societies to do anything physical to survive, we need new ways of keeping ourselves regulated. The increasing prevalence of obesity (and metabolic syndrome more generally) is typically cited in support of this argument.

The argument that we need new regulation methods for energy intake/expenditure is the standard justification for introducing more and more open-loop regulators into the spheres of eating and exercise. More of these are pushed into our attentional field every year, via governmental and medical guidance on calorie intake and weekly minutes of exercise (such as the CDC’s recommended 150 moderate minutes per week or the NHS’s ridiculously arbitrary “5 a day”), supported by all the conspicuous numerical indicators intended to help people apply these rules: nutritional information and traffic lights on food packaging, calorie counts on menus, calories-burned estimates on treadmills and fitness trackers, etc. The demonstrable failure of all these initiatives to make any significant change of the type intended (see Piwek et al., 2016; Jo et al., 2019) seems to lead only to yet louder calls for more of the same. 

On the logic proposed in this series, however, if external open-loop regulators are the problem, not the solution, one would expect that the more widely they’re promoted, the worse the situation will become. On a population level, their spread will increase the prevalence of poor bodyweight management thanks to actively encouraged over-reliance on open-loop regulators that (as the dietary restraint literature cited earlier in this series suggests) don’t work. Given that this appears to be what we’re seeing, despite few significant changes in food production or availability in post-industrial nations over the past decades, there seems to be some evidence in support of this counter-hypothesis. 

If there’s any truth in it, this shifts the causal burden for increasing the prevalence of obesity from modern changes in diet/exercise incentives to the misguided responses to these changes at the level of standard public health initiatives, as well as individual recourse to diets and tracking technologies. In this story, the rise of the extraneous regulators—a great surge of them, ignited by the popularization of fat-reduced diets from around the 1980s onwards, and catalyzed by the tech explosion on which all forms of (self-)quantification easily piggyback—is what has made and will continue to make people fatter in the long run (Jakicic et al., 2016), not what is valiantly keeping a lid on the “obesity epidemic.”

If this alternative story has any merit, even as a hypothetical, then what we need to do is switch the public health focus away from all the numerical distractions that prevent people from self-regulating effectively (see this New York Times piece for a recent overview), and towards encouragements to optimize closed-loop self-regulation. This might involve training in fundamentals like interoceptive awareness, eating speed (Troscianko and Leon, 2020), power/skill-oriented movement, and many other easily unlearned instincts. Who knows, maybe treating ourselves, and being treated by our governments, a little more like competent adults might reveal that that’s what we were all along?

One final meta-point to wrap up: I love how intellectually generative the two weeks in the Mexican sun turned out to be for me. This series is just one of the things that came out of it, along with a post on reasons to dine out alone, plus lots of ideas for course design that I spent fun time scribbling about on my balcony or by the pool or ocean. These things happened precisely because there was no pressure on any of it (if I’d aimed to write two long blog posts and create outlines for a writing support program and a mind/body course, that would have been a great way to wreck the vacation), and because the everyday “shallow work” had been removed to make space for things that weren’t urgent but were meaningful. 

Deep work can happen when all the usual shallow demands are lifted and idleness is embraced, just as real eating and movement can happen when all the numbers are lifted. The good stuff takes energy, and the energy comes from fuel and from rest—of the kind you get only when you know your body well enough to give it what it needs.


Jakicic, J. M., Davis, K. K., Rogers, R. J., King, W. C., Marcus, M. D., Helsel, D., … & Belle, S. H. (2016). Effect of wearable technology combined with a lifestyle intervention on long-term weight loss: The IDEA randomized clinical trial. JAMA, 316(11), 1161-1171. Open-access full text here.

Jo, A., Coronel, B. D., Coakes, C. E., & Mainous III, A. G. (2019). Is there a benefit to patients using wearable devices such as Fitbit or health apps on mobiles? A systematic review. The American Journal of Medicine, 132(12), 1394-1400. Paywall-protected journal record here.

Piwek, L., Ellis, D. A., Andrews, S., & Joinson, A. (2016). The rise of consumer health wearables: Promises and barriers. PLoS Medicine, 13(2), e1001953. Open-access full text here.

Troscianko, E. T., & Leon, M. (2020). Treating eating: A dynamical systems model of eating disorders. Frontiers in Psychology, 11, 1801. Open-access full text here.

All-Inclusive Resorts and Dietary Self-Regulation (Part 2)


5: Milkshakes and dietary restraint

In the first part of this series, we explored the general idea of linking the “more than you need” structure of an all-inclusive vacation with the “more than you’ve always convinced yourself you need” structure of successful recovery from a restrictive eating disorder. I also described the anorexic version of such a vacation (think nocturnal routines and lots of food hoarding), the pseudo-recovered version (full of “healthy eating/exercise” rules and plenty of body comparison) and the fully recovered blissed-out version that was my reality a few months ago.

You can do the thought experiment yourself, if you like: If you were to book a couple of Caribbean weeks for yourself tomorrow, what would your reality be like? What does that tell you about what you could be prioritizing right now?

Meanwhile, let’s take the next step in the self-regulation part of the argument. All-inclusive resorts give us one angle on what it means to make your own decisions: to self-regulate in ways not constrained by blindly applied rules. Some interesting experiments carried out in the 1970s gave us another. They investigated what “dietary restraint” (DR: using self-control to try to limit one’s food intake) does to people’s eating habits when they’re presented with something they like eating (ice cream) after they’ve already consumed a milkshake they wouldn’t normally.

The way DR survives—and is proliferating—as an approach to eating is that it promises you’ll eat less, or make better food choices, and so end up with a slimmer (read “better”) body. Anorexia nervosa (AN) is DR on steroids. AN gets you thin. And it keeps you not eating very much—until it doesn’t.

Many people diagnosed with AN progress to bulimia nervosa at some point (and/or progressed to AN from something else), and transitions between diagnostic categories are common in many directions and at many phases of illness and life (Schaumberg et al., 2019).

Just as missing a meal often leads to later eating more than what you missed, so chronic restriction often leads to chronic over-eating—specifically in a way that feels out of control and may culminate in deliberate vomiting or other negation/compensation attempts. Of course, this leaves the others: the people who don’t shift from anorexia to any other eating disorder. They either have lifelong anorexia (which is not a victory for the human host), or they recover fully and permanently—a victory for the human and not for DR (or AN).

In the non-clinical context of people who exert higher or lower levels of dietary restraint, the same kind of picture emerges: restraint is not just fragile; it creates fragility. In Herman and Mack’s (1975) experiment, participants were asked to consume either no milkshake, one chocolate milkshake, or one chocolate and one vanilla milkshake in a fake taste test acting as a pre-load.

Then everyone was presented with three tubs of ice cream (chocolate, vanilla, and strawberry) with another taste survey and invited to “taste” as much of each as they wanted in ten minutes, supposedly to provide accurate taste ratings. The researchers’ predictions were as follows:

subjects required to consume two milkshakes in addition to their daily quota of calories would be in a position of having exceeded the “permissible” limits of consumption for a restraint-governed daily intake. Normally restrained subjects might be expected temporarily to give up the attempt at restraint, once they had come to perceive themselves as having already “overeaten”.

If such subjects had not consumed a milkshake, their normal restraint would remain intact. Highly restrained subjects, then, were expected to consume more ice cream in the two-milkshake condition than in the zero-milkshake condition.

By contrast, subjects who are normally not restrained would not be “triggered” by the excessive milkshakes. Such subjects should behave internally, eating less ice cream after a larger milkshake preload. For both types of subjects, the one-milkshake preload was expected to have an intermediate effect. (p. 650)

The data were a strikingly good fit for the predictions. The researchers found that the low-restraint eaters would simply eat until they were full, whereas the high-restraint eaters who had already had either one or two milkshakes ate more ice cream than any of the others:

Quite clearly, the data conform strongly to the predicted interaction. High restraint subjects consume more ice cream after the milkshake preload than after no preload at all. Low restraint subjects consume decreasing amounts of ice cream as a function of the size of the preload. (p. 654)

One milkshake seemed to be enough to eliminate restraint in the high-DR participants, presumably through the “what the hell” disinhibition effect that leads to counter-regulation. The “dieters” who drank one or two milkshakes kept eating because they’d already failed by breaking the rule, so they might as well fail big. (This “what the hell” effect is the diet-specific version of the all-or-nothing fallacy so pervasive in eating-disordered forms of thought.)

All results controlled for “acute deprivation” by assessing time since last eating and roughly how many calories were eaten then. And it’s worth noting that there was no difference between the “normal weight” and “obese” (>15 percent overweight) participants’ behaviour: The difference that makes a difference is the one between low “restraint” and high, or being a non-dieter or a dieter. So what we have is oscillation between extremes: from long (or not so long) periods of eating less than one would like to shorter periods of eating as much as one would like—which is a lot because the appetite is typically denied. The oscillation, by definition, precludes getting practice at existing in the middle ground of sensing and responding and adjusting without hard and fast top-down rules.

Dietary restraint itself is hard to measure. Subsequent studies have suggested that the original DR scale may have been tapping a particular combination of high restraint and high susceptibility to disinhibition (Westenhoefer et al., 1994) or of restraint plus negative mood, which has been found to be associated with bingeing, purging, and generally more “disturbed” eating habits (Peñas-Lledó et al., 2008).

In general, however, the restraint theory literature speaks to a recurring structural feature of what makes recovery from restrictive eating disorders hard: that withdrawal of rule-based restraint inevitably creates a self-regulation vacuum (at least temporarily), because that responsive middle ground is so unknown.

The middle ground was what used to appall me most about the idea of not being ill. Every day was that radical oscillation from nothing to a lot, but intentionally: fast all day, eat a large meal in the dead of night. I didn’t know how anyone could cope with the dullness of just messing around in the lowlands of neither very hungry nor very full, let alone tell me I ought to. I thought the long deep trough and the ecstatic peak were the best kinds of happiness on offer—or rather, I increasingly knew I hated it but didn’t believe anything else could be less bad.

Being well again is basically about existing in the dietary middle ground—which in turn, as I failed to realize back then, makes exploring the much more interesting extremes of other territories feasible (the highs and lows of love, sex, intellectual curiosity, professional ambition, aesthetic creativity, etc.). But crucially, recovery as the process that gets you here is not initially about inhabiting the middle ground.

I’ve written a bit before about how normality can be a slippery concept in recovery (and the more I think about it, the more misleading a guide it seems). One of the worst mistakes you can make at the start of the recovery process is to imagine that your task is to switch straight from an anorexic way of eating to a “normal” one.

Even if by normal you mean not statistically common but good and sustainable—the happily pragmatic way of eating that will sustain you for the rest of your life—you can’t get there directly because your body is incapable of self-regulating. After all, it’s had zero opportunity to do so as long as you’ve been ill.

In the next section, we’ll go on to explore what this mysterious ideal called self-regulation really is.

6: Self-regulation, plus or minus feedback

In the previous section of this self-regulation series, we took a look at some evidence suggesting that if you (1) have strict limits on eating and (2) are temporarily disinhibited for some reason (maybe you flouted a dietary rule in a small way, or you were in an altered state thanks to alcohol or strong emotion), you may well break your food rules by a wide margin. A range of recent studies adds further support to the general idea that dietary restriction impairs behavioural self-control: for example, a study from last year found that 10 days of calorie deprivation reduces people’s food-related but not other types of self-control (Standen and Mann, 2021).

Rules and self-regulation

One way to interpret this type of evidence is that dietary rules prevent effective self-regulation. What exactly do I mean by self-regulation in this context? I guess I mean stopping and starting eating because of a sensitive, adaptable set of instincts, not a rigid and arbitrary set of requirements.

Think of the milkshake drinkers we met in the previous section, who normally exercise high dietary restraint: All they’re doing is applying an input rule, and, once it’s broken, they’re lost. Once the researchers have induced them to drink a milkshake their rule says not to, the rule is useless to them, because it’s already broken. And so they often end up compounding their rule-breaking by eating significantly more than they would have if the rule to eat less had never existed, and (maybe most paradoxical of all) more than if they hadn’t eaten anything beforehand. They do so because (1) they have an awful lot of unfed hunger generated by following the rule, and (2) what the hell, there’s no difference between a rule broken by a millimetre and one broken by a mile; either way it’s a failure. As soon as the boundary of the permitted is transgressed, there’s nothing left to support the supposedly desirable behaviours and resulting experiences. You’re at sea with your standard compass useless to you. 

Closed loop versus open loop

In structural terms, this neatly illustrates the difference between a closed loop and an open loop. In an open loop, you have a rule (say, eat 2,000 calories a day, or run 2 x 20 minutes a day) and you apply it regardless of changes in state. This may have the advantage of simplicity: You don’t need to make adjustments based on a range of measurements. But it also means that uncertainty about the things that matter gets amplified as time passes. You can’t correct for mistakes, either because you have no idea where you are (because you’re taking no measurements of anything meaningful—e.g., you’re blind to how well or badly your life and health are panning out) or because you’re not acting properly on the measurements you are taking (e.g., because you’re afraid to do anything differently, even though you see how badly what you’re doing is serving you). You have no robustness to perturbations, in your environment or in yourself, because you have no way of even gathering the relevant information effectively, let alone acting on it to reliably counteract instability.

This is anorexia nervosa. The main form of measurement going on that’s allowed to have any appreciable effect on anything is the measuring of the input variable to which the rule attaches (e.g., you measure calories consumed so far today to decide whether to eat any more this evening). Other measuring (e.g., of bodyweight or calories in or out), even if it’s as obsessive-compulsive as the behaviour-guiding kind, probably makes no difference to the application of the rule (you eat/exercise the same regardless of your weight, or you eat the same regardless of your exercise, say). If these secondary forms of explicit measurement do have effects, it’s by supplanting any other information you might have gathered about your current state (e.g., feeling tired or unwell, being injured, being hungry).

Thus, returning to the milkshake and ice cream experiment, things get eaten or not eaten depending on (say) a daily calorie rule, and if this rule is prevented from being applied (e.g., some calorie value is concealed from you) or is broken (e.g., overshot because some nasty experimenter induces you to have something all sugary and fatty and unnecessary), the whole thing collapses, because there was never anything else to turn to. As with cruise control that isn’t actually measuring vehicle speed and/or is assuming no disturbances (by road surface, gradient, etc.), the system is fundamentally fragile. It may be stable for a while by accident, but it won’t stay that way for long in the real world.

By contrast, in a closed-loop system, the information gathered is used to determine the next action. For example, say in the open-loop context you have a fixed exercise routine that proceeds every single morning regardless of physical or mental state, location, busyness, etc. The rule (e.g., x reps by y sets @ intensity z) is always obeyed and never changes. In a closed loop, the type, length, intensity, and sheer presence or absence of the routine adjust to all the relevant factors in the organism and its environment. The things being measured are dynamic; the measurements are about assessing the current state in order to be able to act in line with it. 

Most of the closed-loop measurements don’t even feel like measurements in the numerical sense that explicit rule following requires, because they’re automated by evolved biological mechanisms that support performance within acceptable homeostatic bounds (e.g., via ghrelin and leptin secretion, metabolic modulation, force of muscular contraction under load), as well as by automatically introspected inclinations and sensations (anything from musculoskeletal mobility to lethargy to personal/professional priorities for today). All of this is compromised if top-down exercise rules are imposed without acknowledgment of the relevant signals that could prompt self-correction.

In reality, of course, the distinction isn’t absolute. The open-loop version doesn’t manage to entirely override all the equilibrium-geared control systems of the human body. It may also incorporate limited feedback, in the form of compensation for some episodes of rule-breaking, though this is still irrespective of the wider causes and effects. Meanwhile, the closed-loop version has rules of thumb: input defaults that mean, in the absence of any off-the-charts measurements of other kinds (e.g. serious DOMS [delayed onset muscle soreness], coming down with a cold), the habitual behaviour will occur, within standard bounds. This reduces the cognitive load of having to make every decision from scratch every time. But all kinds of micro-adjustments (e.g., in amount of warmup, rest time between sets, whether to go for your personal best today) are happening more effectively because the open-loop rules aren’t constantly trying to override everything else.
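The cruise-control analogy can be made concrete with a toy simulation. This is my own illustrative sketch, not anything from the restraint literature; the disturbance sizes, the feedback gain, and the names are all invented. The only claim is the structural one: under repeated perturbations, a fixed rule lets error accumulate, while even crude feedback keeps the system near equilibrium.

```python
import random

def simulate(controller, days=200, seed=1):
    """Deviation of some body state from equilibrium (0) after `days` of
    random disturbances, with each day's corrective action chosen by
    `controller(state)`."""
    rng = random.Random(seed)
    state = 0.0
    for _ in range(days):
        disturbance = rng.uniform(-1.0, 1.0)  # stress, illness, activity, weather...
        state += disturbance + controller(state)
    return abs(state)

open_loop   = lambda state: 0.0           # fixed rule: same action whatever the state
closed_loop = lambda state: -0.5 * state  # feedback: push back against the deviation

# Averaged over many runs, the open-loop state drifts like a random walk,
# while the closed-loop state hovers near equilibrium:
#   sum(simulate(open_loop, seed=s) for s in range(20)) / 20    -> large
#   sum(simulate(closed_loop, seed=s) for s in range(20)) / 20  -> small
```

Note that the closed-loop controller here is as dumb as they come (a single proportional correction), and it still beats the open-loop one; the body’s actual feedback machinery is incomparably richer than this.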

If you’re operating open-loop, or attempting to, in the realm of food (or any other life domain I can think of), you’re probably not living very happily, even when you’re not transgressing. So, in the penultimate part of the series, we’ll take a look at what your options are if this open-loop state sounds like you, and you’d like it not to be.

Read on to the final instalment here.


Herman, C. P., & Mack, D. (1975). Restrained and unrestrained eating. Journal of Personality, 43(4), 647-660. Paywall-protected journal record here.

Peñas-Lledó, E. M., Loeb, K. L., Puerto, R., Hildebrandt, T. B., & Llerena, A. (2008). Subtyping undergraduate women along dietary restraint and negative affect. Appetite, 51(3), 727-730. Full PDF download here.

Schaumberg, K., Jangmo, A., Thornton, L. M., Birgegård, A., Almqvist, C., Norring, C., … & Bulik, C. M. (2019). Patterns of diagnostic transition in eating disorders: A longitudinal population study in Sweden. Psychological Medicine, 49(5), 819-827. Open-access full text here.

Standen, E. C., & Mann, T. (2021). Calorie deprivation impairs the self-control of eating, but not of other behaviors. Psychology & Health. Paywall-protected journal record here.

Westenhoefer, J., Broeckmann, P., Münch, A. K., & Pudel, V. (1994). Cognitive control of eating behavior and the disinhibition effect. Appetite, 23(1), 27-41. Paywall-protected journal record here.

All-Inclusive Resorts and Dietary Self-Regulation (Part 1)


1: Introducing the all-inclusive resort analogy

In October/November I spent 15 nights in Playa del Carmen to circumvent the US travel ban and get into the US to see my partner for the first (non-Zoom/WhatsApp) time in 13 months. I decided to splash out, and spent far more on this holiday than I ever have on any trip in my life—especially one on my own.

It struck me right from the moment of typing in my credit card details that this was a nicely anti-anorexic thing to be doing: spending a lot of money, just on myself, and not because I “had” to, i.e. buying myself something more than I “had” to. What I didn’t realize at that point was how much broader the implications of the all-inclusive vacation model could be for thinking about how to do eating disorder recovery successfully—and maybe even about broader questions around how to do healthy eating and exercise successfully.

Here’s the thesis in brief. All-inclusive is the epitome of incentivizing “more than necessary”: You’re paying to encourage yourself to have as much as possible. You’re paying to put the limits (on eating, drinking, and whatever else your package includes) so high as to be practically irrelevant (I guess you could camp out at the resort bar or restaurant and eventually get told you can’t have any more? but probably not until after you passed out / threw up). The idea is that this is beneficial (e.g. relaxing) because then you get to self-regulate without some major standard constraints (e.g. cost) getting in the way.

In this miniseries I’ll argue that the all-inclusive framework is, structurally speaking, the same framework that’s needed to recover from a restrictive eating disorder (or from chronic dieting): the limit is raised high enough to be irrelevant. (The same applies to a compulsive exercise problem, but switched around: Here the limit is made low enough to be irrelevant.) Only then can you start self-regulating, i.e. start using feedback (e.g. on how you feel, what other outcomes you’re getting), rather than blindly applying rules (e.g. how many calories or minutes or kilometres regardless of everything else).
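If you like the control-loop idea made concrete, here’s a tiny, entirely invented Python sketch of that thesis. The 0–10 satiety ratings and the threshold are made-up illustrations; the structural point is just that once the external limit is raised far beyond any plausible intake, the feedback signal becomes the thing that actually stops consumption:

```python
# Illustrative sketch only: consumption stops either when an external limit
# bites or when internal feedback (satiety) says "enough". Raising the limit
# high enough makes it irrelevant, so feedback takes over as the regulator.

def servings_consumed(limit: float, satiety_after: list[float],
                      satiety_threshold: float = 7.0) -> int:
    """Consume one serving at a time until either the external limit is hit
    or satiety (0-10, one reading per serving) crosses the threshold."""
    count = 0
    for satiety in satiety_after:
        if count >= limit:                # the rule stops you
            break
        count += 1
        if satiety >= satiety_threshold:  # the feedback stops you
            break
    return count

signals = [2, 4, 6, 7.5, 9]  # rising satiety over successive servings

print(servings_consumed(limit=2, satiety_after=signals))             # 2: the rule binds
print(servings_consumed(limit=float("inf"), satiety_after=signals))  # 4: feedback binds
```

Notice that the all-inclusive move is just swapping `limit=2` for `limit=float("inf")`: nothing about the feedback machinery changes, it simply gets to do its job.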

Of course, the all-inclusive benefits may not, in the vacation or the recovery context, be immediate. Self-regulation may take time to be learnt—maybe a lot of time. I guess some people do all-inclusive and binge-eat/drink in a way that makes them miserable, and some others do it in as miserly a way as if they were paying for every drink and meal, and some people do it just fine but don’t enjoy themselves because too many other things are wrong. Equally, learning how to self-regulate in recovery, and then getting the payoffs for the rest of life, obviously isn’t instant—although in some cases, the instincts for how to do it may snap back into place a lot more quickly than expected.

Letting internal regulation take over.

Following step 1 (spend a lot of money on something where everything is included, i.e. the incentive is now to consume more not less) allows for the magic of step 2: Let the self-regulation happen. For me this autumn, the eating-specific effects weren’t particularly salient, because I’m already self-regulating happily in that realm, but the way eating and drinking adjusted effortlessly to the absence of ordinary constraints was a pleasant part of a broader ease in adapting to having pretty near zero limitations or responsibilities. The most strikingly beautiful part of this holiday—even more so than the blue-green Caribbean water, the palm trees, and the ocean sunrises from my balcony—was how everything simply took care of itself, effortlessly, in the absence of almost all readymade guidelines. 

I don’t recall any time in my adult life when there were so few requirements on me—self-imposed or otherwise. I had a few coaching calls in the calendar, but I deliberately cleared other work commitments for this fortnight, so otherwise it was empty. And this being all-inclusive, there was nothing practical (shopping, cooking, cleaning, etc.) to think about. There were no pre-decided boundaries in my day. And being on my own, I didn’t even have anyone else’s preferences to accommodate. There were, basically, no “should”s.

So, what happens when you take the ought out of your life?

In many senses, of course, two weeks lounging around in a swanky hotel has little to do with the rest of life. But it can provide some important illumination for the rest—clarifying it ex negativo, through the absence of what’s ordinarily present. What it does is remove almost all the accreted habits that normally prevent us from answering that question from scratch. There’s never a blank slate, but the slate is a lot freer of old scrawls when everyday busyness is prevented from fooling us into believing we have no options.

“Ditching the ought” has to become a reflex in recovery from anorexia. For a while, a new version of “ought” (eat more, move less) has to replace the old; later, the whole idea of “ought” has to change its nature, become more malleable by context. This progression obviously applies to the diet and exercise specifics, but it’s also about much broader questions concerning how we choose to live and why—which are what the great excitement of fully recovering really amounts to. Now that I get to choose how to live, rather than an illness always already having dictated 90% of the answer, how do I in fact choose to?

If this still feels a million miles away from you, trudging across the endless grey tundra of recovery, this series may serve two purposes: 1) illustrate the basics of how to make recovery work, via the all-inclusive vacation analogy; 2) encourage you to try out such a vacation for real, as a pleasurably literal way to accelerate the process. At the end I’ll also offer some observations relevant to eating well (in the fullest sense of that term) in the absence of an eating disorder but the presence of all the screechy sociocultural signals that can make it feel so hard to find and maintain personal equilibrium.

2: The anorexic vacation

Unless you’re far more profoundly motivated by the desire not to waste money than most people are, there’s not much point in doing the literal all-inclusive vacation with full-blown anorexia if you’ve done none of the analogous practice of lifting constraints and shifting incentives beforehand. I can imagine with tedious ease exactly what this Mexican fortnight would have been like if anorexia had still been running the show. The “oughts” would have kept doing what they always did.

First of all, I would have had to be brought kicking and screaming (well, the pallid, flaccid anorexic version of that) here in the first place. There’s not much point in someone with anorexia actively choosing a place that costs a lot and is remotely worth the cost only to someone who enjoys eating and drinking plenty, at civilized times of day. I would have been horrified by everything from the price tag to the idea of eating every meal in a restaurant (though I guess I’d have approved of the fridge and coffee machine in the room, the former topped up by a man with a trolley full of cold cans and snacks every day, the latter adaptable to make tolerable cups of tea). So, I would have needed a really compelling reason to do this at all. Let’s say a parent generously gave this to me, hoping it would do me good.

So I’m here. I arrive on a Sunday evening; the bellboy shows me to my room. Despite the long trip and jetlag I force myself to stay up long enough to have several hot drinks as a prelude to my nighttime meal. I’ve obviously brought lots of special foods with me, probably including cereal bars, cereal, maybe some soy milk to tide me over before I find a shop, maybe bread and/or margarine and lettuce, and of course lots of chocolate and other extremely sweet things. And I’ve gathered up all the plane food to eat tonight—not the main course, which I worried about getting through customs or spilling in my bag, but the bread rolls and the brownie dessert and the mini butters and everything. And I have my electronic kitchen scale (and I’ve probably emailed the hotel in advance to ask whether they have body scales) and I work out how to incorporate these into the immovable framework for my single meal of the day. And that meal is an urgent ecstasy, as always. And I sleep deeply until lunchtime at the earliest. 

When I wake up, I consult Maps to find an acceptable walk that will take me to a supermarket to buy more “essentials” and keep me walking for long enough to be comparable to the daily bike ride at home. And when that’s all done and there are maybe a few daylight hours left, I might lie out in the sun on the beach or balcony, self-conscious about my thinness if in public, probably chilled by the slightest warm breeze. And I feel guilty or at least on edge the whole time about not being productive, and I have some work-related books to read that I’m making faint pencil marks in the margins of (since they’re library books), and I’m bored by all of them and maybe grant myself a half hour for something I wanted to read, like fiction, but only once it’s night again.

And because I’m obsessed with not wasting money (I don’t care very much about other people’s money, but still a little bit), I go to the “barefoot bar” and order a wrap or a panini and factor it into my late-night meal, and I go to the café every day to get free coffees and eye up all the cakes and pastries and get several of them every day to have at the nightly high point of my life: eating fat and sugar. 

And the days pass, and I stay in my haze, and my skin gets a tiny bit of colour but I miss half the daylight hours, and I bless the brief respite from serious cold but keep my body unable to insulate itself, and I turn a short daily sea swim into a non-negotiable ordeal, and I don’t speak more than a few words to anyone but make a whole lot of people feel vaguely sad or uncomfortable, and so my time in sunnier climes comes to an end. And I go home the same sad person who spends 21 hours of her day wishing time onwards just so she can eat.

I shudder to imagine this. I drafted this section over wine in between courses and banter with the waiters at the breezy outdoor restaurant. It was hard to write because I wasted so many holidays this way. And I defended the waste, for fuck’s sake. That’s what’s really so infuriating and incomprehensible about this illness: how it makes its hosts think their life is better without it, not so much worse it barely counts as living.

3: The recovered vacation

So take instead the reality. Not the version where every new possibility is already precluded by a “no”, every old habit already insisted on by an “it couldn’t be any other way”. The version where the decisions make themselves—as they always in reality do, but in all the beauty of their self-determining nakedness. The version where days start at 6 or 7 or 8 or whenever I happen to wake up, and where I maybe start reading a bit of one of my holiday paperbacks (Anne Tyler, Yan Hang, Muriel Spark) in bed, or more likely get straight up, take a quick peek out from behind the net curtain to check the state of the sky, pull on a minimal negligee (I’d always rather be clothes-free), put a teabag and water in the coffee machine, go to the loo, clean my teeth and face, and go onto the balcony to sit watching the sunrise sea while I write my diary and drink my tea. And then most mornings I go to 8am yoga, and usually to some other class later (it was fun to try all kinds of things I never would otherwise: HIIT boxing, TRX, a crazy fitness challenge thing on the main lawn in full view of all the pool-goers), and after yoga I have some variants on eggs, cheese, and meat in the outdoor restaurant, and then the rest of the day is a lazy, soft-edged mixture, drifting between balcony, pool, and beach; between reading, writing, email, work/pleasure Zooms, dozing, eating, drinking, swimming, wandering into town for something.

There are few set times for anything, only pretty capacious meal deadlines (breakfast by 10:30, lunch by 4, sometimes a dinner booking); and instead there have been instincts that have come and gone: to drift into a fictional world for a while, to crystallize a new bit of a course idea in writing; to get coffee and a cake (going into the café and not handing over money in return for a latte just doesn’t get old!); maybe to go and lift something a bit heavy in the gym; to have an evening swim in the uplit pool or to sit on the balcony with a beer and salty snacks instead. The extreme luxury of 14 whole days of this feels almost surreal. I don’t talk to a great many people beyond the waiters, but I get lots of cheery hair compliments and have a few interesting chats with waiters and other guests.

And imagining it being gently poisoned from the start is easy—just as easy as imagining it being utterly annihilated by severe anorexia. Take pseudo-recovery, the place so many people stop. What version do you get here? 

4: The pseudo-recovered version

If you’ve stopped halfway in recovery, you get the version where you do a lot of comparing of how much you’re eating and exercising with how much you would do at home. Where you need to get your daily exercise in before you can relax. Where you think about calories when choosing from menus. Where if you had just the couscous and chicken salad for lunch one day you go without the panini or the wrap every day after that because the lower precedent has now been set (and the same with adding flour tortillas to breakfast, or cake to coffee, or dessert at dinner). Where you have to have a certain number of swims per day, of a certain length, aimed at calorie burning or muscle maintenance (which is all really aimed at how slim or toned you look). Where you limit yourself to alcohol x days a week rather than deciding whether or not you feel like it. Where once you’ve found out about the fitness class schedule, you have to go to all of them, or some subset non-negotiably. Where you spend a lot of time looking at your swimsuited self in mirrors and comparing your body with other people’s. 

(Or, at a slightly more advanced stage of pseudo or partial recovery, where you manage to resist some or all of these instincts but you still feel a lot of guilt and doubt and preoccupation as you do so.)

And this version is a bit less hard to get frustrated or angry about, because it’s easier to see how you kid yourself it’s decent. But it makes me just as sad to think about, maybe more so, because it has even more of an inbuilt self-perpetuation mechanism than the acute-AN version. This is the “stopping halfway” that I’ve written about before, and that many readers said they felt powerful recognition of. (PT removed all comments from all blog posts last year, but I have a copy of the hundreds that were posted in response to the “stopping halfway” piece, and all the others.)

This is the state that tries to pass itself off as the best of both worlds—still relatively thin and relatively free—rather than the worst: not particularly thin, still deluded that thinness matters, and not remotely free.

I guess it looks a lot—at least if you squint and look away pretty quick—like the best you can reasonably expect to get, because pseudo-recovery and normality are getting harder and harder to tell apart. But if you’re in the former camp, you have one great advantage over those in the latter. You know that this is part of a process that you’ve begun and that you can decide to resume and complete. You’ve already got from 0% to 80% or 90% recovered; you can certainly manage the last 10% or 20%. 

If it helps, start planning a vacation like this (or not at all like this, but dreamily different from the everyday and opening up the space for self-regulation in whatever way works for you) and remember how elusive but how entirely unfakeable the difference is between the experience you get if you complete this process versus if you don’t. 

It’s not quite like being a child again, the good version, but it also kind of is: It’s doing what you want, when you want, with not even any parents to tell you not to, because now you know enough to sense when you want, and need, your own bedtime to be.

Read on to a milkshake-themed Part 2 here.