Who to contact about x-risk

I feel bad that a lot of people feel like they have no one to call about their concerns about existential risk, or adjacent topics that seem very important to them because of relevance to existential risk. I feel especially bad about this when folks contact me about such topics and I don’t have time to give a good response. This post is meant to encourage behavior that can gradually shift the world in a more positive direction for addressing people’s worries about x-risk.

I love my local fire department. There were serious wildfires in my area over the past year, and at times I felt worried about them. A few times, I called my local fire department for updates on the situation. They answered the phone, kindly responded to my questions about the fires, and told me how to sign up for more frequent updates. They tested their update system, and I received the tests. Conditional on the wildfires, I was quite happy about their responsiveness to my need for more information. Almost no one died of wildfires in California last year.

Who can you call if you’re worried about existential risk (x-risk), or global catastrophic risks more broadly? A lot of people have contacted me about this topic because it is my area of professional focus, particularly as it pertains to artificial intelligence. I feel bad that I can’t be as responsive to them as my local fire department has been to me. If people are worried about x-risk, there should be someone they can call to get more information about it. My local fire department is the de facto best source of information about local fires, and is recognized as such, but currently there is no globally-recognized best source of information about x-risk (although some institutions are doing quite well in this regard, in my opinion; see below). So who should you be contacting if you’re worried about x-risk? Here’s what I suggest:

1) Therapists, for advice on managing your priorities or feelings. If you’re having trouble concentrating on other important things in your life (sleep, food, family, friends, work) because you can’t stop thinking about x-risk, please see a therapist. Seeing a therapist is also a good idea if you’re just worried and wish you could manage the anxiety better. You might feel that therapists don’t know anything about x-risk, but if you see 3-5 different therapists and pick the one you like best, you will probably find one that can help you manage your anxiety and focus your thinking about x-risk in ways that are non-destructive to your health and lifestyle. If you feel your therapist doesn’t understand you, tell them it bothers you that they don’t seem to understand you and you want to spend more of your time with them resolving that. If you feel your therapist doesn’t understand existential risk because it’s too abstract or intellectual, look for a therapist with a PhD, who is therefore more likely to be open to academic conversations.

It’s important to learn to manage one’s fears, anxieties, and frustrations around this topic before attempting to engage with experts on it; otherwise the conversation will probably be unproductive.

2) Academics, for expert information. If you need information directly from experts, you can try contacting personnel at research institutions who think about existential risk, such as:

  • The Center for the Study of Existential Risk (Cambridge)
  • The Future of Humanity Institute (Oxford)
  • The Stanford Existential Risk Initiative (Stanford)

However, these folks are extremely busy, and probably won’t have time to respond to most people’s questions. In that case, I suggest contacting your local university instead. Even if you don’t get a good answer, you create evidence that people would like to be able to contact their local university with questions about x-risk, which over time can help create jobs for people who want to think and communicate about x-risk professionally. If that fails, you can also try:

3) Government representatives, for basic information, or as expert proxies. If you’re having a hard time reaching experts, or just want basic information about how governments and companies manage x-risk, I suggest you contact local representatives of your municipal or state government. They will not know as much about the topic as you’d like, so you should ask them to gather more information and get back to you about it. They might have better luck getting a conversation with experts than you will, and after they do that, they might be slightly better at answering future questions about the topic. In other words, you’ll have helped the government focus a bit more of its attention on x-risk, and thereby helped them to become slightly more informed about the topic.

Much of the government is responsive rather than proactive in nature, such that it can mostly only pay attention to topics that people pressure or expect it to address. If we never ask, our governments will never learn to answer.

A note on contacting the government: Some folks I know have expressed that it would be bad for governments to get too interested in existential risk, because then the issue will become politicized in a way that damages discourse about it. I think there is some truth to this concern; however, politicization strikes me as a higher-order effect, and therefore matters less than the first-order benefit of the government gradually becoming more responsible and responsive on the topic. I think the kind of gradual pressure that’s created by contacting local representatives in a democracy creates a net-positive effect for the issue at hand, even if a certain amount of political machination inevitably ends up emerging around the topic.

Effective funerals: buy biographies instead of expensive burials, and maybe cemeteries can become libraries.

Cemeteries and funerals are beautiful, because they tell a story of the past that we care about. They’re also somewhat expensive: families routinely spend on the order of $10k on funeral and burial rites for their loved ones. There are people whose entire jobs are the preparation of bodies for funeral rites. Can we tell the story of the past better, but for the same cost?

I believe we can. If your loved one is close to death or has recently died, instead of planning an expensive funeral and burial, you might consider planning the cheapest possible disposal of their earthly remains, and using the excess money to hire a biographer. The biographer can talk to your loved one’s family, and even your loved one directly if they haven’t yet passed, and write down people’s most treasured or meaningful memories of them. Your children, grandchildren, and great-grandchildren could have much more than a tombstone to remember them by.

If more people adopted this tradition, cemeteries could become libraries where we keep tomes of stories about our lost loved ones, both bitter and sweet. When we bring flowers to the cemetery, we could leave them next to a book containing their life story. We could re-read their memories, and perhaps even take some time to read through the memories of other people we don’t know, and develop a feeling of what it was like to be them. Probably some people would take an interest in reading the stories even of strangers. Perhaps these “cemetery historians” would even bond when they meet at the cemetery, and recommend their favorite stories to each other. Together, we’d have a culture more capable of preserving and cherishing the memories of the people we’ve lost.

Sure, the biographies wouldn’t always be perfectly accurate, and perhaps far from it. Some biographers might offer to exaggerate in order to create more favorable stories. But we’d all know that to be the case, and we might even become more aware of the inconsistencies between our stories of the past if we could read many different accounts of what happened. And I know I’d spend more time enjoying the beautiful landscape of cemeteries if there were also books there to read about the people who were buried there.

How could we transition to such a culture?

  1. Writers: advertise yourselves as end-of-life biographers. If you’re a writer willing to write stories about people around the ends of their lives, tell people you’re willing to do it, and name your price. Start a website. Pioneer a culture. Try partnering with a local funeral home to make it easier for grieving families to find you and get your help.
  2. Funeral homes: partner with biographers. Offer to connect biographers with grieving families in exchange for a percentage of the fee paid for their service.
  3. Families with dying loved ones: make a social media post seeking a biographer. Make your desire to preserve the memories of your loved one clear and visible to people you know. Decide on an amount you’re willing to pay—or a range, say “between \$1000 and \$3000”—and let people know that you’re interested in their help writing up a biography for your loved one. Let them know it doesn’t have to be perfect, and that something is better than nothing (if that’s how you feel).

I realize there would be lots of challenging questions and priorities for the biographers and families to sort out. But that’s why it’s a job. Funeral directors get used to dealing with grieving families, and learn to accommodate their preferences as best they can. I believe end-of-life biographers could learn to do the same. And I wager that, in 50 years’ time, if we’re all still around to read the stories they write, we’ll be glad of their work.

Make Gmail or Inbox open “mailto:” links in Chrome

Life will be better… just click the “handler” icon in Chrome’s address bar and choose “Allow”.

Associate your academic email address with a Google account

If I’ve sent you a link to this blog post, it’s probably because your .edu email address is not already associated with a Google account, and I got a notification about that when sharing a doc or calendar item with you. To fix this problem permanently, open a browser logged into a gmail account (create a new one if you don’t want to use your personal one), and go to:
https://myaccount.google.com/alternateemail

From there, you can add email addresses that will actually work for receiving things like Google Doc invitations and Google Calendar invitations. This is somewhat new, and different from just setting up a “send mail as” setting in Gmail, because it applies to all Google services at once.

Give it a try, and save us both a bunch of future hassle 🙂

Deserving Trust, II: It’s not about reputation

Summary: a less mathematical account of what I mean by “deserving trust”.

When I was a child, my father made me promises. Of the promises he made, he managed to keep 100% of them. Not 90%, but 100%. He would say things like “Andrew, I’ll take you to play in the sand pit tomorrow, even if you forget to bug me about it”, and then he would. This often saved him from being continually pestered by me to keep his word, because I knew I could trust him.

Around 1999 (tagged in my memory as “age 13”), I came to be aware of this property of my father in a very salient way, and decided I wanted to be like that, too. When I’d tell someone they could count on me, if I said “I promise”, then I wanted to know for myself that they could really count on me. I wanted to know I deserved their trust before I asked for it. At the time, I couldn’t recall breaking any explicit promises, and I decided to start keeping a careful track from then on to make sure I didn’t break any promises thereafter.

About a year later, around 2000, I got really wrapped up in thinking about what I wanted from life, in full generality… Continue reading

FAQ

I get a lot of email, and unfortunately, template email responses are not yet integrated into the mobile version of Google Inbox. So, until then, please forgive me if I send you this page as a response! Hopefully it is better than no response at all.

Thanks for being understanding.

Continue reading

Deserving Trust / Grokking Newcomb’s Problem

Summary: This is a tutorial on how to properly acknowledge that your decision heuristics are not local to your own brain, and that as a result, it is sometimes normatively rational for you to act in ways that are deserving of trust, for no reason other than to have deserved that trust in the past.

Related posts: I wrote about this 6 years ago on LessWrong (“Newcomb’s problem happened to me”), and last year Paul Christiano also gave numerous consequentialist considerations in favor of integrity (“Integrity for consequentialists”) that included this one. But since I think now is an especially important time for members of society to continue honoring agreements and mutual trust, I’m giving this another go. I was somewhat obsessed with Newcomb’s problem in high school, and have been milking insights from it ever since. I really think folks would do well to actually grok it fully.


You know that icky feeling you get when you realize you almost just fell prey to the sunk cost fallacy, and are now embarrassed at yourself for trying to fix the past by sabotaging the present? Let’s call this instinct “don’t sabotage the present for the past”. It’s generally very useful.

However, sometimes the usually-helpful “don’t sabotage the present for the past” instinct can also lead people to betray one another when there will be no reputational costs for doing so. I claim that not only is this immoral, but even more fundamentally, it is sometimes a logical fallacy. Specifically, whenever someone reasons about you and decides to trust you, you wind up in a fuzzy version of Newcomb’s problem where it may be rational for you to behave somewhat as though your present actions are feeding into their past reasoning process. This seems like a weird claim to make, but that’s exactly why I’m writing this post.

Continue reading

Start following conservative media, and remember how agreements between people and states actually work

Dear liberal American friends: please pair readings of liberal media with viewings of Fox News or other conservative media on the same topics. This will take work. They will say things you disagree with, using words you are unfamiliar with. You’ll have to stop scrolling down on Facebook and actively google phrases like “Trump executive order to protect America.” That may sound hard, but the integrity of your country depends on you doing it.

You’ve probably heard about the President’s executive order restricting immigration from seven countries, which led to the mistreatment of legal visa holders and permanent residents of the United States in airports. You probably also understand that there is a huge difference between ruling out new visas from those countries, and dishonoring existing ones. The latter is breaking a promise. Dishonoring agreements like that makes you untrustworthy, and that is very bad for cooperation. Right?

Well, hear this. Continue reading

Time to spend more than 0.00001% of world GDP on human-level AI alignment

From an outside view, looking in at the Earth, if you noticed that human beings were about to replace themselves as the most intelligent agents on the planet, would you think it unreasonable if 1% of their effort were being spent explicitly reasoning about that transition? How about 0.1%?

Well, currently, world GDP is around \$75 trillion, and in total, our species is spending around \$9MM/year on alignment research in preparation for human-level AI (HLAI). That’s \$5MM on technical research distributed across 24 projects with a median annual budget of \$100k, and \$4MM on related efforts, like recruitment and qualitative studies such as this blog post, distributed across 20 projects with a median annual budget of \$57k. (I computed these numbers by tallying spending from a database I borrowed from Sebastian Farquhar at the Global Priorities Project, which uses a much more liberal definition of “alignment research” than I do.) I predict spending will at least roughly double in the next 1-2 years, and frankly, I’m underwhelmed…
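As a sanity check on the title’s figure, here’s the arithmetic in a few lines of Python (the dollar amounts are the ones tallied above; the percentage is just their ratio):

```python
# Alignment spending as a fraction of world GDP, using the post's figures.
world_gdp = 75e12          # ~$75 trillion per year
alignment_spending = 9e6   # ~$9 million per year ($5MM technical + $4MM related)

fraction = alignment_spending / world_gdp
print(f"{fraction:.7%}")   # 0.0000120%, i.e. on the order of 0.00001%
```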

Continue reading

Considerations against pledging donations for the rest of your life

I think donating to charity is great, especially if you make more than \$100k per year, placing you well past the threshold where your well-being depends heavily on income (somewhere around \$70k, depending on who does the analysis). I’ve been in that boat before, and donated more than 100% of my disposable income to charity. However, I was also particularly well-positioned to know where money should go at that time, which made donating particularly worth doing. I haven’t made any kind of official pledge to always donate money, because I take pledges/promises very seriously, and for me personally, taking such a pledge seems like a bad idea, even accounting for its signalling value. I’m writing this blog post mainly as a way to reduce social pressure on folks who earn less than \$100k per year to donate, while at the same time encouraging folks who earn more to consider donating more seriously.

Continue reading

Open-source game theory is weird

I sometimes forget that not everyone realizes how poorly understood open-source game theory is, until I end up sharing this example and remember how weird it is for folks to see for the first time. Since that’s been happening a lot this week, I wrote this post to automate the process.

Consider a game where agents can view each other’s source codes and return either “C” (cooperate) or “D” (defect). The payoffs don’t really matter for the following discussion.

First, consider a very simple agent called “CooperateBot”, or “CB” for short, which cooperates with every possible opponent:

C, D = "C", "D"  # the two possible moves
def CB(opp):
  return C

(Here “opp” is the argument representing the opponent’s source code, which CooperateBot happens to ignore.)

Next consider a more interesting agent, “FairBot”, or “FB” for short, which takes in a single parameter $k$ to determine how long it thinks about its opponent:
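The excerpt cuts off before FairBot’s definition. The original FairBot searches for a proof of length at most $k$ that its opponent cooperates; as a stand-in, here is a much cruder simulation-based sketch (my own toy simplification, not the proof-search agent):

```python
C, D = "C", "D"  # the two possible moves

def CB(opp):
    return C  # CooperateBot: cooperates unconditionally

def FB(k):
    """Toy FairBot stand-in: instead of a length-<=k proof search, simulate
    the opponent against a cooperative probe; defect if the budget k is
    exhausted or the simulation doesn't bottom out."""
    def agent(opp):
        if k <= 0:
            return D  # no thinking budget left: defect defensively
        probe = lambda o: C  # an unconditionally cooperative stand-in opponent
        try:
            return C if opp(probe) == C else D
        except RecursionError:
            return D  # the opponent's reasoning never terminated
    return agent
```

With these toy definitions, FB(1) cooperates with CB and with another FairBot, and defects against an always-defecting opponent; the real proof-based FairBot has the same unexploitable flavor but very different machinery.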

Continue reading

Abstract open problems in AI alignment, v.0.1 — for mathematicians, logicians, and computer scientists with a taste for theory-building

This page is a draft and will be updated in response to feedback and requests to include specific additional problems.

Through my work on logical inductors and robust cooperation of bounded agents, I’m meeting lots of folks in math, logic, and theoretical CS who are curious to know what contributions they can make, in the form of theoretical work, toward control theory for highly advanced AI systems. If you’re one of those folks, this post is for you!

Continue reading

Voting is like donating thousands of dollars to charity

(Share this post to encourage folks with rational, altruistic leanings to vote more. I originally posted this to LessWrong in 2012, but I figured it was worth re-posting.)

Summary:  It’s often argued that voting is irrational, because the probability of affecting the outcome is so small. But the outcome itself is extremely large when you consider its impact on other people. I estimate that for most people, voting is worth a charitable donation of somewhere between \$100 and \$1.5 million. For me, the value came out to around \$56,000.  So I figure something on the order of \$1000 is a reasonable evaluation (after all, I’m writing this post because the number turned out to be large according to this method, so regression to the mean suggests I err on the conservative side), and that’s big enough to make me do it.

Moreover, in swing states the value is much higher, so taking a 10% chance at convincing a friend in a swing state to vote similarly to you is probably worth thousands of expected donation dollars, too. (This is an important move to consider if you’re in a fairly robustly red-or-blue state like New York, California, or Texas where Gelman et al estimate that “the probability of a decisive vote is closer to 1 in a billion.”) I find EV calculations like this for voting or vote-trading to be much more compelling than the typical attempts to justify voting purely in terms of signal value or the resulting sense of pride in fulfilling a civic duty.
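The full calculation isn’t reproduced in this excerpt, but the shape of the estimate is simple; here’s a back-of-envelope sketch in Python, where every input is an illustrative assumption of mine rather than the post’s actual figure:

```python
# Expected charitable-donation-equivalent value of one vote.
# All three inputs are illustrative assumptions, not the post's figures.
p_decisive = 1e-7          # chance one vote flips the outcome (swing-ish state)
people_affected = 300e6    # people whose lives the outcome touches
value_per_person = 1000.0  # $ difference per person between the two outcomes

ev = p_decisive * people_affected * value_per_person
print(f"${ev:,.0f}")       # $30,000 with these toy inputs
```

Swapping in a safe-state probability like 1 in a billion instead of 1 in 10 million drives the figure down by two orders of magnitude, which is exactly the swing-state point above.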

Selfish voting is a waste of your time

Continue reading

Protected: Move-in application for room in 4-bedroom house at south edge of Berkeley


Professional feedback form

Continue reading

Leveraging academia

Since a lot of interest in AI alignment has started to build, I’m getting a lot more emails of the form “Hey, how can I get into this hot new field?”. This is great. In the past I was getting so few messages like this that I could respond to basically all of them with many hours of personal conversation.

But now I can’t respond to everybody anymore, so I have a new plan: leverage academia.

To grossly oversimplify things, here’s the heuristic. Continue reading

Seeking a paid part-time assistant for AI alignment research

Please share this if you think anyone you know might be interested.

Sometimes in my research I have to do some task on a computer that I could easily outsource, e.g., adding bibliographical data to a list of papers (i.e., when they were written, who the authors were, etc.). If you think you might be interested in trying some work like this, in exchange for

  • $20/hour, paid to you from my own pocket,
  • exposure to the research materials I’m working with, and
  • knowing you’re doing something helpful to AI alignment research, then
Continue reading

Interested in AI Alignment? Apply to Berkeley.

Summary: Researching how to control (“align”) highly-advanced future AI systems is now officially cool, and UC Berkeley is the place to do it.

Interested in AI alignment research? Apply to Berkeley for a PhD or postdoc (deadlines are approaching), or transfer into Berkeley from a PhD or postdoc at another top school. If you get into one of the following programs at Berkeley:

  • a PhD program in computer science, mathematics, logic, or statistics, or
  • a postdoc specializing in cognitive science, cybersecurity, economics, evolutionary biology, mechanism design, neuroscience, or moral philosophy,
… then I will personally help you find an advisor who is supportive of you researching AI alignment, and introduce you to other researchers in Berkeley with related interests.

This was not something I could confidently offer you two years ago. Continue reading

“Entitlement to believe” is lacking in Effective Altruism

Sometimes the world needs you to think new thoughts. It’s good to be humble, but having low subjective credence in a conclusion is just one way people implement humility; another way is to feel unentitled to form your own belief in the first place, except by copying an “expert authority”. This is especially bad when there essentially are no experts yet — e.g. regarding the nascent sciences of existential risks — and the world really needs people to just start figuring stuff out. Continue reading

Breaking news: Scientists Have Discovered the Soul

2016 is a great year for physics. Not only have we discovered gravitational waves, but just this week, physicists have announced the existence of a long sought after object: the human soul. Continue reading

Credence – using subjective probabilities to express belief strengths

There are surprisingly many impediments to becoming comfortable making personal use of subjective probabilities, or “credences”: some conceptual, some intuitive, and some social. However, Philip Tetlock has found that thinking in probabilities is essential to being a Superforecaster, so it is perhaps a skill and tendency worth cultivating on purpose. Continue reading

A story about Bayes, Part 2: Disagreeing with the establishment

10 years after my binary search through dietary supplements, which found that a particular blend of B and C vitamins was particularly energizing for me, a CBC news article reported that the blend I’d used — called “Emergen-C” — did not actually contain all of the vitamin ingredients on its label. Continue reading

A story about Bayes, Part 1: Binary search

When I was 19 and just beginning my PhD, I found myself with a lot of free time and flexibility in my schedule. Naturally, I decided to figure out which dietary supplements I should take. Continue reading

Help me write LaTeX on a large e-ink display ($200 reward)

Edit: my employer was eventually able to order me an e-ink monitor, so the reward is off 🙂

I would like to write LaTeX on a wireless-enabled e-ink display with a 13″ or larger screen to avoid visual fatigue. If you solve this problem for me, I will pay you a $200 reward, be extremely grateful, and write a blog post explaining your solution so that others might benefit 🙂 Some examples that I would consider solutions: Continue reading

Seeking a paid personal assistant to create more x-risk research hours

My main bottleneck as a researcher right now is that I have various bureaucracies I need to follow up with on a regular basis, which reduce the number of long interrupted periods I can spend on research. I could really use some help with this. Continue reading

Use a giant notepad to think better

Having a space to write things down frees up your mind — specifically, your executive system — from the task of holding things in working memory, so you can focus your attention on generating new thoughts instead of looping on your most recent ones to keep them alive. Writing down what’s in your head — math, plans, feelings, whatever — can start paying cognitive dividends in about 5 seconds, and can make the difference between a productive thinking day and a lame one. Continue reading

A Mindfulness-Based Stress Reduction course in the East Bay starting January 19

Summary: I think the standardized 8-week MBSR course format is better designed than most introductory meditation practices, and have found David Weinberg in particular to be an excellent mindfulness instructor. Since something like 30 to 100 people have asked me to recommend a way to learn/practice mindfulness, I’m batch-answering with this post. Continue reading

Why CFAR spreads altruism organically, and why Labs & Core make a great team

Following on “Why scaling slowly has been awesome for CFAR Core”, here are two other questions I’ve gotten repeatedly about CFAR:

Q2: Why isn’t altruism training an explicit part of CFAR’s core workshop curriculum?
Continue reading

Red-penning: rolling out an experimental rationality / creativity technique

Note: I’m writing about this technique to (1) reduce the overhead cost of testing it, and (2) illustrate what I consider good practices for “rolling out” a new technique to be added to a rationality curriculum. Despite seeming super-useful in my first-person perspective, experience says the technique itself probably needs to undergo several tests and revisions before it will actually work as intended, even for most readers of my blog I suspect. Continue reading

Why scaling slowly has been awesome for CFAR Core

Summary: Since I offered to answer questions about my pledge to donate 10% of my annual salary to CFAR as an existential risk reduction, the question “Why doesn’t CFAR do something that will scale faster than workshops?” keeps coming up, so I’m answering it here. Continue reading

Break your habits: be more empirical

Summary: The common attitude that “You think too much” might be better parsed as “You don’t experiment enough.” Once you’ve got an established procedure for living optimally in «setting», be a good scientist and keep trying to falsify your theory when it’s not too costly to do so.

Continue reading

Beat the bystander effect with minimal social pressure

Summary: Develop an allergy to saying “Will anyone do X?”. Instead query for more specific error signals: Continue reading

AI strategy and policy research positions at FHI (deadline Jan 6)

Oxford’s Future of Humanity Institute has some new positions opening up at their Strategic Artificial Intelligence Research Centre. I know these guys — they’re super awesome — and if you have the following three properties, then humanity needs you to step up and solve the future: Continue reading

The 2015 x-risk ecosystem

Summary: Because of its plans to increase collaboration and run training/recruiting programs for other groups, CFAR currently looks to me like the most valuable pathway per-dollar-donated for reducing x-risk, followed closely by MIRI, and GPP+80k. As well, MIRI looks like the most valuable place for new researchers (funding permitting; see this post), followed very closely by FHI, and CSER. Continue reading

Why I want humanity to survive — a holiday reflection

Life on Earth is almost 4 billion years old. During that time, many trillions of complex life forms have starved to death, been slowly eaten alive by predators or diseases, or simply withered away. But there has also been much joy, play, love, flourishing, and even creativity.

Continue reading

MIRI needs funding to scale with other AI safety programs

Summary: MIRI’s end-of-year fundraiser is on, and I’ve never been more convinced of what MIRI can offer the world. Continue reading

The Problem of IndignationBot, Part 4

Summary: I proved a parametric, bounded version of Löb’s Theorem that shows bounded self-reflective agents exhibit weird Löbian behavior, too. Continue reading

The Problem of IndignationBot, Part 3

Summary: Is strange “Löbian” self-reflective behavior just a theoretical symptom of assuming unbounded computational resources?

Continue reading

(Ignore this post)

Apologies to any subscribers; I needed to publish this in order to test sidebar-hiding with several different devices and login credentials 🙂   Continue reading

Embracing boredom as exploratory overhead cost

(Follow-up to Fun does not preclude burnout)

Sometimes I decide to spend a few weeks or months putting some of my social needs on hold in favor of something specific, like a deadline. But after that’s done, and I “have free time” again, I often find myself leaning toward work as a default pastime. When I ask my intuition “What’s a fun thing to do this weekend?”, I get a resounding “Work!” Continue reading

Fun does not preclude burnout

As far as I can tell, I’ve never experienced burnout, but I think that’s only because I notice when I’m getting close. And in recent years, I’ve had a number of friends, especially those interested in Effective Altruism, make the mistake of burning out while having fun. So, I wanted to make a public service announcement: The fact that your work is fun does not mean that you can’t burn out. Continue reading

Use separate email threads for separate action-requests

When I realized this principle, I experienced around a 2x or 3x increase in my rate of causing-people-to-do-things-over-email, out of the “usually doesn’t work” range into the “usually works” range. I find myself repeating this advice a lot in an attempt to boost the effectiveness of friends interested in effective altruism and related work, so I’m making a blog post to make it easier. Continue reading

The Problem of IndignationBot, Part 2

Summary: Agents that can reason about their own source codes are weirder than you think.

Continue reading

What’s your vision of a beautiful life?

After releasing my Robust Rental Harmony algorithm, I felt a certain sense of satisfaction, like my friends and I had built something wholesome and beautiful. Reflecting on this, it occurred to me that I might want my life to feel like an artistic creation… like a beautiful substructure of mathematics that reflectively self-appreciates wherever it arises. This felt different from my desire to help the world at large, and also from my desire for moment-to-moment enjoyment. Continue reading

Deliberate Grad School

Among my friends interested in rationality, effective altruism, and existential risk reduction, I often hear: “If you want to have a real positive impact on the world, grad school is a waste of time. It’s better to use deliberate practice to learn whatever you need instead of working within the confines of an institution.” Continue reading

The Problem of IndignationBot, Part 1

I like to state the Prisoner’s Dilemma by saying that each player can destroy \$2 of the other player’s utility in exchange for \$1 for himself. Writing “C” and “D” for “cooperate” and “defect”, we have the following: Continue reading
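Taking that statement at face value, the payoff matrix can be written out concretely (my own encoding, with mutual cooperation as the zero baseline):

```python
C, D = "C", "D"

def payoffs(a, b):
    """Defecting gains you $1 and destroys $2 of the other player's utility,
    measured relative to a mutual-cooperation baseline of (0, 0)."""
    pa = (1 if a == D else 0) - (2 if b == D else 0)
    pb = (1 if b == D else 0) - (2 if a == D else 0)
    return pa, pb

print(payoffs(C, C))  # (0, 0)
print(payoffs(D, C))  # (1, -2)
print(payoffs(D, D))  # (-1, -1)
```

The resulting ordering 1 > 0 > -1 > -2 is exactly the T > R > P > S condition that defines a Prisoner’s Dilemma.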

Willpower Depletion vs Willpower Distraction

I once asked a room full of about 100 neuroscientists whether willpower depletion was a thing, and there was widespread disagreement with the idea. (Apropos, this is a great way to quickly gauge consensus in a field.) Basically, for a while some researchers believed that willpower depletion “is” glucose depletion in the prefrontal cortex, but some more recent experiments have failed to replicate this, e.g. by finding that the mere taste of sugar is enough to “replenish” willpower faster than the time it takes blood to move from the mouth to the brain: Continue reading