At CFAR, we ask: Can we do more for the world by learning about cognitive biases, like scope insensitivity, that might thwart our attempts to make altruistic decisions? Can we get more use out of our gut instincts by learning what their strengths and weaknesses are? Can playing cooperative games that exercise intuitive Bayesian reasoning improve our ability to assess arguments and reason collectively in groups?
Questions about human rationality fascinate me. By “rationality”, I mean the non-trivial art and science of reasoning and acting effectively to achieve goals. This is the cognitive science sense of the word “rational”, which doesn’t mean being cold and unemotional like Mr. Spock (who, in my estimation, is comically irrational), and doesn’t mean being self-centered like homo economicus. In fact, rationality is an important tool for effective altruism; see GiveWell.org, for example.
Understanding rationality requires science — fields like psychology, neuroscience, and behavioral economics — as well as disciplines like math, statistics, and game theory, which can examine what effective strategies look like quantitatively. So a large part of my interest in rationality is professional: as an academic, I attend conferences, seminars, and workshops to learn more about how the mind works whenever I can.
But being rational is also an art, one that requires working together and learning from each other. And it is not an art I believe I or anyone else has mastered! Thankfully, understanding how we can all think and act more effectively as a team is itself an inspiring goal. It’s why I get involved in projects like the MPHD seminar, CFAR, and CFAR’s SPARC.
Lastly, there is a growing social movement around rationality and its applications, including blogs like OvercomingBias and LessWrong. All around, I think this is a good thing, and I hope it continues!