AI strategy and policy research positions at FHI (deadline Jan 6)

Oxford’s Future of Humanity Institute has some new positions opening up at their Strategic Artificial Intelligence Research Centre. I know these guys — they’re super awesome — and if you have the following three properties, then humanity needs you to step up and solve the future:

(1) You care deeply about steering the future of humanity in a positive direction;

(2) You have a background in technical research of some kind at the Master’s or PhD level (e.g. economics, math, CS); and

(3) You consider yourself something of a “generalist,” in that you are willing to think systematically about topics outside your on-paper qualifications.

The problem: the Tragedy of the Research Commons. Everyone who actually has properties 1–3 thinks someone else would do a better job than they would, so almost no one is actually doing the work, or even applying for the positions. I see this all the time, in potential researcher after potential researcher, and it’s a bad pattern for humanity to follow…

So, break the pattern and apply for some research jobs!

Right now is an exceedingly high-leverage time to do this research, and there’s a lot to be done, from technical safety specs to strategic policy work. The world is finally starting to realize AI x-risk is worth thinking about (see Bill Gates, Elon Musk, Stephen Hawking, Stuart Russell, Tom Dietterich…), and many groups — governments, funding agencies, even AI companies — are now looking for competent researchers to answer their questions about it.

But by my count, there are currently 12 people on the planet studying AI x-risk on a full-time basis.* That’s 12 out of roughly 7.3 billion people, or about 0.0000002% of the world population… And sadly, as awesome as they are, they’re surely not the most talented 12 people humanity has to offer. Those 12 people are NOT the most elite, absurdly-high-IQ cognitive powerhouses in the world. They are the people who stepped up.

And maybe you should step up too. If you care, at least apply for one or two of the jobs. Get an interview. Be honest about what you know, what you don’t, and how you might be able to help, so FHI can decide if you’re actually the best fit for their needs. Just don’t decide for them by not applying!


*The 12 people — so you can call me out if I’ve missed any — are Nick Bostrom, Eliezer Yudkowsky, Stuart Armstrong, Anders Sandberg, Owain Evans, Daniel Dewey, Benya Fallenstein, Patrick LaVictoire, Jessica Taylor, me, Yang Liu, and Scott Garrabrant. There are some other people (fewer than 30) working full-time on studying x-risk in general, some others (fewer than 30) working full-time as support staff for AI x-risk researchers, and a number of AI researchers (fewer than 30) who spend 20% or so of their research hours explicitly researching AI safety techniques or risks. But really, AI x-risk, like cybersecurity, is a field deserving of full-time researcher attention; even more so, in fact, because superintelligent systems will be even harder to predict than humans. Be one of those full-time researchers!
