I sometimes forget that not everyone realizes how poorly understood open-source game theory is, until I end up sharing this example and remember how weird it is for folks to see it for the first time. Since that’s been happening a lot this week, I wrote this post to automate the process.

Consider a game where agents can view each other’s source code and return either “C” (cooperate) or “D” (defect). The payoffs don’t really matter for the following discussion.

First, consider a very simple agent called “CooperateBot”, or “CB” for short, which cooperates with every possible opponent:

def CB(opp):
    return C

(Here “opp” is the argument representing the opponent’s source code, which CooperateBot happens to ignore.)
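To make the setup concrete, here is a minimal runnable sketch of one open-source round (the `play` helper, and passing each agent’s source around by hand as a string, are my additions for illustration, not part of the formalism):

```python
C, D = "C", "D"

# The source of CooperateBot, written out by hand as a string.
CB_SOURCE = "def CB(opp):\n    return C"

def CB(opp):
    # CooperateBot ignores the opponent's source code entirely.
    return C

def play(agent1, source1, agent2, source2):
    # Each agent is handed the *source code* of the other, not a black box.
    return agent1(source2), agent2(source1)

print(play(CB, CB_SOURCE, CB, CB_SOURCE))  # ('C', 'C')
```

Representing an agent as a (function, source-string) pair sidesteps source extraction entirely; nothing in the discussion depends on how the source is obtained.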

Next consider a more interesting agent, “FairBot”, or “FB” for short, which takes in a single parameter $k$ to determine how long it thinks about its opponent:

def FB_k(opp):
    search all strings of length ≤ k
        for a proof that "opp(FB_k) = C"
    if a proof is found:
        return C
    else:
        return D

I claim this agent is interesting.

Warm-up question: What’s $FB_k(CB)$?

Warm-up answer: Easy! It depends on $k$. If $k$ is tiny, like 5, then since no proof about $CB$ can possibly be written in that many characters, the proof search in $FB_k(CB)$ finds nothing, so it returns D. As soon as $k$ is large enough to encode a proof that $CB(FB_k)=C$, we also have $FB_k(CB)=C$.
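If you want to play with the warm-up, here is a toy sketch of the threshold behavior. The two-symbol “proof” alphabet, the single accepted certificate `"!!"`, and the trivial syntactic checker are all stand-ins I made up to keep the search tractable; real FairBot searches for genuine formal proofs:

```python
from itertools import product

C, D = "C", "D"

CB_SOURCE = "def CB(opp): return C"

def valid_proof(candidate, opp_source):
    # Stand-in proof checker (my simplification): the only theorem it can
    # certify is "this agent cooperates unconditionally", and the only
    # certificate it accepts is the 2-character string "!!". A real checker
    # would verify formal proofs in, say, Peano Arithmetic.
    return candidate == "!!" and opp_source.strip().endswith("return C")

def FB(k, opp_source):
    # Search all strings of length <= k (over a toy 2-symbol alphabet)
    # for a proof that the opponent cooperates; cooperate iff one is found.
    for length in range(1, k + 1):
        for chars in product("!?", repeat=length):
            if valid_proof("".join(chars), opp_source):
                return C
    return D

print(FB(1, CB_SOURCE))  # 'D': no proof fits in one character
print(FB(5, CB_SOURCE))  # 'C': the 2-character certificate is found
```

For $k=1$ the search space contains no valid certificate, so this toy FairBot defects; once $k \geq 2$ the certificate fits and it cooperates, mirroring the threshold in the warm-up answer.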

If that wasn’t easy, it means you’re not reading this post slowly enough to appreciate what’s coming next. Read less, think more 🙂

Real question: what’s $FB_k(FB_k)$, for very large $k$?

Hint #1: Well, $FB_k(FB_k)=C$ only if it can first find a proof that $FB_k(FB_k)=C$, which is kinda circular, right? Again, if you didn’t have that thought, you are reading this blog post too fast to appreciate the rest of the answer. Go back and read the source code for $FB_k$ again, and think harder about it this time 😉

Got it yet? Here’s hint #2:

Hint #2: The next thought you should have is that FairBot cannot write down a complete simulation of FairBot writing down a complete simulation of FairBot writing down a complete simulation of… After all, the parameter $k$ is finite, even if it’s really big.

If you didn’t have this thought already, read these Wikipedia articles before continuing:

- Infinite regress
- Stack overflow

Okay, that’s enough hints. What does this all mean for the value of $FB_k(FB_k)$? Here’s the answer:

Answer: Because the proof would have to be a circular/infinite regress, it couldn’t have a finite length, so in particular it can’t have length at most $k$, so no proof is found, so the agents return “D”, right?

Wrong. By a generalization of Löb’s Theorem to bounded agents that I proved last year, for large $k$ the agents find a super-weird finite proof of cooperation, and hence cooperate. Did I mention this is super weird? If you don’t think it’s super weird, read the question again and make sure you’re not mixing up “C” and “D”, or “finite” and “infinite”. Anyway, here is the paper:
Parametric Bounded Löb’s Theorem and Robust Cooperation of Bounded Agents.
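For readers who want the shape of the argument (this summary is mine; see the paper for the bounded version): the unbounded case is exactly Löb’s theorem, which says that for a theory like Peano Arithmetic with provability predicate $\Box$,

$$\text{if } \mathrm{PA} \vdash \Box P \rightarrow P, \text{ then } \mathrm{PA} \vdash P.$$

Take $P$ to be the sentence “$FB(FB) = C$”. FairBot’s source code literally says “if $P$ is provable, return C”, so PA proves $\Box P \rightarrow P$; Löb’s theorem then yields $\mathrm{PA} \vdash P$, i.e., a finite proof of cooperation exists, and FairBot finds it. The bounded theorem shows this survives when $\Box$ is replaced by length-bounded provability, for large enough $k$.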

I hope you enjoyed that, and/or the paper linked in the answer. But don’t read the paper before you’ve thought about the puzzle yourself! To me, the main importance of results like this is to show that open-source game theory can be very counterintuitive, and we could really use a lot more research into how it works before we start building a lot of code-based agents that are smart enough to read and write code themselves.