In multi-agent settings, the best action to take typically depends on the actions that other agents will perform. In such situations, it can be useful to build a model of the behavior of others. However, if the other agents are doing the same, that model may itself contain a model of the behavior of others, which in turn may contain yet another such model. In this talk, we aim to determine how far this process of iterative opponent modeling, or theory of mind, should continue. Is it always better to be smarter, or is there such a thing as being smart enough?
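The iterative process described above is often formalized as "level-k" reasoning: a level-0 agent acts without modeling anyone, and a level-k agent best-responds to its prediction of a level-(k-1) opponent. The sketch below illustrates this in rock-paper-scissors; all names and the fixed level-0 strategy are illustrative assumptions, not details from the talk.

```python
# A minimal sketch of iterative opponent modeling ("level-k" reasoning)
# in rock-paper-scissors. The level-0 strategy is an assumption made
# here for illustration.

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
COUNTER = {loser: winner for winner, loser in BEATS.items()}  # move beating each move

def level_k_move(k, level0_move="rock"):
    """A level-0 agent plays a fixed move; a level-k agent plays the
    best response to the move it predicts a level-(k-1) opponent plays."""
    move = level0_move
    for _ in range(k):
        move = COUNTER[move]  # best-respond to the predicted move
    return move

for k in range(4):
    print(k, level_k_move(k))
# 0 rock, 1 paper, 2 scissors, 3 rock: level 3 plays the same move as
# level 0, so deeper modeling does not monotonically improve play.
```

Note how the hierarchy cycles: being "one level smarter" only helps against an opponent of exactly the level below, which is one way of making the talk's question precise.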