In the preface to Structure and Interpretation of Computer Programs, Abelson and Sussman tell us: “Underlying our approach to this subject is our conviction that ‘computer science’ is not a science and that its significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology—the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects. Mathematics provides a framework for dealing precisely with notions of ‘what is.’ Computation provides a framework for dealing precisely with notions of ‘how to.’” This idea of “procedural epistemology” informs my research. My work is in agent-based computational models applied to economics, with a heavy statistical component. In other words, I consider myself an economist, a statistician, and a computational social scientist.
Throughout the history of modern mathematics, there has been controversy over the use of set theory. Axiomatic set theory implies the existence of so-called intangible objects: objects whose existence involves no contradiction but of which no example can be constructed. Put algorithmically, any procedure for generating such an object would never halt.
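A standard textbook illustration of such a non-constructive existence proof, sketched here only to make the idea concrete (it is not specific to set theory or economics):

```latex
% Claim: there exist irrational numbers $a$ and $b$ with $a^b$ rational.
% Proof (non-constructive): consider $\sqrt{2}^{\sqrt{2}}$.
% Case 1: $\sqrt{2}^{\sqrt{2}}$ is rational. Take $a = b = \sqrt{2}$.
% Case 2: $\sqrt{2}^{\sqrt{2}}$ is irrational. Then take
%   $a = \sqrt{2}^{\sqrt{2}}$ and $b = \sqrt{2}$, since
\[
\left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}} = \sqrt{2}^{\,\sqrt{2}\cdot\sqrt{2}} = \sqrt{2}^{\,2} = 2.
\]
% Either way a pair exists, yet the proof never tells us which case
% holds, so it exhibits no explicit witness.
```

The proof establishes existence without producing the object, which is exactly the sense in which such objects resist algorithmic construction.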
This issue has analogues in many sciences. Scientific controversies are often duels between different families of models. Both families may be reasonable, even plausible, but it is impossible to enumerate all possible theories, economic or otherwise, and this would remain so even if what constituted an economic theory were defined more sharply than is currently, or even possibly, the case. It is a truism that institutions matter in economics, yet it is hardly possible to consider every possible institutional framework. This poses a major difficulty, since many of the ideological fault lines in economics ultimately reduce to the question of the relationship between the market and the institutional framework within which it exists: which is prior, and why.
I believe the research potential of agent-based models and computational social science is immense, for several reasons. First, such models can naturally consume complex, individual-level data structures in a way that is practically impossible in traditional mathematical economics. Indeed, it is increasingly possible to run agent-based models at full scale: for example, a real-estate agent-based model whose agents are drawn one-for-one from census data, without sampling.
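The point can be made concrete with a minimal sketch: a toy housing-market model whose agents are instantiated one-for-one from individual-level records. The microdata here are synthetic stand-ins, and every name, rule, and number is illustrative rather than drawn from a real model.

```python
import random

random.seed(0)

# Hypothetical microdata: one record per household, no sampling.
microdata = [
    {"income": random.lognormvariate(10.5, 0.5), "renting": random.random() < 0.35}
    for _ in range(1000)
]

class Household:
    def __init__(self, record):
        self.income = record["income"]
        self.renting = record["renting"]

    def step(self, price):
        # A renter buys when the price is affordable
        # (a toy affordability rule, not an empirical one).
        if self.renting and price < 4 * self.income:
            self.renting = False

# Full scale: every record becomes exactly one agent.
agents = [Household(r) for r in microdata]

price = 150_000.0
for t in range(20):
    for a in agents:
        a.step(price)
    owners = sum(not a.renting for a in agents)
    # Toy feedback: excess demand pushes the price up.
    price *= 1 + 0.01 * (owners / len(agents) - 0.5)

print(f"final price: {price:,.0f}, owner share: {owners / len(agents):.2f}")
```

The key feature is the one-to-one mapping from data records to agents, which is what “full scale, without sampling” means in practice.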
Second, the very act of programming involves the juggling of nested abstractions. This provides a way to formalize the relationships between these families of models, and indeed to nest models within one another. The mere act of doing so reveals a great deal of their logical structure, as well as some less-remarked-upon assumptions.
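One simple form of such nesting can be sketched in code: a more general model family that collapses to a simpler one when a parameter is restricted. The functional forms and parameter names below are illustrative assumptions, not taken from any particular paper.

```python
def simulate(memory, periods=50, price0=1.0, target=2.0, speed=0.2):
    """General family: expected price is an adaptive (weighted) memory
    of past prices; `memory` controls how much history is retained."""
    price, expected = price0, price0
    for _ in range(periods):
        expected = memory * expected + (1 - memory) * price
        price += speed * (target - expected)  # adjust toward the target
    return price

general = simulate(memory=0.8)  # the broader, adaptive-expectations family
nested = simulate(memory=0.0)   # special case: static expectations

print(general, nested)
```

Setting `memory = 0` recovers the simpler model exactly, so the logical relationship between the two families, one being a parameter restriction of the other, is made explicit in the code itself.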
An often-remarked difficulty of computational social science is model interpretation. While the core principles of a model may be terse, its potential complexity is immense. There is an interesting parallel between this difficulty and the one facing newer statistical methods such as machine learning. Traditionally, the parameters estimated when fitting a statistical model had a direct theoretical interpretation. When computing was scarce and expensive, this requirement made sense. Now, it makes more sense to appeal directly to the predictions a model makes and to how they compare with actual data. For this reason, I think the way forward is more in keeping with Leo Breiman’s “second culture”. Thus, computational social science, in concert with a new line of research in the statistical emulation of such models, will help us understand how much of the complexity is inherent to the system and how much is imposed by the parametrization.
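The idea of statistical emulation can be sketched minimally: run a (here, deliberately cheap) simulator over a design of parameter values, then fit an inexpensive surrogate to the input–output map so that map can be inspected directly. The simulator and the quadratic surrogate below are illustrative assumptions, not a specific method from the literature.

```python
import numpy as np

def simulator(theta):
    """Stand-in for an expensive agent-based model run at parameter theta."""
    return np.sin(3 * theta) + 0.5 * theta**2

thetas = np.linspace(0.0, 1.0, 25)            # design points
y = np.array([simulator(t) for t in thetas])  # simulator outputs

# Quadratic emulator fitted by ordinary least squares.
X = np.column_stack([np.ones_like(thetas), thetas, thetas**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

emulated = X @ coef
rmse = float(np.sqrt(np.mean((emulated - y) ** 2)))
print("emulator coefficients:", coef, "RMSE:", rmse)
```

If a surrogate this simple reproduces the simulator’s behavior well, much of the apparent complexity was an artifact of the parametrization; where it fails, the residual complexity is plausibly inherent to the system.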
My long-term research ambition is to work at this developing intersection, applying my skills as an economist, a statistician, and a computational modeler.