In June, I attended a two-week workshop in Northern California, run by the Machine Intelligence Research Institute (MIRI) and the Center for Applied Rationality (CFAR), that combined technical research and personal growth. MIRI's goal is to lay the multidisciplinary theoretical foundations (in math, philosophy, computer science, and decision theory, among other fields) needed to ensure friendly artificial intelligence, that is, to make it so that when artificial intelligence becomes smarter than humans, its interests are aligned with ours.
Their research has a fair amount of overlap with mathematical logic, and I'd encourage any logicians interested in this sort of thing to get involved. It's a very good and important cause; the future of humanity is at stake. Unaligned artificial intelligence could destroy us all in a way that makes nuclear war and global warming seem tame by comparison.
For a technical perspective, their technical research agenda is a good place to start. For a less technical introduction, the book Superintelligence by Nick Bostrom is a good starting point, and it helps explain why MIRI's agenda is important and nontrivial.
One area of MIRI research that I find particularly interesting involves a version of the Prisoner's Dilemma played by computer programs that are allowed to read each other's source code. This work makes use of a bounded version of Löb's theorem; in fact, a fair bit of MIRI research relates to Löb's theorem. Here is a good introduction.
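To make the setup concrete, here is a minimal toy sketch in Python of agents that read (or simulate) each other and play a one-shot Prisoner's Dilemma. The agent names (defect_bot, clique_bot, fair_bot), the payoff numbers, and the depth-limited simulation are my own illustrative choices, not MIRI's actual formalism, which replaces simulation with proof search in a formal theory; that is where Löb's theorem enters.

```python
import inspect


def defect_bot(opponent, depth=3):
    """Always defects, no matter what the opponent's code says."""
    return "D"


def clique_bot(opponent, depth=3):
    """Cooperates only with agents whose source code is identical to its own."""
    same = inspect.getsource(opponent) == inspect.getsource(clique_bot)
    return "C" if same else "D"


def fair_bot(opponent, depth=3):
    """Cooperates if a depth-limited simulation of the opponent, playing
    against fair_bot, cooperates; defects once the budget is exhausted."""
    if depth == 0:
        return "D"  # simulation budget used up: give up and defect
    return "C" if opponent(fair_bot, depth - 1) == "C" else "D"


def play(a, b):
    """One round of the Prisoner's Dilemma between two source-reading agents."""
    moves = (a(b), b(a))
    payoffs = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
               ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    return moves, payoffs[moves]


for a, b in [(fair_bot, fair_bot), (fair_bot, defect_bot),
             (clique_bot, clique_bot), (fair_bot, clique_bot)]:
    moves, payoff = play(a, b)
    print(f"{a.__name__} vs {b.__name__}: moves={moves}, payoffs={payoff}")
```

Notice that this naive fair_bot ends up defecting even against a copy of itself: simulating an opponent who is simulating you never bottoms out without a cutoff, and a pessimistic cutoff propagates defection back up the chain. Replacing simulation with a bounded search for a proof that "my opponent cooperates with me" is what lets two such agents cooperate robustly, and Löb's theorem is the key step in showing that the proof search succeeds.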
Feel free to contact me if you’d like to know more about how to get involved with MIRI research. Or you can contact MIRI directly.