Here is a paper that came out of the MIRI Summer Fellows Program research workshop that I attended in June. The LaTeX code is here.
Quantilization is a form of mild optimization where you tell an AI to choose at random from (for instance) the top 10% of solutions, rather than always taking the single best one. This helps to get around the problem of an agent whose values are mostly aligned with yours but that does pathological things when it optimizes those values to the extreme. In this paper, we examine a similar process, but with two (or more) agents rather than one.
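For readers who haven't seen quantilization before, here is a minimal toy sketch in Python of a discrete quantilizer. The names, the discrete action set, and the uniform base distribution are illustrative assumptions of mine, not anything from the paper:

```python
import random

def quantilize(actions, utility, base_weights=None, q=0.1):
    """Sample an action from roughly the top q fraction of actions
    (ranked by utility) instead of taking the single best one.

    Toy sketch: `actions`, `utility`, and `base_weights` are
    illustrative names, not from the paper.
    """
    if base_weights is None:
        base_weights = [1.0] * len(actions)  # uniform base distribution

    # Rank actions from best to worst according to the utility function.
    ranked = sorted(zip(actions, base_weights),
                    key=lambda aw: utility(aw[0]), reverse=True)

    # Keep just enough of the best actions to cover a q fraction of the
    # base distribution's probability mass.
    total = sum(w for _, w in ranked)
    kept, mass = [], 0.0
    for action, w in ranked:
        kept.append((action, w))
        mass += w
        if mass >= q * total:
            break

    # Sample among the kept actions in proportion to the base
    # distribution, rather than deterministically picking the top one.
    return random.choices([a for a, _ in kept],
                          weights=[w for _, w in kept], k=1)[0]

# Example: instead of always picking the highest-scoring action,
# sample from roughly the top 10%.
actions = list(range(100))
print(quantilize(actions, utility=lambda a: a, q=0.1))
```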
For those of you who were also at MSFP, there is some additional discussion of the paper here. The main point is that Connor is working on a simulation to help test the ideas in the paper. If you’re interested in helping with the simulation but don’t have access to the forum post linked above, get in touch with me.