a1 School of Mathematical and Computing Sciences, Victoria University, PO Box 600, Wellington, New Zealand E-mail: Rod.Downey@mcs.vuw.ac.nz
a2 Department of Mathematics, University of Chicago, Chicago, IL 60637, USA E-mail: email@example.com
a3 Department of Computer Science, Auckland University, Auckland, New Zealand E-mail: firstname.lastname@example.org
a4 Institute of Discrete Mathematics and Geometry, Technical University of Vienna, Wiedner Hauptstrasse 8-10 / E104, A-1040 Vienna, Austria E-mail: email@example.com
We report on some recent work centered on attempts to understand when one set is more random than another. We look at various methods of calibration by initial segment complexity, such as those introduced by Solovay; Downey, Hirschfeldt, and Nies; Downey, Hirschfeldt, and LaForte; and Downey; as well as other methods, such as the lowness notions of Kučera and Terwijn; Terwijn and Zambella; Nies [101, 100]; and Downey, Griffiths, and Reid; higher-level randomness notions going back to the work of Kurtz, Kautz, and Solovay; and other calibrations of randomness based on definitions along the lines of Schnorr.
These notions have complex interrelationships, as well as connections to classical notions from computability theory such as relative computability and enumerability. Computability figures in obvious ways in definitions of effective randomness, but there are also applications of randomness-related notions within computability theory itself. For instance, an exciting by-product of the program we describe is a more-or-less natural, requirement-free solution to Post's Problem, much along the lines of the Dekker deficiency set.
(Received September 13, 2005)