Probability theory is an axiomatic description of the world, that is, of a sample space, in terms of a measure of total size one. This simple definition leads to a wonderfully intricate picture, in which our intuitions about chance are sometimes neatly confirmed and at other times completely baffled. In general, probability theory starts by positing a probability measure and then makes statements about individual events. Statistics is sometimes referred to as “reverse engineering” because it reverses this viewpoint: it starts from individual observed events and tries to infer which probability measure generated them. To do justice both to the structure of the data and to the principle of parsimony, statistics concentrates on families of probability measures, known as statistical models, such as a linear regression model for continuous data or a mixed-effects model for longitudinal data.
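The two directions can be made concrete with the simplest possible example. Below is a minimal sketch (not from the text; the Bernoulli family and the value p = 0.3 are illustrative choices): probability theory fixes a measure and generates events, while statistics sees only the events and infers the measure, here via the maximum-likelihood estimate for the Bernoulli family, which is just the sample mean.

```python
import random

rng = random.Random(0)

# Probability theory: fix a probability measure (a biased coin
# landing heads with probability p_true) and generate events from it.
p_true = 0.3
flips = [1 if rng.random() < p_true else 0 for _ in range(10_000)]

# Statistics: observe only the events and infer which member of the
# Bernoulli family produced them.  The maximum-likelihood estimate
# of p is the sample mean of the observed flips.
p_hat = sum(flips) / len(flips)
print(f"estimated p = {p_hat:.3f}")
```

With ten thousand observations the estimate lands close to the true parameter, which is the law of large numbers doing the "reverse engineering" for us.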

Both probability and statistics have a wide variety of application areas: stochastic simulations are crucial in climate models and in the study of complex events such as forest fires. Examples include the percolation model and the contact model, which describe, respectively, a disordered medium and the spread of an epidemic. These systems give insight into the behaviour of models for ferromagnetism, such as the Ising and Potts models, and they provide a host of captivating, well-motivated problems in probability. Statistics, as usual, is on the flip side of this approach: it can combine such models with data to make statements about whether climate change is likely to be man-made, whether the HIV virus can be stopped in its tracks, or what the likely casualty count of the invasion of Iraq is.
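To give a flavour of the percolation model mentioned above, here is a minimal simulation sketch (the grid size, probabilities, and flood-fill routine are illustrative choices, not anything prescribed by the text). Each site of an n × n grid is declared open independently with probability p, and we ask whether an open path of nearest-neighbour steps crosses from the top row to the bottom row, the standard picture of fluid seeping through a disordered medium.

```python
import random

def crosses(n, p, rng):
    """Site percolation on an n x n grid: each site is open with
    probability p; return True if an open nearest-neighbour path
    connects the top row to the bottom row."""
    open_site = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    # Flood-fill (depth-first) starting from every open top-row site.
    stack = [(0, j) for j in range(n) if open_site[0][j]]
    seen = set(stack)
    while stack:
        i, j = stack.pop()
        if i == n - 1:          # reached the bottom row
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < n and 0 <= nj < n
                    and open_site[ni][nj] and (ni, nj) not in seen):
                seen.add((ni, nj))
                stack.append((ni, nj))
    return False

rng = random.Random(1)
# Crossing is rare below the critical threshold of the square lattice
# (roughly p_c ~ 0.593 for site percolation) and overwhelming above it.
for p in (0.4, 0.7):
    hits = sum(crosses(40, p, rng) for _ in range(200))
    print(f"p = {p}: crossing frequency {hits / 200:.2f}")
```

Running this exhibits the sharp phase transition that makes percolation such a fruitful model of disordered media: the crossing frequency jumps from near 0 to near 1 as p passes the critical value.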