Archive 2008

December 17, 2008: Sebastian van Strien (University of Warwick and Universiteit Leiden)

On some questions of Fatou, Milnor and Palis on iterations of polynomial maps

This talk is about iterations of polynomials acting on the complex plane and their associated Julia, Fatou and Mandelbrot sets. I will give a survey of some recent results in this area.

December 10, 2008: Ruud Hendrickx (UvT)

In several jurisdictions, commercially exploiting a game of chance (rather than skill) is subject to a licensing regime. It is obvious that roulette is a game of chance and chess a game of skill, but the law does not provide a precise description of where the boundary between the two categories is drawn. We provide a framework for determining the relative skill level of a game and discuss some computational aspects. We apply this theory to various variants of poker. Confronting the computed relative skill levels with jurisprudence on the Dutch Gaming Act, we conclude that poker should be classified as a game of skill.
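The abstract does not spell out the skill measure; one common formalization in this literature (in the spirit of Borm and Van der Genugten) compares a beginner, an optimal player, and a "fictive" player who is told the outcomes of all chance moves. The sketch below is an illustrative assumption, not necessarily the exact definition used in the talk:

\[
  \text{relative skill}
  \;=\;
  \frac{\text{learning effect}}{\text{learning effect} + \text{random effect}}
  \;=\;
  \frac{E[\text{result}_{\text{optimal}}] - E[\text{result}_{\text{beginner}}]}
       {E[\text{result}_{\text{fictive}}] - E[\text{result}_{\text{beginner}}]} .
\]

A pure game of chance yields a value near 0, a pure game of skill a value near 1, and the legal question becomes where on this scale the licensing threshold should be drawn; computing the expectations for realistic poker variants is the computational challenge referred to above.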

November 26, 2008: Michel Vellekoop (UT)

Dividends and Discontinuities: the Dirty Little Secret of Mathematical Finance

Standard option models usually pay little or no attention to the inclusion of dividends in the model for the underlying asset prices. In this talk we show that option pricing is only possible in practice if dividends are explicitly included, and we provide a general semimartingale framework to do so. As a first application, we show how this allows us to extend integral representations for the early exercise premium in American options to the case where dividends are paid. A second application leads to the surprising result that futures price processes need not be risk-neutral martingales on discontinuous filtrations.
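A minimal illustration of why dividends force discontinuities (my sketch, not the framework of the talk): if the asset pays a known cash dividend $D$ at time $t_D$, absence of arbitrage forces the price to drop by (essentially) the dividend amount at the ex-dividend time,

\[
  S_{t_D} \;=\; S_{t_D^-} - D ,
\]

so the price path has a predictable jump and cannot be modelled by a continuous process such as geometric Brownian motion; any option-pricing formula has to account for this jump explicitly.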

November 12, 2008: Frank van der Meulen (TU-Delft)

Bayesian nonparametric estimation for diffusions

Diffusions can be obtained as solutions of stochastic differential equations. As such, they are characterized by their drift and diffusion coefficient. In this talk I will discuss Bayesian estimation of these coefficients using either continuous or discrete time observations. If we observe a sample path of a diffusion continuously in time, we only need to estimate the drift parameter. I will present general conditions from which the posterior rate of convergence (the rate at which the posterior contracts around the "true" parameter) for estimating this parameter can be deduced. Then I will move to the discrete time setting. For this case I will show how the posterior can be computed by Bayesian data augmentation. Joint work with Harry van Zanten and Aad van der Vaart (Vrije Universiteit).
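To make the continuous-observation remark concrete (a standard fact, stated in notation of my own choosing): writing the diffusion as the solution of

\[
  dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t ,
\]

the quadratic variation $\langle X \rangle_t = \int_0^t \sigma^2(X_s)\,ds$ can be read off from any continuously observed path, so the diffusion coefficient $\sigma$ is identified and only the drift $b$ remains to be estimated. With discrete-time observations this identification is lost, which is why the second part of the talk needs data augmentation: the unobserved path segments between observation times are imputed alongside the parameters.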

October 8, 2008: Peter Harremoës (Centrum voor Wiskunde en Informatica, CWI)

The Law of Thin Numbers

It is well known that binomial distributions and other Bernoulli sums can be approximated by Poisson distributions, which is sometimes called the Law of Small Numbers. It is less known that this can be viewed as a case of entropy maximization. Inspired by ideas from information theory, we shall develop a new framework to describe Poisson approximation. One of the ideas is the definition of thinning of a random variable, which allows us to formulate a Law of Small Numbers based on iid sequences rather than on triangular arrays. We also get a closer link to the Central Limit Theorem and a new lower bound on the rate of convergence in the Central Limit Theorem. Finally, we obtain very tight bounds on the total variation distance between a binomial distribution and the Poisson distribution with the same mean.
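For readers unfamiliar with thinning, the standard definition and the resulting statement (notation mine; the talk's exact formulation may differ) are as follows. For a random variable $X$ taking values in $\{0,1,2,\dots\}$ and $\alpha \in [0,1]$, the $\alpha$-thinning of $X$ is

\[
  T_\alpha X \;=\; \sum_{i=1}^{X} B_i ,
  \qquad B_1, B_2, \dots \ \text{iid Bernoulli}(\alpha), \ \text{independent of } X .
\]

The "law of thin numbers" then states that for iid nonnegative integer valued $X_1, X_2, \dots$ with mean $\mu$,

\[
  T_{1/n}\bigl(X_1 + \dots + X_n\bigr) \;\xrightarrow{\ d\ }\; \mathrm{Poisson}(\mu)
  \qquad \text{as } n \to \infty ,
\]

in analogy with the Central Limit Theorem, with scaling by $1/\sqrt{n}$ replaced by thinning with retention probability $1/n$.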

October 1, 2008: Sicco Verwer (TUD, Computer Science)

An efficient algorithm for learning timed processes

We describe an efficient algorithm for learning deterministic real-time automata (DRTA) from positive data. A DRTA is an intuitive model for many real-time systems. The data can be obtained from observations of some process. We assume this process to be stationary. The algorithm uses statistical tests in order to learn a DRTA model that describes this stationary process. This model can be used to reason and gain knowledge about real-time systems such as network protocols, business processes, reactive systems, etc.
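The abstract does not detail the statistical tests; state-merging learners for timed automata typically compare the observed behaviour of two candidate states and merge them only if the difference is statistically insignificant. The sketch below is an illustrative assumption (a generic chi-square contingency test on binned delays, with invented helper names), not the specific algorithm or test from the talk.

# Illustrative sketch, not the algorithm from the talk: a generic merge test of the
# kind used by state-merging learners for timed automata. Two candidate states are
# merged only if the empirical distributions of (symbol, delay-bin) pairs observed
# on their outgoing transitions are statistically indistinguishable.
from collections import Counter
from scipy.stats import chi2_contingency

def mergeable(events_p, events_q, delay_bins=(10, 100, 1000), alpha=0.05):
    # events_p, events_q: lists of (symbol, delay) pairs observed in two states.
    def binned(events):
        return Counter((sym, sum(d >= b for b in delay_bins)) for sym, d in events)
    cp, cq = binned(events_p), binned(events_q)
    keys = sorted(set(cp) | set(cq))
    table = [[cp.get(k, 0) for k in keys], [cq.get(k, 0) for k in keys]]
    p_value = chi2_contingency(table)[1]
    return p_value > alpha  # no significant difference -> allow the merge

# Example: two states with similar symbol/timing behaviour should be mergeable.
p = [("a", 5), ("a", 7), ("b", 150), ("a", 6), ("b", 200)] * 10
q = [("a", 4), ("a", 8), ("b", 170), ("a", 5), ("b", 190)] * 10
print(mergeable(p, q))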

September 24, 2008: Leandro Pimentel (TU Delft)

Greedy Polyominoes and first-passage times on random Voronoi tilings

Let N be distributed as a Poisson random set on R^d with intensity comparable to the Lebesgue measure. Consider the Voronoi tiling of R^d, { C_v : v in N }, where C_v consists of the points x in R^d that are closer to v than to any other v' in N. A polyomino P of size n is a connected union (in the topological sense of R^d) of n tiles, and we denote by Pi_n the collection of all polyominoes P of size n containing the origin. Assume that the weight of a Voronoi tile C_v is given by F(C_v), where F is a nonnegative functional on Voronoi tiles. We investigate the tail behavior of the maximal weight among polyominoes in Pi_n: F_n = F_n(N) := max{ sum_{v in P} F(C_v) : P in Pi_n }. As the main application we show that first-passage percolation has at most linear variance.

September 18, 2008: Richard Gill (UL)

Careless statistics costs lives

I will explain the Snapinn (1992) rule for early stopping of a randomized clinical trial. This very cunning protocol preserves the standard analysis at the end of a not-early-terminated trial, by balancing the chances (under the null hypothesis) of abandoning the trial early for expected futility when actually the final result would have been significant, and abandoning the trial early for expected significance when actually the final result would not have been significant. Further cunning features allow the protocol to be extended from the theoretical setting of testing a normal mean (known variance) to the general setting of, for instance, comparing two unknown Bernoulli probabilities. The Snapinn rule was built into the protocol of the now famous PROPATRIA trial of probiotics treatment in acute pancreatitis. It appears now that this trial was allowed to run to completion because of a confusion between one-sided and two-sided testing. This confusion, together with the fact that the monitoring committee was blinded to the actual treatments given to the two treatment groups, made it possible for them to continue the trial, effectively because there was still a good chance of finally obtaining a significant *harmful* effect of the treatment, when, according to their own protocol, they should have stopped it, because there was almost no chance any more of finally obtaining a significant *beneficial* effect of the treatment. I will give recommendations for precautions which should be built into the design of RCTs in the future, in order to prevent this kind of mistake.
www.math.leidenuniv.nl/~gill/probiotica.pdf (slides of talk)
arxiv.org/abs/0804.2522 (discussion paper)
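To make "the chance of finally obtaining a significant effect" precise: stopping rules of this kind are usually phrased in terms of conditional power. Under the standard Brownian-motion approximation (my notation; Snapinn's exact formulation may differ), if $Z_t$ is the interim $z$-statistic at information fraction $t$ and $z_\alpha$ is the final one-sided critical value, the probability under the null hypothesis of still ending up significant is

\[
  P\bigl(Z_1 \ge z_\alpha \,\big|\, Z_t\bigr)
  \;=\;
  1 - \Phi\!\left( \frac{z_\alpha - \sqrt{t}\,Z_t}{\sqrt{1-t}} \right) .
\]

A rule of Snapinn's type stops early when such conditional probabilities make futility (or significance) a near certainty; using the two-sided instead of the one-sided critical value, or ignoring the sign of the observed effect, changes these probabilities and hence the stopping decision, which is exactly the kind of confusion described above.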

September 10, 2008: Charlene Kalle (UU)

Beta-expansions with arbitrary digits

Beta-expansions with arbitrary digits are generalizations of the well-understood classical beta-expansions, which use the integers 0 up to the floor of beta as digit set. After a short review of the classical beta-expansions, we will introduce two transformations that generate expansions with arbitrary digits, the greedy and lazy transformation, and give some of their measure-theoretical properties. We will then consider a random transformation that generates all possible beta-expansions for a given beta and arbitrary digit set.
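As a reminder of the classical case (standard material, stated here without the arbitrary-digit refinements of the talk): for $\beta > 1$ and $x \in [0,1)$, the greedy $\beta$-expansion is generated by iterating

\[
  T_\beta x = \beta x - \lfloor \beta x \rfloor ,
  \qquad
  d_n = \bigl\lfloor \beta\, T_\beta^{\,n-1} x \bigr\rfloor ,
  \qquad
  x = \sum_{n \ge 1} \frac{d_n}{\beta^{n}} ,
\]

with digits $d_n \in \{0, 1, \dots, \lfloor \beta \rfloor\}$. The greedy algorithm always chooses the largest digit that keeps the remainder nonnegative, the lazy algorithm the smallest digit that still allows the expansion to reach $x$; with an arbitrary digit set both choices, and random mixtures of them, still produce valid expansions, which is the setting of the talk.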

May 28, 2008: Wioletta Ruszel (Groningen)

What it takes to be Gibbsian for planar rotors

We study the Gibbsian character of time-evolved planar rotor systems on $\mathbb{Z}^d$, $d \geq 2$, in the transient regime, evolving with stochastic dynamics and starting from an initial Gibbs measure $\nu$. We model the system by interacting Brownian diffusions $(X_i(t))_{i \in \mathbb{Z}^d,\, t \geq 0}$ moving on circles. We prove that for small times $t$ and both arbitrary initial Gibbs measures $\nu$ and arbitrary-temperature dynamics, or for long times and both high- or infinite-temperature initial measure and dynamics, the evolved measure $\nu^t$ stays Gibbsian. Furthermore, we show that for low-temperature initial measures evolving under infinite-temperature dynamics there is a time interval such that $\nu^t$ fails to be Gibbsian.

May 14 and 21, 2008: Mike Keane (Wesleyan University)

Once Reinforced Random Walks on Lines and Ladders

In these lectures we shall treat the recurrence (or possible transience; there are open questions here) of once reinforced random walks on the integers and on products of the integers with finite segments of integers, called ladders. The first lecture will deal with once reinforcement on the integers, where we can prove that no matter what the strength of the reinforcement (or weakening) is, such random walks are recurrent. In this lecture we also introduce the martingale approach to the recurrence problem. In the second lecture, we shall treat once reinforcement on ladders. If there are only two copies of the integers, then Sellke has proved that once reinforced random walk is recurrent for any positive reinforcement, and together with Feiden we now have a proof that this remains true for negative reinforcement (i.e. weakening). Both questions are still open for ladders of width greater than two, although there are positive results for some values of positive reinforcement due to Sellke (low values) and Vervoort (high values). We sketch some of the proofs and explain the current state of affairs. Of course, it is expected that for any width and any reinforcement, positive or negative, the random walk is recurrent, and even if we consider the case of two dimensions, i.e. ladders of infinite width, we expect recurrence. However, the latter problem seems to be well beyond reach using current techniques.

May 7, 2008: Karma Dajani (Utrecht)

Beta-expansions revisited 

We give an overview of some of the old and new results describing the ergodic and arithmetic properties of algorithms generating expansions to non-integer base.

April 23, 2008: Anne Fey-den Boer (TU Eindhoven)

Quasi-units in Zhang's sandpile model

Zhang's model is a non-abelian sandpile model. Numerical simulations of this model on large grids have indicated that the stationary height distribution per site is sharply peaked at discrete values, resembling that of the abelian sandpile model, despite the fact that in Zhang's model the heights are continuous. Zhang called these values 'quasi-units'. We have defined and analyzed this model rigorously in dimension 1. Our main result concerns the limit of infinite grid size. We find that the stationary height distribution indeed tends to that of the abelian sandpile model, up to a scaling factor. Among other results, we prove uniqueness of the stationary height distribution. Finally, I will outline some future research plans on this model, for example, studying phase transitions in an infinite-volume version, studying the model in higher dimensions and as a growth model, and eventually forming a link with neuronal network modelling.
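For readers who do not know the model, a common formulation (my summary; the details in the talk may differ) is the following. Each site $i$ of the grid carries a continuous height $h_i \ge 0$; at every time step a random amount of mass (e.g. uniform on an interval $[a,b]$) is added to a uniformly chosen site, and a site whose height reaches the threshold $1$ topples by passing its entire height in equal shares to its neighbours and resetting to $0$:

\[
  h_i \mapsto 0 ,
  \qquad
  h_j \mapsto h_j + \frac{h_i}{2d} \quad \text{for each of the } 2d \text{ neighbours } j .
\]

Because the amount transferred depends on the current (continuous) height, the final configuration depends on the order in which unstable sites are toppled; this is what makes the model non-abelian, in contrast with the abelian sandpile, where a toppling site always sends exactly one unit to each neighbour. The 'quasi-units' are the sharp peaks that the stationary height distribution nevertheless develops around abelian-like values.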

April 16, 2008: Ludolf Meester (TU Delft)

Extremal distributions for sums of iid random variables on [0,1], or: Should simple problems have simple solutions?

Two old "Problems section conjectures", one from Statistica Neerlandica and one from SIAM Review, concern the following question: Let X_1,..., X_n be i.i.d. random variables on [0,1], satisfying E[X_1]=m, 0<m<1. Let S_n=X_1+...+X_n and 0<=t<n. Given n, m and t, which distribution maximizes P(S_n<=t)? From the answer a non-parametric confidence bound (of interest to auditors) could be derived. It would also imply a sharpening of Hoeffding's inequality. The n=1 version of the problem is easily solved by looking for equality in Markov's inequality (you can do this in 5 minutes). In an attempt to solve the general problem I apply Mattner's Lagrange multiplier approach, a method for finding (all kinds of) extremal distributions, which is of interest in itself. For n=2, the resulting Lagrange conditions can be shown to imply that extremal distributions should be discrete with at most three support points, one of which is 0 or 1. Combining this with some elementary optimization, this case is solved. I will present these solutions and their implications for the published conjectures. In addition, I would like to discuss some other insights, conjectures and attempts for n>2, perhaps generating some new ideas in the audience.
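For completeness, the $n=1$ argument alluded to above (a short derivation of the standard Markov-inequality step, in my notation): for $t < m$,

\[
  P(S_1 \le t) \;=\; P(1 - X_1 \ge 1 - t) \;\le\; \frac{E[1 - X_1]}{1-t} \;=\; \frac{1-m}{1-t} ,
\]

with equality precisely when $1 - X_1$ takes only the values $0$ and $1-t$; the maximizing distribution therefore puts mass $\frac{1-m}{1-t}$ on $t$ and mass $\frac{m-t}{1-t}$ on $1$. For $t \ge m$ the degenerate distribution at $m$ already achieves $P(S_1 \le t) = 1$.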

April 9, 2008: Peter Sozou (London School of Economics)

Courtship as a waiting game

Evolution selects for courtship behaviour that maximises Darwinian fitness. Courtship is modelled as an iterative game in which a male sends out a signal, such as a Valentine's card or a dinner invitation, that the female may accept or reject. If the female accepts, then the male gives another signal. This type of waiting game models mating behaviour in arthropods, hermit crabs and humans.

March 12, 2008: Michel Dekking (TU Delft)

Arithmetic differences of random Cantor sets and the lower spectral radius   

Let C and D be two Cantor sets. When will their difference C - D = {x - y : x in C, y in D} contain an interval? Necessarily, the sum of their Hausdorff dimensions must be larger than 1. When is this also sufficient? This question will be answered, almost surely, for a natural class of random Cantor sets.
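The necessity of the dimension condition comes from a standard dimension count, sketched here under the simplifying assumption (satisfied by the self-similar random Cantor sets in question) that Hausdorff and box dimensions coincide: since $C - D$ is the image of $C \times D$ under the Lipschitz map $(x,y) \mapsto x - y$,

\[
  \dim_H (C - D) \;\le\; \dim_H (C \times D) \;\le\; \dim_H C + \dim_H D ,
\]

so if $C - D$ contains an interval, which has dimension $1$, the sum of the dimensions must be at least $1$. The interesting question, addressed in the talk, is the converse.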

March 5, 2008: Vilmos Komornik (Strasbourg)

Univoque expansions

February 20, 2008: Birgit Witte (TU Delft)

Maximum Smoothed Likelihood Estimation in Censoring Problems 

We study the stochastic behaviour of the time $X$ it takes before a certain event takes place (also called the survival time). In many cases, the variable $X$ is not observed directly due to some sort of censoring, and in this talk we consider smooth estimators in two different but related censoring models. The first model is the current status model, where we observe a censoring variable $T$ (independent of $X$) and a variable $\Delta = 1_{\{X \le T\}}$ indicating whether the event took place before time $T$ or had not taken place yet. The maximum smoothed likelihood estimator (MSLE), based on the approach of Eggermont & LaRiccia (2001), has a characterization similar to that of the well-studied and natural estimator in this model, the nonparametric maximum likelihood estimator (NPMLE), see also Groeneboom & Wellner (1992). Both estimators are consistent; however, their asymptotic properties differ. In the second model we are interested in the bivariate distribution function $F_0$ of the pair $(X,Y)$, where $X$ is the survival time and $Y$ a continuous mark variable. As in the current status model we do not observe the variable $X$ directly; instead we observe a censoring variable $T$ and a variable $\Delta = 1_{\{X \leq T\}}$. When $X$ lies to the left of $T$, i.e. $\Delta = 1$, we also observe the variable $Y$; in case $\Delta = 0$, we do not. The NPMLE in this model is studied by Maathuis & Wellner (2007), who prove that this estimator is inconsistent. We propose an estimator in the spirit of Eggermont & LaRiccia (2001).
This is joint work with Geurt Jongbloed and Piet Groeneboom. 

February 13, 2008: Steve Alpern (London School of Economics) 

Rotational (and Other) Representations of Stochastic Matrices 

Joel E. Cohen (1981) conjectured that any stochastic matrix $P$ could be represented by some circle rotation $f$ in the following sense: for some partition $\{S_i\}$ of the circle into sets consisting of finite unions of arcs, the entries $p_{ij}$ of the matrix $P$ are the weights of intersection (*) $p_{ij} = \mu(f(S_i) \cap S_j) / \mu(S_i)$, where $\mu$ denotes arc length. In this paper we show how cycle decomposition techniques originally used (Alpern, 1983) to establish Cohen's conjecture can be extended to give a short simple proof of the Coding Theorem: any mixing (that is, $P^N > 0$ for some $N$) stochastic matrix $P$ can be represented, in the sense of (*) but with the $S_i$ merely measurable, by any aperiodic measure-preserving bijection (automorphism) of a Lebesgue probability space. Representations by pointwise and setwise periodic automorphisms are also established. Based on a joint paper with Raj Prasad.