Archive 2010

December 15, 2010 Roger Cooke (TU Delft / Resources for the Future, USA)

Obesity Index and Tail Risk

This reports on results from a National Science Foundation project on Tail Risk, and illustrates techniques employed at RFF to make economists, insurers and government people aware of mathematical problems in dealing with tail risk. The main mathematical results concern conditions under which tail dependence is amplified by aggregation, and a new measure of tail obesity which does not involve estimating a parameter in a hypothetical distribution, but is suitable for measuring tail obesity in finite samples.
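For illustration, here is a minimal Monte Carlo sketch in Python (not from the talk) of one such finite-sample measure. It assumes the definition Ob(X) = P(X1 + X4 > X2 + X3), where X1 <= X2 <= X3 <= X4 are the order statistics of four independent draws; heavier tails push this probability towards 1.

import numpy as np

def obesity_index(sample, n_draws=100_000, seed=0):
    # Assumed definition: Ob(X) = P(X1 + X4 > X2 + X3), where
    # X1 <= X2 <= X3 <= X4 are the order statistics of four independent
    # draws; here the draws are bootstrap resamples from a finite sample.
    rng = np.random.default_rng(seed)
    draws = rng.choice(sample, size=(n_draws, 4))
    draws.sort(axis=1)
    return np.mean(draws[:, 0] + draws[:, 3] > draws[:, 1] + draws[:, 2])

rng = np.random.default_rng(1)
print(obesity_index(rng.exponential(size=5000)))         # lighter tail
print(obesity_index(rng.pareto(1.0, size=5000) + 1.0))   # obese tail: closer to 1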

December 8, 2010 Mark van de Wiel (VUMC)

Comparing predictors in a training-testing setting

Statistics has a long tradition in developing tests and information criteria for the purpose of model selection. These approaches usually do not apply to prediction models for high-dimensional data. Therefore, one resorts to training-test set approaches, by means of cross-validation, random subsampling or resampling. In the literature, much attention is devoted to estimating prediction error in this setting. This talk highlights three different aspects of prediction error: comparative testing, confidence intervals and variability with respect to the training data sets.

We start by motivating why one might be interested in comparative inference rather than simple comparison of estimated prediction errors. A simple testing procedure is introduced that applies simultaneously to multiple splits of the same data set. For each split, both predictors predict the responses of the same samples, resulting in paired residuals to which a signed-rank test is applied. Hence, multiple splits result in multiple p-values. The median p-value and the mean inverse normal transformed p-value are proposed as summary (test) statistics, for which theoretical bounds on the overall type I error rate under a variety of assumptions are provided.
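A minimal sketch of the multiple-splits procedure, on synthetic data and with two stand-in predictors (ridge and lasso); the exact scaling needed for the inverse-normal summary statistic under dependent splits comes from the theoretical bounds in the talk and is omitted here.

import numpy as np
from scipy.stats import wilcoxon, norm
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import train_test_split

# Synthetic high-dimensional data as a stand-in for the real sets.
rng = np.random.default_rng(1)
n, p = 100, 500
X = rng.normal(size=(n, p))
y = X[:, :5] @ rng.normal(size=5) + rng.normal(size=n)

p_values = []
for split in range(25):                          # multiple random splits
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                          random_state=split)
    r1 = (yte - Ridge(alpha=1.0).fit(Xtr, ytr).predict(Xte)) ** 2
    r2 = (yte - Lasso(alpha=0.1, max_iter=5000).fit(Xtr, ytr).predict(Xte)) ** 2
    # Both predictors score the same test samples: paired residuals,
    # to which a signed-rank test is applied.
    p_values.append(wilcoxon(r1, r2).pvalue)

p_values = np.array(p_values)
print("median p-value:", np.median(p_values))
# Probit-transformed mean p-value, mapped back through the normal CDF;
# the scaling that makes this a valid test under dependent splits is
# given by the bounds in the talk and is omitted in this sketch.
print("inverse-normal:", norm.cdf(norm.ppf(p_values).mean()))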

Next, we briefly discuss the potential to extend the testing approach to confidence intervals. Finally, we focus on another aspect of prediction: the variability of the predictions across training data sets. We introduce the notion of a confidence score, which quantifies such variability. We show that the well-known decomposition of the Brier score, a popular prediction error measure, generalizes nicely to include this variance component. The latter is not true for another popular prediction error measure, the area under the curve (AUC).

Our methods are illustrated on several (high-dimensional) data sets with binary or survival response.

December 1, 2010 Evgeny Verbitskiy (UL)

Thermodynamics of a binary symmetric channel.

The binary symmetric channel (BSC) is probably the simplest communication model with noise studied in information theory. I will discuss a very basic question: how does the binary symmetric channel affect the thermodynamic properties of the input process? For example, given that the input is a Markov (or, more generally, a Gibbs) process, will the output process remain Gibbs?
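The setup itself is easy to simulate; the sketch below (illustrative only) passes a binary Markov input through a BSC with crossover probability eps. Whether the output retains a Gibbsian description is the subtle part, and is not something a simulation can answer.

import numpy as np

rng = np.random.default_rng(0)
n, eps = 100_000, 0.1              # input length; channel crossover probability

# Binary Markov input: keep the previous symbol with probability 0.9.
x = np.empty(n, dtype=np.int64)
x[0] = 0
flips = rng.random(n - 1) < 0.1
for i in range(1, n):
    x[i] = x[i - 1] ^ int(flips[i - 1])

# Binary symmetric channel: every bit is flipped independently with prob eps.
y = x ^ (rng.random(n) < eps)

# The output is a hidden-Markov image of the input; whether it is still a
# Gibbs process is exactly the question raised in the talk.
print("input  1-step autocorrelation:", np.corrcoef(x[:-1], x[1:])[0, 1])
print("output 1-step autocorrelation:", np.corrcoef(y[:-1], y[1:])[0, 1])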

November 24, 2010 Sonja Cox (TUD)

Burkholder-Davis-Gundy inequalities in Banach spaces

I will sketch how decoupling inequalities like the UMD inequality play a role in proving a Burkholder-Davis-Gundy type inequality for Banach space-valued stochastic integrals. (A small) part of my talk will be based on recent work by Mark Veraar and myself.

November 10, 2010 Michel Mandjes (UVA)

Simulation-based computation of the correlation function in a Lévy-driven queue

In this talk I consider a single-server queue with Lévy input, and in particular its workload process $Q(t)$, focusing on its correlation structure. With the correlation function defined as $r(t) := \mathrm{Cov}(Q(0), Q(t))/\mathrm{Var}\,Q(0)$ (assuming the workload process is in stationarity at time 0), we first study its transform $\int_0^\infty r(t)e^{-\theta t}\,dt$, both for the case that the Lévy process has positive jumps, and that it has negative jumps. These expressions allow us to prove that $r(t)$ is positive, decreasing, and convex, relying on the machinery of completely monotone functions. For the light-tailed case, we estimate the behavior of $r(t)$ for $t$ large. We then focus on techniques to estimate $r(t)$ by simulation. Naive simulation techniques require roughly $1/r(t)^2$ runs to obtain an estimate of a given precision, but we develop a coupling technique that leads to substantial variance reduction (the required number of runs being roughly $1/r(t)$). If this is augmented with importance sampling, it even leads to a logarithmically efficient algorithm.

(This talk is based on joint work with P.W.G. Glynn, Stanford, and will appear in Adv. Appl. Prob. later this year)
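For concreteness, a naive simulation sketch of the estimator described above (not the coupling or importance-sampling schemes from the talk), for a compound Poisson input with unit negative drift; $Q(0)$ is drawn approximately from stationarity via burn-in, and $r(t)$ is estimated across independent runs.

import numpy as np

rng = np.random.default_rng(0)
dt, horizon, burn, n_runs = 0.02, 5.0, 50.0, 500
lam, mu = 0.8, 1.0          # Poisson arrival rate; jump sizes are exp(mu)

def workload_step(q):
    # Reflected (at 0) compound-Poisson-minus-unit-drift workload, Euler step.
    jumps = rng.poisson(lam * dt)
    return max(q + rng.exponential(1.0 / mu, jumps).sum() - dt, 0.0)

steps = int(horizon / dt)
paths = np.empty((n_runs, steps + 1))
for run in range(n_runs):
    q = 0.0
    for _ in range(int(burn / dt)):     # burn-in: approximate stationarity
        q = workload_step(q)
    paths[run, 0] = q
    for k in range(steps):
        q = workload_step(q)
        paths[run, k + 1] = q

# Naive estimator: r(t) = Cov(Q(0), Q(t)) / Var Q(0) across independent runs;
# its poor precision at large t is what the coupling estimator improves on.
q0 = paths[:, 0]
for k in range(0, steps + 1, 50):       # t = 0, 1, 2, ...
    r = np.cov(q0, paths[:, k])[0, 1] / np.var(q0, ddof=1)
    print(f"t = {k * dt:4.1f}   r(t) ~ {r:6.3f}")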

November 3, 2010 Marianne Jonker (VU)

A frailty model for censored family survival data, applied to the age at onset of mental problems

Family survival data are often used in genetic research to estimate genetic and environmental contributions to the age at onset of a disease or of a specific event in life. The survival data can be modeled with a correlated (gamma) frailty model. We use such a model to estimate the degree of heredity (heritability), environmental effects, and twin effects on the age at which people contact social services for the first time, to test whether these terms differ for males and females, and to investigate whether the survival functions differ for twins and non-twins. Our data come from an ongoing study on health, lifestyle, and personality. Longitudinal data were collected from Dutch monozygotic and dizygotic twins and from their siblings at five time points between 1991 and 2002. At each of these time points it was observed whether an individual had ever contacted social services; the age at which an individual contacted social services for the first time is therefore interval censored. The frailty variable in the model is decomposed as a linear combination of four independent gamma distributed random variables which represent the genetic contribution to the age at onset, the contribution of the common environment of all siblings (twins and non-twins), a twin effect, and the contribution of individual-specific, unshared alleles and environment. The joint survival function is expressed in terms of the marginal survival functions, and several hypotheses are tested with likelihood ratio tests.
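A small simulation sketch of the frailty decomposition (the variance shares and baseline hazard below are hypothetical, chosen only for illustration): four independent gamma variables with a common scale sum to a frailty Z with mean 1, and ages at onset are then drawn from S(t | Z) = exp(-Z * Lambda(t)).

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical variance shares (not the paper's estimates): additive genetic,
# common environment, twin effect, individual residual.
shares = [0.4, 0.2, 0.1, 0.3]
sigma2 = 0.5                     # total frailty variance (illustrative)

def frailty(n):
    # Independent gammas with a common scale; their sum is again gamma,
    # with mean 1 and variance sigma2 by construction.
    return sum(rng.gamma(s / sigma2, sigma2, size=n) for s in shares)

# Ages at onset given the frailty: S(t | Z) = exp(-Z * Lambda(t)), with an
# illustrative baseline cumulative hazard Lambda(t) = (t / 50)^2.
Z = frailty(10_000)
ages = 50.0 * np.sqrt(-np.log(rng.random(10_000)) / Z)
print(np.quantile(ages, [0.25, 0.5, 0.75]))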

October 27, 2010  Mia Deijfen (Stockholm University)

Preferential attachment models and general branching processes

A much studied class of models for growing networks is based on so-called preferential attachment: vertices are successively added to the network and are attached to existing vertices with probability proportional to degree. This mechanism has been shown to lead to power law degree distributions, in agreement with empirical studies of many types of real networks. I shall describe how general branching processes can be used to derive results on the degree distribution in preferential attachment models, and also in an extension of the model where vertices are not just added to the network but may also be removed.
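The attachment mechanism itself fits in a few lines; a sketch of the basic model (without the removal extension mentioned above):

import random
from collections import Counter

def preferential_attachment(n, m=1, seed=0):
    """Grow a graph: each new vertex attaches m edges to existing vertices
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    # 'stubs' holds one entry per unit of degree, so uniform sampling
    # from it is exactly degree-proportional sampling.
    stubs = [0, 1]                # start from a single edge 0 - 1
    for v in range(2, n):
        targets = [rng.choice(stubs) for _ in range(m)]
        for t in targets:
            stubs += [v, t]
    return stubs

stubs = preferential_attachment(100_000)
deg = Counter(stubs)
hist = Counter(deg.values())
for k in sorted(hist)[:10]:       # empirical degree distribution: power law
    print(k, hist[k] / len(deg))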

October 20, 2010  Peter D. Sozou (RWTH Aachen University)

A model of copying with delay

Consider an individual choosing between alternative resources, e.g. a person choosing a restaurant or an animal choosing between foraging patches. The individual may have some information about which is likely to be the best choice, but this information is not perfect. Should the individual choose according to her own information or should she instead seek to copy the actions of another individual who may have better information?

We consider the following specific problem. Two individuals must each choose between two resources. They know that one resource is better than the other. Neither knows with certainty which is the better resource; each has her own private signal about which is likely to be better. The strength of a signal, which determines the probability that the resource which appears to be the better one really is the better one, is drawn from some known distribution. Each individual knows the strength of her own signal but not that of the other’s signal.

The decision problem proceeds in discrete time steps. In each time step, each individual can either choose a resource according to her own private signal, or wait with a view to seeing if the other individual chooses a resource and then copying that choice on the next step. There is, however, a cost to delaying, modelled by means of a constant discount factor. We derive equilibrium strategies, such that each individual's strategy is a best response to that of the other. The main result is that each individual has a signal threshold above which she should go with her own signal and below which she should wait. This threshold decreases with successive steps; it is possible for both individuals to wait for several time steps before one of them takes the plunge. Some further general results will be presented.

This is joint work with Steve Alpern.
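A toy simulation of the dynamics described above (the decreasing threshold sequence below is hypothetical, not the derived equilibrium, and payoffs and discounting are left out):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decreasing thresholds (the talk derives the equilibrium
# ones; these are made up for illustration). An agent acts at step t if her
# signal strength, the probability that her signal points to the better
# resource, is at least thresholds[t]; otherwise she waits, hoping to copy.
thresholds = [0.9, 0.8, 0.7, 0.6, 0.5]

def first_action_step(strengths):
    for t, thr in enumerate(thresholds):
        if strengths.max() >= thr:
            return t
    return len(thresholds)          # nobody ever takes the plunge

steps = [first_action_step(0.5 + 0.5 * rng.random(2)) for _ in range(10_000)]
# With signal strengths uniform on (1/2, 1), long mutual waits do occur.
print("mean first-action step:", np.mean(steps))
print("both waited 3+ steps  :", np.mean(np.array(steps) >= 3))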

October 13, 2010  Jelle Goeman (LUMC)

Cherry-picking: multiple testing for exploratory research

Motivated by the practice of exploratory research, we formulate an approach to multiple testing that reverses the traditional roles of the user and the multiple testing procedure. Rather than letting the user choose the error criterion and the procedure determine the resulting rejected set, we propose to let the user choose the rejected set freely, and to let the multiple testing procedure return a confidence statement on the number of false rejections incurred. In our approach, such confidence statements are simultaneous for all choices of the rejected set, so that post hoc selection of the rejected set does not compromise their validity. As a tool to achieve this reversal of roles we use the familiar closed testing procedure, but focus on the non-consonant rejections that this procedure makes. We suggest several shortcuts to avoid the computational problems associated with closed testing.
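A brute-force sketch of this reversal, assuming a Fisher-combination local test on independent p-values (the shortcuts from the talk are not implemented): for any user-chosen rejected set, closed testing yields a simultaneous upper confidence bound on the number of false rejections.

from itertools import combinations
from math import log
from scipy.stats import chi2

def fisher_p(ps):
    # Fisher combination as the local test (valid for independent p-values).
    return chi2.sf(-2 * sum(log(p) for p in ps), 2 * len(ps))

def closed_testing_bound(pvals, chosen, alpha=0.05):
    """Simultaneous (1 - alpha) upper bound on the number of false
    rejections in the user-chosen set, via brute-force closed testing
    (exponential in the number of hypotheses; fine for small m)."""
    m = len(pvals)
    def rejected(I):
        # I is rejected by closed testing iff every superset J of I
        # (within all m hypotheses) is rejected by the local test.
        rest = [j for j in range(m) if j not in I]
        for r in range(len(rest) + 1):
            for extra in combinations(rest, r):
                if fisher_p([pvals[j] for j in I + extra]) > alpha:
                    return False
        return True
    # Largest subset of the chosen set NOT rejected bounds the false rejections.
    for size in range(len(chosen), 0, -1):
        if any(not rejected(I) for I in combinations(chosen, size)):
            return size
    return 0

pvals = [0.001, 0.004, 0.02, 0.3, 0.6, 0.8]
chosen = (0, 1, 2)                  # cherry-picked set, chosen post hoc
print("false rejections <=", closed_testing_bound(pvals, chosen))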

October 6, 2010  Frank den Hollander (UL)

Random walk in dynamic random environment

We consider an interacting particle system on the integer lattice in equilibrium, constituting a dynamic random environment, together with a nearest-neighbor random walk that on occupied sites has a local drift to the right but on vacant sites has a local drift to the left. We describe some recent results for the empirical speed of the walk: law of large numbers, central limit theorem, and large deviation principle. We compare these results with what is known for static random environments, and list some key open problems.

This is joint work with Luca Avena and Frank Redig.
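A toy version of the model above, with a deliberately oversimplified environment (sites are resampled independently at every step, so there are no space-time correlations, unlike the interacting particle systems of the talk):

import numpy as np

rng = np.random.default_rng(0)
rho, p = 0.6, 0.8          # site density; drift parameter of the walk
T, n_walks = 10_000, 200

speeds = []
for _ in range(n_walks):
    x = 0
    for _ in range(T):
        # Toy environment: every site freshly resampled at each step.
        occupied = rng.random() < rho
        # Drift to the right on occupied sites, to the left on vacant ones.
        x += 1 if rng.random() < (p if occupied else 1 - p) else -1
    speeds.append(x / T)

# Law of large numbers: the empirical speed concentrates around
# rho*(2p - 1) + (1 - rho)*(1 - 2p) = (2*rho - 1)*(2*p - 1).
print(np.mean(speeds), (2 * rho - 1) * (2 * p - 1))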

September 29, 2010  Rob van den Berg (CWI/VU)

Sublinearity of the travel-time variance for dependent first passage percolation

Suppose we assign to each edge e of the d-dimensional cubic lattice a non-negative value t(e). The passage time of a path in the lattice is then defined as the sum of the t-values of the edges in the path. The passage time from a vertex v to a vertex w is defined as the infimum of the passage times of all paths from v to w.
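Since all t(e) are non-negative, passage times from a fixed vertex can be computed with Dijkstra's algorithm; a small sketch on a finite box of the square lattice with i.i.d. two-valued edge weights:

import heapq
import random

def passage_times(n, source=(0, 0), low=1.0, high=4.0, p_low=0.5, seed=0):
    """First passage times from `source` on an n x n box of Z^2, with
    i.i.d. two-valued edge weights; Dijkstra applies since all t(e) >= 0."""
    rng = random.Random(seed)
    weights = {}
    def t(u, v):
        key = (u, v) if u < v else (v, u)   # one value per undirected edge
        if key not in weights:
            weights[key] = low if rng.random() < p_low else high
        return weights[key]
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        x, y = u
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= v[0] < n and 0 <= v[1] < n:
                nd = d + t(u, v)
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
    return dist

d = passage_times(300)
# The passage time grows linearly in the distance; it is its *variance*
# (across disorder realizations) that the theorem shows to be sublinear.
print(d[(100, 0)], d[(200, 0)], d[(299, 0)])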

Benjamini, Kalai and Schramm proved (in a paper in Ann. Probab. (2003)) that if the dimension d is at least 2 and the t(e)'s are i.i.d. two-valued random variables, the variance of the passage time from the vertex 0 to a vertex v is sublinear in the distance from 0 to v. (Note that if the dimension d = 1, the variance is of course linear in the distance.) A few years ago, this result was extended to a large class of independent, continuously distributed t-variables by Benaïm and Rossignol.

We extend the result by Benjamini, Kalai and Schramm in a very different direction, namely to a large class of models where the t(e)'s are dependent. This class includes, among other interesting cases, a model studied by Higuchi and Zhang, where the passage time corresponds to the minimal number of sign changes in a subcritical `Ising landscape'.

This is joint work with Demeter Kiss.

September 15, 2010  Jan van Neerven (TU Delft)

Approximating the coefficients in parabolic stochastic partial differential equations

In this joint work with Markus Kunze, we investigate the continuous dependence on the data $A$, $F$, $G$ and $\xi$ of mild solutions of abstract parabolic stochastic partial differential equations of the form

$$dX(t) = [AX(t) + F(t, X(t))]\,dt + G(t, X(t))\,dW(t), \qquad X(0) = \xi,$$

where $W$ is a (cylindrical) Brownian motion. We provide sufficient conditions for continuous dependence of the compensated solutions $X(t) - e^{tA}\xi$ in the norms $L^p(\Omega; C^\lambda([0,T]; E))$. The results are applied to a concrete class of semilinear parabolic SPDEs with finite-dimensional multiplicative noise.
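As an illustration of the equation class (a numerical sketch, not the paper's method), one concrete member: the 1-d stochastic heat equation with a cubic nonlinearity and multiplicative noise, discretized by finite differences in space and Euler-Maruyama in time.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative example: A = Laplacian on (0,1) with Dirichlet boundary,
# F(x) = -x^3, G(x) = x/2, driven by (discretized) space-time white noise.
N, T, dt = 100, 0.5, 1e-5        # dt < h^2/2 for stability of the scheme
h = 1.0 / N
grid = np.linspace(h, 1 - h, N - 1)
X = np.sin(np.pi * grid)         # initial condition xi

for _ in range(int(T / dt)):
    lap = np.empty_like(X)
    lap[1:-1] = (X[2:] - 2 * X[1:-1] + X[:-2]) / h**2
    lap[0] = (X[1] - 2 * X[0]) / h**2          # zero boundary values
    lap[-1] = (X[-2] - 2 * X[-1]) / h**2
    dW = rng.normal(0.0, np.sqrt(dt / h), size=X.shape)
    X = X + (lap - X**3) * dt + 0.5 * X * dW

print(float(X.min()), float(X.max()))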

September 8, 2010  Zhe Guo (TU Delft)

The Hammersley process on the circle

In this talk, I will introduce the Hammersley process (HP) on the circle, together with some basic speed theorems for the particles and second-class particles starting from the uniform distribution in this model. Moreover, as the main conclusion of the talk, a result on the longest path will be shown by constructing two related interacting particle systems, namely the L-system and the M-system.

May 26, 2010: Shankar Bhamidi  (University of North Carolina, Chapel Hill)

Flows, first passage percolation and random disorder in networks


May 19, 2010 :  Markus Haase (TU Delft)

Renewal Sequences and Convergence Rates in Ergodic Theorems


May 12, 2010:  Mike Keane  (Wesleyan University)

Ergodicity of Adic Transformations

An adic transformation is a very general object, defined on the unit interval by a so-called cutting and stacking procedure. The name arose because of the example given by the dyadic odometer, or equivalently, the classical rotation by 1 of the 2-adic integers. Nowadays it has become common to describe such transformations by Bratteli diagrams, and in this guise any measure preserving transformation of a Lebesgue space can be found. In this lecture we begin with a presentation of the binomial transformation, explain its relationship with the binary odometer, and present a simple proof which I found several years ago of its ergodicity. Next, I discuss the Euler transformation, or perhaps more descriptively the "rise and fall transformation", and explain how we (joint work with Sarah Bailey, Karl Petersen, and Ibrahim Salama in Math. Proc. Camb. Phil. Soc. 141 (2006), 231-238) use a similar idea to prove its ergodicity. Finally, I'd like to present what is, in my opinion, the most interesting open problem in this area, for which we currently have no idea for a solution: Is the binomial transformation weakly (or strongly) mixing? The lecture is designed to be accessible to faculty, graduate students, and advanced undergraduates with some knowledge of measure theory and probability.
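The dyadic odometer mentioned above ("rotation by 1 of the 2-adic integers") is simply addition of 1 with carry on binary expansions; a few lines of Python make this concrete:

def odometer(x):
    """Add 1 with carry in the 2-adic integers: the dyadic odometer acting
    on a (finite prefix of a) binary sequence, least significant bit first."""
    y = list(x)
    for i, bit in enumerate(y):
        if bit == 0:
            y[i] = 1
            return y
        y[i] = 0          # carry propagates
    return y + [1]        # all ones: carry off the end

x = [0] * 4
for _ in range(9):        # binary counting, least significant bit first
    print(x)
    x = odometer(x)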

April 28, 2010:  Radboud Duintjer-Tebbens  (TU Delft)

The Role of System Dynamics Models in the Debate of Control vs. Eradication of Polio

The global polio eradication program missed its original target date of 2000 due to a number of challenges, including financial shortfalls. In 2006, a number of prominent public health leaders suggested abandoning the eradication objective in favor of a policy of "control". The ensuing policy debate was informed by a mathematical model and ultimately led to a renewed commitment to finish global polio eradication. System dynamics concepts not only helped build the model but also helped identify the heuristics working against the eradication objective. I will present the context for the polio debate and give a crash course in the system dynamics modeling concepts that helped build the model and communicate the insights to a policy audience.

April 21, 2010: Peter Grunwald  (CWI/UL)

The Catch-Up Phenomenon in Model Selection and Model Averaging

We partially resolve a long-standing debate in statistics, known as the AIC-BIC dilemma: model selection/averaging methods like BIC, the Bayes factor, and MDL are consistent (they eventually infer the correct model) but, when used for prediction or adaptive estimation, the rate at which predictions improve can be suboptimal. Methods like AIC and leave-one-out cross-validation are inconsistent but typically converge at the optimal rate. We give a novel analysis of the slow convergence of the Bayesian-type methods. Based on this analysis, we propose the switching method, a modification of Bayesian model averaging that achieves both consistency and minimax optimal convergence rates. Experiments with nonparametric density estimation confirm that our large-sample theoretical results also hold in practice for small samples. We also discuss how our results can coexist with those of Yang (2005), who proved that the strengths of AIC and BIC cannot always be shared. Joint work with T. van Erven (CWI) and S. de Rooij (Cambridge).

March 24, 2010: Piet Groeneboom (emeritus professor, TU Delft)

Monotone hazards and life and death

About forty years ago, at the start of my career, a well-known statistician told me that isotonic regression was a dead subject. Twenty years ago, another well-known statistician told me that the bootstrap was dead. Around the same time, Apple Computer was declared dead by the Microsoft-following community.

So, somewhat appropriately, I recently used my Apple computer to resurrect isotonic regression and the bootstrap from their graves to perform a danse macabre. Perhaps Apple Computer, the bootstrap and isotonic regression aren't as dead as some people want us to believe.

March 10, 2010: Gerard Hooghiemstra (TU Delft)

The Poisson-Dirichlet distribution and first passage percolation on random graphs 

The Poisson-Dirichlet distribution is a {\it random} probability distribution on the positive integers. More specifically, let $E_1, E_2, \ldots$ be an i.i.d. sequence of exponentially distributed random variables with mean 1 and define $\Gamma_i = E_1 + \ldots + E_i$. Then for $\alpha \in (0,1)$, the Poisson-Dirichlet probabilities are given by $\{P_i\}_{i \geq 1}$, where $$P_i = (\Gamma_i)^{-1/\alpha} \Big/ \sum_{j=1}^\infty (\Gamma_j)^{-1/\alpha}.$$ These random probabilities will play an important role in first passage percolation on certain random graphs.
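The construction is immediate to simulate by truncating the infinite sum (a sketch; the truncation level is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
alpha, n_terms = 0.5, 10_000     # index alpha in (0,1); truncation level

# Gamma_i = E_1 + ... + E_i are the points of a unit-rate Poisson process;
# the Poisson-Dirichlet weights are the normalized Gamma_i^(-1/alpha).
E = rng.exponential(size=n_terms)
Gamma = np.cumsum(E)
w = Gamma ** (-1.0 / alpha)
P = w / w.sum()                  # truncation of the infinite sum; for
                                 # alpha < 1 the tail of the series is summable
print(P[:5], P.sum())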

March 3, 2010: Ernst Wit (Rijksuniversiteit Groningen)

Sparse inference and differential geometry -- with applications in genomics

The advent of high-dimensional datasets has presented a challenge to traditional statistical inference. The n > p paradigm turned out to be too restrictive, and statisticians seemed, for a while, to be on the high seas.

However, they found their (wet) feet again when they realized the connections between high-dimensional inference on the one hand and model choice and penalized methods on the other. L_1-penalized inference had the additional advantage of also yielding sparse solutions.

We give some background on L_1-penalized inference and consider extensions to other types of "path estimators". In particular, we will consider how to use differential geometry to extend sparse inference to non-linear models. We look at an application of penalized inference in a genomic network.
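A standard example of such a path estimator (illustrative only, using scikit-learn's lasso_path on synthetic sparse data):

import numpy as np
from sklearn.linear_model import lasso_path

# Sparse ground truth: only the first 3 of 50 coefficients are nonzero.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 50))
beta = np.zeros(50)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + rng.normal(size=80)

# The L_1 path: coefficients as a function of the penalty weight. As the
# penalty decreases, variables enter the model one by one.
alphas, coefs, _ = lasso_path(X, y)
for a, c in zip(alphas[::20], coefs[:, ::20].T):
    print(f"alpha={a:7.3f}  nonzero coefficients={np.sum(c != 0):2d}")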

February 17, 2010: Dorota Kurowicka (TU Delft)

Regular vines / new developments

Copulae (distributions on the unit hypercube with uniform margins) have become very popular in dependence modeling in financial as well as other engineering contexts. Bivariate copulae are well studied, understood and applied. Multivariate copulae, however, are often limited in the range of correlation structures and other properties, such as tail dependence, that they can handle. A graphical model introduced in 1997, called regular vines, allows specification of a joint distribution on n variables with given margins by specifying n-choose-2 bivariate copulae and conditional copulae. Estimating the parameters of the copulae on a vine by the maximum likelihood principle, known as the Pair Copula Construction (PCC), is performed sequentially, starting from the first tree. This landmark advance in associating bivariate copulae to a vine and estimating copula parameters from data demonstrated the superiority of vines and opened large areas of application in mathematical finance, risk analysis and uncertainty modeling in engineering.

Regular vines were first studied by a few researchers from this very department. It took time before the community of researchers interested in this model grew sufficiently; the vines 'fan club' now contains members from, e.g., Norway, Germany and Canada. The rapid growth of the vine community in the last few years bodes well for the pursuit of this research agenda. In this talk we introduce the graphical model of vines and briefly present its basic properties. Moreover, some new results and open research questions concerning vines will be discussed.
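A minimal sketch of a pair copula construction on three variables, with all pair copulae taken to be Gaussian for simplicity (sampling via the Gaussian-copula h-function and its inverse):

import numpy as np
from scipy.stats import norm

def h(v, u, rho):
    """Gaussian-copula h-function: the conditional CDF C(v | u)."""
    return norm.cdf((norm.ppf(v) - rho * norm.ppf(u)) / np.sqrt(1 - rho**2))

def h_inv(w, u, rho):
    """Inverse of the h-function in its first argument."""
    return norm.cdf(rho * norm.ppf(u) + np.sqrt(1 - rho**2) * norm.ppf(w))

def sample_vine(n, rho12, rho13, rho23_1, seed=0):
    """Sample a 3-variable vine built from Gaussian pair copulae:
    unconditional pairs (1,2), (1,3) and the conditional pair (2,3 | 1)."""
    rng = np.random.default_rng(seed)
    w1, w2, w3 = rng.random((3, n))
    u1 = w1
    u2 = h_inv(w2, u1, rho12)
    # Invert F(u3 | u1, u2) = h(h(u3|u1) | h(u2|u1); rho23_1) step by step.
    u3 = h_inv(h_inv(w3, h(u2, u1, rho12), rho23_1), u1, rho13)
    return np.column_stack([u1, u2, u3])

U = sample_vine(50_000, rho12=0.7, rho13=0.5, rho23_1=0.3)
print(np.corrcoef(norm.ppf(U), rowvar=False).round(2))  # implied correlations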

February 10, 2010: Eduard Belitser (Universiteit Utrecht)

On oracle projection posterior rate and model selection

We apply the Bayes approach to the problem of projection estimation of a signal observed in the Gaussian white noise model, and we study the rate at which the posterior distribution concentrates around the true signal from the space $\ell_2$, as the information in the observations tends to infinity. A benchmark is the rate of the so-called oracle projection risk, i.e., the smallest risk over all projection estimators for the unknown true signal.

Under an appropriate hierarchical prior, we study the performance of the resulting (appropriately adjusted: shifted, rescaled or empirical Bayes) posterior distribution and establish that the posterior concentrates around the true signal at the oracle projection convergence rate.

The results are nonasymptotic and uniform over $\ell_2$. Another important feature of our approach is that our results on the oracle projection posterior rate are always stronger than any result about posterior convergence at the minimax rate over a nonparametric class for which the corresponding projection oracle estimator is minimax. Based on the posterior, we construct a Bayes adaptive estimator and show that it satisfies an oracle inequality. We also study implications for the model selection problem: namely, we propose a Bayes model selector and assess its quality in terms of the so-called false selection probability.

February 3, 2010: Yanick Heurteaux (Université Blaise Pascal)

Measures and the law of the iterated logarithm

Let $m$ be a unidimensional probability measure with dimension $d$. A natural question is to ask whether the measure $m$ is comparable with the Hausdorff measure (or the packing measure) in dimension $d$. We give an answer (which is in general negative) in several situations, including self-similar measures and quasi-Bernoulli measures. The law of the iterated logarithm and estimates of the $L^q$-spectrum in a neighborhood of 1 are the tools used to obtain such results.