Archive 2012

December 19, 2012 Richard Kraaij (TU Delft)

Stationary product measures for conservative particle systems and ergodicity criteria

When: Wednesday December 19, 2012 15:45-16:45
Where: TU Delft, Faculty EWI, Mekelweg 4, Snijderszaal (first floor) 

For the exclusion process and the independent random walk model, among others, it is well known that there exists a family of stationary product measures. For the two given examples these are the Bernoulli and Poisson product measures, indexed by appropriate density profiles.
In the talk I will describe a common structure among these models and show that every model of this type has stationary product measures. I will describe what these measures are and which density profiles are appropriate. After that I will focus on the question whether these stationary product measures are ergodic. 
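
As a toy illustration of the Bernoulli case mentioned above, a minimal simulation sketch (the ring geometry, update rule and correlation check are our illustrative choices, not taken from the talk):

    import numpy as np

    # Minimal sketch: symmetric exclusion on a ring, started from a
    # Bernoulli(rho) product measure. Stationarity shows up in statistics
    # such as the nearest-neighbour pair correlation staying near rho^2.
    rng = np.random.default_rng(0)
    L, rho, n_swaps = 500, 0.3, 500000
    eta = (rng.random(L) < rho).astype(int)   # Bernoulli(rho) configuration

    for _ in range(n_swaps):
        x = rng.integers(L)                   # pick a random edge (x, x+1)
        y = (x + 1) % L
        eta[x], eta[y] = eta[y], eta[x]       # exchange the two occupancies

    print(np.mean(eta * np.roll(eta, 1)), rho**2)   # both close to 0.09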

November 28, 2012 Fetsje Bijma (VU University Amsterdam)

Different approaches to the analysis of EEG/fMRI brain data 

When: Wednesday November 28 2012, 15:45-16:45
Where: TU Delft, Faculty EWI, Mekelweg 4, Snijderszaal (first floor)

The combination of EEG and fMRI registration of brain activity is potentially fruitful, since it combines the high temporal resolution of EEG and the high spatial resolution of fMRI. Nevertheless, the analysis of the combined data is mathematically challenging. The common practice is to find a regressor of interest based on the EEG data, and use that regressor together with confounders in a linear model applied to the fMRI data. These regressors of interest are often based on a subset of the EEG data. Moreover, this approach yields large differences between healthy individuals, which are biologically hard to understand.

We have studied the use of Kronecker products (KP) in order to take into account the full EEG data. We propose a KP consisting of 3 matrices to model the EEG covariance over sensors, time samples and fMRI epochs. I will discuss the interpretation, benefits and some first results of this model.
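
A minimal numerical sketch of such a three-factor Kronecker covariance (the sizes and AR(1) factors below are illustrative assumptions, not the model fitted in the talk):

    import numpy as np

    def ar1(n, r):
        """AR(1) correlation matrix, a simple stand-in for each factor."""
        idx = np.arange(n)
        return r ** np.abs(idx[:, None] - idx[None, :])

    S = ar1(5, 0.3)   # covariance over sensors
    T = ar1(8, 0.7)   # covariance over time samples
    E = ar1(4, 0.2)   # covariance over fMRI epochs

    # Kronecker structure: the covariance of the vectorised EEG data is
    # modelled by three small factors instead of one 160 x 160 matrix.
    cov = np.kron(np.kron(S, T), E)
    print(cov.shape)  # (160, 160)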

As a second alternative we have clustered the EEG data in the alpha band (8-13 Hz) to find the fMRI epochs where different alpha frequencies (low, middle, high) are dominant. In this way we can build regressors specific to the different alpha frequencies. I will compare the results of the classic approach and this new approach. 

November 21, 2012 André van den Berg (TU Delft) 

FALLING POWERS and their role in discrete analysis

When: Wednesday November 21 2012, 15:45-16:45
Where: TU Delft, Faculty EWI, Mekelweg 4, Snijderszaal (first floor) 

In this talk I will present a summation method that is analogous to the well-known technique of integration by means of antiderivatives.
The following topics will hopefully be covered: differences, sum functions, falling powers, Stirling numbers (distributing marbles over boxes, injective and surjective mappings), Newton series, difference schemes, partial summation and operator calculus.
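
A small sketch of the central identity behind this method, the discrete analogue of $\int x^m\,dx = x^{m+1}/(m+1)$ (our illustrative example, with the falling power $x^{\underline{m}} = x(x-1)\cdots(x-m+1)$):

    # Falling powers and the discrete fundamental theorem:
    #   sum_{k=0}^{n-1} k^(m)  =  n^(m+1) / (m+1),
    # where x^(m) = x(x-1)...(x-m+1) denotes the falling power.
    def falling(x, m):
        out = 1
        for i in range(m):
            out *= x - i
        return out

    n, m = 10, 3
    lhs = sum(falling(k, m) for k in range(n))
    rhs = falling(n, m + 1) // (m + 1)
    print(lhs, rhs)   # 1260 1260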

November 7, 2012 Jayanta Pal (General Electric Global Research Center, Bangalore)  

A Penalized Likelihood Ratio Based Approach to the Decreasing Density Problem at the End-Point

When: Wednesday November 7 2012, 15:45-16:45
Place: TU Delft, Faculty EWI, Mekelweg 4, Lipkenszaal (first floor) 

We consider the problem of estimating the modal value of a decreasing density on the positive real line. This has applications in several interesting phenomena arising, for example, in renewal theory and in biased and distance sampling. We use a penalized likelihood ratio based approach for inference and derive the scale-free universal large-sample null distribution of the log-likelihood ratio, using a suitably chosen penalty parameter. We present simulation results and a real data analysis to corroborate our findings, and compare the performance of the confidence sets with existing results.
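
For orientation, a sketch of the classical unpenalised maximum likelihood estimator of a decreasing density, the Grenander estimator (the left derivative of the least concave majorant of the empirical CDF), whose value at 0 estimates the modal value; the penalised likelihood-ratio statistic of the talk is not implemented here, and the exponential test data are an illustrative assumption:

    import numpy as np

    # Grenander estimator sketch: upper convex hull of the ECDF knots.
    rng = np.random.default_rng(0)
    x = np.sort(rng.exponential(size=500))
    n = len(x)
    pts = np.column_stack([np.concatenate([[0.0], x]),
                           np.arange(n + 1) / n])   # ECDF knots, incl. origin

    hull = [0]                     # indices on the least concave majorant
    for i in range(1, len(pts)):
        while len(hull) >= 2:
            (x0, y0), (x1, y1) = pts[hull[-2]], pts[hull[-1]]
            x2, y2 = pts[i]
            # drop the middle point if slopes fail to decrease (concavity)
            if (y1 - y0) * (x2 - x1) <= (y2 - y1) * (x1 - x0):
                hull.pop()
            else:
                break
        hull.append(i)

    # slope of the first segment: the unpenalised estimate of f(0+). The
    # raw estimator is known to overshoot at the endpoint, which is
    # exactly the difficulty that penalisation addresses.
    (xa, ya), (xb, yb) = pts[hull[0]], pts[hull[1]]
    print("estimated f(0+):", (yb - ya) / (xb - xa))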

October 17, 2012 Paul Eilers  (Department of Biostatistics, Erasmus Medical Center, Rotterdam) 

Asymmetric Statistics: Quantiles, Expectiles and Data Depth 

When: 15:45 hrs on Wednesday October 17, 2012
Place: TU Delft, Faculty EWI, Mekelweg 4 - Lipkenszaal (1st floor) 

We all are familiar with means and medians. The mean minimizes the sum of squares of residuals and the median the sum of their absolute values. Negative and positive residuals are treated equally. What happens if we give them different weights? This question leads us into the fascinating world of asymmetric statistics.

Quantiles can be computed by minimizing an asymmetrically weighted sum of absolute values. For example, we get the 75th percentile if we give weight 0.25 to negative residuals, and weight 0.75 to the positive ones. This approach can be directly extended to regression on predictors, so-called quantile regression. With suitable basis functions and a penalty we get quantile smoothing.

It is not generally known that asymmetric weighting of residuals can also be used in a least squares setting. This leads to expectile regression or smoothing. Least asymmetrically weighted squares (LAWS) has many interesting and useful properties, of which I will give examples. Expectile computations are easy.
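
A minimal numerical sketch of both ideas (the sample data, grid search and iteration scheme are our illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    y = rng.exponential(size=2000)
    tau = 0.75

    # Quantile: minimise the asymmetrically weighted sum of absolute
    # residuals (weight tau on positive residuals, 1 - tau on negative).
    def pinball(q):
        r = y - q
        return np.sum(np.where(r >= 0, tau * r, (tau - 1) * r))

    grid = np.linspace(y.min(), y.max(), 20001)
    q_hat = grid[np.argmin([pinball(q) for q in grid])]
    print(q_hat, np.quantile(y, tau))          # essentially equal

    # Expectile: least asymmetrically weighted squares (LAWS). The fixed
    # point of this weighted-mean iteration is the tau-expectile.
    e = y.mean()
    for _ in range(200):
        w = np.where(y >= e, tau, 1 - tau)
        e = np.sum(w * y) / np.sum(w)
    print(e)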

One can use either quantiles or expectiles to define "data depth", which is a measure of how close individual data points are to the center of a sample. In two dimensions one can compute convex quantile or expectile contours for a cloud of points. They are a valuable enhancement of a scatterplot.

October 10, 2012 Kimberly McGarrity (TU Delft /M2i) 

Nonparametric inference in a stereological model with randomly sized, oriented cylinders

When: 16:00 hrs on Wednesday October 10, 2012
Place: TU Delft, Faculty EWI, Mekelweg 4 - Lipkenszaal (1st floor)  

We use oriented circular cylinders in an opaque medium to represent certain microstructural features in steel. The opaque medium is sliced parallel to the symmetry axes and rectangular portions of the cylinders are observed on the cut plane. There is a well-established inverse relation between the distribution of the observed 2D rectangle lengths and the distribution of the 3D cylinder radii that dates back to Wicksell (1925). Because the cylinders are oriented in our model, all of the height information for a given cut cylinder is preserved. We propose a nonparametric estimation procedure to estimate the marginal distributions of the 3D cylinder radii and heights from the observed 2D rectangle lengths and heights. Also, from the 2D observations, other interesting distributional properties of these cylinders are estimated, such as the covariance between the radii and heights, and the distributions of the aspect ratio, surface area and volume of the cylinders. Many of these quantities are directly linked to the mechanical properties of the material, and are therefore useful for industry. Finally, the mathematical model and estimation procedures are applied to two banded microstructures for which nearly 90 μm of depth has been observed via serial sectioning.
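
A forward simulation sketch of the section geometry described above (the radius distribution is an illustrative assumption; heights pass through unchanged, as the abstract notes):

    import numpy as np

    # A cylinder whose axis is parallel to the cut plane shows a rectangle
    # of half-width sqrt(R^2 - D^2), where D is the distance from axis to
    # plane; larger cylinders are hit more often (size bias).
    rng = np.random.default_rng(0)
    n = 100000
    R = rng.gamma(shape=3.0, scale=1.0, size=n)   # 3D radii (illustrative)
    hit = rng.random(n) < R / R.max()             # size-biased chance of a hit
    D = rng.uniform(0.0, R[hit])                  # offset of the cut plane
    half_width = np.sqrt(R[hit] ** 2 - D ** 2)    # observed 2D half-widths
    print(R.mean(), half_width.mean())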

October 3, 2012 Dr. Johannes Schmidt-Hieber (École Nationale de la Statistique et de l'Administration Économique (ENSAE))

Confidence Statements for Qualitative Features in Deconvolution 

Suppose that we observe data from a deconvolution model, that is, we observe an i.i.d. sample from an unknown distribution under additive noise. In many practical problems the main interest lies not in pointwise reconstruction of the true density but rather in answering qualitative questions, for instance about the number of modes or the regions of increase and decrease.

In this talk, we derive multiscale statistics for deconvolution in order to detect qualitative features of the unknown density. Important examples covered within this framework are tests for local monotonicity or local convexity on all scales simultaneously. We investigate the moderately ill-posed setting, where the Fourier transform of the error density in the deconvolution model is of polynomial decay. Theoretically, we derive convergence rates for the distance between the multiscale statistic and a distribution-free approximation and study the power of the constructed tests. In a second part, we illustrate our method by numerical simulations. This is joint work with Axel Munk (Goettingen) and Lutz Duembgen (Bern).
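
A small numerical sketch of the moderately ill-posed setting (the multiscale statistics themselves are beyond this sketch; distributions and sample sizes are illustrative):

    import numpy as np

    # Laplace noise has characteristic function 1/(1 + sigma^2 t^2), which
    # decays polynomially. Dividing the empirical characteristic function
    # of the observations by it recovers the cf of X, with errors
    # amplified at large |t| -- the ill-posedness of the problem.
    rng = np.random.default_rng(1)
    n, sigma = 5000, 0.5
    x = rng.normal(size=n)                       # unobserved sample
    y = x + rng.laplace(scale=sigma, size=n)     # observed noisy sample

    t = np.linspace(-10, 10, 401)
    phi_emp = np.mean(np.exp(1j * np.outer(t, y)), axis=1)
    phi_eps = 1.0 / (1.0 + (sigma * t) ** 2)     # Laplace cf
    phi_x_hat = phi_emp / phi_eps                # noise divided out
    print(np.max(np.abs(phi_x_hat - np.exp(-t**2 / 2))))  # worst at large |t|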

September 19, 2012 Wenxia Li  (East China Normal University)   

Multiscale  self-affine Sierpinski carpets 

The well-known self-affine Sierpinski carpets, first studied by McMullen and Bedford independently, are constructed geometrically by repeating a single action according to a given pattern. In the present talk, we extend them by randomly choosing a pattern from a set of patterns with different scales in each step of the construction process. The Hausdorff and box dimensions of the resulting limit sets are determined explicitly, and sufficient conditions for the corresponding Hausdorff measures to be positive and finite are also obtained.
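
For orientation (our addition, stated in the standard convention): in the classical single-pattern case, divide the unit square into $n$ columns and $m$ rows with $n > m$, keep a pattern of $N$ cells with $t_j$ cells in row $j$, and let $s$ be the number of non-empty rows. Bedford and McMullen showed that $\dim_H = \log_m \bigl( \sum_{j=1}^{m} t_j^{\log m/\log n} \bigr)$ and $\dim_B = \log_m s + \log_n (N/s)$, which in general differ. The talk extends this setting by randomising the pattern at every construction step.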

September 12, 2012 Roberto Fernandez (Utrecht University)

REGULAR PROCESSES MAY BE NON-GIBBSIAN

Processes are determined by transition probabilities, that is, by conditional expectations with respect to the past. In contrast, one-dimensional Gibbs measures are fields determined by simultaneous conditioning on past and future. For Markovian and exponentially continuous processes both theories are known to be equivalent.

We present a simple process showing that this equivalence does not extend to more general cases. The process is ergodic, depends continuously on the past, and even admits a renewal construction. Yet, a straightforward explicit calculation shows that it is not Gibbsian.

Work in collaboration with G. Maillard (Marseille) and S. Gallo (U. Federal do Rio de Janeiro, Brazil)  

July 11, 2012 Giulio Tiozzo (Harvard) 

Renormalization and alpha-continued fractions 

α-continued fraction transformations are a one-parameter family of maps which arise from generalized continued fraction algorithms. The average speed of convergence of these algorithms (which corresponds to the entropy of the maps) varies wildly with the parameter, and is known to be locally monotone outside a closed fractal set E.

Surprisingly, such a set has the same structure as the real slice of the Mandelbrot set, making it possible to apply ideas from complex dynamics to continued fractions: in this talk, we will investigate the self-similar structure of E and characterize the plateaux occurring in the graph of entropy.  
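
A numerical sketch of the objects involved, assuming Nakada's standard definition of the α-continued fraction map on [α−1, α] and the Rokhlin-formula expression of entropy as a Birkhoff average (starting point and orbit length are illustrative):

    import numpy as np

    def T(x, alpha):
        """Nakada's alpha-continued fraction map on [alpha-1, alpha]."""
        y = 1.0 / abs(x)
        return y - np.floor(y + 1.0 - alpha)

    def entropy_estimate(alpha, n=100000, burn=1000):
        """Birkhoff average of 2*log(1/|x|) along a generic orbit."""
        x = alpha - 0.5 + 1e-7
        total = 0.0
        for i in range(n + burn):
            if x == 0:
                break
            if i >= burn:
                total += 2.0 * np.log(1.0 / abs(x))
            x = T(x, alpha)
        return total / n

    # alpha = 1 recovers the Gauss map, whose entropy is pi^2/(6 log 2)
    for a in (1.0, 0.5):
        print(a, entropy_estimate(a))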

May 30, 2012 Eric Cator (TUD) 

SIS epidemics

We will discuss a simple epidemic model, called the Susceptible-Infected-Susceptible (SIS) model. It is a model on an arbitrary connected undirected graph, where infected nodes infect their neighbors with constant rate, and infected nodes heal at a possibly different rate. Clearly, if all nodes are healthy, this is an absorbing state of the Markov process, and therefore the only stationary distribution is concentrated on this state. However, it turns out that if the infection rate is high enough (higher than some critical level), there exists a so-called meta-stable state. We will discuss some recent preliminary results on this meta-stable state and the value of the critical infection rate, for general graphs and two specific examples.

This is joint work with Piet van Mieghem  
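
A minimal simulation sketch of SIS dynamics on an arbitrary graph (small-step discretisation of the continuous-time rates; the graph, rates and horizon are illustrative assumptions):

    import numpy as np

    # beta = infection rate along each edge, delta = healing rate;
    # the all-healthy state is absorbing, as noted in the abstract.
    rng = np.random.default_rng(0)
    n, beta, delta, dt, T = 50, 0.3, 1.0, 0.01, 100.0

    A = (rng.random((n, n)) < 0.1).astype(int)   # a random undirected graph
    A = np.triu(A, 1); A = A + A.T
    infected = np.zeros(n, dtype=bool); infected[0] = True

    t = 0.0
    while t < T and infected.any():
        pressure = A @ infected                  # infected neighbours per node
        infect = (~infected) & (rng.random(n) < beta * pressure * dt)
        heal = infected & (rng.random(n) < delta * dt)
        infected = (infected | infect) & ~heal
        t += dt
    print("fraction infected at end:", infected.mean())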

May 23, 2012 Markus Heydenreich (UL) 

The incipient infinite cluster - construction, scaling and random walk properties 

Understanding critical behavior of spatial systems is an important though challenging task. We consider the incipient infinite cluster, an object closely related to critical percolation, on the high-dimensional lattice. It is a natural example of a spatial object showing fractal properties, self-similarity, and a non-degenerate scaling limit. In particular, the behaviour of random walk on the incipient infinite cluster deviates significantly from that on Euclidean lattices.

During the talk, I explain various constructions of the incipient infinite cluster, and discuss properties of the corresponding random walk. Subsequently, I present a result about the scaling limit of the backbone of the incipient infinite cluster. This result is achieved through a new lace expansion for percolation.

Joint work with Remco van der Hofstad (Eindhoven), Tim Hulshof (Eindhoven) and Grégory Miermont (Orsay).  

May 16, 2012 Rob van den Berg (CWI and VU University) 

Disjoint-occurrence inequalities: introduction and recent progress

The BK inequality for product measures (van den Berg and Kesten (1985)) says that the probability that two increasing events 'occur disjointly' is smaller than or equal to the product of the two individual probabilities.
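
A toy exhaustive check of this inequality on four bits (the two events and the witness-set enumeration are our illustrative construction):

    from itertools import product

    # BK check on {0,1}^4 with p = 1/2, for the increasing events
    # A = "at least two 1s" and B = "at least one 1".
    n, p = 4, 0.5
    A = lambda w: sum(w) >= 2
    B = lambda w: sum(w) >= 1

    def occurs_disjointly(w):
        """For increasing events, w is in the disjoint occurrence of A and
        B iff disjoint sets of its 1-coordinates witness A and B."""
        ones = [i for i in range(n) if w[i] == 1]
        for mask in range(3 ** len(ones)):   # each 1-bit -> K, L or neither
            K, L, m = set(), set(), mask
            for i in ones:
                m, r = divmod(m, 3)
                if r == 1:
                    K.add(i)
                elif r == 2:
                    L.add(i)
            wK = tuple(1 if i in K else 0 for i in range(n))
            wL = tuple(1 if i in L else 0 for i in range(n))
            if A(wK) and B(wL):
                return True
        return False

    def prob(event):
        return sum(p**sum(w) * (1 - p)**(n - sum(w))
                   for w in product((0, 1), repeat=n) if event(w))

    print(prob(occurs_disjointly), "<=", prob(A) * prob(B))  # 0.3125 <= 0.644...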

This result is often used in percolation and interacting particle systems, but can also be interpreted in operations-research terms (e.g. random distribution of resources). The conjecture that the inequality even holds for all events was proved by Reimer in the mid-nineties.

In spite of Reimer's work, several natural, fundamental problems in this area remained open. In this talk I will start with an introduction and general overview, and then discuss recent progress. In particular I will show and explain an extension of the BK inequality to randomly drawn subsets of fixed size (joint work with Johan Jonasson) and more recent extensions for the ferromagnetic Ising model and the antiferromagnetic Curie-Weiss model (joint work with Alberto Gandolfi).

May 9, 2012 Eduard Belitser (TUE) 

Optimal two-stage procedures for estimating location and size of maximum of multivariate regression functions 

We propose a two-stage procedure for estimating the location and size of the maximum of a smooth d-variate regression function. In the first stage, a preliminary estimator of the location obtained from a standard nonparametric smoothing method is used. At the second stage, we "zoom in" near the vicinity of the preliminary estimator and make further observations at some design points in that vicinity.

We fit an appropriate polynomial regression model to estimate the location and size of the maximum. We establish that, under suitable smoothness conditions and appropriate choice of the zooming, the second stage estimators have better convergence rates than the corresponding first stage estimators of the location  and size of the maximum.  More specifically, for $\alpha$-smooth regression functions, the optimal nonparametric rates $n^{-(\alpha-1)/(2\alpha+d)}$ and $n^{-\alpha/(2\alpha+d)}$ at the first stage can be improved to $n^{-(\alpha-1)/(2\alpha)}$ and $n^{-1/2}$ respectively for $\alpha>1+\sqrt{1+d/2}$.
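
For concreteness (our arithmetic, specializing the display above): with $\alpha = 3$ and $d = 2$, the condition $\alpha > 1 + \sqrt{1 + d/2} \approx 2.41$ holds, and the first-stage rates $n^{-1/4}$ and $n^{-3/8}$ improve to $n^{-1/3}$ and $n^{-1/2}$ at the second stage.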

These rates are the optimal rates in the class of all possible sequential estimators.

Interestingly, the two-stage procedure resolves the curse of dimensionality problem to some extent, as the dimension does not control the second-stage convergence rates, provided that the function class is sufficiently smooth. We consider a multi-stage generalization of our procedure that attains the optimal rate for any smoothness level $\alpha>2$, starting with a preliminary estimator with any power-law rate at the first stage. Based on joint work with S. Ghosal and H. van Zanten.

May 2, 2012 Shashi Jain   (CWI/TU Delft) 

Stochastic grid method with bundling for pricing Bermudan Options 

Tilley's bundling algorithm was amongst the first few methods to price Bermudan options using simulation. His method, however, suffered from some drawbacks, especially its non-trivial extension to multi-asset problems. The talk describes an extension of Tilley's bundling algorithm to multi-asset problems using the stochastic grid method, with several improvements to the original algorithm. We give numerical results for multi-asset arithmetic and geometric basket options, and options on the maximum of several assets.
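
A minimal sketch of the bundling idea for a one-dimensional Bermudan put (parameters are illustrative; the stochastic grid method of the talk refines this with regression within bundles and other improvements, and a production method would use fresh paths for the exercise decision to avoid foresight bias):

    import numpy as np

    rng = np.random.default_rng(0)
    S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
    n_paths, n_steps, n_bundles = 20000, 10, 50
    dt = T / n_steps

    # simulate geometric Brownian motion paths on the exercise grid
    z = rng.normal(size=(n_paths, n_steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))

    V = np.maximum(K - S[:, -1], 0.0)            # exercise value at maturity
    for t in range(n_steps - 2, -1, -1):
        V *= np.exp(-r * dt)                     # discount one step back
        payoff = np.maximum(K - S[:, t], 0.0)
        order = np.argsort(S[:, t])              # sort paths by asset value
        for b in np.array_split(order, n_bundles):
            cont = V[b].mean()                   # bundle-average continuation
            ex = payoff[b] > cont
            V[b[ex]] = payoff[b[ex]]             # exercise where payoff wins
    print(np.exp(-r * dt) * V.mean())            # estimate of the option value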

April 18, 2012 Pasquale Cirillo (Bern) 

Polya lattice models for cascading failures 

A cascading failure is a failure in a system of interconnected parts, in which the breakdown of one element can lead to the subsequent collapse of the others. The aim of this talk is to introduce a class of simple combinatorial models for the study of cascading failures, called Polya Lattice Models (PLM). In particular, having in mind particle systems and Markov random fields, we consider a network of interacting urns placed on a lattice. Every urn is Pólya-like, and its reinforcement matrix is a function not only of time (time contagion), but also of the behavior of the neighboring urns (spatial contagion), and of a random component, which can represent either fate or the impact of exogenous factors. In this way a non-trivial dependence structure among the urns is built, and it is used to study default avalanches over the lattice.
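
A toy sketch of the mechanism, deliberately simplified (the reinforcement scheme below is our illustration, not the PLM construction of the talk):

    import numpy as np

    # Every site holds an urn with "default" and "healthy" balls; each
    # draw reinforces its own colour (time contagion), and neighbouring
    # defaults reinforce the default colour (spatial contagion).
    rng = np.random.default_rng(0)
    L = 20
    default = np.ones((L, L))
    healthy = np.ones((L, L))

    for step in range(100):
        p = default / (default + healthy)       # default probability per urn
        draws = rng.random((L, L)) < p          # one draw from every urn
        neigh = (np.roll(draws, 1, 0) + np.roll(draws, -1, 0) +
                 np.roll(draws, 1, 1) + np.roll(draws, -1, 1))
        default += draws + 0.5 * neigh          # reinforcement with contagion
        healthy += ~draws
    print("mean default probability:", (default / (default + healthy)).mean())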

Thanks to its flexibility and its interesting probabilistic properties, the given construction may be used to model different phenomena characterized by cascading failures, such as financial networks or power grids. During the talk we will also show how PLM can be used from a Bayesian nonparametric point of view, and present a first application to the study of financial fragility and firm dynamics.

March 28, 2012 Florian Völlering (Universiteit Leiden)

Random walks in random environments 

Random walks in random environments arise when the transition kernels of simple random walks are modified according to a random environment. This modification is dependent on the position of the walk in the environment, which makes the study of the random walk non-trivial. I will give an overview of the topic, presenting both traditional and new results, for environments which are either static or changing in time. 
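
A minimal sketch of the static-environment case (all numerical choices are illustrative):

    import numpy as np

    # 1D random walk in a static random environment: at site x the walk
    # steps right with probability omega[x], where the omega's are iid.
    rng = np.random.default_rng(0)
    n_sites, n_steps = 10001, 5000
    omega = rng.uniform(0.3, 0.7, size=n_sites)  # the quenched environment
    x = n_sites // 2
    for _ in range(n_steps):
        x += 1 if rng.random() < omega[x] else -1
    print("displacement:", x - n_sites // 2)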

March  21, 2012 Berend Roorda (Universiteit Twente) 

Smarter valuation under weaker time consistency.

One of the central topics in mathematical finance is to determine the value of products, or financial positions, with a risky payoff. In a dynamic setting, most approaches rely on the paradigm of conditional valuation, postulating state-dependent certainty equivalents for the position under consideration. By replacing the position at some horizon date by its certainty equivalent at some earlier date in a backward recursive way, valuation effectively reduces to stepwise analysis on a short time scale, in the spirit of Dynamic Programming. In risk-neutral valuation, the standard valuation method in the field, this is justified by the law of iterated expectations. Also in more general valuation methods based on axiomatic frameworks for risk measures, the prevailing rule of (strong) time consistency precisely imposes this type of backward recursive evaluation (see e.g. Föllmer and Schied, Stochastic Finance: An Introduction in Discrete Time, 3rd edition, 2011).
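
As a toy instance of this backward recursion, the entropic certainty equivalent on a two-step binomial tree (a standard strongly time-consistent example; the numbers are illustrative, not from the talk):

    import numpy as np

    # Entropic certainty equivalent CE(X) = -(1/g) log E[exp(-g X)],
    # applied stepwise: exactly the backward-recursive valuation the
    # abstract describes. g is the per-step risk aversion.
    g, p = 1.0, 0.5

    def ce(a, b):
        """One-step certainty equivalent of a fair coin between a and b."""
        return -np.log(p * np.exp(-g * a) + (1 - p) * np.exp(-g * b)) / g

    payoff = [4.0, 2.0, 1.0]              # terminal values on a 2-step tree
    v1 = [ce(payoff[0], payoff[1]), ce(payoff[1], payoff[2])]
    v0 = ce(v1[0], v1[1])                 # today's value, by recursion
    print(v0)                             # strictly below the expectation 2.25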

In this talk we indicate some fundamental limitations of this setting, in particular in situations where the law of one price no longer holds due to market frictions or model uncertainty. The problem is that appropriate degrees of risk aversion per time step typically aggregate to excessive levels over longer periods. We discuss weaker forms of time consistency (as described in [1]) that bring a new dimension into valuation, in which there is freedom to choose long-term features of pricing operators without affecting their local properties. The observations give rise to a rethinking of the role of certainty equivalents, Bayesian updating, and the Dynamic Programming principle.

Some simple examples will illustrate the main ideas.

[1] B. Roorda & J.M. Schumacher (2010) When Can a Risk Measure Be Updated Consistently? Under revision. An earlier version of this paper has been circulated under the title “Time Consistency of Nonconvex Risk Measures” (Netspar Discussion Paper 01/2009-006).  

March 14, 2012 Jeanine Houwing-Duistermaat  (LUMC)

Estimation of parameters using information from family and twin studies.
Jeanine J Houwing-Duistermaat and Bruna Balliu
Dept of Medical Statistics and Bioinformatics
Leiden University Medical Center

To enrich for genetic factors, families are often selected on having at least one case member. These studies are typically underpowered. We therefore propose to use the joint likelihood and to combine family and twin data.

We have developed a likelihood-based approach allowing for several ascertainment schemes, to accommodate the outcome-dependent sampling, and a family-specific random term, to take into account the correlation between family members. We estimate the parameters by maximum likelihood based on the combined joint likelihood approach.

Simulations show that the combined approach is more efficient than the retrospective or prospective approach. To illustrate our approach, we use data from a family and a twin study from the United Kingdom on rheumatoid arthritis.

February 29, 2012 Hitoshi Nakada  (Keio University, Yokohama, Japan)

On costs of some Euclidean type algorithms over F_q[X]^3.

We consider Euclidean-type algorithms for triples of polynomials with F_q coefficients. There are three natural variants of such algorithms. We estimate some cost functions in order to identify the most efficient among these algorithms.

(joint work with V. Berthe and R. Natsui)
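
For background, a sketch of the classical two-polynomial Euclidean algorithm over F_q[X] with the number of division steps as one natural cost function (the talk treats triples; the representation and cost choice here are our illustrative assumptions):

    # Polynomials are coefficient lists over GF(q), lowest degree first;
    # q must be prime for the Fermat-inverse below.
    q = 5

    def trim(a):
        """Drop trailing zero coefficients."""
        while a and a[-1] % q == 0:
            a = a[:-1]
        return a

    def polydivmod(a, b):
        """Division with remainder in F_q[X]."""
        a, b = trim([c % q for c in a]), trim([c % q for c in b])
        r = a[:]
        quo = [0] * max(len(a) - len(b) + 1, 1)
        inv_lead = pow(b[-1], q - 2, q)      # inverse of leading coefficient
        while r and len(r) >= len(b):
            shift = len(r) - len(b)
            c = (r[-1] * inv_lead) % q
            quo[shift] = c
            for i, bc in enumerate(b):
                r[shift + i] = (r[shift + i] - c * bc) % q
            r = trim(r)
        return quo, r

    def gcd_cost(a, b):
        """gcd (up to a unit) and the number of division steps performed."""
        steps = 0
        while trim(b):
            _, r = polydivmod(a, b)
            a, b, steps = b, r, steps + 1
        return trim(a), steps

    # gcd(X^2 - 1, X - 1) = X - 1 over F_5, reached in one division step
    print(gcd_cost([4, 0, 1], [4, 1]))       # ([4, 1], 1)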

February  15, 2012 Henk Don (TUD)

Improved lower bounds for the critical value in fractal percolation.

The fractal percolation model will be introduced. Then I will discuss a result that connects fractal percolation with site percolation. This result can be used to obtain lower bounds for the critical value p_c in fractal percolation. In fact, we are able to construct a sequence of lower bounds that converges to p_c. The terms in this sequence can in principle be calculated algorithmically, but this is computationally very intensive. To obtain numerical lower bounds for p_c, we developed an algorithm to compute lower bounds for the terms in the sequence.
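
A minimal simulation sketch of the model itself (the subdivision factor, retention probability and depth are illustrative choices):

    import numpy as np

    # Fractal (Mandelbrot) percolation: at each level, every retained cell
    # is subdivided into N x N subcells, each kept independently with
    # probability p; the retained cells at the finest level form the set.
    rng = np.random.default_rng(0)
    N, p, levels = 2, 0.85, 8
    kept = np.ones((1, 1), dtype=bool)
    for _ in range(levels):
        kept = np.kron(kept, np.ones((N, N), dtype=bool))  # subdivide cells
        kept &= rng.random(kept.shape) < p                 # independent retention
    print("surviving fraction:", kept.mean())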

February  8, 2012 Peter Spreij (UVA) 

Affine diffusions with non-canonical state space.

Multidimensional affine diffusions have been studied in detail for the case of a canonical state space. We present results for general state spaces and provide a complete characterization of all possible affine diffusions with polyhedral and quadratic state space.  We give necessary and sufficient conditions on the behavior of drift and diffusion on the boundary of the state space in order to obtain invariance and strong existence and uniqueness. Joint work with Enno Veerman.
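
A small sketch of the simplest example of this class (a CIR-type process on the canonical one-dimensional state space; parameters and the Euler scheme are our illustrative choices):

    import numpy as np

    # Affine diffusion on [0, inf):  dX = (b + beta X) dt + sigma sqrt(X) dW,
    # with affine drift and affine squared diffusion coefficient. Note how
    # the scheme must respect the boundary at 0, echoing the boundary
    # conditions discussed in the abstract.
    rng = np.random.default_rng(0)
    b, beta, sigma = 0.5, -1.0, 0.3
    dt, n = 1e-3, 10000
    x = 1.0
    for _ in range(n):
        dw = np.sqrt(dt) * rng.normal()
        x += (b + beta * x) * dt + sigma * np.sqrt(max(x, 0.0)) * dw
        x = max(x, 0.0)                  # truncate at the boundary
    print(x)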

February 1, 2012 Maik Schwarz  (Universite catholique de Louvain, Belgium)

Adaptive circular deconvolution by model selection under unknown error distribution.
