
Special Seminar on Frontiers in Statistics

Date: Friday, June 24th, 13:30–17:30

Location: B212, Tong Bo Building, Liu Lin Campus


Lecturer 1: Prof. Gang Li, UCLA

Theme: Survival Analysis via Metric Learning for Large-Scale Data

Abstract:

We consider the problem of building a survival prediction model based on large-scale data. Standard regression models such as the Cox model are often inadequate to describe complex relations and interactions that may be present in a large heterogeneous population. This paper introduces a new approach to building a survival prediction model by adapting the metric learning methodology to a censored outcome. The method is an extension of kernel regression designed to overcome the flaws of standard nonparametric regression methods in higher dimensions. It uses data to learn a kernel function that adaptively down-weights unimportant features, up-weights important features, and achieves dimension reduction in a supervised way. It effectively handles nonlinear relations, complex interactions, and highly correlated features. We demonstrate the usefulness of our method in data-rich settings using both simulated and real data.
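As a rough illustration of the kernel-weighting idea in this abstract (not Prof. Li's actual estimator; the Gaussian kernel form, the fixed per-feature weights, and all names below are illustrative assumptions), the sketch shows how a feature-weighted kernel can drive a weighted Kaplan-Meier estimate, so that features assigned small weights contribute little to the learned distances:

```python
import numpy as np

def weighted_kaplan_meier(times, events, weights):
    """Kaplan-Meier curve in which each subject carries a kernel weight."""
    order = np.argsort(times)
    times, events, weights = times[order], events[order], weights[order]
    at_risk = np.cumsum(weights[::-1])[::-1]        # weighted size of the risk set
    surv, t_out, s = 1.0, [], []
    for i in range(len(times)):
        if events[i] == 1 and at_risk[i] > 0:
            surv *= 1.0 - weights[i] / at_risk[i]   # weighted Kaplan-Meier factor
        t_out.append(times[i]); s.append(surv)
    return np.array(t_out), np.array(s)

def kernel_weights(x0, X, feature_wts, bandwidth=1.0):
    """Gaussian kernel with per-feature weights (illustrative stand-in for a learned metric)."""
    d2 = ((X - x0) ** 2 * feature_wts).sum(axis=1)  # weighted squared distance to x0
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

# Toy example: 3 features, the third is down-weighted to nearly zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
times = rng.exponential(scale=np.exp(-X[:, 0]))     # survival depends on feature 0
events = (rng.uniform(size=200) < 0.8).astype(int)  # roughly 20% censoring
w = kernel_weights(x0=np.zeros(3), X=X, feature_wts=np.array([1.0, 0.5, 0.01]))
t, S = weighted_kaplan_meier(times, events, w)
print(S[-1])
```

In the actual method the feature weights are learned from the data in a supervised way; here they are fixed by hand only to show how the weighting changes the local survival estimate.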

Lecturer 2: Prof. Jianxin Pan, University of Manchester

Theme: A semiparametric mixture regression model for longitudinal data

Abstract:

In this talk we will focus on trajectory analysis that applies finite mixture modelling to longitudinal data. The paper introduces new modelling tools using semi-parametric regression methods. A normal mixture is proposed such that the model contains one smooth term and a set of possible linear predictors. Model terms are estimated using a penalized likelihood method with the EM-algorithm. The paper also introduces a computationally appealing alternative that provides an approximate solution using an ordinary linear model methodology developed for mixture regression and trajectory analysis. Simulation experiments and a real data example of height curves of 4,223 Finnish children illustrate the methods.
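To make the mixture-regression-with-EM idea concrete, here is a minimal sketch of a two-component normal mixture of linear regressions fitted by EM. It omits the smooth term and the penalized likelihood described in the abstract and uses only toy data; all function and variable names are illustrative assumptions, not the talk's implementation:

```python
import numpy as np

def em_mixture_regression(X, y, n_iter=100):
    """EM for a two-component normal mixture of linear regressions (toy sketch)."""
    n, p = X.shape
    rng = np.random.default_rng(1)
    beta = rng.normal(size=(2, p))        # component-specific regression coefficients
    sigma2 = np.array([1.0, 1.0])         # component variances
    mix = np.array([0.5, 0.5])            # mixing proportions
    for _ in range(n_iter):
        # E-step: posterior membership probabilities for each observation
        dens = np.stack([
            mix[k] * np.exp(-(y - X @ beta[k]) ** 2 / (2 * sigma2[k]))
            / np.sqrt(2 * np.pi * sigma2[k]) for k in range(2)])
        resp = dens / dens.sum(axis=0, keepdims=True)
        # M-step: weighted least squares for each component
        for k in range(2):
            W = resp[k]
            XtW = X.T * W
            beta[k] = np.linalg.solve(XtW @ X, XtW @ y)
            sigma2[k] = (W * (y - X @ beta[k]) ** 2).sum() / W.sum()
            mix[k] = W.mean()
    return beta, sigma2, mix

# Toy data: two latent trajectory groups with different intercepts and slopes.
rng = np.random.default_rng(2)
t = rng.uniform(0, 10, size=400)
group = rng.integers(0, 2, size=400)
y = np.where(group == 0, 1.0 + 0.5 * t, 4.0 - 0.3 * t) + rng.normal(0, 0.5, 400)
X = np.column_stack([np.ones(400), t])
beta, sigma2, mix = em_mixture_regression(X, y)
print(beta)
```

The semiparametric model in the talk replaces one of the linear predictors with a penalized smooth term; the EM structure, alternating between membership probabilities and weighted fits, is the same in spirit.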

Lecturer 3: Prof. Wenyang Zhang, University of York

Theme: Homogeneity Pursuit in a Latent Variable Model

Abstract:

Panel data analysis is an important research area in statistics and econometrics. In panel data analysis, the impact of a covariate of interest on the response variable is often assumed to be the same across all individuals. If only the global effect of the covariate is of interest, statistical modelling based on this assumption is reasonable. In many cases, however, people are interested in the individual-specific impacts of covariates, and for such cases the aforementioned modelling would not work. In this talk I will present a novel statistical model for panel data with interactive effects induced by latent variables, together with an EM-based algorithm to estimate the unknown parameters in the model. I will also present the asymptotic properties of the proposed estimation and homogeneity pursuit, and discuss the modelling idea. To demonstrate the advantage of the proposed method over existing ones when the sample size is finite, I will show some simulation results. Finally, I will apply the proposed method to an economic data set; the analysis based on our approach reveals some interesting findings.
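As a loose illustration of what "homogeneity pursuit" means in this setting (a crude stand-in only; the talk's penalty-based estimator and latent variable model are not reproduced, and the threshold rule and all names below are assumptions), the sketch groups individual coefficient estimates that are close to each other and replaces them with a common group value:

```python
import numpy as np

def naive_homogeneity_pursuit(coef_estimates, gap_threshold=0.3):
    """Group individual coefficient estimates whose sorted gaps are small.

    Individuals whose estimated effects differ by less than gap_threshold are
    merged into one group and share the group mean as their common coefficient.
    """
    order = np.argsort(coef_estimates)
    sorted_coefs = coef_estimates[order]
    # Start a new group wherever the gap between sorted neighbours is large.
    new_group = np.concatenate([[True], np.diff(sorted_coefs) > gap_threshold])
    group_ids = np.cumsum(new_group) - 1
    fused = np.array([sorted_coefs[group_ids == g].mean()
                      for g in range(group_ids.max() + 1)])
    # Map the fused values back to the original ordering of individuals.
    result = np.empty_like(coef_estimates)
    result[order] = fused[group_ids]
    return result, group_ids.max() + 1

# Toy panel: 30 individuals whose true slopes take only three distinct values.
rng = np.random.default_rng(3)
true = np.repeat([0.0, 1.0, 2.5], 10)
est = true + rng.normal(0, 0.05, size=30)           # noisy individual estimates
fused, n_groups = naive_homogeneity_pursuit(est)
print(n_groups, np.unique(np.round(fused, 2)))
```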

Lecturer 4: Associate Prof. Hongyuan Cao, University of Missouri-Columbia

Theme: Analysis of the proportional hazards model with sparse longitudinal covariates

Abstract:

Regression analysis of censored failure observations via the proportional hazards model permits time-varying covariates which are observed at death times. In practice, such longitudinal covariates are typically sparse and only measured at infrequent and irregularly spaced follow-up times. Full likelihood analyses of joint models for longitudinal and survival data impose stringent modelling assumptions which are difficult to verify in practice and which are complicated both inferentially and computationally. In this article, a simple kernel weighted score function is proposed with minimal assumptions. Two scenarios are considered: half kernel estimation, in which observation ceases at the time of the event, and full kernel estimation for data where observation may continue after the event, as with recurrent events data. It is established that these estimators are consistent and asymptotically normal. However, they converge at rates which are slower than the parametric rates which may be achieved with fully observed covariates, with the full kernel method achieving an optimal convergence rate which is superior to that of the half kernel method. Simulation results demonstrate that the large sample approximations are adequate for practical use and may yield improved performance relative to the last value carried forward approach and the joint modelling method. The analysis of data from a cardiac arrest study demonstrates the utility of the proposed methods.
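To make the half versus full kernel distinction concrete, the sketch below (an illustration of the weighting idea only, not the authors' estimator; the Epanechnikov kernel choice, bandwidth, and names are assumptions) computes the weight each sparse longitudinal measurement would receive when the score is evaluated at a given event time:

```python
import numpy as np

def kernel_weights_at_event(obs_times, event_time, bandwidth, half=True):
    """Epanechnikov weights for covariate measurements near an event time.

    half=True  -> half kernel: only measurements taken at or before the event
                  contribute (observation stops at the event).
    half=False -> full kernel: measurements on both sides of the event
                  contribute (e.g. recurrent events data).
    """
    u = (obs_times - event_time) / bandwidth
    w = 0.75 * np.maximum(1.0 - u ** 2, 0.0)        # Epanechnikov kernel
    if half:
        w = np.where(obs_times <= event_time, w, 0.0)
    return w

# Toy subject: covariate measured at sparse, irregularly spaced follow-up times.
obs_times = np.array([0.2, 1.1, 2.7, 4.0, 5.6])
event_time = 3.0
print(kernel_weights_at_event(obs_times, event_time, bandwidth=2.0, half=True))
print(kernel_weights_at_event(obs_times, event_time, bandwidth=2.0, half=False))
```

The half kernel discards the measurements at 4.0 and 5.6 because they fall after the event, whereas the full kernel keeps the one at 4.0; the paper shows the extra information is what gives the full kernel method its superior convergence rate.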
