Statistical methods
Our philosophy is to start with exciting new scientific problems and then identify and carefully tune statistical analytic approaches designed to solve them. Simple, easy-to-explain solutions are often the best, while at other times subtler approaches are necessary. Not every application leads to a new method, and many years of problem solving sometimes result in generalizable expertise. We call that expertise the statistical method, and it is an important component of our research. Given the high level of integration between methods and data, it is quite hard to identify exactly what is generalizable. Here we provide a list of interesting generalizable concepts (a.k.a. statistical methods) that we have found to be particularly useful. Some of our recent contributions are in the following areas: causal inference for brain imaging data, Population Independent Component Analysis, Population Value Decomposition, testing in linear mixed effects models, prediction (a.k.a. machine learning with variability quantification), computational advances, visualization, and Structural Principal Component Analysis. For more details you can explore the Statistical methods submenu.
Our approach to research is "problem forward": we start with and focus on the scientific problem. This is in contrast with a "methods backward" approach, which would start with a method or theory and look for data sets and problems. We have found the problem forward approach to be much more reasonable, satisfying, and honest. It exposes our students and collaborators to problems, data, and statistical analytic thinking much earlier in the process, giving us the chance to make an impact when it matters.
Mathematics, logic, theory, abstraction, and generalizable concepts play a large role in our research. While we consider everything that could help solve the scientific problem, we readily discount concepts that have little practical validity, irrespective of their mathematical complexity or appeal. If we find that a difficult statistical theory concept is the solution, then we use it, though this seems to happen very rarely in practice. Our methodological and applied focus is fundamentally pragmatic.
The last 20-30 years have witnessed an unsustainable "bubble economy" in statistics, with many statisticians following the same theoretical idea for up to a decade. This has led to a boom-and-bust cycle that left many young researchers asking themselves "what is the next big thing in statistics?" and looking up to a very small and select group of statisticians for inspiration. At the same time, we are facing a raging data revolution that could have provided inspiration, opened up completely new areas of research, and placed our discipline at the core of our data-centric information society. Instead, an ever smaller number of statisticians work directly on emerging problems, new data structures, and data. The problem is fundamental and obvious: there are very few statisticians working on data because the academic reward system and job market favor theoretical statistics. The solution should be equally obvious. On this webpage we have tried to provide details about our research group's solution for the future. For more information we invite you to read carefully, explore the various links, and have a look at our papers, software, and training materials.