The intensification of digital communication and the growing performance of digital computer architectures, often summarized under the keyword "digitization", open up a multitude of new technical possibilities, but also present politics and society with new challenges.
The purpose of the research cluster is to bundle the competencies available at the university in data analysis and data processing, to support politics and society in meeting these challenges through basic and applied research, and to contribute to computer-assisted social research.
The immense performance of modern computers makes it possible to address new substantive and methodological questions. Particularly noteworthy are the use of Monte Carlo methods to analyze the quality of statistical estimation methods and the use of computationally intensive methods for computing statistical estimates (bootstrap, Monte Carlo integration, Markov chain Monte Carlo simulation of posterior distributions).
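As a minimal illustration of one of the techniques mentioned, the following Python sketch estimates the standard error of a sample mean with the nonparametric bootstrap and compares it to the analytic value; the data set and replication count are arbitrary choices for the example.

```python
import random
import statistics

def bootstrap_se(sample, n_boot=2000, seed=42):
    """Standard error of the sample mean via the nonparametric bootstrap:
    resample with replacement, take the spread of the resampled means."""
    rng = random.Random(seed)
    n = len(sample)
    boot_means = [statistics.fmean(rng.choices(sample, k=n))
                  for _ in range(n_boot)]
    return statistics.stdev(boot_means)

# Hypothetical data: 200 draws from a standard normal distribution.
rng = random.Random(0)
data = [rng.gauss(0, 1) for _ in range(200)]
analytic = statistics.stdev(data) / len(data) ** 0.5  # s / sqrt(n)
print(round(bootstrap_se(data), 4), round(analytic, 4))
```

For a statistic like the mean the analytic standard error is known, which makes it a useful check; the appeal of the bootstrap is that the same resampling recipe works for statistics without a closed-form standard error.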
(Prof Dr Martin Elff) Multilevel analysis is often used in comparative social research to trace the influence of the social and political context on individual behavior. The method is frequently applied to data from multinational surveys (such as the Eurobarometer or the European Social Survey), which are characterized by a large number of units of observation at the individual level (respondents) but a small number of units at higher aggregation levels (states). To what extent multilevel analysis still delivers reliable results under these circumstances, and to what extent accurate statistical inference can be achieved, are among the questions this project addresses.
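One way to make the few-groups problem concrete, sketched here under purely hypothetical parameter values, is a small Monte Carlo experiment: with few higher-level units, even a simple method-of-moments estimate of the between-group variance becomes far less stable.

```python
import random
import statistics

def simulate_tau2_hat(n_groups, n_per_group, tau=0.5, sigma=1.0, rng=None):
    """One Monte Carlo draw: simulate a two-level data set and return a
    method-of-moments estimate of the between-group variance tau^2."""
    rng = rng or random.Random()
    group_means = []
    for _ in range(n_groups):
        u = rng.gauss(0, tau)                           # group-level effect
        ys = [u + rng.gauss(0, sigma) for _ in range(n_per_group)]
        group_means.append(statistics.fmean(ys))
    # Var of observed group means = tau^2 + sigma^2 / n_per_group,
    # so subtract the known within-group contribution.
    return max(0.0, statistics.variance(group_means) - sigma ** 2 / n_per_group)

def mc_spread(n_groups, n_sims=300, seed=1):
    """Mean and spread of the tau^2 estimates across simulations."""
    rng = random.Random(seed)
    est = [simulate_tau2_hat(n_groups, 30, rng=rng) for _ in range(n_sims)]
    return statistics.fmean(est), statistics.stdev(est)

# With only 15 "countries" the estimate of tau^2 = 0.25 is far noisier
# than with 200 groups of the same size.
few = mc_spread(15)
many = mc_spread(200)
print(few, many)
```

The estimator is centered near the true value in both cases, but its sampling variability is driven almost entirely by the number of higher-level units, which is exactly what is scarce in multinational surveys.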
(Prof Dr Martin Elff) The reconstruction of the positions of political actors is essential for a variety of political science questions. In analyzing the formation of government coalitions, as well as in understanding changes in patterns of electoral behavior, it is essential to consider the political positions of parties. However, the reconstruction of these positions turns out to be anything but easy. There are two basic approaches: on the one hand, ratings by experts (i.e. country experts assign positions to the parties on predetermined scales); on the other hand, the quantitative analysis of the parties' election programs (i.e. the positions are estimated from prepared texts by means of statistical analysis and spatial models). The research project follows the second approach. As part of the project, existing procedures are to be improved and examined with regard to their reliability; they are then to be used to create a comprehensive database of party positions with which the patterns of party competition can be analyzed. An application for funding has been submitted to the Deutsche Forschungsgemeinschaft (DFG) to support the project.
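As a rough illustration of the general idea behind text-based position estimation (not the project's actual procedure), the following Python sketch implements a simple Wordscores-style scaling: words inherit positions from reference texts with known placements, and a new text is scored by averaging the scores of its words. All texts and positions below are invented toy data.

```python
from collections import Counter

def word_scores(ref_texts, ref_positions):
    """Wordscores-style scaling: each word gets the average of the
    reference positions, weighted by how strongly the word is
    associated with each reference text."""
    counts = [Counter(t.split()) for t in ref_texts]
    totals = [sum(c.values()) for c in counts]
    vocab = set().union(*counts)
    scores = {}
    for w in vocab:
        # relative frequency of w in each reference text
        rel = [c[w] / n for c, n in zip(counts, totals)]
        s = sum(rel)
        # weight each reference position by P(reference text | word)
        scores[w] = sum(p * r / s for p, r in zip(ref_positions, rel))
    return scores

def score_text(text, scores):
    """Position of a new text: mean score of its known words."""
    words = [w for w in text.split() if w in scores]
    return sum(scores[w] for w in words) / len(words)

# Toy example with hypothetical "left" (-1) and "right" (+1) manifestos.
refs = ["tax welfare welfare solidarity", "tax market market enterprise"]
sc = word_scores(refs, [-1.0, +1.0])
print(round(score_text("welfare solidarity tax", sc), 2))  # → -0.67
```

Words used by both reference texts (here "tax") receive a neutral score, so the estimated position is pulled toward the side whose distinctive vocabulary the new text shares.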
(Prof Dr Martin Elff) The data analysis software R is both a comprehensive infrastructure for data analysis and data management and a programming language for statistics and graphics. Owing to this dual character, it can be used in a variety of ways: its field of application ranges from the analysis of experimental data, through the analysis of survey data, to the analysis of massive data sets of behavioral traces. However, the data structures relevant for the latter are not yet well known in the social sciences, and the support that R packages offer for the survey data sets typical of social research can still be expanded. The book project is intended to build the corresponding bridges: it presents and discusses the common data structures, and it addresses the special packages relevant for the analysis of survey data.
Elff, Martin Prof Dr
Chair for Political Sociology
The speed of trading in financial markets has increased immensely in recent years. Data sets are now available at millisecond, microsecond, and even nanosecond resolution. The analysis of such data requires appropriate statistical models and computer-aided methods.
(Prof Dr Franziska Peter) Funded by the Deutsche Forschungsgemeinschaft (DFG), duration 2018-2020 (project number 389577820)
This project examines volatility in connection with risk forecasting and risk management at intraday frequency. The current financial market literature shows that options-based measures of implied volatility contain information about future stock market volatility that cannot be derived from historical stock prices. Prominent examples of such implied volatility measures are the VIX and its German counterpart, the VDAX. In this research project, an implied volatility measure for stocks of individual companies is developed, based on high-frequency option prices. This measure is calculated for a sample of European and US companies and makes it possible to examine several important aspects of the assessment of stock market risk.
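The basic mechanics of implied volatility can be shown in a short sketch: invert the Black-Scholes call-price formula by bisection so that the model price matches an observed option price. This is a textbook illustration, not the project's measure, and all parameter values are arbitrary.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Invert the Black-Scholes formula by bisection: find the sigma at
    which the model price matches the observed option price."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid          # model price too low -> raise sigma
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check: price an option at sigma = 0.20, then recover it.
p = bs_call(100, 105, 0.5, 0.01, 0.20)
print(round(implied_vol(p, 100, 105, 0.5, 0.01), 4))  # → 0.2
```

Bisection works here because the Black-Scholes price is strictly increasing in sigma; index measures such as the VIX then aggregate such option-implied information across many strikes.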
(Prof Franziska Peter and Thomas Heil) Research in "Computational Sciences" is largely driven by the availability of new data. It is not only extraordinarily large data sets ("Big Data") that require new analysis methods and new statistical procedures; high-frequency traders with a focus on "algorithmic trading" in particular need input and analysis at very high frequency. It is therefore of great importance to be able to predict volatility structures within a day. This also allows consistent structures within a trading day to be filtered out and understood ("pattern recognition"). The analysis and prediction of these structures is carried out using algorithms from the field of machine learning. These findings make it possible to develop new statistical models and indicators for intraday volatilities.
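A minimal descriptive version of such an intraday analysis can be sketched as follows: average absolute returns per intraday time bin across many days to recover a recurring volatility profile. The simulated U-shaped pattern (higher volatility at the open and close) is an assumption made for the example, not a result of the project.

```python
import random
import statistics
from collections import defaultdict

def intraday_profile(returns, bins_per_day):
    """Average absolute return per intraday bin across days: a simple
    descriptive estimate of the recurring intraday volatility pattern."""
    by_bin = defaultdict(list)
    for i, r in enumerate(returns):
        by_bin[i % bins_per_day].append(abs(r))
    return [statistics.fmean(by_bin[b]) for b in range(bins_per_day)]

# Simulate 250 trading days of 5 intraday bins with a U-shaped
# volatility pattern, then recover its shape from the returns alone.
rng = random.Random(7)
pattern = [2.0, 1.0, 0.8, 1.0, 1.8]   # true per-bin volatility (assumed)
rets = [rng.gauss(0, s) for _ in range(250) for s in pattern]
prof = intraday_profile(rets, 5)
print([round(x, 2) for x in prof])
```

Machine-learning approaches replace this simple averaging with models that can condition on further inputs, but the target, a systematic within-day volatility structure, is the same.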
(Prof. Franziska Peter and Thomas Heil) The calculation of Value at Risk or Expected Shortfall requires estimating both the expected return and the future volatility. However, strongly simplifying assumptions about the latent volatility process often lead to inaccurate calculations of Value at Risk and thus of Expected Shortfall. Classical statistical models can in principle predict a density function from time series of returns, but they are subject to severe restrictions, which leads to poor density predictions. Machine learning methods (neural networks, SVMs) are proving very promising for density prediction, since as universal function approximators they can be superior to classical statistical models when dealing with non-linear data. Based on such an estimate or prediction of the return densities, Value at Risk and Expected Shortfall can be calculated directly. In addition, the frequency of observations can be increased in order to identify patterns in the intraday volatility structure. The prediction of the densities is done with "mixture density networks".
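To illustrate the final step, the following sketch computes Value at Risk and Expected Shortfall directly from a Gaussian mixture return density, the kind of output a mixture density network produces. The mixture weights and parameters are invented for the example; the ES calculation uses the closed-form truncated mean of each normal component.

```python
from math import erf, exp, sqrt, pi

def phi(x):   # standard normal density
    return exp(-0.5 * x * x) / sqrt(2 * pi)

def Phi(x):   # standard normal CDF
    return 0.5 * (1 + erf(x / sqrt(2)))

def mix_cdf(x, ws, mus, sds):
    """CDF of a Gaussian mixture with weights ws, means mus, sds."""
    return sum(w * Phi((x - m) / s) for w, m, s in zip(ws, mus, sds))

def var_es(ws, mus, sds, alpha=0.05):
    """Value at Risk and Expected Shortfall of a Gaussian mixture
    return density. VaR is the alpha-quantile found by bisection; ES
    uses E[X * 1{X<q}] = mu*Phi(z) - s*phi(z) per component."""
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mix_cdf(mid, ws, mus, sds) < alpha:
            lo = mid
        else:
            hi = mid
    q = 0.5 * (lo + hi)                  # alpha-quantile of returns
    tail_mean = sum(
        w * (m * Phi((q - m) / s) - s * phi((q - m) / s))
        for w, m, s in zip(ws, mus, sds)
    ) / alpha                            # E[r | r <= q]
    return -q, -tail_mean                # report as positive losses

# Hypothetical two-component mixture: a calm and a turbulent regime.
VaR, ES = var_es(ws=[0.9, 0.1], mus=[0.0005, -0.002], sds=[0.01, 0.03])
print(round(VaR, 4), round(ES, 4))
```

Because ES averages all losses beyond the quantile, it always exceeds VaR; a mixture with a small turbulent component captures exactly the fat-tailed behavior that a single-Gaussian model understates.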
Peter, Franziska Prof Dr
Chair of Empirical Finance & Econometrics
Anja Achtziger graduated in Psychology in 1997 (Technical University of Darmstadt, Germany) and received her PhD in Psychology in 2003 (University of Konstanz, Germany). She worked as a Postdoc and Temporary Professor of Social Psychology and Motivation at the University of Konstanz. She later moved to Zeppelin University Friedrichshafen, Germany, where she holds the Chair in Social and Economic Psychology. Anja was a Visiting Professor of Psychology at New York University Abu Dhabi from August 2019 to May 2020. From March 2018 to December 2019 she was the speaker of the research unit "Psychoeconomics," funded by the German Research Foundation. Anja has been an associate editor of the Journal of Economic Psychology since January 2019. She is the deputy speaker of the coordination committee of the Consumer Research Network of the Federal Government of Germany, appointed by the Minister of Justice and Consumer Protection Heiko Maas in 2015 and by Dr. Katarina Barley in 2018.
Anja Achtziger’s research focuses on human decision making, algorithm aversion and appreciation, self-control, and motivation. She uses a multi-method approach to investigate human cognition, with techniques encompassing laboratory and field experiments, eye-tracking, and electroencephalography (EEG). Her work is interdisciplinary and includes collaboration with researchers from economics, management science, consumer research, computer science, and ethics. Her most recent research project, on the consequences for society of using algorithmic decision-making systems in legal systems, is "Deciding about, by, and together with algorithmic decision-making systems," funded by the Volkswagen Foundation in its program "Artificial Intelligence and the Society of The Future".
Martin Elff has held the Chair of Political Sociology since February 2015. His research activities cover a variety of topics, including the relation between social structure and electoral behaviour, the estimation of parties' political positions from their electoral platforms, the measurement of democracy, and methodological questions of quantitative political research.
Findings of his research have been published or are forthcoming in Acta Politica, the British Journal of Political Science, Electoral Studies, the European Journal of Political Research, German Politics, Perspectives on Politics, Political Analysis and Politics and Governance.