Estimation of conditional mutual entropy as a measure of causality in bivariate EEG time series
Disciplines
Computer Sciences (80%); Mathematics (20%)
Keywords
Conditional Mutual Entropy Estimation, Neural Network, Causality, Parameter Estimation, Functional Approximation, Learning Algorithm
Over the last decade, synchronization phenomena have been observed in many natural, physical and biological systems with complex structures. Synchronization in the strict (identical) sense is, however, rarely found, since the couplings between the systems (or subsystems) are usually too weak to produce such synchrony. Accordingly, from the experimental point of view, it is important not only to detect the strength of the couplings, but also to identify the directional information and causal (driver-response: `who drives whom`) relationships between the constituent subsystems.

Some level of synchrony is usually necessary for the normal neural activity underlying cognitive functioning, while too much synchrony may indicate a pathological phenomenon such as epilepsy. Detecting synchrony, or the transient changes leading to a high level of synchronization, and identifying the causal relations between driving (synchronizing) and response (synchronized) components is a great challenge, since it can help in diagnosing neurological disorders, localizing epileptogenic foci and, in particular, anticipating epileptic seizures. Standard linear statistical methods have brought only little success in this area. A promising alternative is to investigate synchronization and causality in complex systems by means of so-called coarse-grained measures based on information theory. One of them is conditional mutual information (entropy), CMI, on whose precise estimation the correct assessment of the strength of synchronization and causality depends.

Entropy estimation from finite data samples is among the topical problems of current information theory and statistics. However, statistical methods for entropy estimation are either computationally intensive or very crude, and most of them rest on numerous theoretical assumptions that are difficult to fulfil for experimental data. To cope with these drawbacks, we propose neural-network-based approaches to the estimation of CMI. To this end, the applicable ideas of the most relevant statistical methods will be extracted and recast as a parameter optimisation problem. We will employ the feedforward neural networks most commonly used for the approximation of probability functions: RBF networks, mixture models and modular networks. These three classes of neural networks will further be evaluated, in the context of CMI estimation, with respect to their architectural, spatial and computational complexity. The performance of the proposed approaches will be compared to a representatively selected statistical estimation method and tested on simulated signals (coupled stochastic/deterministic linear/nonlinear systems) and real-life physiological signals (multi-channel electroencephalogram).

The results of this project have potential implications for the analysis of a wide variety of signals generated by diverse complex systems in which synchronization and causality can be observed, e.g. in neurology, cardiology, engineering and economics.
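To make the causality measure concrete: for bivariate series x and y, CMI-based directionality evaluates I(x_t; y_{t+tau} | y_t), the information the present of x carries about the future of y beyond what y's own past already explains; comparing it with the opposite direction I(y_t; x_{t+tau} | x_t) indicates who drives whom, using the identity I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z). The following is a minimal sketch, not one of the neural estimators proposed above: it uses a naive histogram (plug-in) estimator, and the function names cmi_binned and causality_index are illustrative.

```python
import numpy as np

def cmi_binned(x, y, z, bins=8):
    """Plug-in estimate of I(X; Y | Z) in nats via the identity
    I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z),
    with each joint entropy computed from a fixed-bin histogram."""
    def ent(*cols):
        hist, _ = np.histogramdd(np.column_stack(cols), bins=bins)
        p = hist.ravel() / hist.sum()
        p = p[p > 0]                      # drop empty bins (0 log 0 = 0)
        return -np.sum(p * np.log(p))
    return ent(x, z) + ent(y, z) - ent(z) - ent(x, y, z)

def causality_index(x, y, lag=1, bins=8):
    """Directionality x -> y: I(x_t ; y_{t+lag} | y_t).  A clearly larger
    value than causality_index(y, x, ...) suggests that x drives y."""
    return cmi_binned(x[:-lag], y[lag:], y[:-lag], bins=bins)

# Toy check on a unidirectionally coupled linear AR pair (x drives y):
rng = np.random.default_rng(0)
n = 20000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.6 * x[t - 1] + 0.1 * rng.standard_normal()
print(causality_index(x, y), causality_index(y, x))  # expect first >> second
```

In practice the raw plug-in value is biased upward, so its significance is usually judged against surrogate data; this bias/variance problem is precisely what the neural estimators investigated in the project are meant to address.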
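As a sketch of how a mixture-model density estimator could replace the histogram in the same four-entropy decomposition, one can fit a Gaussian mixture to each joint space and use a resubstitution entropy estimate. Here scikit-learn's GaussianMixture merely stands in for the project's mixture-model networks; the component count is an illustrative choice, and the estimate is only as good as the fitted density.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_entropy(cols, n_components=5, seed=0):
    """Resubstitution estimate of differential entropy in nats: fit a
    Gaussian mixture p_hat and average -log p_hat over the sample."""
    data = np.column_stack(cols)
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(data)
    return -gmm.score_samples(data).mean()

def cmi_gmm(x, y, z, **kw):
    """I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z), each term from a GMM fit."""
    return (gmm_entropy((x, z), **kw) + gmm_entropy((y, z), **kw)
            - gmm_entropy((z,), **kw) - gmm_entropy((x, y, z), **kw))
```

The smooth parametric density trades the binning bias of the histogram for a model-selection problem: the number of mixture components plays the role of the architectural complexity discussed above.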