# Independent Component Analysis (Part 10)

## ICA by Minimization of Mutual Information

*From: Independent Component Analysis, Aapo Hyvärinen, Juha Karhunen, Erkki Oja. Copyright 2001, John Wiley & Sons, Inc. ISBNs: 0-471-40540-X (hardback); 0-471-22131-7 (electronic). Chapter 10.*

An important approach for independent component analysis (ICA) estimation, inspired by information theory, is minimization of mutual information. The motivation for this approach is that in many cases it may not be realistic to assume that the data follows the ICA model. We would therefore like to develop an approach that does not assume anything about the data. What we want is a general-purpose measure of the dependence of the components of a random vector. Using such a measure, we could define ICA as a linear decomposition that minimizes that dependence measure. Such an approach can be developed using mutual information, which is a well-motivated information-theoretic measure of statistical dependence. One of the main utilities of mutual information is that it serves as a unifying framework for many estimation principles, in particular maximum likelihood (ML) estimation and maximization of nongaussianity. In particular, this approach gives a rigorous justification for the heuristic principle of nongaussianity.
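The idea of using mutual information as a dependence measure can be illustrated numerically. The sketch below is my own illustration, not code from the book: it uses a simple 2-D histogram plug-in estimator (function name, bin count, and the toy signals are my choices) to show that independent components have near-zero estimated mutual information, while a mixture of them does not.

```python
import numpy as np

def mutual_information_2d(x, y, bins=16):
    """Plug-in estimate of I(x; y) in nats from samples, via a 2-D histogram.

    A crude illustrative estimator; the chapter works with the exact
    information-theoretic quantities, not this finite-sample approximation.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()             # joint probabilities per bin
    p_x = p_xy.sum(axis=1, keepdims=True)  # marginal of x, shape (bins, 1)
    p_y = p_xy.sum(axis=0, keepdims=True)  # marginal of y, shape (1, bins)
    mask = p_xy > 0                        # avoid log(0) on empty bins
    return float((p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])).sum())

rng = np.random.default_rng(0)
s1 = rng.uniform(-1, 1, 10_000)
s2 = rng.uniform(-1, 1, 10_000)

print(mutual_information_2d(s1, s2))            # independent sources: near 0
print(mutual_information_2d(s1, s1 + 0.1 * s2)) # mixed signal: clearly positive
```

Minimizing such a dependence measure over linear decompositions of the data is exactly the ICA definition proposed in this chapter; in practice, of course, one uses the analytic approximations developed below rather than histogram estimators.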
### Defining ICA by Mutual Information

#### Information-theoretic concepts

The information-theoretic concepts needed in this chapter were explained in Chapter 5. Readers not familiar with information theory are advised to read that chapter before this one. We recall here very briefly the basic definitions of information theory.

The differential entropy $H(\mathbf{y})$ of a random vector $\mathbf{y}$ with density $p_y(\boldsymbol{\eta})$ is defined as

$$H(\mathbf{y}) = -\int p_y(\boldsymbol{\eta}) \log p_y(\boldsymbol{\eta}) \, d\boldsymbol{\eta}$$

Entropy is closely related to the code length of the random vector. A normalized version of entropy is given by negentropy $J$, which is defined as follows:

$$J(\mathbf{y}) = H(\mathbf{y}_{\mathrm{gauss}}) - H(\mathbf{y})$$

where $\mathbf{y}_{\mathrm{gauss}}$ is a gaussian random vector with the same covariance (or correlation) matrix as $\mathbf{y}$.
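To make these definitions concrete, here is a small NumPy sketch (my own illustration, not the book's code) that estimates differential entropy from samples with a histogram and computes negentropy using the closed-form entropy of a gaussian, $H = \frac{1}{2}\log(2\pi e \sigma^2)$. As the definition implies, negentropy comes out near zero for gaussian data and positive for nongaussian data.

```python
import numpy as np

def differential_entropy(x, bins=64):
    """Plug-in histogram estimate of the differential entropy H(x) in nats."""
    counts, edges = np.histogram(x, bins=bins)
    widths = np.diff(edges)
    p = counts / counts.sum()
    mask = p > 0                # skip empty bins (0 log 0 := 0)
    # density on bin i is p_i / width_i, so H ~ -sum p_i * log(p_i / width_i)
    return float(-(p[mask] * np.log(p[mask] / widths[mask])).sum())

def negentropy(x, bins=64):
    """J(x) = H(x_gauss) - H(x), with the gaussian entropy in closed form."""
    h_gauss = 0.5 * np.log(2 * np.pi * np.e * np.var(x))
    return h_gauss - differential_entropy(x, bins)

rng = np.random.default_rng(0)
g = rng.normal(size=100_000)   # gaussian: negentropy roughly 0
u = rng.uniform(size=100_000)  # uniform (subgaussian): negentropy > 0

print(negentropy(g))
print(negentropy(u))
```

Because the gaussian has the largest entropy among all distributions of a given variance, $J(\mathbf{y})$ is always nonnegative and vanishes exactly when $\mathbf{y}$ is gaussian, which is what makes it useful as a normalized nongaussianity measure.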
