# Independent Component Analysis: Chapter 20

## Other Extensions

In this chapter, we present some additional extensions of the basic independent component analysis (ICA) model. First, we discuss the use of prior information on the mixing matrix, especially on its sparseness. Second, we present models that somewhat relax the assumption of the independence of the components. In the model called independent subspace analysis, the components are divided into subspaces that are independent, but the components inside the subspaces are not independent. In the model of topographic ICA, higher-order dependencies are modeled by a topographic organization. Finally, we show how to adapt some of the basic ICA algorithms to the case where the data is complex-valued instead of real-valued.

*Independent Component Analysis.* Aapo Hyvärinen, Juha Karhunen, Erkki Oja. Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-40540-X (Hardback); 0-471-22131-7 (Electronic).

### Priors on the Mixing Matrix

#### Motivation for prior information

No prior knowledge of the mixing matrix is used in the basic ICA model. This has the advantage of giving the model great generality. In many application areas, however, information on the form of the mixing matrix is available. Using prior information on the mixing matrix is likely to give better estimates of the matrix for a given number of data points. This is of great importance in situations where the computational costs of ICA estimation are so high that they severely restrict the amount of data that can be used, as well as in situations where the amount of data is restricted due to the nature of the application.

This situation can be compared to the setting of nonlinear regression, where overlearning, or overfitting, is a very general phenomenon [48]. The classic way of avoiding overlearning in regression is to use regularizing priors, which typically penalize regression functions that have large curvatures ("lots of wiggles"). This makes it possible to use regression methods even when the ...
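The idea of combining ICA estimation with a prior on the mixing matrix can be sketched numerically. The following is a minimal illustration, not an algorithm from the book: it runs natural-gradient maximum-likelihood ICA on whitened data and adds an L1 (sparseness) penalty on the implied mixing matrix. The log-cosh source model, the penalty weight `lam`, and the simplification of penalizing the whitened-space mixing matrix `W⁻¹` (rather than the original mixing matrix) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000  # number of samples

# Two super-Gaussian (Laplace) sources and a sparse mixing matrix.
# A_true is an illustrative choice, not taken from the chapter.
s = rng.laplace(size=(2, T))
A_true = np.array([[1.0, 0.0],
                   [0.4, 1.0]])  # sparse: one entry is exactly zero
x = A_true @ s

# Standard ICA preprocessing: center and whiten the data.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = (E @ np.diag(d ** -0.5) @ E.T) @ x

def penalized_loglik(W, z, lam):
    """Average log-likelihood under a log-cosh source density,
    minus an L1 penalty on the implied mixing matrix A = W^{-1}."""
    y = W @ z
    ll = -np.log(np.cosh(y)).sum(axis=0).mean() + np.log(abs(np.linalg.det(W)))
    return ll - lam * np.abs(np.linalg.inv(W)).sum()

W = np.eye(2)
lam, lr = 0.01, 0.1
history = []
for _ in range(300):
    y = W @ z
    g = -np.tanh(y)                             # score function of the log-cosh density
    nat_grad = (np.eye(2) + (g @ y.T) / T) @ W  # natural gradient of the log-likelihood
    A = np.linalg.inv(W)
    # Gradient of -lam * sum|A_ij| with respect to W, using dA = -A dW A.
    prior_grad = lam * (A.T @ np.sign(A) @ A.T)
    W = W + lr * (nat_grad + prior_grad)
    history.append(penalized_loglik(W, z, lam))
```

The penalty term pulls small entries of the estimated mixing matrix toward zero, which is one simple way of encoding a sparseness prior; with `lam = 0` the loop reduces to ordinary natural-gradient maximum-likelihood ICA.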
