
Independent Component Analysis. Aapo Hyvarinen, Juha Karhunen, Erkki Oja. Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-40540-X (Hardback); 0-471-22131-7 (Electronic)

20 Other Extensions

In this chapter, we present some additional extensions of the basic independent component analysis (ICA) model. First, we discuss the use of prior information on the mixing matrix, especially on its sparseness. Second, we present models that somewhat relax the assumption of the independence of the components. In the model called independent subspace analysis, the components are divided into subspaces that are independent, but the components inside the subspaces are not independent. In the model of topographic ICA, higher-order dependencies are modeled by a topographic organization. Finally, we show how to adapt some of the basic ICA algorithms to the case where the data is complex-valued instead of real-valued.
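To make the independent subspace model concrete, here is a sketch of the log-likelihood in its usual formulation (a paraphrase, not a quotation of the chapter's own equations): the density depends only on the norms of the projections onto each subspace, so components within a subspace S_j are dependent through their joint norm, while different subspaces remain independent:

\log L(\mathbf{w}_1, \ldots, \mathbf{w}_n)
  = \sum_{t=1}^{T} \sum_{j} G\Big( \sum_{i \in S_j} \big( \mathbf{w}_i^{\top} \mathbf{x}(t) \big)^{2} \Big)
  + T \log |\det \mathbf{W}|

where x(t) are the (prewhitened) observations, the rows w_i of W are grouped into subspaces S_j, and G is a nonlinear function such as G(u) = -\alpha \sqrt{u}. If W is constrained to be orthogonal after whitening, the log-determinant term vanishes.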
20.1 PRIORS ON THE MIXING MATRIX

20.1.1 Motivation for prior information

No prior knowledge on the mixing matrix is used in the basic ICA model. This has the advantage of giving the model great generality. In many application areas, however, information on the form of the mixing matrix is available. Using prior information on the mixing matrix is likely to give better estimates of the matrix for a given number of data points. This is of great importance in situations where the computational costs of ICA estimation are so high that they severely restrict the amount of data that can be used, as well as in situations where the amount of data is restricted due to the nature of the application.

This situation can be compared to that found in nonlinear regression, where overlearning or overfitting is a very general phenomenon [48]. The classic way of avoiding overlearning in regression is to use regularizing priors, which typically penalize regression functions that have large curvatures (i.e., lots of wiggles). This makes it possible to use regression methods even when the …
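As an illustration of how such a prior can be imposed in practice, the following is a minimal sketch (an illustrative construction, not the algorithm developed later in this chapter): a standard natural-gradient maximum-likelihood ICA update on the unmixing matrix W is interleaved with a soft-thresholding step that implements a Laplacian (sparseness) prior on the entries of the mixing matrix A = W^{-1}. The learning rate lr, penalty weight lam, and tanh score function are all illustrative choices.

import numpy as np

def ica_sparse_mixing_prior(X, n_iter=500, lr=0.1, lam=0.05, seed=0):
    # X: (n, T) array of zero-mean mixed signals, one row per sensor.
    rng = np.random.default_rng(seed)
    n, T = X.shape
    W = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # unmixing matrix

    for _ in range(n_iter):
        Y = W @ X
        g = np.tanh(Y)  # score function for super-Gaussian sources
        # Natural-gradient maximum-likelihood step: dW = (I - E[g(y) y^T]) W.
        W += lr * (np.eye(n) - (g @ Y.T) / T) @ W

        # MAP-style step for the Laplacian prior on A = inv(W):
        # soft-threshold the mixing-matrix entries, then re-invert.
        # (lam must stay small enough that A remains invertible.)
        A = np.linalg.inv(W)
        A = np.sign(A) * np.maximum(np.abs(A) - lr * lam, 0.0)
        W = np.linalg.inv(A)

    return W, np.linalg.inv(W)  # unmixing and (sparse) mixing estimates

Interleaving the proximal soft-thresholding step with the likelihood gradient is just one convenient way to realize a maximum a posteriori estimate; with lam = 0 the procedure reduces to ordinary maximum-likelihood ICA.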