Independent Component Analysis
Aapo Hyvärinen, Juha Karhunen, Erkki Oja
Copyright © 2001 John Wiley & Sons, Inc.
ISBNs: 0-471-40540-X (Hardback); 0-471-22131-7 (Electronic)

3 Gradients and Optimization Methods

The main task in the independent component analysis (ICA) problem, formulated in Chapter 1, is to estimate a separating matrix $\mathbf{W}$ that will give us the independent components. It also became clear that $\mathbf{W}$ cannot generally be solved in closed form; that is, we cannot write it as some function of the sample or training set whose value could be directly evaluated. Instead, the solution method is based on cost functions, also called objective functions or contrast functions. Solutions $\mathbf{W}$ to ICA are found at the minima or maxima of these functions. Several possible ICA cost functions will be given and discussed in detail in Parts II and III of this book.

In general, statistical estimation is largely based on the optimization of cost or objective functions, as will be seen in Chapter 4. Minimization of multivariate functions, possibly under some constraints on the solutions, is the subject of optimization theory. In this chapter, we discuss some typical iterative optimization algorithms and their properties. Mostly, the algorithms are based on the gradients of the cost functions. Therefore, vector and matrix gradients are reviewed first, followed by the most typical ways to solve unconstrained and constrained optimization problems with gradient-type learning algorithms.

VECTOR AND MATRIX GRADIENTS

Vector gradient

Consider a scalar-valued function $g$ of $m$ variables

$$g = g(w_1, \ldots, w_m) = g(\mathbf{w})$$

where we have used the notation $\mathbf{w} = (w_1, \ldots, w_m)^T$. By convention, we define $\mathbf{w}$ as a column vector. Assuming the function $g$ is differentiable, its vector gradient with respect to $\mathbf{w}$ is the $m$-dimensional column vector of partial derivatives

$$\frac{\partial g}{\partial \mathbf{w}} = \left( \frac{\partial g}{\partial w_1}, \ldots, \frac{\partial g}{\partial w_m} \right)^T$$

The notation $\partial g / \partial \mathbf{w}$ is just shorthand for the gradient; it should be understood that it does not imply any kind of division by a vector.
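As a concrete numerical check of this definition (a minimal sketch of ours, not an example from the book), consider the quadratic form $g(\mathbf{w}) = \mathbf{w}^T \mathbf{A} \mathbf{w}$ with symmetric $\mathbf{A}$, whose gradient is $2\mathbf{A}\mathbf{w}$. The NumPy sketch below compares this analytic gradient with a central finite-difference approximation of the partial derivatives; the matrix, the vector, and the step $\epsilon$ are arbitrary illustrative choices:

```python
import numpy as np

# Illustrative sketch (not from the book): verify the analytic gradient of
# the quadratic form g(w) = w^T A w, which equals 2 A w for symmetric A,
# against the central finite-difference approximation
# dg/dw_i ~ (g(w + eps*e_i) - g(w - eps*e_i)) / (2*eps).

rng = np.random.default_rng(0)
m = 4
A = rng.standard_normal((m, m))
A = 0.5 * (A + A.T)              # symmetrize A
w = rng.standard_normal(m)

def g(w):
    return w @ A @ w             # scalar-valued function of m variables

analytic = 2 * A @ w             # gradient of w^T A w for symmetric A

eps = 1e-6
numeric = np.array([
    (g(w + eps * e) - g(w - eps * e)) / (2 * eps)
    for e in np.eye(m)           # e_i: i-th standard basis vector
])

print(np.allclose(analytic, numeric, atol=1e-6))   # True
```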
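Since the algorithms in this chapter are gradient based, the following minimal sketch (again ours; the step size $\mu$ and the simple quadratic cost are hypothetical choices, not the book's ICA algorithm) shows the basic descent iteration $\mathbf{w}(t+1) = \mathbf{w}(t) - \mu \, \partial g / \partial \mathbf{w}$ that underlies such gradient-type learning rules:

```python
import numpy as np

# Illustrative gradient-descent sketch: minimize g(w) = ||w - w_star||^2 / 2,
# whose gradient at w is (w - w_star), by stepping against the gradient.

w_star = np.array([1.0, -2.0, 0.5])   # known minimizer, for illustration
w = np.zeros(3)                        # initial guess
mu = 0.1                               # step size (learning rate)

for _ in range(200):
    grad = w - w_star                  # gradient of the cost at w
    w = w - mu * grad                  # update: w <- w - mu * grad g(w)

print(np.round(w, 4))                  # approaches w_star
```

With a sufficiently small step size the iteration converges to the minimizer; too large a step makes it overshoot or diverge, which is why step-size selection matters for gradient methods in general.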
