Data Mining and Knowledge Discovery Handbook, 2nd Edition, part 46


Knowledge Discovery demonstrates intelligent computing at its best, and is the most desirable and interesting end-product of Information Technology. To be able to discover and to extract knowledge from data is a task that many researchers and practitioners are endeavoring to accomplish. There is a lot of hidden knowledge waiting to be discovered – this is the challenge created by today's abundance of data. Data Mining and Knowledge Discovery Handbook, 2nd Edition organizes the most current concepts, theories, standards, methodologies, trends, challenges and applications of data mining (DM) and knowledge discovery.

Excerpt (G. Peter Zhang): …to capture the essential relationship that can be used for successful prediction. How many and which variables to use in the input layer directly affect the performance of the neural network in both in-sample fitting and out-of-sample prediction. Neural network model selection is typically done with the basic cross-validation process: the in-sample data is split into a training set and a validation set, the network parameters are estimated with the training sample, and the performance of the model is monitored and evaluated with the validation sample. The model selected is the one that performs best on the validation sample. Of course, in choosing among competing models we must also apply the principle of parsimony: a simpler model with about the same performance as a more complex one should be preferred.

Model selection can also be done with all of the in-sample data, using in-sample selection criteria that modify the total error function to include a penalty term for the complexity of the model. Some in-sample model selection approaches are based on criteria such as Akaike's information criterion (AIC) or the Schwarz information criterion (SIC). However, it is important to note the limitations of these criteria, as empirically demonstrated by Swanson and White (1995) and Qi and Zhang (2001). Other in-sample approaches are based on pruning methods such as node and weight pruning (see the review by Reed, 1993), as well as on constructive methods such as the upstart and cascade-correlation approaches (Fahlman and Lebiere, 1990; Frean, 1990).

After the modeling process, the finally selected model must be evaluated on data not used in the model-building stage. In addition, as neural networks are often used as a nonlinear alternative to traditional statistical models, their performance needs to be compared with that of statistical methods. As Adya and Collopy (1998) point out, if such a…
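As an illustration of the validation-based selection loop described in the excerpt, here is a minimal sketch assuming scikit-learn; the candidate hidden-layer sizes, the synthetic X and y, and the 150/50 split are hypothetical choices, not taken from the chapter.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                       # hypothetical in-sample inputs
y = np.tanh(X[:, 0]) + 0.1 * rng.normal(size=200)   # hypothetical target

# Split the in-sample data into a training set and a validation set.
n_train = 150
X_tr, y_tr = X[:n_train], y[:n_train]
X_va, y_va = X[n_train:], y[n_train:]

best_model, best_mse = None, np.inf
for hidden in (2, 4, 8, 16):                        # candidate architectures
    # Estimate the network parameters with the training sample only.
    model = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=2000,
                         random_state=0).fit(X_tr, y_tr)
    # Monitor and evaluate performance on the validation sample.
    mse = mean_squared_error(y_va, model.predict(X_va))
    if mse < best_mse:                              # keep the best validation performer
        best_model, best_mse = model, mse
```

Because the candidates are tried from smallest to largest and the comparison is strict, ties go to the smaller network, which is a crude way of honoring the parsimony principle the excerpt mentions.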
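For the in-sample criteria, a common SSE-based formulation (an assumption here; the excerpt does not give explicit formulas) is AIC = n·ln(SSE/n) + 2k and SIC = n·ln(SSE/n) + k·ln(n), where n is the number of in-sample observations and k the number of estimated weights; the model minimizing the criterion is selected:

```python
import numpy as np

def aic_sic(residuals: np.ndarray, n_params: int) -> tuple[float, float]:
    """SSE-based AIC and SIC; the penalty term grows with model complexity."""
    n = residuals.size
    sse = float(np.sum(residuals ** 2))
    aic = n * np.log(sse / n) + 2 * n_params
    sic = n * np.log(sse / n) + n_params * np.log(n)
    return aic, sic

# Usage with the fitted scikit-learn model from the sketch above, counting
# every weight and bias as a parameter:
#   k = sum(w.size for w in best_model.coefs_) + sum(b.size for b in best_model.intercepts_)
#   aic, sic = aic_sic(y_tr - best_model.predict(X_tr), k)
```

Note that SIC's ln(n) penalty exceeds AIC's factor of 2 once n > e², so SIC tends to favor smaller networks on all but tiny samples.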
