Data Mining and Knowledge Discovery Handbook, 2nd Edition, part 77

Knowledge discovery demonstrates intelligent computing at its best, and is the most desirable and interesting end-product of information technology. To be able to discover and to extract knowledge from data is a task that many researchers and practitioners are endeavoring to accomplish. There is a lot of hidden knowledge waiting to be discovered – this is the challenge created by today's abundance of data. The Data Mining and Knowledge Discovery Handbook, 2nd Edition organizes the most current concepts, theories, standards, methodologies, trends, challenges and applications of data mining (DM) and knowledge discovery.

740 Pierre Geurts

Obviously, if these numerical estimates have a small variance and bias, the corresponding classifier is stable with respect to random variations of the learning set and close to the Bayes classifier. Thus, a complementary approach to studying the bias and variance of a classification algorithm is to connect, in a quantitative way, the bias and variance terms of these estimates to the mean misclassification error of the resulting classification rule. Friedman (1997) made this connection in the particular case of a two-class problem, assuming that the distribution with respect to the learning sample S of the probability estimate f̂_{c_b}(S, x) of the Bayes class is close to Gaussian. In this case, the mean misclassification error at some point x may be written (see Friedman, 1997):

E_S{Error(ŷ(S, x))} = Φ( (E_S{f̂_{c_b}(S, x)} − 0.5) / sqrt(Var_S{f̂_{c_b}(S, x)}) ) · (2 P(c_b | x) − 1) + (1 − P(c_b | x)),

where c_b is the Bayes class at x and Φ(·) is the upper tail of the standard normal distribution, a positive and monotonically decreasing function of its argument such that Φ(0) = 0.5. The numerator of the argument of Φ is called the boundary bias, and the denominator is the standard deviation (the square root of the variance) of the regression estimate f̂_{c_b}(S, x). There are two possible situations, depending on the sign of the boundary bias. When the average probability estimate of the Bayes class is greater than 0.5 (a majority of models are right), a decrease of the variance of these estimates decreases the error. On the other hand, when the average probability estimate is lower than 0.5 (a majority of models are wrong), a decrease of variance yields an increase of the error. Hence the conclusions are similar to what we found in our illustrative problem above: in classification, more variance is beneficial for biased points and detrimental for unbiased ones. Another important conclusion can be drawn from this decomposition: whatever the regression bias of the approximation f̂_{c_b}(S, x), the classification error can be driven to its minimum value (the Bayes error) by reducing solely the variance, under the assumption that the average estimate of the Bayes class probability remains above 0.5.
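
To make the behaviour of this decomposition concrete, here is a small numerical sketch in Python (using scipy). The function name and the particular values E_S{f̂_{c_b}} = 0.6 or 0.4 and P(c_b | x) = 0.8 are illustrative assumptions, not taken from the chapter. It evaluates the formula above at a single point x and shows that shrinking the variance drives the error towards the Bayes error when the average estimate is above 0.5, and towards P(c_b | x) when it is below 0.5.

```python
# Numerical sketch of the two-class decomposition above (Friedman, 1997),
# under the Gaussian assumption on the estimate f_cb(S, x) of the
# Bayes-class probability. Names and values are illustrative only.
import numpy as np
from scipy.stats import norm

def expected_misclassification(mean_est, var_est, p_bayes):
    """Mean misclassification error at a point x.

    mean_est : E_S{ f_cb(S, x) }   -- average probability estimate of the Bayes class
    var_est  : Var_S{ f_cb(S, x) } -- variance of that estimate over learning sets
    p_bayes  : P(c_b | x)          -- true posterior of the Bayes class (>= 0.5)
    """
    # Phi is the upper tail of the standard normal: positive, decreasing, Phi(0) = 0.5.
    phi = norm.sf((mean_est - 0.5) / np.sqrt(var_est))
    return phi * (2.0 * p_bayes - 1.0) + (1.0 - p_bayes)

p_bayes = 0.8  # the Bayes error at x is therefore 0.2

# Unbiased point: the average estimate is on the correct side of 0.5,
# so shrinking the variance drives the error down towards the Bayes error.
for v in (0.04, 0.01, 0.0001):
    print("E=0.6", "Var=%.4f" % v, "error=%.3f" % expected_misclassification(0.6, v, p_bayes))

# Biased point: the average estimate is on the wrong side of 0.5,
# so shrinking the variance increases the error towards P(c_b | x) = 0.8.
for v in (0.04, 0.01, 0.0001):
    print("E=0.4", "Var=%.4f" % v, "error=%.3f" % expected_misclassification(0.4, v, p_bayes))
```

With these assumed values, the unbiased point's error falls from about 0.385 to the Bayes error 0.2 as the variance shrinks, while the biased point's error rises from about 0.615 to 0.8, matching the two situations described above.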
