Kalman Filtering and Neural Networks P2

Kalman Filtering and Neural Networks, Edited by Simon Haykin. Copyright 2001 John Wiley & Sons, Inc. ISBNs: 0-471-36998-5 (Hardback); 0-471-22154-6 (Electronic).

2
PARAMETER-BASED KALMAN FILTER TRAINING: THEORY AND IMPLEMENTATION

Gintaras V. Puskorius and Lee A. Feldkamp
Ford Research Laboratory, Ford Motor Company, Dearborn, Michigan
(gpuskori@, lfeldkam@)

INTRODUCTION

Although the rediscovery in the mid-1980s of the backpropagation algorithm by Rumelhart, Hinton, and Williams [1] has long been viewed as a landmark event in the history of neural network computing and has led to a sustained resurgence of activity, the relative ineffectiveness of this simple gradient method has motivated many researchers to develop enhanced training procedures. In fact, the neural network literature has been inundated with papers proposing alternative training methods that are claimed to exhibit superior capabilities in terms of training speed, mapping accuracy, generalization, and overall performance relative to standard backpropagation and related methods.
Amongst the most promising and enduring of these enhanced training methods are those whose weight update procedures are based upon second-order derivative information (whereas standard backpropagation exclusively utilizes first-derivative information). A variety of second-order methods began to be developed and appeared in the published neural network literature shortly after the seminal article on backpropagation was published. The vast majority of these methods can be characterized as batch update methods, where a single weight update is based on a matrix of second derivatives that is approximated on the basis of many training patterns. Popular second-order methods have included weight updates based on quasi-Newton, Levenberg-Marquardt, and conjugate gradient techniques. Although these methods have shown promise, they are often plagued by convergence to poor local optima, which can be
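To make the batch second-order update concrete, the following is a minimal sketch of one such method, a Levenberg-Marquardt weight update for a tiny two-parameter model. The model, data, and damping schedule here are invented for illustration (they are not from the chapter, which develops a Kalman-filter-based method instead): each update solves a damped Gauss-Newton system built from the Jacobian over the entire batch of training patterns, which is exactly the "matrix of second derivatives approximated on the basis of many training patterns" described above.

```python
import numpy as np

# Hypothetical example: fit y = w0 * tanh(w1 * x) with batch
# Levenberg-Marquardt updates. All names/data are illustrative.

def model(w, x):
    return w[0] * np.tanh(w[1] * x)

def jacobian(w, x):
    # Partials of the model output w.r.t. each weight, evaluated at
    # every training pattern (rows: patterns, columns: weights).
    t = np.tanh(w[1] * x)
    return np.column_stack([t, w[0] * (1.0 - t ** 2) * x])

x = np.linspace(-2.0, 2.0, 50)
y = 1.5 * np.tanh(0.8 * x)   # noiseless targets from known weights
w = np.array([0.5, 0.3])     # initial weight guess
mu = 1e-2                    # fixed damping term for this sketch

for _ in range(20):
    e = y - model(w, x)      # residuals over the whole batch
    J = jacobian(w, x)
    # One batch update: solve (J^T J + mu*I) dw = J^T e.
    # J^T J is the Gauss-Newton approximation to the Hessian,
    # accumulated over all training patterns at once.
    dw = np.linalg.solve(J.T @ J + mu * np.eye(2), J.T @ e)
    w = w + dw

print(w)  # should approach the generating weights [1.5, 0.8]
```

Note that every update requires the residuals and Jacobian over all patterns, which is what distinguishes these batch methods from the sequential, pattern-by-pattern updates of the Kalman filter approach developed in this chapter.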
