Linear Neurons and Their Learning Algorithms


Ying Liu

Journal of Computer Science and Information Technology, December 2018, Vol. 6, No. 2, pp. 1-14
ISSN: 2334-2366 (Print), 2334-2374 (Online)
Copyright © The Author(s). All Rights Reserved.
Published by American Research Institute for Policy Development
DOI: 10.15640/jcsit.v6n2a1
URL: https://doi.org/10.15640/jcsit.v6n2a1

Abstract

In this paper, we introduce the concept of Linear neurons and new learning algorithms based on Linear neurons, with an explanation of the reasoning behind these algorithms. First, we briefly review the Boltzmann Machine and the fact that a Boltzmann Machine generates a Markov chain with an invariant distribution. We then review the θ-transformation and its completeness, i.e. any function can be expanded by the θ-transformation. We further review the ABM (Attrasoft Boltzmann Machine). The invariant distribution of the ABM is a θ-transformation; therefore, an ABM can simulate any distribution. We then argue that the ABM algorithm is only the first in a family of new algorithms based on the θ-transformation, and we introduce the simplest algorithm in this family, based on Linear neurons. We also discuss the advantages of this algorithm: accuracy, stability, and low time complexity.

Keywords: AI, Boltzmann machine, Markov chain, invariant distribution, completeness, deep neural network.

1. Introduction

Neural networks and deep learning currently provide the best solutions to many supervised learning problems. In 2006, a publication by Hinton, Osindero, and Teh [1] introduced the idea of a "deep" neural network, which first trains a simple supervised model and then adds a new layer on top, training the parameters of the new layer alone. Layers are added and trained in this fashion until the network is deep; a sketch of this procedure appears below. Later, the restriction of training one layer at a time was removed: after Hinton's initial approach of training one layer at a time, deep neural networks came to train all layers simultaneously.
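As a rough illustration of the layer-at-a-time idea described above (not the paper's own method), the following Python sketch freezes every previously trained layer and fits only the newest one. All names here (Layer, train_layer, greedy_deep_train) are hypothetical, and a plain NumPy logistic layer stands in for the "simple supervised model" trained at each stage.

```python
# Minimal sketch of greedy layer-wise training, under the assumptions
# stated above. Each stage trains only the newest layer; outputs of the
# frozen stack below serve as fixed input features for the next stage.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Layer:
    """One logistic layer: out = sigmoid(x W + b)."""
    def __init__(self, n_in, n_out, rng):
        self.W = rng.normal(0.0, 0.1, size=(n_in, n_out))
        self.b = np.zeros(n_out)

    def forward(self, x):
        return sigmoid(x @ self.W + self.b)

def train_layer(layer, x, y, lr=0.5, epochs=200):
    """Gradient descent on squared error, updating only this layer."""
    for _ in range(epochs):
        out = layer.forward(x)
        grad = (out - y) * out * (1.0 - out)   # dE/d(pre-activation)
        layer.W -= lr * (x.T @ grad) / len(x)
        layer.b -= lr * grad.mean(axis=0)

def greedy_deep_train(x, y, n_layers, rng):
    """Add one layer at a time; layers already trained stay frozen."""
    layers, feats = [], x
    for _ in range(n_layers):
        layer = Layer(feats.shape[1], y.shape[1], rng)
        train_layer(layer, feats, y)       # only the new layer learns
        layers.append(layer)
        feats = layer.forward(feats)       # frozen output feeds the next stage
    return layers

# Toy usage: three stacked layers on a synthetic binary target.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 8))
y = (x.sum(axis=1, keepdims=True) > 0).astype(float)
stack = greedy_deep_train(x, y, n_layers=3, rng=rng)
```

In this simplified instance each new layer is trained directly against the target, which keeps the sketch self-contained; Hinton, Osindero, and Teh's original procedure pretrained layers generatively before stacking.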
