A typical AI perceives its environment and takes actions that maximize its chance of successfully achieving its goals.

The delta rule

Learn from your mistakes

Outline

Supervised learning problem

Delta rule

Delta rule as gradient descent

Hebb rule
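The delta rule named in the outline can be sketched as a single-neuron weight update: the weight change is proportional to the input times the output error, which is exactly a gradient-descent step on the squared error. A minimal sketch (the learning rate, toy task, and variable names are illustrative, not from these notes):

```python
import numpy as np

def delta_rule_update(w, x, target, lr=0.1):
    """One delta-rule step for a linear neuron: w <- w + lr * (t - y) * x."""
    y = w @ x                      # neuron output (linear activation)
    error = target - y             # "learn from your mistakes"
    return w + lr * error * x     # move weights to reduce squared error

# Toy task: train a neuron to compute the mean of its two inputs.
rng = np.random.default_rng(0)
w = np.zeros(2)
for _ in range(200):
    x = rng.uniform(-1, 1, size=2)
    w = delta_rule_update(w, x, target=x.mean())
# w approaches [0.5, 0.5]
```

Because the update is the negative gradient of ½(t − y)² for a linear neuron, this is the "delta rule as gradient descent" view the outline refers to.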

An AI’s intended goal function can be simple (“1 if the AI wins a game of Go, 0 otherwise”) or complex (“Do actions mathematically similar to the actions that got you rewards in the past”). Goals can be explicitly defined, or can be induced. If the AI is programmed for “reinforcement learning”, goals can be implicitly induced by rewarding some types of behavior and punishing others.

Artificial neural networks

Neural networks, or neural nets, were inspired by the architecture of neurons in the human brain. A simple “neuron” N accepts input from multiple other neurons, each of which, when activated (or “fired”), casts a weighted “vote” for or against whether neuron N should itself activate. Learning requires an algorithm to adjust these weights based on the training data; one simple algorithm (dubbed “fire together, wire together”) is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another.
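The "fire together, wire together" rule described above can be sketched directly: the weight on a connection grows whenever the pre-synaptic input and the post-synaptic neuron are active at the same time. A minimal sketch (the learning rate and activation values are illustrative):

```python
import numpy as np

def hebb_update(w, pre, post, lr=0.05):
    """Hebbian update: strengthen w[i] when pre-synaptic unit i and the
    post-synaptic neuron are active together ("fire together, wire together")."""
    return w + lr * post * pre

# Two input neurons; only the first is repeatedly co-active with the output:
w = np.zeros(2)
for _ in range(10):
    pre = np.array([1.0, 0.0])   # only the first input fires
    post = 1.0                   # the output neuron fires too
    w = hebb_update(w, pre, post)
# w[0] grows with each co-activation; w[1] stays at zero
```

Unlike the delta rule, this update uses no error signal, which is why Hebbian learning on its own needs extra machinery (normalisation, decay) to keep weights bounded.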

An Analysis of Single-Layer Networks in Unsupervised Feature Learning

Development of Adaptive Learning Control Algorithm for a two-degree-of-freedom Serial Ball And Socket Actuator

A great deal of research has focused on algorithms for learning features from unlabeled data. Indeed, much progress has been made on benchmark datasets like NORB and CIFAR by employing increasingly complex unsupervised learning algorithms and deep models. In this paper, however, we show that several very simple factors, such as the number of hidden nodes in the model, may be as important to achieving high performance as the choice of learning algorithm or the depth of the model. Specifically, we will apply several off-the-shelf feature learning algorithms (sparse auto-encoders, sparse RBMs, K-means clustering, and Gaussian mixtures) to NORB and CIFAR data sets using only single-layer networks.

When the ANN method is implemented to learn a set of information, a specific network design is required to cover each individual data set and application. Consequently, a special network has been designed to adapt the control parameters for the ball-and-socket actuator; it consists of an input layer (valve order, time, pump power, flow rate, output pressure, and head losses for the system), one hidden layer, and an output layer (angular displacement 1, angular displacement 2, angular velocity 1, angular velocity 2, angular acceleration 1, and angular acceleration 2).
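The architecture described above (six inputs, one hidden layer, six outputs) can be sketched as a single forward pass. The hidden-layer size and the sigmoid activation are assumptions, since the abstract specifies only the input and output quantities:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """Forward pass of a 6-input / one-hidden-layer / 6-output network.
    Hidden size and sigmoid activation are assumptions; the abstract
    names only the input and output variables."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))  # hidden layer, sigmoid
    return W2 @ h + b2                        # linear output layer

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 6, 10, 6              # 6 control inputs, 6 kinematic outputs
W1, b1 = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_out, n_hidden)), np.zeros(n_out)
y = forward(rng.normal(size=n_in), W1, b1, W2, b2)
```

The six outputs correspond to the two angular displacements, velocities, and accelerations of the two-degree-of-freedom actuator.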

ON-LINE ADAPTIVE LEARNING RATE BP ALGORITHM FOR MLP AND APPLICATION TO AN IDENTIFICATION

TRAINING ARTIFICIAL NEURAL NETWORKS FOR TIME SERIES PREDICTION USING ASYMMETRIC COST FUNCTION

An on-line algorithm that uses an adaptive learning rate is proposed. Its development is based on the analysis of the convergence of the conventional gradient descent method for three-layer BP neural networks. The effectiveness of the proposed algorithm applied to the identification and prediction of behavior of non-linear dynamic systems is demonstrated by simulation experiments.
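The abstract does not reproduce the proposed update rule. A classic adaptive-learning-rate heuristic in the same spirit is the "bold driver" scheme, sketched here purely for illustration (this is not the paper's specific algorithm, and the rate factors and toy loss are arbitrary):

```python
def bold_driver_step(w, grad, lr, prev_loss, loss, up=1.05, down=0.5):
    """One 'bold driver' adaptive-learning-rate step: grow the rate while
    the loss is falling, cut it sharply when the loss rises. A classic
    illustrative heuristic, not the rule from the cited paper."""
    lr = lr * up if loss < prev_loss else lr * down
    return w - lr * grad, lr

# Minimise f(w) = w^2 on-line, letting the learning rate adapt:
w, lr, prev_loss = 5.0, 0.1, float("inf")
for _ in range(50):
    loss, grad = w * w, 2.0 * w
    w, lr = bold_driver_step(w, grad, lr, prev_loss, loss)
    prev_loss = loss
```

The appeal of such schemes for on-line identification is that no learning-rate schedule has to be tuned in advance; the rate reacts to the observed loss.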

Artificial neural network training generally minimises a standard statistical error, such as the sum of squared errors, to learn relationships from the presented data. However, applications in business have shown that real forecasting problems require alternative error measures: errors identical in magnitude can cause different costs. To reflect this, a set of asymmetric cost functions is proposed as novel error functions for neural network training.
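The point that equal-magnitude errors can carry unequal costs can be illustrated with a piecewise-weighted squared error. The specific weights and form below are illustrative only, not the cost functions proposed in the paper:

```python
import numpy as np

def asymmetric_squared_error(y_true, y_pred, under_weight=3.0, over_weight=1.0):
    """Piecewise-weighted squared error: penalise under-forecasts more
    heavily than over-forecasts. Weights are illustrative, not from the paper."""
    err = y_true - y_pred
    weights = np.where(err > 0, under_weight, over_weight)  # err > 0: under-forecast
    return np.mean(weights * err ** 2)

# Equal-magnitude errors, different costs:
asymmetric_squared_error(np.array([10.0]), np.array([8.0]))   # under-forecast -> 12.0
asymmetric_squared_error(np.array([10.0]), np.array([12.0]))  # over-forecast  ->  4.0
```

Used as the training loss, such a function pushes the network toward the cheaper side of the error distribution, e.g. toward slight over-forecasting when stock-outs cost more than excess inventory.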
