18.15 Neural Networks

Resources include:

Representation: Multi-Layered Connected Network
Method: Back Propagation
Measure:

Artificial neural networks are inspired by the biological neural networks found in the human brain, which has some 86 billion neurons connected by synapses. The term artificial distinguishes them from the natural kind. In a natural neural network electrical impulses are transmitted from neuron to neuron; in an artificial neural network numbers are transmitted from one node to the next.

The neurons (nodes) are organised as interconnected layers. A simple feed forward neural network, for example, has three layers: an input layer, a hidden layer, and an output layer. Each node performs a calculation on its inputs and, if the combined signal is strong enough, sends a number on into the network. The details of these calculations need to be learned so that the final output of the network matches the expected output. The algorithm for learning (training) a neural network adjusts the calculations, using back propagation, until the output layer produces the desired output.

Numeric input data is expected, often assumed to be scaled to be between 0 and 1. The input numbers are propagated through the nodes of the network, each being multiplied by a weight associated with the link between the nodes. At each node the weighted inputs are summed and a further number (a bias) is added for that node. The result is then fed into a sigmoid function (an S shaped curve) to produce the final output from that node.
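To make the arithmetic concrete, here is a minimal sketch in R of the computation at a single node. The inputs, weights, and bias are illustrative values only, not taken from any fitted model:

sigmoid <- function(z) 1 / (1 + exp(-z))  # the S shaped curve

x <- c(0.2, 0.7, 0.1)   # scaled inputs arriving on the incoming links
w <- c(0.4, -0.6, 0.9)  # the weight associated with each link
b <- 0.05               # the bias added at this node

sigmoid(sum(w * x) + b) # the node's output, passed on into the network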

The nodes of the final layer are the output nodes, which deliver the network's decision.

As a classification model, the numbers emerging from the output nodes can be interpreted as the probability of each class. Neural network algorithms can be used for both regression and classification tasks.

The nnet (Ripley 2022) package provides algorithms for feed-forward neural networks with a single hidden layer, and for multinomial log-linear models.

Neural networks were first implemented in the late 1950s, whilst back propagation was popularised in the mid 1980s.

library(nnet)
?nnet
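As a quick illustration of the package in use, we can fit a single-hidden-layer network to the iris data and read the output nodes as class probabilities. The dataset and the parameter values here are our own illustrative choices:

set.seed(42)  # for reproducible random initial weights
model <- nnet(Species ~ ., data=iris, size=5, maxit=200, trace=FALSE)
head(predict(model, iris))                # one probability per class
head(predict(model, iris, type="class"))  # the most probable class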

Options include linout= to predict a linear rather than a logistic value; rang=, the range for the initial random weights, drawn from -rang to rang, with 0.5 suggested; decay=, used to change the default decay parameter of 0; and skip=, to add a skip-layer in which the input variables are connected directly to the output. The skip-layer results in the model being a perturbation of a linear hyperplane, where the direct links (i.e., the skip-layer) represent a plane and the non-skip links represent a perturbation to the plane (Velten 2009).
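A sketch of these options on a small regression task. The airquality data and all parameter values here are illustrative assumptions, with the data first scaled as suggested above:

aq <- as.data.frame(scale(na.omit(airquality)))  # scale the variables

set.seed(42)
fit <- nnet(Ozone ~ ., data=aq, size=4,
            linout=TRUE,   # predict a linear rather than a logistic value
            rang=0.5,      # initial random weights drawn from -0.5 to 0.5
            decay=5e-4,    # a non-zero decay parameter
            skip=TRUE,     # skip-layer: inputs linked directly to the output
            maxit=500, trace=FALSE)
summary(fit)               # the fitted weights, including the skip links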

References

Ripley, Brian. 2022. nnet: Feed-Forward Neural Networks and Multinomial Log-Linear Models. http://www.stats.ox.ac.uk/pub/MASS4/.
Velten, Kai. 2009. Mathematical Modeling and Simulation. Wiley.

