single-layer back-propagation network
multi-layer feed-forward network
ANN: Artificial Neural Networks
feedforward neural network: the first and simplest type of ANN
1. single-layer perceptron: no hidden layer
2. multi-layer perceptron: at least one hidden layer
information moves in only one direction, no cycles or loops
Dropout: a simple way to prevent neural networks from overfitting
input layer/nodes: x1, x2, ..., xn (features); no activation function
hidden layers/nodes
output layer: y (target)
each node is a neuron: weighted sum of inputs + activation function (see the sketch below)
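To make the layer/node picture concrete, here is a minimal sketch in Python (assuming NumPy and a sigmoid activation; the 2-3-1 layer sizes and random weights are illustrative, not from the post). Input nodes pass the features through unchanged, while each hidden and output node computes a weighted sum followed by an activation:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical 2-3-1 network: 2 input features, 3 hidden neurons, 1 output
x = np.array([0.5, -1.2])       # input nodes x1, x2: passed through as-is, no activation
W1 = np.random.randn(3, 2)      # hidden-layer weights (illustrative random values)
b1 = np.zeros(3)
W2 = np.random.randn(1, 3)      # output-layer weights
b2 = np.zeros(1)

h = sigmoid(W1 @ x + b1)        # each hidden node: weighted sum + activation
y = sigmoid(W2 @ h + b2)        # output node y
print(y)

Information flows strictly from x to h to y, which is exactly the "one direction, no cycles or loops" property above.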
Mathematically, ANNs can be represented as weighted directed graphs. The most common ANN architectures are:
1. Single-Layer Feed-Forward NNs: one input layer and one output layer of processing units; no feedback connections (e.g. a Perceptron)
2. Multi-Layer Feed-Forward NNs: one input layer, one output layer, and one or more hidden layers of processing units; no feedback connections (e.g. a Multi-Layer Perceptron)
3. Recurrent NNs: any network with at least one feedback connection; it may, or may not, have hidden units
Further interesting variations include: sparse connections, time-delayed connections, moving windows, …
Feedforward NN: no feedback connection
RNN: at least one feedback connection
With the logistic activation function, the single-layer network is identical to the logistic regression model, widely used in statistical modeling. The logistic function is also known as the sigmoid function, σ(x) = 1/(1 + e^(-x)). It has a continuous derivative, which allows it to be used in backpropagation. This function is also preferred because its derivative is easily calculated: dσ/dx = σ(x)(1 − σ(x)).
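A quick sketch of that derivative identity (assuming NumPy; the numerical spot-check is just for illustration):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)   # the derivative is expressed via the function itself

# spot-check against a centered numerical difference
z, eps = 0.3, 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
print(sigmoid_prime(z), numeric)   # the two values agree closely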
Back-propagation is one of several learning techniques for training networks
Multi-layer networks use a variety of learning techniques, the most popular being back-propagation. Here, the output values are compared with the correct answer to compute the value of some predefined error function. By various techniques, the error is then fed back through the network. Using this information, the algorithm adjusts the weights of each connection in order to reduce the value of the error function by some small amount. After repeating this process for a sufficiently large number of training cycles, the network will usually converge to some state where the error of the calculations is small. In this case, one would say that the network has learned a certain target function.
To adjust the weights properly, one applies a general method for non-linear optimization called gradient descent: the network calculates the derivative of the error function with respect to the network weights, and changes the weights such that the error decreases (thus going downhill on the surface of the error function). For this reason, back-propagation can only be applied to networks with differentiable activation functions.
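As a minimal sketch of that whole loop (forward pass, error function, error fed back through the network, gradient-descent weight updates), here is a tiny network trained on XOR in NumPy; the 2-4-1 layout, squared-error function, learning rate, and epoch count are illustrative assumptions, not from the post:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR as a toy target function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

# 2-4-1 network with random initial weights
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

lr = 0.5
for epoch in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # predefined error function: squared error, so dE/dy = y - t
    err = y - t

    # backward pass: derivative of the error w.r.t. each weight (chain rule)
    dy = err * y * (1 - y)              # through the output sigmoid
    dW2 = h.T @ dy
    db2 = dy.sum(axis=0)
    dh = (dy @ W2.T) * h * (1 - h)      # error fed back through the hidden layer
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    # gradient descent: move downhill on the error surface
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(np.round(y, 2))   # after enough cycles, usually close to [0, 1, 1, 0]

Note that every step of the backward pass is just the chain rule applied to a differentiable activation, which is why back-propagation requires differentiable activation functions.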
A multilayer perceptron (MLP) is a feedforward artificial neural network model.
MLP is a feedforward ANN
Except for the input nodes, each node is a neuron (or processing element) with a nonlinear activation function. MLP utilizes a supervised learning technique called backpropagation for training the network.
A feedforward neural network is an artificial neural network wherein connections between the units do not form a cycle. As such, it is different from recurrent neural networks.