CNNs apply a set of filters to localised regions of the input: each filter connects to a small two-dimensional area of the image, called its local receptive field. Within a feature map, the same weights and bias are used for every hidden neuron. By sharing the weights, the network is forced to learn features that are invariant to position, so all the neurons in a map detect the same feature, but at different locations in the image.
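A minimal sketch of this idea in plain NumPy (using cross-correlation, as most deep-learning libraries do for their "convolution" layers; the edge-detecting kernel and image are illustrative assumptions):

```python
import numpy as np

def conv2d(image, kernel, bias=0.0):
    """Valid 2D cross-correlation: one shared kernel (weights) and bias are
    applied at every position, so each output neuron looks only at a small
    local receptive field of the input."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i:i+kh, j:j+kw]   # local receptive field
            out[i, j] = np.sum(patch * kernel) + bias
    return out

# A vertical-edge filter: the SAME 3x3 weights slide over the whole image,
# so the same feature (an edge) is detected at every location.
image = np.zeros((5, 5))
image[:, 2:] = 1.0                       # edge between columns 1 and 2
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
response = conv2d(image, kernel)
print(response)  # strong response only where the edge falls in the field
```

Because the kernel's weights are shared across positions, the layer has only `3 * 3 + 1` parameters regardless of the image size.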
Pooling layers typically follow convolutional layers. They summarise the information from the convolutional layer by applying a statistical aggregate, typically the average or the maximum, to local windows of each feature map, producing a compressed feature map. In the forward pass, a layer evaluates its activations; in the backward pass, it combines the gradient arriving from the layer above with its local gradient to compute the gradients with respect to the layer's parameters.
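The forward and backward passes for max pooling can be sketched as follows (a NumPy illustration with a hypothetical 4x4 feature map; note that pooling itself has no trainable parameters, so backpropagation here only routes gradients to the layer below):

```python
import numpy as np

def maxpool_forward(fmap, size=2):
    """Max pooling: summarise each non-overlapping size x size window by its
    maximum, producing a compressed feature map. Also record which element
    was the maximum, for use in the backward pass."""
    h, w = fmap.shape
    oh, ow = h // size, w // size
    out = np.empty((oh, ow))
    argmax = np.zeros_like(fmap, dtype=bool)
    for i in range(oh):
        for j in range(ow):
            window = fmap[i*size:(i+1)*size, j*size:(j+1)*size]
            out[i, j] = window.max()
            r, c = np.unravel_index(window.argmax(), window.shape)
            argmax[i*size + r, j*size + c] = True
    return out, argmax

def maxpool_backward(grad_out, argmax, size=2):
    """Backward pass: the gradient from the layer above flows only to the
    element that produced each maximum; all other inputs receive zero."""
    grad_in = np.zeros(argmax.shape)
    oh, ow = grad_out.shape
    for i in range(oh):
        for j in range(ow):
            window = argmax[i*size:(i+1)*size, j*size:(j+1)*size]
            grad_in[i*size:(i+1)*size, j*size:(j+1)*size] = window * grad_out[i, j]
    return grad_in

fmap = np.array([[1., 3., 2., 0.],
                 [4., 2., 1., 5.],
                 [0., 1., 3., 2.],
                 [2., 0., 1., 4.]])
pooled, mask = maxpool_forward(fmap)                 # 2x2 compressed map
grads = maxpool_backward(np.ones_like(pooled), mask) # route unit gradients
print(pooled)  # [[4. 5.] [2. 4.]]
```

Average pooling works analogously, except the backward pass spreads each incoming gradient evenly over the whole window.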
Learn from the lab
We fed images to the AI system through a sequence of convolution and pooling layers: convolutions to extract spatially invariant features; subsampling using the spatial average of the feature maps; a multilayer perceptron (MLP) as the final classifier (fully connected layers); and a sparse connection matrix between layers (weight sharing) to avoid a large computational cost and reduce overfitting.
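A forward-only sketch of such a pipeline, with hypothetical shapes and random weights purely for illustration (8x8 input, two 3x3 filters, average-pool subsampling as described above, and a small MLP classifier over 10 assumed classes):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid cross-correlation with a single shared kernel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def avg_pool(fmap, size=2):
    """Subsample by the spatial average of each size x size window."""
    h, w = fmap.shape
    return fmap[:h//size*size, :w//size*size] \
        .reshape(h//size, size, w//size, size).mean(axis=(1, 3))

def relu(x):
    return np.maximum(0.0, x)

image = rng.standard_normal((8, 8))        # hypothetical grey-scale input
filters = rng.standard_normal((2, 3, 3))   # two shared-weight filters

# Convolution extracts local features with shared weights...
feature_maps = [relu(conv2d(image, k)) for k in filters]  # each 6x6
# ...average pooling subsamples each feature map...
pooled = [avg_pool(f) for f in feature_maps]              # each 3x3
# ...and a fully connected MLP classifies the flattened features.
x = np.concatenate([p.ravel() for p in pooled])           # 18 features
W1, b1 = rng.standard_normal((16, 18)), np.zeros(16)
W2, b2 = rng.standard_normal((10, 16)), np.zeros(10)
hidden = relu(W1 @ x + b1)
logits = W2 @ hidden + b2
probs = np.exp(logits - logits.max())
probs /= probs.sum()                        # softmax over 10 classes
print(probs.shape)                          # (10,)
```

The sparse, shared-weight convolutional stages keep the parameter count small; only the final fully connected layers are dense.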