An out-of-the-box full-network embedding for convolutional neural networks
Abstract
Features extracted through transfer learning can be used to exploit deep learning representations in contexts where there are very few training samples, where computational resources are limited, or where tuning the hyper-parameters needed to train deep neural networks is infeasible. In this paper we propose a novel feature extraction embedding called the full-network embedding. This embedding rests on two main ideas. First, the use of all layers of the network, integrating activations from different levels of information and from different types of layers (i.e., convolutional and fully connected). Second, the contextualisation and leveraging of information through a novel three-valued discretisation method. The former provides extra information that extends the characterisation of the data, while the latter reduces noise and regularises the embedding space; significantly, it also reduces the computational cost of processing the resulting representations. The proposed method is shown to outperform single-layer embeddings on several image classification tasks, while also being more robust to the choice of pre-trained model used as the transfer source.
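The abstract does not fix concrete threshold values or a specific network, so the following is a minimal sketch of the two steps it describes, feature-wise contextualisation (standardisation) followed by three-valued discretisation, written in NumPy. The function name `full_network_embedding` and the thresholds `ft_low` and `ft_high` are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def full_network_embedding(layer_activations, ft_low=-0.25, ft_high=0.15):
    """Sketch of a full-network embedding (assumed interface).

    layer_activations: list of arrays, one per layer, each of shape
    (n_samples, n_features); convolutional feature maps are assumed to
    have been spatially average-pooled beforehand. The thresholds
    ft_low and ft_high are illustrative assumptions.
    """
    # 1. Use all layers: concatenate activations from every layer.
    x = np.concatenate(layer_activations, axis=1)

    # 2. Contextualise: standardise each feature across the dataset,
    #    so each value expresses activity relative to that feature's
    #    usual behaviour.
    mean = x.mean(axis=0)
    std = x.std(axis=0) + 1e-8  # avoid division by zero for dead features
    z = (x - mean) / std

    # 3. Discretise to three values: -1 (atypically low), 0 (typical),
    #    +1 (atypically high). This reduces noise and makes downstream
    #    processing of the representation cheaper.
    emb = np.zeros_like(z, dtype=np.int8)
    emb[z < ft_low] = -1
    emb[z > ft_high] = 1
    return emb

# Usage with random stand-ins for pooled per-layer activations.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(10, 64)), rng.normal(size=(10, 128))]
print(full_network_embedding(layers).shape)  # (10, 192)
```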