Publication
ISCAS 2022
Conference paper
Hyper-parameter Tuning for Progressive Learning and its Application to Network Cyber Security
Abstract
The long-term deployment of data-driven AI technology using artificial neural networks (ANNs) should be scalable and maintainable when new data becomes available. To ensure smooth adaptation, the learning must be cumulative so that the network consumes new data without compromising its inference performance on past data. Such incremental accumulation of learning experience is known as progressive learning. In this paper, we address the open problem of tuning the hyper-parameters of neural networks during progressive learning. A hyper-parameter optimization framework is proposed that selects the best hyper-parameter values on a task-by-task basis. The neural network model adapts to each progressive learning task by adjusting the hyper-parameters under which the neural architecture is incrementally grown. Several hyper-parameter search strategies are explored and compared in support of progressive learning. In contrast to the predominant practice of using imaging datasets in machine learning, we use cybersecurity datasets to illustrate the advantages of the proposed hyper-parameter tuning algorithms.
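To make the task-by-task idea concrete, below is a minimal sketch (not the paper's algorithm) of per-task hyper-parameter search for a progressively grown network: for each new task, candidate hyper-parameters (here an assumed search space over the number of units to add and the learning rate) are sampled, the hidden layer is widened while copying the existing weights, and the best-performing candidate on validation data is kept. The helper names (grow_hidden_layer, tune_for_task), the random-search strategy, and the search space are illustrative assumptions.

```python
# Illustrative sketch only: per-task hyper-parameter search for an
# incrementally grown network. Assumes the model is
# nn.Sequential(nn.Linear(d, h), nn.ReLU(), nn.Linear(h, c)).
import random
import torch
import torch.nn as nn

def grow_hidden_layer(model, extra_units):
    """Return a widened copy of `model`, copying existing weights so that
    learning from previous tasks is preserved."""
    old_hidden, old_out = model[0], model[2]
    new_hidden = nn.Linear(old_hidden.in_features,
                           old_hidden.out_features + extra_units)
    new_out = nn.Linear(new_hidden.out_features, old_out.out_features)
    with torch.no_grad():
        new_hidden.weight[:old_hidden.out_features] = old_hidden.weight
        new_hidden.bias[:old_hidden.out_features] = old_hidden.bias
        new_out.weight[:, :old_hidden.out_features] = old_out.weight
        new_out.bias.copy_(old_out.bias)
    return nn.Sequential(new_hidden, nn.ReLU(), new_out)

def train(model, X, y, lr, epochs=50):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    return model

def accuracy(model, X, y):
    with torch.no_grad():
        return (model(X).argmax(dim=1) == y).float().mean().item()

# Assumed (hypothetical) per-task search space.
SEARCH_SPACE = {"grow_units": [4, 8, 16], "lr": [1e-3, 3e-3, 1e-2]}

def tune_for_task(base_model, X_train, y_train, X_val, y_val, trials=5):
    """Random search over per-task hyper-parameters; returns the best grown model."""
    best_model, best_acc = None, -1.0
    for _ in range(trials):
        hp = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        candidate = grow_hidden_layer(base_model, hp["grow_units"])
        candidate = train(candidate, X_train, y_train, lr=hp["lr"])
        acc = accuracy(candidate, X_val, y_val)
        if acc > best_acc:
            best_model, best_acc = candidate, acc
    return best_model, best_acc
```

Random search is used here only as the simplest stand-in; the paper compares several hyper-parameter search strategies, and any of them could be substituted for the sampling step inside tune_for_task.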