Parallelized Hyperparameter Optimization for Machine Learning Models
Deep learning architectures are becoming prevalent for the identification of malicious activity and malware, such as the use of LSTMs and convolutional architectures for detecting algorithmically generated domains. In these and many other machine learning models, optimization of a model's hyperparameters remains an ad hoc, brute-force endeavor that is laborious and time-consuming. This is particularly painful for complex models with lengthy training times. Here, I describe a parallelized asynchronous hyperparameter optimization platform that enables the efficient exploration of parameter spaces using large clusters of GPUs coordinated by Apache Mesos. The platform supports exhaustive hyperparameter exploration as well as more intelligent optimization strategies, such as those based on Gaussian Process Regression and Particle Swarm Optimization. The utility of this platform will be demonstrated by optimizing and fine-tuning deep architectures for detecting dictionary-based DGAs.
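To make the Particle Swarm Optimization strategy concrete, the sketch below shows a minimal PSO loop over a two-dimensional hyperparameter space (learning rate and dropout are illustrative choices, not taken from the abstract). The `objective` function is a toy stand-in for a model's validation loss; the platform described above would instead evaluate real models asynchronously across GPU workers.

```python
# Minimal, illustrative Particle Swarm Optimization for hyperparameter search.
# Assumptions: a 2-D search space (learning rate, dropout) and a synthetic
# objective with a known optimum at lr=0.01, dropout=0.5. This is a sketch,
# not the platform's actual implementation.
import random


def objective(lr, dropout):
    # Hypothetical "validation loss", minimized at lr=0.01, dropout=0.5.
    return (lr - 0.01) ** 2 + (dropout - 0.5) ** 2


def pso(n_particles=20, n_iters=50, w=0.5, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    bounds = [(1e-4, 0.1), (0.0, 0.9)]  # (lr, dropout) search ranges
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best position
    pbest_val = [objective(*p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                # Standard PSO velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Move the particle, clamped to the search bounds.
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(*pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val


best, loss = pso()
print("best hyperparameters:", best, "loss:", loss)
```

In a parallelized setting, the inner loop's objective evaluations are the expensive model-training runs and can be farmed out to separate GPU workers, with the swarm state updated asynchronously as results return.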