Researchers from Google AI and UC Berkeley have proposed PRIME, a data-driven optimisation approach to architecting hardware accelerators. PRIME uses existing logged data, consisting of accelerator designs and their corresponding performance metrics, to architect new accelerators without any further hardware simulation.
PRIME can be trained on data from prior simulations, on a database of actually fabricated accelerators, and even on a database of infeasible or failed accelerator designs. The approach, which applies to both single-application and multi-application accelerators, improves performance over state-of-the-art simulation-driven methods by about 1.2x-1.5x, while reducing the total required simulation time by 93% and 99%, respectively. PRIME also architects effective accelerators for unseen applications in a zero-shot setting, outperforming simulation-based methods by 1.26x.
PRIME learns a robust prediction model that is not prone to being fooled by adversarial examples: designs that the model mistakenly scores as high-performing but that are actually poor or infeasible, and that a naive optimizer would otherwise find and exploit. One can then simply optimize this model using any standard optimizer to architect accelerators. More importantly, unlike prior methods, PRIME can also utilize existing databases of infeasible accelerators to learn what not to design. This is done by augmenting the supervised training of the learned model with additional loss terms that specifically penalize the value the model assigns to infeasible accelerator designs and to adversarial examples found during training.
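To make this concrete, here is a minimal sketch of how such a conservative surrogate could be trained and then optimised. It assumes a continuous vector encoding of designs and a small MLP surrogate; the names (`Surrogate`, `train_step`, `architect`) and all hyperparameters are illustrative assumptions, not PRIME's actual implementation, which operates over discrete accelerator parameters.

```python
# Hedged sketch of conservative surrogate training in the spirit of PRIME.
# Assumes designs are encoded as continuous vectors; all names/values are illustrative.
import torch
import torch.nn as nn

class Surrogate(nn.Module):
    """Predicts a performance score from an accelerator design encoding."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train_step(model, opt, x, y, x_infeasible,
               alpha=1.0, beta=1.0, ascent_steps=5, ascent_lr=0.1):
    """One update: supervised regression on logged (design, score) pairs,
    plus penalties that push the model's value down on (i) logged infeasible
    designs and (ii) adversarial designs the current model scores too highly."""
    # (ii) find adversarial designs by gradient ascent on the model's prediction
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(ascent_steps):
        grad, = torch.autograd.grad(model(x_adv).sum(), x_adv)
        x_adv = (x_adv + ascent_lr * grad).detach().requires_grad_(True)
    x_adv = x_adv.detach()

    opt.zero_grad()
    mse = ((model(x) - y) ** 2).mean()            # fit the logged data
    pen_infeasible = model(x_infeasible).mean()   # (i) penalize infeasible designs
    pen_adv = model(x_adv).mean()                 # (ii) penalize adversarial designs
    loss = mse + alpha * pen_infeasible + beta * pen_adv
    loss.backward()
    opt.step()
    return loss.item()

def architect(model, dim, steps=500, lr=0.01):
    """After training, run any standard optimizer against the learned model;
    here, plain gradient ascent on the design encoding."""
    x = torch.randn(1, dim, requires_grad=True)
    opt_x = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt_x.zero_grad()
        (-model(x)).sum().backward()  # maximize predicted performance
        opt_x.step()
    return x.detach()
```

In this sketch, the two penalty terms make the learned landscape conservative: the surrogate's predictions are deliberately pessimistic on infeasible and adversarial regions, so the downstream optimizer is less likely to land on designs the model merely hallucinates as good.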
Google AI and UC Berkeley describe the method in the paper "Data-Driven Offline Optimization for Architecting Hardware Accelerators". The researchers argue that training a strong offline optimisation algorithm on logged datasets, even ones full of low-performing designs, can be a highly effective ingredient for at least kickstarting hardware design, rather than throwing out prior data. They hope to apply the approach to hardware-software co-design, which exhibits a large search space but plenty of opportunity for generalisation. The researchers have also released the code for training PRIME and the dataset of accelerators.