Eric Cheng, Daniel Mueller-Gritschneder, et al.
DAC 2019
Microprocessor design teams use a combination of simulation-based and formal verification techniques to validate pre-silicon models prior to 'tape-out' and chip fabrication. Pseudo-random test case generation to 'cover' the architectural space remains the principal means of identifying design bugs; however, such methods target functional bugs only. Detecting and diagnosing timing (performance) bugs at the architectural level is largely an expert job: architects guide the performance team to run manually generated test cases that validate the design from a performance viewpoint. In this paper, we review new approaches to automating the generation of performance test cases, and we show how this can be done within the basic framework of current functional validation and testing of pre-silicon processor models. Three categories of 'reference' specification are used to determine the defect-free pipeline timing behavior associated with generated test cases: (a) axiomatic specifications of intrinsic machine latencies and bandwidths; (b) proven analytical models for simple basic-block and loop test cases; and (c) a stable reference behavioral/functional (pre-RTL) model of the processor under development. We report experimental results from performance validation studies applied to real PowerPC processor development projects.
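As an illustrative sketch only (not taken from the paper), reference category (b) could be realized by comparing a loop test case's simulated cycle count against an analytical lower bound derived from issue bandwidth and critical-path latency. Every function name, parameter, and number below is an assumption made for illustration, not a real PowerPC machine parameter:

```python
# Hypothetical timing-bug check for a simple loop test case.
# The analytical model bounds cycles per iteration by whichever is larger:
# issue-bandwidth pressure (ops / issue width) or the loop-carried
# critical-path latency. All values here are illustrative assumptions.

def analytical_loop_cycles(iterations, ops_per_iter, issue_width, loop_latency):
    """Analytical lower bound on total cycles for a simple loop."""
    per_iter = max(ops_per_iter / issue_width, loop_latency)
    return iterations * per_iter

def timing_ok(simulated_cycles, iterations, ops_per_iter,
              issue_width, loop_latency, slack=1.10):
    """Flag a potential performance bug when the simulated cycle count
    exceeds the analytical bound by more than the slack factor."""
    bound = analytical_loop_cycles(iterations, ops_per_iter,
                                   issue_width, loop_latency)
    return simulated_cycles <= bound * slack

# 100 iterations, 4 ops/iter on a 2-wide machine: bound = 200 cycles.
assert timing_ok(simulated_cycles=210, iterations=100,
                 ops_per_iter=4, issue_width=2, loop_latency=1)       # within slack
assert not timing_ok(simulated_cycles=400, iterations=100,
                     ops_per_iter=4, issue_width=2, loop_latency=1)   # timing bug
```

A real checker would of course draw its bounds from the processor's actual latency/bandwidth specifications rather than hand-picked constants.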
Ramon Bertran, Alper Buyuktosunoglu, et al.
MICRO 2012
Karthik Swaminathan, Ramon Bertran, et al.
DSN-S 2021
Ramon Bertran, Pradip Bose, et al.
ICCD 2017