Approximate computing trades accuracy of computation for savings in execution time and/or energy by leveraging approximation opportunities across the computing stack, including programming languages [14, 34, 38, 40], compilers [7, 26, 42, 43], runtime systems [9, 18, 21], and hardware [17, 19, 32, 41]. Beyond data-parallel and streaming applications [23, 24, 39], researchers have proposed approximation techniques suitable for iterative numerical computations, such as iterative solvers, large-scale numerical models, and sparse matrix calculations [16, 46, 47].

Approximate computing techniques typically introduce inexactness and/or approximation by transforming compute-intensive kernels, which we call approximable blocks (ABs). Furthermore, many approximation techniques expose knobs to calibrate the approximation levels (ALs), which control the error or speedup. For instance, loop perforation [26, 43] skips a fraction of a loop's iterations and exposes this fraction as a knob to control the accuracy/speedup tradeoff.