Article

Extreme Path Delay Estimation of Critical Paths in Within-Die Process Fluctuations Using Multi-Parameter Distributions

Miikka Runolinna, Matthew Turnquist, Jukka Teittinen, Pauliina Ilmonen and Lauri Koskinen
1 Minima Processor, Rantakatu 3, FI-90100 Oulu, Finland
2 Department of Mathematics and Systems Analysis, School of Science, Aalto University, FI-00076 Aalto, Finland
3 Department of Computing, University of Turku, FI-20014 Turun yliopisto, Finland
* Author to whom correspondence should be addressed.
J. Low Power Electron. Appl. 2023, 13(1), 22; https://doi.org/10.3390/jlpea13010022
Submission received: 10 January 2023 / Revised: 9 March 2023 / Accepted: 15 March 2023 / Published: 20 March 2023

Abstract

Two multi-parameter distributions, namely the Pearson type IV and metalog distributions, are discussed and suggested as alternatives to the normal distribution for modelling path delay data that determines the maximum clock frequency (FMAX) of a microprocessor or other digital circuit. These distributions outperform the normal distribution in goodness-of-fit statistics for simulated path delay data derived from a fabricated microcontroller, with the six-term metalog distribution offering the best fit. Furthermore, 99.7% confidence intervals are calculated for some extreme quantiles on each dataset using the previous distributions. Considering the six-term metalog distribution estimates as the golden standard, the relative errors in single paths vary between 4 and 14% for the normal distribution. Finally, the within-die (WID) variation maximum critical path delay distribution for multiple critical paths is derived under the assumption of independence between the paths. Its density function is then used to compute different maximum delays for varying numbers of critical paths, assuming each path has one of the previous distributions with the metalog estimates as the golden standard. For 100 paths, the relative errors are at most 14% for the normal distribution. With 1000 and 10,000 paths, the corresponding errors extend up to 16 and 19%, respectively.

1. Introduction

Inadequately modelling variance in IC design increases design costs through performance loss, time-to-market delay, and yield loss. Variance components are usually lumped into ambient (addressed with design constraints), die-to-die or wafer-to-wafer process variations (corner models with possible spatial components), and within-die (WID) process variations (statistical transistor-level models). Even if the underlying model is normal, the sheer number of transistors leads to high dimensionality, and non-linear transfer functions lead to non-normal distributions. The latter is especially exacerbated in the sub- and near-threshold regions, where energy optimality of digital circuits is achieved [1]. In these cases, transistor and gate delay is exponentially dependent on process and environmental parameters, such as the threshold voltage ($V_t$). The high dimensionality precludes analytical models and, for large circuits, Monte Carlo simulations of all parameters are impossible even with supercomputers.
Ultra-wide dynamic voltage and frequency scaling (UW-DVFS) is a flavour of conventional DVFS that operates down to the minimum-energy point [1]. The exact operating voltage will, however, depend dynamically on the process, temperature, architecture, and data, among other things [2]. Designing for UW-DVFS is similar to conventional DVFS, with the exception of the re-characterization of the operating (and other sign-off) points. Additionally, due to the high susceptibility of performance to variance, liberty variation format (LVF) gate libraries with moments (which include skewness, the third-order statistical moment) are required in the design process. The inclusion of LVF makes re-characterization an extremely time- and CAD-license-consuming design step. If re-characterization is also required at sign-off (due to, e.g., inaccurate operating points), it can cause major delays to the conventional design process.
A better option is to add accurate statistical modelling at higher abstraction levels to evaluate circuits at earlier design phases. Such early estimation has not yet been extensively studied in the literature. The best example is perhaps [3], where a model describing the maximum clock frequency (FMAX) distribution of a non-DVFS microprocessor is derived and compared with wafer data for a 250 nm process. The model uses normal distributions, which are adequate for the 250 nm process. With the normality assumption, it is shown that within-die fluctuations cause the most significant performance degradation and that a channel length deviation of 20% projects approximately the loss of one process generation of performance. For modern processes, and especially near- and sub-threshold operation, transistor delay is a non-linear function of random process variables and this delay is non-Gaussian [4]. Due to the high dimensionality and exponential transfer functions, skewed distributions such as the lognormal [5] are not enough to model the higher-level delay variance, and therefore multi-parameter distributions are required. Accurate early-phase statistical understanding could also be advantageous in design-phase power modelling. In [6], leakage and dynamic energy are co-optimized early in the design phase. Optimization across SS (slow–slow) and FF (fast–fast) corners and SVT, HVT, and ULL gate libraries results in a gate library mix of 14% SVT, 50% HVT, and 36% ULL. Here, understanding the cross-library variance might result in a different library mix and/or lower voltages.
With skewed distributions, extreme quantile (high-sigma) effects will show up in a smaller number of parts, possibly lowering the yield. Without an extremely large number of simulations, it is difficult to measure the effects of extreme quantile variation quickly and accurately. In this article, different methods of more accurately estimating the FMAX distribution are explored via multi-parameter distributions. They allow more precise margin additions or low-voltage effect estimations, for example via confidence intervals: by making sure that the cycle time is beyond the upper confidence bound.
The rest of this article is organized as follows. Section 2 presents the problem statement of the paper, and Section 3 provides the mathematical framework and introduces the different mathematical methods used to generate the final results. Section 4 explains the origins of the data and presents the results along with their discussion. Finally, in Section 5 we close the article with our conclusions.

2. Problem Statement

When dealing with extreme quantiles, we are often faced with two primary challenges. First, we usually have limited data on the process of interest, and generating additional data may be costly or outright impossible. Second, we seldom know the underlying distribution of the data, which means we have to consider several different candidate distributions.
As an example, consider the three different distributions presented in Figure 1. In addition to the normal distribution, we have fitted the Pearson type IV and six-term metalog distributions (see Section 3.1 and Section 3.2) to path delay (signal propagation time from launch FF to capture FF) data from Monte Carlo simulations (see Section 4.1). Furthermore, we have plotted their theoretical $(1-p)$-quantiles corresponding to $p = 1.35 \times 10^{-3}$ and $p = 3.17 \times 10^{-5}$. In the case of the normal distribution, these correspond to the so-called three- and four-sigma quantiles, i.e., quantiles which lie three and four standard deviations above the mean.
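For reference, these two values of p are the upper-tail probabilities of a standard normal distribution at three and four standard deviations, which can be checked with a short SciPy snippet (an illustrative aside, not part of the paper's tooling):

```python
from scipy.stats import norm

# One-sided standard normal tail probabilities at 3 and 4 sigma.
print(norm.sf(3.0))  # approx. 1.35e-3
print(norm.sf(4.0))  # approx. 3.17e-5
```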
The $(1-p)$-quantiles are noticeably smaller for the normal distribution than for the Pearson type IV and six-term metalog distributions. Furthermore, the discrepancy in the point estimates is considerably larger for the quantile corresponding to $p = 3.17 \times 10^{-5}$. In order to assess which distribution offers the best fit to the data, we have to consider different statistics that measure the goodness of fit (see Section 3.4).
However, point estimates of extreme quantiles are rarely useful on their own. This is due to the fact that the estimates depend on the sample used, and if we were to generate multiple samples of the same process we would always end up with different estimates for the statistic. Instead, we are interested in the confidence intervals of the extreme quantiles which give a range of values the unknown statistic is likely contained in. A common method to generate confidence intervals with relatively mild assumptions is bootstrapping (see, e.g., [7]).
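As a minimal sketch of the bootstrap procedure used later in Section 4.3, the snippet below resamples a delay sample with replacement and refits a normal distribution in each resample; the normal fit stands in for any of the candidate distributions, and the synthetic demo data, function name and random seed are illustrative assumptions rather than details from the paper.

```python
import numpy as np
from scipy import stats

def bootstrap_quantile_ci(delays, p=1.35e-3, n_boot=10_000, level=0.997, seed=0):
    """Percentile-bootstrap confidence interval for the (1 - p)-quantile.

    A sketch only: the paper applies the same resampling idea to the
    Pearson type IV and metalog fits; here a normal fit stands in.
    """
    rng = np.random.default_rng(seed)
    n = len(delays)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(delays, size=n, replace=True)
        mu, sigma = stats.norm.fit(resample)          # refit on each resample
        estimates[b] = stats.norm.ppf(1.0 - p, mu, sigma)
    alpha = 1.0 - level
    return np.quantile(estimates, [alpha / 2, 1.0 - alpha / 2])

# Example with synthetic, right-skewed "path delay" data (illustrative only).
demo = np.random.default_rng(1).gamma(shape=20.0, scale=0.3, size=5000)
print(bootstrap_quantile_ci(demo, n_boot=2000))
```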

3. Methods

3.1. The Pearson Distribution Family

The Pearson distribution is a family of continuous distributions originally introduced in a series of articles on biostatistics [8,9,10]. While one- or two-parameter distributions can be fitted to data based on the sample mean and variance, they are often unable to properly characterize skewed data. The Pearson distribution family allows distribution fitting on unimodal data that is potentially skewed or asymmetric via additional shape parameters. This makes the distribution family a compelling choice for fitting to skewed data and estimating its extreme quantiles. For a brief overview of applications of the distribution family in other work, see [11].
The Pearson distribution family consists of three main types, namely, types I, IV and VI, as well as other subtypes which are special cases of the main ones. The Pearson type I and VI distributions are generalizations of the beta and F-distributions, respectively, whereas type IV is not a generalization of any known distribution. However, based on our empirical observations, the type IV distribution usually offers the best fit on the data out of the three main types. We also noticed that it is the least likely distribution to fail to converge when optimizing parameters using non-linear optimization methods.
The density function of the Pearson type IV distribution is defined as [8]
$$p(x) = K \left[ 1 + \left( \frac{x - \lambda}{\theta} \right)^{2} \right]^{-m} e^{-\nu \arctan\left( \frac{x - \lambda}{\theta} \right)}, \qquad (1)$$
where $\theta$, $m$, $\nu$ and $\lambda$ are parameters of the distribution, and $K$ is a normalizing constant. See Appendix A for a detailed overview on the parametrization of the distribution. The parameters $\lambda$ and $\theta$ are also known as the location and scale parameters, respectively, while $m$ and $\nu$ are called the shape parameters.
Maximum likelihood estimation is commonly used to estimate the parameters of a multi-parameter distribution. Maximizing the log-likelihood is often performed with quasi-Newton methods, such as the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm, with a computational complexity of $O(n^2)$, where $n$ is the number of data points.
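SciPy does not provide a Pearson type IV distribution out of the box, so the sketch below evaluates the density (1) with a normalizing constant obtained by numerical quadrature (instead of the closed-form K of Appendix A) and maximizes the log-likelihood with a bounded quasi-Newton optimizer; the starting values and bounds are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
from scipy import integrate, optimize

def pearson4_logpdf(x, lam, theta, m, nu):
    """Log-density of (1); K is obtained by integrating the unnormalized kernel."""
    z = (np.asarray(x, dtype=float) - lam) / theta
    log_kernel = -m * np.log1p(z**2) - nu * np.arctan(z)
    norm_const, _ = integrate.quad(
        lambda t: (1.0 + ((t - lam) / theta) ** 2) ** (-m)
        * np.exp(-nu * np.arctan((t - lam) / theta)),
        -np.inf, np.inf)
    return log_kernel - np.log(norm_const)

def fit_pearson4(data):
    """Maximum likelihood fit; theta > 0 and m > 1/2 are enforced via bounds."""
    data = np.asarray(data, dtype=float)

    def nll(params):
        lam, theta, m, nu = params
        return -np.sum(pearson4_logpdf(data, lam, theta, m, nu))

    start = [np.mean(data), np.std(data), 3.0, 0.0]   # illustrative starting point
    res = optimize.minimize(
        nll, start, method="L-BFGS-B",
        bounds=[(None, None), (1e-9, None), (0.5 + 1e-6, None), (None, None)])
    return res.x  # (lambda, theta, m, nu)
```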

3.2. The Metalog Distribution Family

A more recent class of distributions is the so-called quantile-parameterized distributions, originally proposed for decision analysis in 2011 [12]. What makes them rather exceptional is that, unlike most probability distributions, they are directly parameterized by the data. This eliminates the need for parameter estimation, which usually requires non-linear optimization for more complex distributions.
Introduced in 2016, one particular family of quantile-parameterized distributions is called the metalog distribution, which has virtually unlimited shape flexibility in addition to simple closed-form expressions for density and quantile functions [13]. Due to the aforementioned properties, the metalog distribution is an attractive candidate for modelling potentially skewed distributions and estimating extreme quantiles from simulation data.
The metalog distribution quantile function $M(y)$, expressed as a function of cumulative probability $0 < y < 1$, is defined as follows [13]:
$$M_k(y) = \begin{cases} a_1 + a_2 \ln\frac{y}{1-y}, & k = 2,\\ a_1 + a_2 \ln\frac{y}{1-y} + a_3\left(y - \tfrac{1}{2}\right)\ln\frac{y}{1-y}, & k = 3,\\ a_1 + a_2 \ln\frac{y}{1-y} + a_3\left(y - \tfrac{1}{2}\right)\ln\frac{y}{1-y} + a_4\left(y - \tfrac{1}{2}\right), & k = 4,\\ M_{k-1}(y) + a_k\left(y - \tfrac{1}{2}\right)^{(k-1)/2}, & \text{for odd } k \ge 5,\\ M_{k-1}(y) + a_k\left(y - \tfrac{1}{2}\right)^{(k-2)/2}\ln\frac{y}{1-y}, & \text{for even } k \ge 6, \end{cases} \qquad (2)$$
where $a_i$ are parameters of the distribution. The corresponding density function $m(y)$ for $0 < y < 1$ is then given by [13]
$$m_k(y) = \begin{cases} \left[\frac{a_2}{y(1-y)}\right]^{-1}, & k = 2,\\ \left[\frac{a_2}{y(1-y)} + a_3\left(\frac{y - 1/2}{y(1-y)} + \ln\frac{y}{1-y}\right)\right]^{-1}, & k = 3,\\ \left[\frac{a_2}{y(1-y)} + a_3\left(\frac{y - 1/2}{y(1-y)} + \ln\frac{y}{1-y}\right) + a_4\right]^{-1}, & k = 4,\\ \left[m_{k-1}(y)^{-1} + a_k\,\frac{k-1}{2}\left(y - \tfrac{1}{2}\right)^{(k-3)/2}\right]^{-1}, & \text{for odd } k \ge 5,\\ \left[m_{k-1}(y)^{-1} + a_k\left(\frac{\left(y - 1/2\right)^{(k-2)/2}}{y(1-y)} + \frac{k-2}{2}\left(y - \tfrac{1}{2}\right)^{(k-4)/2}\ln\frac{y}{1-y}\right)\right]^{-1}, & \text{for even } k \ge 6. \end{cases} \qquad (3)$$
The parameters $a_i$ are usually determined by linear least squares from the data [13]. When choosing the number of parameters, one has to be careful not to pick too many parameters to avoid overfitting. Assuming there are $n$ data points, $k$ parameters to be determined and $n > k$, the computational complexity of fitting a metalog distribution to the data is $O(k^2 n)$.
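The following sketch shows the six-term least-squares fit used later in Section 4: the basis functions of (2) for k = 6 are evaluated at empirical probabilities and the coefficients are obtained with NumPy's least-squares solver. The plotting-position convention y_i = (i - 0.5)/n is an assumption of this illustration, not a detail taken from the paper.

```python
import numpy as np

def metalog6_basis(y):
    """Basis functions of the k = 6 metalog quantile function in (2)."""
    y = np.asarray(y, dtype=float)
    L = np.log(y / (1.0 - y))
    c = y - 0.5
    return np.column_stack([np.ones_like(y), L, c * L, c, c**2, c**2 * L])

def fit_metalog6(data):
    """Linear least-squares fit of a six-term metalog to sorted data."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    y = (np.arange(1, n + 1) - 0.5) / n   # empirical probabilities (assumed convention)
    a, *_ = np.linalg.lstsq(metalog6_basis(y), x, rcond=None)
    return a

def metalog6_quantile(a, y):
    """Quantile function M_6(y) for fitted coefficients a."""
    return metalog6_basis(np.atleast_1d(y)) @ a

# Example: the fitted (1 - p)-quantile for p = 3.17e-5.
# a = fit_metalog6(delay_sample); q = metalog6_quantile(a, 1 - 3.17e-5)
```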
Despite its novelty, the metalog distribution has seen different applications in various branches of natural science and engineering. Recent examples of its applications include hydrology and fish biology [13], as well as risk assessment in astronomy [14] and cybersecurity [15].

3.3. Comparison of the Distributions

The normal distribution is the baseline distribution here due to its historical significance and prevalence in other works related to path delay modelling. Its clearest disadvantage is its inability to model asymmetric and heavy-tailed data. Additionally, the support of the normal distribution is the entire real line, even though path delays cannot take negative values. A better two-parameter alternative would be the log-normal distribution due to its support on the strictly positive real numbers as well as its ability to model skewed data.
The Pearson type IV distribution allows distribution fitting to data that is potentially skewed or asymmetric. However, in some cases the data might not lie in the region of the $(\beta_1, \beta_2)$ space covered by the type IV distribution, where $\beta_1$ is the squared skewness and $\beta_2$ the kurtosis (see Appendix A for more information). In these cases, the other Pearson main-type distributions, namely types I and VI, may offer superior results.
The metalog distribution offers virtually unlimited shape flexibility. However, choosing the number of terms for the distribution is not unambiguous, since using too many terms may result in overfitting whereas using too few terms offers inferior results (in terms of goodness of fit). As such, one usually has to make a compromise when choosing the number of terms for the distribution.

3.4. Goodness-of-Fit Statistics

In order to assess the goodness of fit of a given distribution to observed data, we use different statistical tests that quantify the distance between the empirical and theoretical distributions in various ways. In the following, we introduce three goodness-of-fit test statistics with different emphases.
First, let $x = \{x_1, \ldots, x_n\}$ be the set of observations. We define the empirical distribution function $F_n$ as
$$F_n(t) = \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}_{\{x_i \le t\}}, \qquad (4)$$
where $\mathbb{1}_A$ is the indicator function of the event $A$. One of the simplest tests is the Kolmogorov–Smirnov (KS) test, for which the corresponding test statistic $\mathrm{KS}$ is [16,17]
$$\mathrm{KS} = \max_x \left| F_n(x) - F(x) \right|, \qquad (5)$$
where $F_n$ is the empirical distribution function (4) and $F$ the target cumulative distribution function. That is, (5) measures the largest absolute difference between the empirical and target cumulative distribution functions; therefore, a smaller value of the statistic indicates a better fit.
Another alternative to the KS test is the Cramér–von Mises (CM) criterion, originally proposed in [18,19]. The corresponding test statistic CM is computed as [20]
$$\mathrm{CM} = \frac{1}{12n} + \sum_{i=1}^{n} \left[ \frac{2i-1}{2n} - F(x_i) \right]^{2}, \qquad (6)$$
where $x_1, \ldots, x_n$ are the observations sorted in increasing order. In other words, (6) measures an aggregate squared difference between the empirical and target distributions, with a lower value of the statistic corresponding to a better fit.
Another closely related quantity to the CM criterion is the Anderson–Darling (AD) distance [21]. The corresponding test statistic AD is defined as follows [22]:
$$\mathrm{AD} = -n - \sum_{i=1}^{n} \frac{2i-1}{n} \left[ \log F(x_i) + \log\left(1 - F(x_{n+1-i})\right) \right], \qquad (7)$$
where $x_1, \ldots, x_n$ are the observations sorted in ascending order. Compared to (6), (7) places more emphasis on observations in the distribution tails. Once again, a smaller value of the statistic indicates a better fit of the distribution.
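For concreteness, the three statistics (5)–(7) can be evaluated directly from their definitions once a candidate distribution has been fitted. In the sketch below, the fitted CDF is passed in as a callable (for example scipy.stats.norm(mu, sigma).cdf); that interface is a choice of this illustration.

```python
import numpy as np

def gof_statistics(data, cdf):
    """KS (5), Cramer-von Mises (6) and Anderson-Darling (7) statistics.

    `cdf` is the fitted target CDF as a callable, e.g. scipy.stats.norm(m, s).cdf.
    """
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    F = cdf(x)
    i = np.arange(1, n + 1)
    # KS: largest gap between the empirical step CDF and the target CDF.
    ks = np.max(np.maximum(i / n - F, F - (i - 1) / n))
    # Cramer-von Mises criterion.
    cm = 1.0 / (12 * n) + np.sum(((2 * i - 1) / (2 * n) - F) ** 2)
    # Anderson-Darling distance (heavier weight on the tails).
    ad = -n - np.sum((2 * i - 1) / n * (np.log(F) + np.log(1.0 - F[::-1])))
    return ks, cm, ad
```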

3.5. WID Maximum Critical Path Delay Distribution

In [3], the critical path delay densities and WID parameter fluctuations are assumed to be normal. Here, we relax this assumption by allowing the WID fluctuation on a critical path to be modelled by an arbitrary density $f_{\mathrm{WID}}$ with a corresponding cumulative distribution $F_{\mathrm{WID}}$. However, a chip contains several critical paths which may or may not be correlated with one another. Assuming the $N$ critical paths are independent, i.e., they have zero correlation, the probability that each path satisfies a specified maximum delay $t_{\max}$ is [3]
$$P_{\mathrm{WID},N}(t \le t_{\max}) = F_{\mathrm{WID},N}(t_{\max}) = F_{\mathrm{WID}}(t_{\max})^{N}, \qquad (8)$$
where $F_{\mathrm{WID},N}$ is the WID cumulative distribution of the chip. Consequently, the WID maximum critical path delay density $f_{\mathrm{WID},N}$ of the chip is calculated as [3]
$$f_{\mathrm{WID},N}(t_{\max}) = N\, f_{\mathrm{WID}}(t_{\max})\, F_{\mathrm{WID}}(t_{\max})^{N-1}. \qquad (9)$$
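A small sketch of how (8) and (9) translate into code for a fitted single-path distribution; the use of SciPy's frozen-distribution interface in the commented example is an illustrative assumption.

```python
def wid_max_cdf(F_wid, N):
    """F_WID,N(t) = F_WID(t)**N for N independent critical paths, cf. (8)."""
    return lambda t: F_wid(t) ** N

def wid_max_pdf(f_wid, F_wid, N):
    """f_WID,N(t) = N * f_WID(t) * F_WID(t)**(N - 1), cf. (9)."""
    return lambda t: N * f_wid(t) * F_wid(t) ** (N - 1)

# Example with a normal single-path delay (illustrative parameters only):
# from scipy import stats
# path = stats.norm(loc=5.0, scale=0.4)
# pdf_100 = wid_max_pdf(path.pdf, path.cdf, N=100)
```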

4. Simulations and Results

4.1. Origins of the Data

The simulation data is derived from a commercial microcontroller, similar to [23] in terms of performance and the number of gates. This UW-DVFS-capable microcontroller is a single-core embedded-class RISC core capable of near-threshold operation and has been fabricated in a 22 nm process. Post-layout statistical timing analysis (STA) data was used to choose path groups, each of which was then simulated with extensive Monte Carlo simulations. A histogram of the normalized average path delays (signal propagation time from launch FF to capture FF) of all the paths is shown in Figure 2. The path delays are grouped as follows:
  • Dataset 1 (DS 1) = Hold SS 0.51 V RC worst ;
  • Dataset 2 (DS 2) = Setup win SS 0.51 V RC worst ;
  • Dataset 3 (DS 3) = Setup critical path SS 0.51 V RC worst .
Even though the chip operates around 0.4 V nominally, the corner SS 0.51 V RC worst (worst resistance/capacitance) was chosen as it exhibited the most skewed distribution. DS 2 is critical for timing–event systems, such as [23], where it is important that all critical paths are monitored by the timing–event system. Therefore, the probability of non-monitored paths violating timing has to be known.
Figure 3 illustrates the path correlation of $10^4$ setup paths from STA in a heat map. Observe that the correlation matrix is not symmetrical. This is due to the fact that the path correlation $\rho_{A,B} \in [0,1]$ between paths $A$ and $B$ is calculated as
$$\rho_{A,B} = \frac{\#\{\text{common instances between } A \text{ and } B\}}{\#\{\text{instances in } A\}}.$$
As can be seen from Figure 3, few paths exhibit any path correlation. In total, only 14.8% of setup path pairs have non-zero correlation (0.56% for hold), that is, one or more common components with each other. Most of these have a common start point, and therefore all the paths with a common start point and large correlation (majority of shared gates) can be represented by a single distribution. As such, a reasonable approximation for critical path delay density can be obtained by assuming independence between the paths, which allows the usage of (9).
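For completeness, the correlation measure defined above can be computed from the per-path instance lists reported by STA; representing each path as a set of instance names is an assumption of this sketch.

```python
def path_correlation(instances_a, instances_b):
    """Asymmetric path correlation between paths A and B, per the definition above:
    the number of shared instances divided by the number of instances in A."""
    set_a = set(instances_a)
    common = set_a & set(instances_b)
    return len(common) / len(set_a)

# Example with hypothetical instance names:
# path_correlation({"U1", "U2", "U7"}, {"U2", "U9"})  # -> 1/3
```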

4.2. Goodness-of-Fit Statistics of the Data

In the following, we fit different probability distributions to the previously chosen datasets: the normal, the Pearson type IV (1) (Pearson IV) and the six-term metalog (3) (metalog 6). The parameters of the former two are estimated using the maximum likelihood method; the metalog distribution is directly parameterized by the data via linear least squares [13]. The goodness-of-fit statistics (5)–(7) for each distribution and dataset are tabulated in Table 1.
The results suggest that the six-term metalog distribution offers the best fit among the candidates for each dataset, followed by the Pearson type IV distribution. The choice of six terms for the metalog distribution is based on obtaining the best possible fit while simultaneously avoiding overfitting. We observed that in some instances having more than six terms in the metalog distribution offers a slightly better fit, while fewer terms always yielded inferior results. However, graphical inspection revealed that having more than six terms sometimes introduces bi-modality to the distribution, a sign of the overfitting we wish to avoid.
While the differences in the KS test statistic values in Table 1 are somewhat negligible, the CM and AD test statistics are noticeably larger for the normal distribution than for the Pearson type IV and metalog distributions. Furthermore, the results imply the normal distribution fits rather poorly to the tails of the data, as indicated by the relatively large AD test statistic values.

4.3. Singular Path Delay Confidence Intervals

Next, we calculate the 99.7% confidence intervals of the $(1-p)$-quantiles for some values of $p$ for DS 2 and DS 3, which consist of setup paths. Since DS 1 consists of hold paths, we compute the same confidence intervals for the $p$-quantiles instead. The confidence intervals are calculated with bootstrapping by resampling the original dataset with replacement $10^4$ times, yielding a new estimate for the quantiles in each resample. We consider the same three candidate distributions from earlier: normal, Pearson type IV and six-term metalog. The results are displayed in Table 2.
Previously, we concluded that the six-term metalog distribution appears to offer the best fit on our data. Therefore, we consider its estimates as the 'golden standard' with which to compare other results and compute their relative errors.
The lower bounds for the quantiles in Table 2 appear to be similar across all datasets, distributions and quantiles; the relative error with respect to the six-term metalog estimates is usually only a few percent at most. In contrast, the discrepancy in the upper bound values is noticeably larger. For $p = 1.35 \times 10^{-3}$, the relative errors of the normal distribution with respect to the six-term metalog estimates range between 4 and 8%. When $p = 3.17 \times 10^{-5}$, the corresponding errors vary from 10 to over 14%.
As for the Pearson type IV distribution, the relative errors of the upper bounds with respect to the six-term metalog estimates vary between 1 and 5% for $p = 1.35 \times 10^{-3}$. When $p = 3.17 \times 10^{-5}$, the corresponding errors are less than 2% for DS 1 and DS 3, while the error reaches over 19% for DS 2. Even though the AD statistic value for the Pearson type IV distribution is only around twice the corresponding six-term metalog statistic for DS 2 in Table 1, it is entirely possible that the Pearson distribution does not adequately fit the distribution tails when extrapolating far beyond the range of the data.

4.4. Multiple Critical Path Delay Distributions

As remarked earlier, the paths exhibit very little correlation, which allows us to use (9) to approximate the WID maximum critical path delay density for multiple critical paths. By assuming a single critical path has a normal, Pearson type IV or six-term metalog distribution, we can examine the effect of the number of critical paths N on the shape of the path delay density via (9).
In Figure 4a–c, we have plotted the maximum critical path delay densities for different $N$ using the three aforementioned distributions in DS 3. For the normal distributions, the variance decreases and the mean increases with $N$. However, increasing $N$ from 1 to 10 has a greater impact on the shape of the distribution than increasing $N$ from $10^3$ to $10^4$, as remarked in [3]. When $N$ is larger than 1, the distribution is no longer symmetrical.
A similar decrease in variance and increase in mean with N is also observable for the Pearson type IV and six-term metalog distributions. It is worth noting, however, that the mean increases more and the variance decreases less for the two distributions than for the normal distributions. Furthermore, the normal distributions are less skewed than the other distributions.
Finally, we calculate the values of $t_{\max}$ obtained when the probability in (8) is fixed, for $N$ independent critical paths. This is performed by numerically integrating the density in (9), coupled with a bisection method to find the correct $t_{\max}$. Once again, we assume a single critical path has a normal, Pearson type IV or six-term metalog distribution, and the metalog distribution estimates are considered to be the golden standard. The results are tabulated in Table 3, Table 4 and Table 5.
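A sketch of the numerical procedure just described: the density (9) is integrated numerically and a bisection search adjusts t_max until the fixed probability is reached. The bracketing interval, tolerance and the frozen-distribution example are illustrative assumptions.

```python
import numpy as np
from scipy import integrate

def find_t_max(f_wid, F_wid, N, prob, lo, hi, tol=1e-6):
    """Bisection for t_max such that the integral of (9) up to t_max equals `prob`.

    `lo` and `hi` must bracket the solution; `prob` would be 1 - p for setup
    paths (the upper tail), with the lower tail searched instead for hold paths.
    """
    def cdf_max(t):
        # Integrate the maximum-delay density (9) from far below the bracket up to t.
        val, _ = integrate.quad(
            lambda s: N * f_wid(s) * F_wid(s) ** (N - 1), -np.inf, t)
        return val

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf_max(mid) < prob:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example (illustrative): with a frozen SciPy distribution `path`,
# t = find_t_max(path.pdf, path.cdf, N=100, prob=1 - 1.35e-3, lo=0.0, hi=20.0)
```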
With only $N = 10^2$ critical paths, the relative errors of the normal distribution with respect to the six-term metalog estimates are at most 10% when $p = 1.35 \times 10^{-3}$. For $p = 3.17 \times 10^{-5}$, the corresponding relative errors reach 14%. When $N = 10^3$, the relative errors for the normal distribution range from 3 to over 12% for $p = 1.35 \times 10^{-3}$; when $p = 3.17 \times 10^{-5}$, the errors reach 16%. For $N = 10^4$, the corresponding errors are between 8 and 15% for $p = 1.35 \times 10^{-3}$, and between 6 and 19% for $p = 3.17 \times 10^{-5}$. It is worth noting that the relative errors are clearly higher for DS 2 and 3 than for DS 1. This is likely due to the fact that the former two sets contain paths with more atomic components than the paths in the latter.
For the Pearson type IV distribution, the relative errors with respect to the six-term metalog estimates are less than one percent for DS 1 and 3 with any $N$ or $p$. However, the same errors are noticeably higher for DS 2, as we already noticed for the singular path confidence intervals earlier. Due to the dependence on the number of critical paths $N$ in (9), the relative errors for the Pearson distribution in DS 2 increase with $N$.

5. Conclusions

Two multi-parameter distributions, namely the Pearson type IV and metalog distributions, are discussed and suggested as alternatives to the traditional normal distribution for modelling path delay data that determines the maximum clock frequency (FMAX) of a microprocessor or other digital circuit. These distributions are assessed and compared with different goodness-of-fit statistics for simulated path delay data derived from a fabricated microcontroller. The Pearson type IV and metalog distributions outperform the normal distribution for each statistic, with the six-term metalog distribution offering the best fit. Furthermore, 99.7% confidence intervals are calculated for some extreme quantiles on the datasets for each distribution. Considering the six-term metalog distribution estimates as the golden standard, the relative errors in single paths vary between 4 and 14% for the normal distribution. Finally, the within-die (WID) variation maximum critical path delay distribution for multiple critical paths is derived under the assumption of independence between the paths. Its density function is then used to compute different maximum delays $t_{\max}$ for varying numbers of critical paths $N$, assuming each path has one of the previous distributions, with the metalog estimates as the golden standard. For $N = 10^2$ paths, the relative errors are at most 14% for the normal distribution. With $N = 10^3$ and $N = 10^4$, the corresponding errors reach 16 and 19%, respectively. The errors are clearly higher for longer paths that contain more atomic components than for shorter paths with fewer components.
While this work is based on industry-standard Monte Carlo simulations, the underlying data comes from a functional integrated circuit [23]; without it, the work could not have been conducted (or, if based on simulations of a non-fabricated design, its validity would be severely undermined). Testing a significant number of dies to prove these concepts would be unfeasible: producing a comparable amount of results via fabrication would require hundreds or thousands of fabricated chips in split lots (wafers tuned to the worst-case corners). Such costs are prohibitive for academic institutions and small- and mid-sized companies, and remain within reach only of the largest companies and foundries.
In future work, other multi-parameter distributions which allow the study of extreme quantiles are worth exploring. For example, the Box–Cox elliptical distributions are a novel class of distributions that allow the modelling of marginally skewed and potentially heavy-tailed data [24]. In addition, parametric quantile regression models such as [25] offer an alternative approach to quantile estimation.

Author Contributions

Conceptualization, M.R. and L.K.; methodology, M.R., P.I. and L.K.; software, M.R., M.T. and J.T.; validation, P.I. and L.K.; formal analysis, M.R. and P.I.; investigation, M.R.; resources, M.T., J.T. and L.K.; data curation, M.T. and J.T.; writing—original draft preparation, M.R., P.I. and L.K.; writing—review and editing, M.R., M.T., J.T., P.I. and L.K.; visualization, M.R.; supervision, P.I. and L.K.; project administration, L.K.; funding acquisition, P.I. and L.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Business Finland RnD4 under grant number 9228/31/2019, and the Academy of Finland, the Centre of Excellence in Randomness and Structure, under decision number 346308.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Detailed design methodology and used design parameters are presented in the article. No additional data sharing is applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The Pearson Type IV Distribution

Let $X$ be a random variable with mean $\mu$. Denote
$$\beta_1 = \frac{\left[E(X-\mu)^3\right]^2}{\left[E(X-\mu)^2\right]^3} = \left(\frac{\mu_3}{\sigma^3}\right)^{2}, \qquad \beta_2 = \frac{E(X-\mu)^4}{\left[E(X-\mu)^2\right]^2} = \frac{\mu_4}{\sigma^4},$$
where $\mu_k$ is the $k$th central moment and $\sigma$ the standard deviation of $X$. The quantities $\beta_1$ and $\beta_2$ are more commonly known as the squared skewness (or square of the third standardized moment) and the traditional kurtosis (or the fourth standardized moment), respectively. Skewness is a measure of asymmetry of the distribution, whereas kurtosis measures the tailedness of the distribution.
Using the previous notation, a Pearson density function $p$ is defined to be any solution to the differential equation [8]
$$\frac{p'(x)}{p(x)} = -\frac{x + a}{c_1 + c_2 x + c_3 x^2}, \qquad \mathrm{(A1)}$$
where
$$a = c_2 = \frac{\sqrt{\beta_1}\,(\beta_2 + 3)\sqrt{\mu_2}}{10\beta_2 - 12\beta_1 - 18}, \qquad c_1 = \frac{(4\beta_2 - 3\beta_1)\,\mu_2}{10\beta_2 - 12\beta_1 - 18}, \qquad c_3 = \frac{2\beta_2 - 3\beta_1 - 6}{10\beta_2 - 12\beta_1 - 18}.$$
The different types of the Pearson distribution family are split into two cases, which can be distinguished by the sign of the discriminant of the quadratic function
$$Q(x) = c_1 + c_2 x + c_3 x^2 \qquad \mathrm{(A2)}$$
found in (A1). If the discriminant is negative, then the roots of (A2) are imaginary. In this case, we obtain the Pearson type IV distribution, and the solution to (A1) has the form shown in (1). The parameters are defined as follows [8]:
$$\theta = \frac{\sqrt{4 c_3 c_1 - c_2^2}}{2 c_3}, \qquad m = \frac{1}{2 c_3}, \qquad \nu = \frac{2 a c_3 - c_2}{2 \theta c_3^2}, \qquad \lambda = \mu + \frac{\theta \nu}{2(m-1)}.$$
Furthermore, the constant $K$ can be computed as [26]
$$K = \frac{\left| \Gamma\!\left(m + i\tfrac{1}{2}\nu\right) / \Gamma(m) \right|^{2}}{\theta\, B\!\left(m - \tfrac{1}{2}, \tfrac{1}{2}\right)},$$
where $\Gamma(\cdot)$ is the gamma function and $B(\cdot,\cdot)$ the beta function. In order for the density function to be proper, we further require that $\theta > 0$ and $m > \tfrac{1}{2}$.
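To make the parametrization above concrete, the sketch below computes (θ, m, ν, λ) and K from sample moments using the formulas in this appendix and SciPy's complex log-gamma; it is a method-of-moments illustration under the stated type IV conditions (negative discriminant, m > 1/2), not the maximum likelihood procedure used in the body of the paper.

```python
import numpy as np
from scipy import special, stats

def pearson4_params_from_moments(data):
    """Method-of-moments Pearson type IV parameters per (A1)-(A3) above."""
    x = np.asarray(data, dtype=float)
    mu, mu2 = np.mean(x), np.var(x)
    beta1 = stats.skew(x) ** 2                 # squared skewness
    beta2 = stats.kurtosis(x, fisher=False)    # traditional (non-excess) kurtosis
    denom = 10 * beta2 - 12 * beta1 - 18
    a = c2 = np.sqrt(beta1) * (beta2 + 3) * np.sqrt(mu2) / denom
    c1 = (4 * beta2 - 3 * beta1) * mu2 / denom
    c3 = (2 * beta2 - 3 * beta1 - 6) / denom
    theta = np.sqrt(4 * c3 * c1 - c2 ** 2) / (2 * c3)   # requires a negative discriminant
    m = 1 / (2 * c3)
    nu = (2 * a * c3 - c2) / (2 * theta * c3 ** 2)
    lam = mu + theta * nu / (2 * (m - 1))
    # Normalizing constant K via the complex gamma ratio.
    log_ratio = 2 * (special.loggamma(m + 0.5j * nu).real - special.loggamma(m))
    K = np.exp(log_ratio) / (theta * special.beta(m - 0.5, 0.5))
    return dict(theta=theta, m=m, nu=nu, lam=lam, K=K)
```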

References

1. Turnquist, M.; Hiienkari, M.; Mäkipää, J.; Jevtic, R.; Pohjalainen, E.; Kallio, T.; Koskinen, L. Fully integrated DC-DC converter and a 0.4 V 32-bit CPU with timing-error prevention supplied from a prototype 1.55 V Li-ion battery. In Proceedings of the 2015 Symposium on VLSI Circuits (VLSI Circuits), Kyoto, Japan, 17–19 June 2015; pp. C320–C321.
2. Koskinen, L.; Hiienkari, M.; Mäkipää, J.; Turnquist, M. Implementing minimum-energy-point systems with adaptive logic. IEEE Trans. Very Large Scale Integr. VLSI Syst. 2015, 24, 1247–1256.
3. Bowman, K.A.; Duvall, S.G.; Meindl, J.D. Impact of die-to-die and within-die parameter fluctuations on the maximum clock frequency distribution for gigascale integration. IEEE J. Solid-State Circuits 2002, 37, 183–190.
4. Rithe, R.; Chou, S.; Gu, J.; Wang, A.; Datla, S.; Gammie, G.; Buss, D.; Chandrakasan, A. The effect of random dopant fluctuations on logic timing at low voltage. IEEE Trans. Very Large Scale Integr. VLSI Syst. 2011, 20, 911–924.
5. Hand, D.; Huang, H.H.; Cheng, B.; Zhang, Y.; Moreira, M.T.; Breuer, M.; Calazans, N.L.V.; Beerel, P.A. Performance optimization and analysis of Blade designs under delay variability. In Proceedings of the 2015 21st IEEE International Symposium on Asynchronous Circuits and Systems, Mountain View, CA, USA, 4–6 May 2015; pp. 61–68.
6. Salvador, R.; Sanchez, A.; Fan, X.; Gemmeke, T. A Cortex-M3 based MCU featuring AVS with 34 nW static power, 15.3 pJ/inst. active energy, and 16% power variation across process and temperature. In Proceedings of the ESSCIRC 2018—IEEE 44th European Solid State Circuits Conference (ESSCIRC), Dresden, Germany, 18 October 2018; pp. 278–281.
7. Efron, B.; Tibshirani, R.J. An Introduction to the Bootstrap; Chapman and Hall/CRC: New York, NY, USA, 1994.
8. Pearson, K. Contributions to the mathematical theory of evolution, II. Skew variation in homogeneous material. Philos. Trans. R. Soc. A 1895, 186, 343–414.
9. Pearson, K. Mathematical contributions to the theory of evolution, X. Supplement to a memoir on skew variation. Philos. Trans. R. Soc. A 1901, 197, 443–459.
10. Pearson, K. Mathematical contributions to the theory of evolution, XIX. Second supplement to a memoir on skew variation. Philos. Trans. R. Soc. A 1916, 216, 429–457.
11. Lahcene, B. On Pearson families of distributions and its applications. Afr. J. Math. Comput. Sci. Res. 2013, 6, 108–117.
12. Keelin, T.W.; Powley, B.W. Quantile-parameterized distributions. Decis. Anal. 2011, 8, 206–219.
13. Keelin, T.W. The metalog distributions. Decis. Anal. 2016, 13, 243–277.
14. Reinhardt, J.C.; Chen, X.; Liu, W.; Manchev, P.; Paté-Cornell, M.E. Asteroid risk assessment: A probabilistic approach. Risk Anal. 2016, 36, 244–261.
15. Wang, J.; Neil, M.; Fenton, N. A Bayesian network approach for cybersecurity risk assessment implementing and extending the FAIR model. Comput. Secur. 2020, 89, 101659.
16. Kolmogorov, A.N. Sulla determinazione empirica di una legge di distribuzione. Inst. Ital. Attuari. Giorn. 1933, 4, 83–91.
17. Smirnov, N. Table for estimating the goodness of fit of empirical distributions. Ann. Math. Stat. 1948, 19, 279–281.
18. Cramér, H. On the composition of elementary errors. Scand. Actuar. J. 1928, 1928, 13–74.
19. von Mises, R.E. Wahrscheinlichkeit, Statistik und Wahrheit; Springer: Berlin, Germany, 1928.
20. Anderson, T.W. On the distribution of the two-sample Cramér–von Mises criterion. Ann. Math. Stat. 1962, 33, 1148–1159.
21. Anderson, T.W.; Darling, D.A. Asymptotic theory of certain "goodness of fit" criteria based on stochastic processes. Ann. Math. Stat. 1952, 23, 193–212.
22. Anderson, T.W.; Darling, D.A. A test of goodness of fit. J. Am. Stat. Assoc. 1954, 49, 765–769.
23. Hiienkari, M.; Gupta, N.; Teittinen, J.; Simonsson, J.; Turnquist, M.; Eriksson, J.; Anttila, R.; Myllynen, O.; Rämäkkö, H.; Mäkikyrö, S.; et al. A 0.4–0.9 V, 2.87 pJ/cycle near-threshold ARM Cortex-M3 CPU with in situ monitoring and adaptive-logic scan. In Proceedings of the 2020 IEEE Symposium in Low-Power and High-Speed Chips (COOL CHIPS), Kokubunji, Japan, 15–17 April 2020; pp. 1–3.
24. Morán-Vásquez, R.A.; Ferrari, S.L.P. Box–Cox elliptical distributions with application. Metrika 2019, 82, 547–571.
25. Mazucheli, J.; Alves, B.; Menezes, A.F.B.; Leiva, V. An overview on parametric quantile regression models and their computational implementation with applications to biomedical problems including COVID-19 data. Comput. Methods Programs Biomed. 2022, 221, 106816.
26. Nagahara, Y. The PDF and CF of Pearson type IV distributions and the ML estimation of the parameters. Stat. Probab. Lett. 1999, 43, 251–264.
Figure 1. Probability distributions fitted to simulated path delay data and their respective $(1-p)$-quantiles.
Figure 2. Histogram of the normalized average path delays of $5 \times 10^4$ setup paths from STA.
Figure 3. Heat map of the path correlation of $10^4$ setup paths from STA showing large sparsity and therefore small correlation between the paths.
Figure 4. Maximum critical path delay densities for different values of N in DS 3, when each path has one of the three distributions. (a) Normal distribution. (b) Pearson type IV distribution. (c) Six-term metalog distribution.
Table 1. Goodness-of-fit statistics for DS 1–3.

        Distribution   KS      CM      AD
DS 1    Normal         0.039   0.585   4.395
        Pearson IV     0.027   0.096   0.586
        Metalog 6      0.020   0.055   0.460
DS 2    Normal         0.061   0.992   6.016
        Pearson IV     0.016   0.028   0.198
        Metalog 6      0.011   0.014   0.102
DS 3    Normal         0.022   0.107   0.918
        Pearson IV     0.017   0.051   0.370
        Metalog 6      0.016   0.025   0.211
Table 2. 99.7% confidence intervals of the $(1-p)$-quantiles ($p$-quantiles) for DS 2–3 (DS 1), in nanoseconds.

        Distribution   p = 1.35 × 10^-3   p = 3.17 × 10^-5
DS 1    Normal         (5.77, 6.06)       (5.04, 5.43)
        Pearson IV     (6.02, 6.52)       (5.04, 6.23)
        Metalog 6      (5.90, 6.61)       (4.71, 6.36)
DS 2    Normal         (77.16, 79.48)     (81.07, 84.01)
        Pearson IV     (80.48, 89.28)     (87.63, 115.41)
        Metalog 6      (78.62, 85.13)     (82.96, 96.79)
DS 3    Normal         (91.85, 93.57)     (94.91, 97.13)
        Pearson IV     (92.37, 97.15)     (95.82, 107.94)
        Metalog 6      (92.43, 98.09)     (96.12, 108.12)
Table 3. The maximum (minimum) path delays of $N = 10^2$ independent critical paths corresponding to the $(1-p)$-quantile ($p$-quantile) for DS 2–3 (DS 1), in nanoseconds.

        Distribution   p = 1.35 × 10^-3   p = 3.17 × 10^-5
DS 1    Normal         8.98               8.82
        Pearson IV     9.04               8.83
        Metalog 6      9.00               8.81
DS 2    Normal         83.41              86.72
        Pearson IV     100.77             118.30
        Metalog 6      92.60              101.04
DS 3    Normal         96.65              99.24
        Pearson IV     101.99             108.37
        Metalog 6      103.28             109.87
Table 4. The maximum (minimum) path delays of $N = 10^3$ independent critical paths corresponding to the $(1-p)$-quantile ($p$-quantile) for DS 2–3 (DS 1), in nanoseconds.

        Distribution   p = 1.35 × 10^-3   p = 3.17 × 10^-5
DS 1    Normal         9.63               9.52
        Pearson IV     10.04              9.85
        Metalog 6      9.97               9.78
DS 2    Normal         85.50              88.53
        Pearson IV     111.00             131.49
        Metalog 6      97.78              106.17
DS 3    Normal         98.29              100.65
        Pearson IV     105.84             112.60
        Metalog 6      107.32             113.87
Table 5. The maximum (minimum) path delays of $N = 10^4$ independent critical paths corresponding to the $(1-p)$-quantile ($p$-quantile) for DS 2–3 (DS 1), in nanoseconds.

        Distribution   p = 1.35 × 10^-3   p = 3.17 × 10^-5
DS 1    Normal         10.12              10.03
        Pearson IV     11.05              10.85
        Metalog 6      10.91              10.73
DS 2    Normal         87.40              90.21
        Pearson IV     122.94             147.00
        Metalog 6      102.96             110.87
DS 3    Normal         99.77              101.96
        Pearson IV     109.90             117.11
        Metalog 6      111.36             117.55
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
