1. Introduction
Achieving the full observability and autonomy of the power system may seem like a pipe dream to many, primarily due to the large requisite financial investment, as well as the lack of sufficient technology. Device computational capacity, communication network limitations, algorithm complexity, and inaccurate measurements are just a few technological barriers preventing an immediate overhaul of the entire measurement apparatus of the power grid.
In addition to these known obstacles are those of the unknown. The ever-increasing percentage of renewable energy sources added to transmission and distribution systems is contributing to never-before-seen phenomena, some of which may be encoded in the production of harmonics superimposed upon the 50 and 60 Hz waves we are so accustomed to working with. In an ideal, fully observable power grid, the various interoperating systems (i.e., metering, protection, control) work together seamlessly to achieve a very strict set of outcomes. The proper operation of these systems requires accurate and precise measurements.
In the Texas Interconnection, a failed surge protector on a combustion turbine connected to a step-up transformer led to a fault during start-up for testing in May of 2021 [1]. The associated circuit breaker tripped within three cycles, clearing the fault and restoring the system to normal operation after an estimated loss of 192 MW. In addition to these losses, a number of solar photovoltaic (PV) and wind plants saw reductions in active power production during the fault; however, it was determined that this loss of renewable generation was not caused by the fault itself. In the North American Electric Reliability Corporation (NERC) September 2021 report on this disturbance (dubbed the “Odessa” disturbance), it was found that this simultaneous loss of renewable generation during the fault was caused by “inverter-level or feeder-level tripping or control system behavior within the resources”.
The aforementioned situation is just one of many incidents in which inadequate knowledge of a system led to undesirable, even catastrophic, events. Such scenarios ideally could be reduced or even eliminated through the meticulous design and implementation of a measurement apparatus capable of recognizing, acting on, and even mitigating such abnormalities. This, of course, is a scenario that currently exists only in an ideal world, and we are many years away from achieving such lofty goals.
However, the advancement of power system observability has seen a dramatic shift in recent decades through the deployment of phasor measurement units (PMUs) and the synchrophasors they produce. PMUs provide the advantage of directly measuring the voltage magnitude, phase angle, frequency, and rate of change of frequency (ROCOF), quantities that would otherwise be treated as latent variables by other measurement systems, requiring additional real-time-capable signal processing to extract. In spite of these developments, PMUs are typically only capable of measuring fundamental quantities; that is, the information contained in frequencies beyond the 50/60 Hz fundamental is lost.
Much work has been carried out on the analysis of errors resulting from PMU measurement chains. In [2], the Gaussian assumption for PMU errors is re-assessed and corrected using Gaussian mixture modeling (GMM), a technique for describing distributions composed of more than one Gaussian “mode”. The flaw in the Gaussian assumption is further examined in [3]. The authors in [4] explore the various ways in which PMU errors affect different applications such as power system disturbance location, oscillation detection, island detection, and dynamic line rating. Analysis of PMU errors for state estimation is also a burgeoning field of study [5,6,7,8].
There have been works published on the estimation of harmonic phasors of the form A_h∠φ_h, in which the harmonic-h amplitude A_h, phase φ_h, frequency f_h, and sometimes ROCOF are the variables of interest [9,10,11,12,13,14,15,16,17]. In [18,19], harmonic phasors are modeled as complex exponential functions and solved via a least-squares approach applied to sampled frequency-domain models of the harmonic phasors. However, as the sampling rate increases, the computational burden increases dramatically, to the point of being unable to perform the required matrix inversions on an 8-core machine.
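As a concrete illustration of the least-squares idea, the following sketch fits harmonic amplitudes and phases directly in the time domain. It is a simplified stand-in for the frequency-domain formulations cited above, not a reproduction of them, and all function and variable names are hypothetical.

```python
import numpy as np

def estimate_harmonic_phasors(x, fs, f0, orders):
    """Least-squares fit of harmonic amplitudes/phases to a sampled record.
    A time-domain simplification; names are illustrative, not from the paper."""
    t = np.arange(len(x)) / fs
    cols = []
    for h in orders:
        w = 2 * np.pi * h * f0
        cols.append(np.cos(w * t))   # in-phase basis for harmonic h
        cols.append(np.sin(w * t))   # quadrature basis for harmonic h
    H = np.column_stack(cols)
    theta, *_ = np.linalg.lstsq(H, x, rcond=None)
    phasors = {}
    for i, h in enumerate(orders):
        c, s = theta[2 * i], theta[2 * i + 1]
        # A*cos(wt + phi) = A*cos(phi)*cos(wt) - A*sin(phi)*sin(wt)
        phasors[h] = (np.hypot(c, s), np.arctan2(-s, c))
    return phasors  # {order h: (amplitude A_h, phase phi_h)}
```

The design matrix grows with both the record length and the number of harmonic orders, which hints at the computational-burden issue noted above: the normal equations to be solved scale with the square of the number of fitted parameters.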
Other techniques for extracting harmonic information from signals have been proposed. The authors in [20] employ a series of frequency-modulated finite impulse response (FIR) filters to estimate instantaneous harmonic parameters. A variation on the estimation of signal parameters via rotational invariance techniques (ESPRIT) is proposed in [21,22], where the exact model order is estimated from the data rather than having to be configured and tuned by hand. The literature on accurate harmonic measurement, in general, remains extremely sparse.
This study analyzes high-frequency transient electrical current waveforms captured by equipment dubbed hereafter as “point-on-wave (PoW)” sensors. At present, most studies concerning the non-power-frequency content of power system phenomena are reduced to total harmonic distortion (THD) and power-quality index computations [23]. Harmonics are detrimental to the power system, and measures are typically taken to eliminate or reduce their effects rather than to study them. However, the complete removal of harmonics from the power system is almost impossible, and potentially actionable information may be lost in the attempt.
PoW sensors capture oscillographic representations of the measured phenomena, typically sending the resultant analog measurements to a device capable of digitally sampling at high rates. Two different commercially available PoW sensors are compared with one another, as well as with a “reference” sensor representing the ideal. Statistical analysis of the harmonic amplitude and phase errors for each sensor over a variety of transient current waveforms is performed, including the estimation of non-parametric probability density functions (PDFs) at each chosen harmonic. Note that equipment manufacturers are kept anonymous in this paper to avoid any perceived endorsement of one particular technology.
The motivation behind studying individual harmonic error probability distributions is a simple one. In parameter or state estimation applications, measurements are typically modeled mathematically in the form [24]

Z = Y + ε,

where ε is an error term indicating random deviations between the measurement Z and the true value Y. This term is usually considered a lumped parameter and includes contributions from both systematic and random errors. In the power system scope, Y is usually an electrical parameter of interest such as the voltage magnitude, phase angle, or frequency. The work presented in this study examines errors in electrical current harmonics resulting from high-frequency disturbances and characterizes the behavior of the harmonic amplitude and phase errors over a variety of dominant harmonic frequencies.
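The lumped-error view can be illustrated with a tiny simulation; all numbers here are hypothetical and chosen only to show how the systematic and random contributions separate in the sample statistics.

```python
import numpy as np

# Hypothetical numbers: a "true" parameter Y observed through a sensor whose
# lumped error combines a systematic bias with zero-mean random noise.
rng = np.random.default_rng(0)
Y_true = 120.0                            # e.g., a voltage magnitude in volts
bias = 0.5                                # systematic contribution to the error
noise = rng.normal(0.0, 0.2, size=5000)   # random contribution
Z = Y_true + bias + noise                 # measurement model: Z = Y + eps
eps = Z - Y_true                          # the lumped error term
# The sample mean of eps estimates the systematic part;
# the sample standard deviation estimates the random part.
```

Characterizing the distribution of eps per harmonic, rather than assuming a single Gaussian shape, is precisely the goal of the analysis that follows.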
To the best of the authors’ knowledge, an individual harmonic error distribution analysis has not been studied in depth. We hope that this study will continue the advancement of the body of knowledge to be used for improving power system observability and situational awareness.
3. Statistical Analysis of Harmonics
Often, error analysis is performed under the assumption of a Gaussian distribution; that is, the error is typically assumed to take the form of additive white Gaussian noise (AWGN). In this section, error metrics for harmonic magnitudes and phases are presented. Two metrics for harmonic amplitudes are first discussed, the percent error and the residual error, while the phase error is computed using a simple difference. Given a sensor under test (SUT) and an ideal reference sensor measuring the same quantity side-by-side, the percent error at a specific harmonic order h may be quantified as [29]

PE_h = (Â_h − A_h) / A_h.

This quantity may be multiplied by 100 if it needs to be expressed as a percentage. Otherwise, PE_h is unitless and provides a relative measure of the deviation of a measured harmonic amplitude Â_h from the reference harmonic amplitude A_h. The residual harmonic amplitude error may be calculated using

RE_h = Â_h − A_h.
Similarly, the phase may be compared with a simple difference

Δφ_h = φ̂_h − φ_h,

where φ_h and φ̂_h represent the reference and measured phase angles at harmonic order h, respectively. It is useful to determine the distributions of PE_h, RE_h, and Δφ_h. Knowing these distributions can allow measurement devices to make corrections if errors are suspected. These distributions can also give insight into whether a particular sensor measuring a particular harmonic h possesses systematic errors (loosely equivalent to biases) or whether the errors seem purely random. The natural assumption is to assign a Gaussian distribution to the error quantities but, as shown later in this work, this is not necessarily the case at each harmonic.
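The three error metrics can be sketched in a few lines, here using an FFT-bin estimate of each harmonic's amplitude and phase from time-aligned reference and SUT records. The function and variable names are illustrative, not from the paper, and the FFT-based extraction is an assumption of this sketch.

```python
import numpy as np

def harmonic_errors(x_ref, x_sut, fs, f0, h):
    """Percent, residual, and phase errors at harmonic order h,
    computed measured-minus-reference from time-aligned records."""
    n = len(x_ref)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    k = int(np.argmin(np.abs(freqs - h * f0)))  # FFT bin nearest harmonic h
    X_ref = np.fft.rfft(x_ref)[k]
    X_sut = np.fft.rfft(x_sut)[k]
    # Single-sided amplitude scaling recovers the cosine amplitudes A_h.
    a_ref, a_sut = 2 * np.abs(X_ref) / n, 2 * np.abs(X_sut) / n
    pe = (a_sut - a_ref) / a_ref              # percent (relative) error
    re = a_sut - a_ref                        # residual error
    dphi = np.angle(X_sut) - np.angle(X_ref)  # phase difference
    return pe, re, dphi
```

Applied over many playbacks of the same event, the collected pe, re, and dphi samples at each harmonic order form the empirical error distributions studied below.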
3.1. The Anderson-Darling Test
A common problem in statistical inference is determining the distribution, or family of distributions, from which a given sample has come. It is often not sufficient to simply visualize a histogram of the data, and more rigorous methods are required to fully quantify the “goodness of fit” of a distribution family to a given sample. In [30], T. W. Anderson and D. A. Darling proposed a test statistic to accomplish this. Given an ordered sample x_1 ≤ x_2 ≤ … ≤ x_n with cumulative distribution function F(x), compute

A² = −n − (1/n) Σ_{i=1}^{n} (2i − 1) [ln F(x_i) + ln(1 − F(x_{n+1−i}))].

Stephens in [31] tabulated critical values of the statistic for various distributions at various significance levels. For a test against a normal distribution with unknown parameters at a significance level of α = 0.05, the “threshold” is 0.752. This means that if the computation of A² yields a number greater than this threshold, the test will reject the hypothesis that the sample x came from a normal distribution. MATLAB provides a simple function, adtest(), which accepts a series of numbers as the input and yields a 0 if the input sample likely came from a normal distribution at a significance level of 0.05 or less, and a 1 otherwise.
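A NumPy-only version of the test can be written directly from the A² formula; the small-sample correction factor and the 5% critical value of 0.752 follow Stephens' tables for the normal case with both parameters estimated, which is an assumption of this sketch.

```python
import math
import numpy as np

def ad_normality(sample, threshold=0.752):
    """Anderson-Darling test of normality with estimated mean and std.
    Returns (corrected statistic, reject-normality flag at the 5% level)."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    z = (x - x.mean()) / x.std(ddof=1)
    # Standard normal CDF evaluated at each standardized, ordered point.
    F = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in z])
    i = np.arange(1, n + 1)
    a2 = -n - np.mean((2 * i - 1) * (np.log(F) + np.log(1.0 - F[::-1])))
    a2 *= 1.0 + 0.75 / n + 2.25 / n ** 2      # small-sample correction
    return a2, a2 > threshold
```

On a sample drawn from a genuinely normal distribution the statistic typically stays well below the threshold, while a uniform sample of the same size is rejected decisively.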
3.2. Kernel Density Estimation
Often, a random sample does not appear to come from a known family of distributions. In this case, non-parametric techniques are usually applied to estimate the distribution function. One of the more common approaches to this problem is that of kernel density estimation (KDE). A density function f describing the distribution of a random variable X may be approximated as f̂ using the kernel density estimator

f̂(x) = (1/(nh)) Σ_{i=1}^{n} K((x − X_i)/h)

for a kernel function K and bandwidth, or smoothing parameter, h. In many applications, the standard normal kernel is assumed:

K(u) = (1/√(2π)) e^{−u²/2}.

Kernel density estimation essentially overlays the kernel function K on the data histogram, computes the kernel function on the sample values within the kernel, shifts the kernel, and sums the results (the summation in the estimator above), yielding a continuous function approximating the true density f. The bandwidth parameter h controls the width of the kernel function. Ideally, h would be as small as possible; however, too small an h will result in overfitting. Similarly, too large an h will result in a curve that is too smooth. An example is shown in Figure 2.
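The estimator with the standard normal kernel is only a few lines of NumPy; the function name and the vectorized formulation are choices of this sketch.

```python
import numpy as np

def kde(grid, data, h):
    """Kernel density estimate f_hat(x) = (1/(n*h)) * sum_i K((x - X_i)/h)
    with the standard normal kernel K(u) = exp(-u^2/2) / sqrt(2*pi)."""
    u = (grid[:, None] - data[None, :]) / h   # (grid point, sample) pairs
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return K.sum(axis=1) / (len(data) * h)
```

In practice the bandwidth must still be chosen; one common default (an assumption here, not the paper's stated choice) is Silverman's rule of thumb, h = 1.06 σ̂ n^(−1/5).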
Goodness of Fit Using Root Mean Square Error
A simple yet effective metric for gauging the goodness of fit of a probability density estimate f̂ is a mean-square error calculation between the empirical cumulative distribution function (ECDF) calculated from the data, F_n, and the estimated cumulative distribution function (CDF) F̂, computed from the estimated density function f̂:

RMSE = [ (1/n) Σ_{i=1}^{n} (F_n(x_i) − F̂(x_i))² ]^{1/2}.

Ideally, the RMSE for an estimated distribution function will be as close to zero as possible, indicating little deviation between F_n and F̂.
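The metric can be computed directly at the ordered sample points; evaluating on that grid, and the helper names, are choices of this sketch rather than details from the text.

```python
import math
import numpy as np

def cdf_rmse(data, cdf):
    """RMSE between the empirical CDF F_n and a candidate CDF F_hat,
    both evaluated at the ordered sample points x_(1) <= ... <= x_(n)."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    F_n = np.arange(1, n + 1) / n             # empirical CDF at x_(i)
    F_hat = np.array([cdf(v) for v in x])
    return math.sqrt(np.mean((F_n - F_hat) ** 2))

def normal_cdf(mu, sigma):
    """Closed-form normal CDF via erf (helper for the example)."""
    return lambda v: 0.5 * (1.0 + math.erf((v - mu) / (sigma * math.sqrt(2.0))))
```

A well-matched candidate CDF yields an RMSE near zero, while a badly mismatched one (e.g., the wrong mean) yields a clearly larger value, which is how the metric separates good fits from poor ones.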
4. Experimental Setup
Shown in Figure 3 is a diagram of the setup used in this work. A National Instruments PXIe 6366 multifunction data acquisition (DAQ) system read comma-separated-value (CSV) files containing disturbance records, which were then converted to analog waveforms by an NI PXIe 5423 waveform generator (WG).
An AE Techron 7228 power amplifier was then used to amplify the WG output voltage and convert it to a current. This current amplifier was adjusted to provide a frequency response that was flat to within ±1% from 60 Hz to just below 5 kHz. This limit is due to the nature of the amplifier when connected to inductive loads (i.e., the step-up CT shown in Figure 3) while operating in current-control mode. Tests were conducted in the lab to find the usable cutoff frequency of this device under these conditions, which was found to be roughly 5 kHz.
Following this, the current was stepped up using a KOR-11 15 kV 400:5 T200 CT (manufactured by ABB, North Carolina, USA), whose frequency response was measured to be flat up to 10 kHz. On the high side of this CT, a reference sensor was employed to measure the “actual” current being fed into the equipment under test (EUT) (the sensors being evaluated). This was necessary due to a phase delay of 20 μs induced between the EUT signals and the WG output signals by the intermediate equipment.
A reference sensor with a frequency response as close to ideal as possible was desired. For this, the Ultrastab 866 Precision Current Transducer (current ratio of 1500:1, manufactured by Danfysik A/S in Jyllinge, Denmark) was chosen due to its flat frequency response up to 100 kHz. Connected to a 10-ohm burden resistor, it is capable of measuring currents of several hundred amps with an accuracy better than 0.1%. The current signals of interest also passed through the EUTs, and the obtained measurements were sent back into the DAQ for a time-synchronized side-by-side comparison with the Ultrastab 866 reference sensor’s readings.
Two PoW sensors, denoted hereafter as S1 and S2, were used independently as the EUTs for conducting the experiments, and the reference sensor is denoted as R. Calibration of the EUT was performed using a Fluke 6105A calibrator. The frequency response curves for both S1 and S2 are shown in Figure 1. Each event was “played back” through the sensor suite 100 times, yielding 100 comparisons between a sensor’s produced signal and the reference sensor signal.
4.1. Event Descriptions
Three event types were studied: a current inrush event (denoted as E1), a microgrid close-in event (E2), and an event depicting a fault on the terminals of a wind farm connected to a distribution system (E3). Events E1 and E2 were taken from real-world data, whereas E3 was simulated in PSCAD [32]. Events E1 and E2 were sampled at different rates (E1 at 20 kHz) owing to the nature of their originating measurement sources, and the sampling rate for the simulated event E3 was chosen to be 200 kHz to obtain as many harmonics from the wind-fault waveform as possible, as well as to ensure that the Nyquist frequency (i.e., 100 kHz) matched the frequency response limit of the reference sensor.
Additionally, the three event types were selected to examine harmonic information both in events not related to distributed energy resources (DERs) (E1) and in those caused by DERs (E2 and E3). The events were taken from different measurement sources in order to account for different sensors, which, in most cases, have different hardware capabilities, including different ADCs and sampling rates.
Figure 4a–c depict the single-phase current waveforms produced by the reference sensor (R) and the two PoW sensors (S1 and S2). It can be seen that, in most cases, S2 produced a significant amount of noise. It can also be seen by inspection alone that, although S1 appeared to follow the reference more closely at lower frequencies (Figure 4a), it began to deviate more at higher frequencies (Figure 4b,c).
4.2. Sensor Descriptions
Two sensors were independently used to serve as the EUT. The first was a Lindsey 9670 35-kV-class line post monitor, with ≤1% accuracy in the current gain up to 6 kHz, as well as an induced phase error of less than 1°. The second sensor, a G&W CVS-36-O 36-kV-class unit, possessed a more erratic magnitude response, reaching a larger maximum magnitude error and a 1° phase error within the band up to 6 kHz. The G&W sensor is rated for up to 30 kA (as opposed to approximately 1 kA for the Lindsey sensor) and thus has a much higher noise floor in the range of currents used in this study.
4.3. Selecting Harmonics
For a given event waveform, it is unlikely that every frequency component between 0 Hz and the Nyquist frequency is present in the examined signal. For this reason, harmonics were hand-picked from the prominent “peaks” in the waveform frequency spectra. Table 1 lists the chosen harmonics. For the events shown in Figure 4a,b, a threshold value was used to select the harmonic frequencies of interest. This value was obtained empirically by visually inspecting the frequency spectra of these events and determining which harmonics appeared the most “prominent”. For the event depicted in Figure 4c, this threshold had to be lowered to capture the clearly prevalent harmonics in the tens-of-kHz range. Note that the fundamental frequency (60 Hz) was excluded due to the large existing body of knowledge and the design characteristics included to ensure peak performance at this frequency.
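The peak-picking procedure can be sketched as follows. Thresholding relative to the largest non-fundamental peak, along with the function name, is an assumption of this sketch; the paper's threshold was chosen by visual inspection.

```python
import numpy as np

def pick_harmonics(x, fs, f0=60.0, rel_threshold=0.05):
    """Select 'prominent' harmonic orders from a record's amplitude spectrum.
    The fundamental (h = 1) is excluded, as in the text."""
    n = len(x)
    amp = 2.0 * np.abs(np.fft.rfft(x)) / n        # single-sided amplitudes
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    max_order = int(freqs[-1] // f0)              # highest order below Nyquist
    per_order = {h: amp[int(np.argmin(np.abs(freqs - h * f0)))]
                 for h in range(2, max_order + 1)}
    ref = max(per_order.values())                 # largest non-fundamental peak
    return sorted(h for h, a in per_order.items() if a >= rel_threshold * ref)
```

Lowering rel_threshold plays the same role as the manual threshold reduction described above for the wind-fault event, admitting weaker but still clearly present high-order harmonics.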
6. Conclusions and Future Work
A fully situationally aware power system is a goal that, although seemingly impossible to achieve, is worth pursuing. In this paper, high-frequency transient power system current disturbances and their distorted representations are analyzed through both statistical and probabilistic lenses over a wide variety of harmonic frequencies. The harmonic amplitude error, quantified in terms of percent and residual errors, largely showed normally distributed behavior per the AD test for normality at the lower harmonics (i.e., below the 50th).
As the harmonic frequency moved beyond this level, the error distributions tended to drift away from normality, as seen in the wind-fault event results. The RMSE was used as an indicator of the goodness of fit between the estimated distribution functions and the empirical data, showing the validity of the presented approach. The results presented in this study challenge the assumption that measurement errors can be treated as normally distributed quantities; however, more transient events need to be studied to obtain a full grasp of the “natural” distributions of these errors.
Future Work
The work presented here covers only the responses to three distinct current disturbance types. The authors believe that we have just scratched the surface and that the results here indicate a need to dig deeper into the study of higher-order harmonic measurement errors. These studies, both present and future, will be extremely important in assisting with the design of measurement systems capable of accurately representing the high-frequency signals present in the power system.