1. Introduction
Embedded system technology with powerful, low-cost processors has enabled high-performance digital controllers in industrial electrical systems. Advances in power converter technology and its control are driven by applications such as renewable energy, automotive systems, drives, and distributed generation, to name a few. Application-specific control goals and functionalities necessitate extensive research in power converter control [
1,
2].
Among the power converter control strategies, model predictive control (MPC) [
3] received more attention due to its ability to control multivariable systems and ease of including constraints and nonlinearities [
4]. It became a popular time-domain control strategy in the process industry in the 1970s [
5,
6]. Initially, some investigations were carried out on its application in power electronics in the early 1980s [
7,
8]. However, the need for online optimization and high computational power limited its application in power electronics. The advent of digital technology with ever-increasing computational capability kindled the research on MPC for power electronics in the past decade [
9,
10,
11,
12]. MPC outperforms traditional control methods in handling time-domain constraints, since the control task is formulated as an optimization problem [
13].
MPC techniques for power converters are classified as indirect and direct types [
9]. Indirect MPC is a two-stage control comprising a predictive controller and a modulator. Direct MPC unifies control and modulation into a single-stage computational problem whose solution is an integer vector [
9,
14,
15]. The direct MPC is further categorized as MPC with hysteresis bound reference tracking and MPC with implicit modulator [
9]. Finite control set MPC (FCS-MPC) [
15], a reference tracking-based technique, is the most commonly used control method for power electronic converters to solve an optimization problem online. A discrete model of the power converter helps reduce the algorithm’s computational burden [
16]. FCS-MPC is advantageous over other control methods due to its simple algorithm structure, flexibility to address nonlinearities and constraints, ease of digital implementation, and fast dynamic response [
16,
17]. However, the computational complexity of the FCS-MPC method necessitates the use of powerful processors for its real-time implementation.
Digital technology is widely accepted for implementing complex control algorithms due to flexibility, noise immunity, and insensitivity to component variations [
18]. Analog controllers, in contrast, offer speed and wider bandwidth but suffer from parameter drift. For this reason, minimizing the execution time of digital controllers is of utmost importance for good performance [
19]. MPC for power electronic converters can be implemented using digital controllers like microcontrollers and digital signal processors (DSPs) [
10,
12,
20]. DSPs offer highly flexible software-based solutions and exhibit excellent performance for sequential algorithms. Their capability to manage floating-point arithmetic helps in solving complex mathematical problems. However, the main drawback of micro-programmable solutions is the computational delay in finding the optimal solution in one sampling period, especially in complex control techniques like FCS-MPC [
21,
Various delay compensation techniques are addressed in the literature [
12,
20,
23]. However, the design modifications may result in inferior performance compared to conventional control methods [
24].
Field programmable gate arrays (FPGAs) are excellent for computationally intensive power converter control strategies [
21]. They are a matrix of configurable logic blocks called logic elements (LEs) with programmable interconnects [
4,
25]. Application architecture can be either all hardware using logic cell (Hw) or all software using processor core (Sw) or a combination of these [
19]. Nowadays, prominent FPGA manufacturers like Intel and Xilinx (both Santa Clara, CA, USA) offer a broad portfolio of FPGA solutions with better performance, energy efficiency, cost per function, density, and reliability. They offer the highest level of system integration, combining embedded processors, memory, intellectual property (IP) cores, and software tools in a single chip, and are termed “system on chip” (SoC) or field programmable system on chip (FPSoC) [
26,
There has been rapid progress in the FPGA fabrication process, which has reached the 10 nm SuperFin process technology [
28]. Its key features include high integration density, better performance, and reduced power consumption. The latest FPGA device from Intel (Intel Agilex) belongs to this category. However, the design complexity and the skillset required to interconnect the processors and FPGA limit its wide application [
29]. The use of customized architectures drastically reduces design complexity and shortens the “time to market” for FPGA solutions.
FPGA-based control has been successfully implemented in many power electronic applications [
30,
31]. The main advantage of FPGAs over DSPs is their deep pipelining and inherent parallelism, which reduce the execution time of complex algorithms to a few microseconds [
1]. In addition, they offer high performance, cost-effectiveness, and better reliability. FPGA vendors provide high-quality hardware design and verification tools to implement the controller using hardware description languages (HDLs). Typical languages used for electronic design automation (EDA) synthesis in FPGAs are Verilog HDL, very high-speed integrated circuit HDL (VHDL), SystemVerilog, and SystemC [
32]. They permit the concurrent process design found in hardware [
33]. However, the designer should have profound knowledge about the hardware design for optimized performance, limiting its widespread application in industrial systems [
34]. High-level synthesis (HLS) tools emerged as an alternative, enabling FPGAs to act as hardware accelerators and simplifying the design process [
32]. HLS tools convert high-level programming language, most commonly C, C++, and System C, to register-transfer-level (RTL) description of the control algorithm and finally into netlist to program FPGA [
A comprehensive review of the evolution of HLS tools and their successes and failures is presented in [
36,
37]. These tools exhibit great potential for implementing complex power converter control strategies with hardware in the loop (HIL). Among various alternatives, Xilinx Vivado, Altera OpenCL, LegUp, and Mentor Graphics Catapult C are the prominent HLS tools available on the market [
38]. Major FPGA vendors currently offer proprietary solutions such as the Intel® HLS Compiler (Intel FPGA) and Vitis HLS (Xilinx) to convert high-level language to FPGA code. These vendors also provide conversion from MATLAB/Simulink, OpenCL (Intel FPGA, Xilinx), and LabVIEW (Xilinx) [
39]. However, HLS tools exhibit poorer performance and higher area consumption compared to direct HDL [
32,
36,
39]. Despite its familiarity, the C language lacks timing information, bit accuracy, hierarchy, memory bandwidth, and other features critical for hardware design [
40].
As FPGA technology experiences a great revolution in packaging, integration level, design tools, and methodologies, it poses a challenge for designers, as design complexity grows faster than designer productivity, a phenomenon known as the design productivity gap [
41]. Moreover, industry focuses on economic benefits and is reluctant to adopt new control techniques due to their higher investment cost. Although complicated, control techniques like FCS-MPC provide better performance; for industry to adopt such control methods, the investment cost of the power electronic system needs to be reduced without compromising performance [
24]. The computational burden and high resource utilization of predictive control are other aspects that need attention. A suitable design optimization method is essential for better utilization of resources and to fit the design into the target FPGA.
A detailed survey of recent research trends and challenges of MPC is given in [
9], and the design guidelines to maximize system performance are given in [
24]. The HIL-based FCS-MPC current control using MATLAB/Simulink integrated Xilinx System Generator (XSG) is explained in [
42]. This intuitive solution allows the designer to generate automatic HDL code without an extensive understanding of HDL manual coding. FPGA-based FS-MPC implementation for PMSM drives is given in detail in [
43]. The XSG environment generates the HDL code and implements it on the Xilinx Vivado platform. However, in both cases, resource utilization is higher than with manual coding. The MATLAB HDL Coder extends support to FPSoC devices for IP core generation; however, this is limited to specific FPSoCs from Xilinx and Intel [
44]. A Xilinx Zynq-based FPSoC with ARM core is utilized for FCS-MPC prototyping in [
45]. A Hw/Sw-based approach is applied, and the results are compared with a DSP-based solution. Since a low-cost solution is one of the primary objectives, FPSoC-based implementation is not adopted for the proposed work.
This paper discusses in detail a low-cost solution for efficiently implementing a computationally intensive predictive direct current control technique. A design optimization method is adopted to ensure better resource utilization with faster computation. A low-cost new-generation Intel Altera MAX 10® FPGA is utilized for the control implementation, and the implementation process is discussed in detail to address the design productivity gap. Since control performance is of prime concern in power converter applications, a low-level HDL workflow is preferred over HLS tools, which also helps utilize the full potential of the FPGA device. This article also highlights the benefits of single-chip integration given the increased processing capability of the FPGA device with built-in ADC functionality, intellectual property (IP) cores, and memory controllers. The predictive direct current control is employed in a two-level three-phase VSI to demonstrate power SoC-based algorithm implementation using the Quartus® Prime design tool.
The rest of the paper is organized as follows:
Section 2 describes the finite set-predictive direct current control (FS-PDCC) algorithm.
Section 3 explains the design methodology, and
Section 4 elaborates the realization of the control algorithm in the FPGA device.
Section 5 discusses the simulation results. Finally, experimental validation of the control strategy is elaborated in
Section 6, and
Section 7 summarizes the conclusions with future scope.
2. FS-PDCC Control Strategy
FS-PDCC, a current control method based on FCS-MPC, exploits the discrete nature of power converters to apply the control action in a single stage without intermediate modulators. The algorithm predicts the system’s future behavior over a horizon based on its model. It utilizes only a finite number of inverter switching states and evaluates a cost function [
21]. First, the cost function evaluates the error between each voltage vector’s reference and predicted currents [
46]. Then, the voltage vector with the lowest value is selected, and the corresponding switching signals are applied at the next sampling interval [
12]. Online implementation of FS-PDCC involves three significant steps: estimation, prediction, and optimization [
47]. During the first phase, the optimal switching state from the previous period is applied at the update instant k, and the phase currents are measured at tk. The current switching state is maintained until tk+1, when the new switching state is applied. In the prediction phase, the values of the controlled variables at the next sampling instant are predicted for all the finite switching states covering all the future instants. The number of such sampling periods is the prediction horizon (N). As the horizon length increases, the number of possible switching sequences becomes very large, making implementation challenging. Finally, the optimal voltage vector is selected in the optimization phase, and the corresponding switching state is applied to the inverter [
48].
A three-phase VSI has eight switching states comprising six active and two zero vectors. The two switches in each phase operate in a complementary manner. The switching function Sx (x = A, B, C) takes the value “1” for the closed state of the switch and “0” for the open state. A two-level three-phase VSI is shown in
Figure 1.
The voltage vector generated by the VSI is given by the expression [
12]:

v = \frac{2}{3}\left(v_{AN} + a\, v_{BN} + a^{2} v_{CN}\right), \quad a = e^{j 2\pi/3}

where v_{AN}, v_{BN}, and v_{CN} are the phase-to-neutral voltages of the VSI. The following expression relates the voltage vector and the switching state sequence:

v_{i} = \frac{2}{3} V_{dc}\left(S_{A} + a\, S_{B} + a^{2} S_{C}\right), \quad i = 0, 1, 2, \ldots, 7

where V_{dc} represents the DC input voltage and (S_{A}, S_{B}, S_{C}) the switching state.
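As a quick numerical illustration of the relation between switching states and voltage vectors, the eight vectors can be enumerated directly. This is a Python sketch (the paper’s implementation is in Verilog HDL; the function name is ours):

```python
import cmath

def voltage_vectors(v_dc):
    """Enumerate the 8 VSI voltage vectors (2/3)*V_dc*(S_A + a*S_B + a^2*S_C)."""
    a = cmath.exp(2j * cmath.pi / 3)  # 120-degree rotation operator
    vectors = {}
    for i in range(8):
        s_a, s_b, s_c = (i >> 2) & 1, (i >> 1) & 1, i & 1  # switching state
        vectors[(s_a, s_b, s_c)] = (2.0 / 3.0) * v_dc * (s_a + a * s_b + a**2 * s_c)
    return vectors
```

Note that states (0, 0, 0) and (1, 1, 1) both map to the zero vector, leaving six distinct active vectors.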
The balanced three-phase quantities are converted to two-phase orthogonal reference quantities by Clarke’s transformation, which simplifies the analysis of three-phase circuits:

\begin{bmatrix} x_{\alpha} \\ x_{\beta} \end{bmatrix} = \frac{2}{3} \begin{bmatrix} 1 & -\frac{1}{2} & -\frac{1}{2} \\ 0 & \frac{\sqrt{3}}{2} & -\frac{\sqrt{3}}{2} \end{bmatrix} \begin{bmatrix} x_{A} \\ x_{B} \\ x_{C} \end{bmatrix}
The load dynamics in the stationary reference frame for the inverter circuit with RL load are given by the differential equation [
46]:

V_{\alpha\beta} = R\, i_{\alpha\beta} + L \frac{d i_{\alpha\beta}}{dt}

where V_{\alpha\beta} and i_{\alpha\beta} are the load voltage and current vectors, and R and L are the load resistance and inductance, respectively.
The finite switching states with voltage vectors are illustrated in
Figure 2.
The load current derivative, approximated by the forward Euler method with sampling interval T_S, is expressed as [
46]:

\frac{d i_{\alpha\beta}}{dt} \approx \frac{i_{\alpha\beta}(k+1) - i_{\alpha\beta}(k)}{T_S}

Substituting this approximation into the load dynamics gives the load current prediction at instant t_{k+1} [
46]:

i^{p}_{\alpha\beta}(k+1) = \left(1 - \frac{R\, T_S}{L}\right) i_{\alpha\beta}(k) + \frac{T_S}{L} V_{\alpha\beta}(k)
The superscript “p” denotes the load current value predicted for each voltage vector generated by the inverter at instant k + 1. The cost function G_n, defined as the absolute error between the reference and predicted load currents, is expressed as [
46]:

G_n = \left| i^{*}_{\alpha}(k+1) - i^{p}_{\alpha}(k+1) \right| + \left| i^{*}_{\beta}(k+1) - i^{p}_{\beta}(k+1) \right|

where i^{*}_{\alpha}, i^{*}_{\beta} are the load current references and i^{p}_{\alpha}, i^{p}_{\beta} are the load current predictions, respectively. The cost function is evaluated for n = 0–7, and the switching state corresponding to the minimum G_n is applied in the subsequent sampling interval. The orthogonal components of the reference current are obtained from an external loop and assumed constant during a particular sampling interval.
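The estimation–prediction–optimization loop described above can be sketched end to end in a few lines. The following Python snippet is an illustrative stand-in for the paper’s MATLAB/Verilog implementations (the function name and parameter values are ours): it predicts the load current for all eight switching states using the forward-Euler model and selects the minimum-cost state.

```python
import cmath

def fs_pdcc_step(i_meas, i_ref, v_dc, R, L, Ts):
    """One FS-PDCC iteration: exhaustive search over the 8 switching states."""
    a = cmath.exp(2j * cmath.pi / 3)
    decay = 1.0 - R * Ts / L          # coefficient of i(k) in the Euler prediction
    gain = Ts / L                     # coefficient of v(k)
    best_state, best_cost = None, float("inf")
    for i in range(8):
        state = ((i >> 2) & 1, (i >> 1) & 1, i & 1)   # (S_A, S_B, S_C)
        v = (2.0 / 3.0) * v_dc * (state[0] + a * state[1] + a**2 * state[2])
        ip_alpha = decay * i_meas[0] + gain * v.real  # predicted i_alpha(k+1)
        ip_beta = decay * i_meas[1] + gain * v.imag   # predicted i_beta(k+1)
        cost = abs(i_ref[0] - ip_alpha) + abs(i_ref[1] - ip_beta)  # G_n
        if cost < best_cost:
            best_state, best_cost = state, cost
    return best_state, best_cost
```

With the reference aligned to the α axis and zero initial current, the search selects the active vector pointing along +α, as expected from the cost function.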
3. Design Methodology
As mentioned earlier, FPGA controllers can drastically reduce the computation time of complex control algorithms. However, when implementing a predictive control algorithm, the prediction and optimization stages present a significant computational load. By exploiting the parallel and pipelined architecture of the FPGA, computation time can be reduced, but at the expense of consumed resources. In this regard, the implementation of the control algorithm must be well designed to improve time-area efficiency. An efficient design methodology enables the designer to optimize resource utilization. The FPGA permits a specific architecture to implement the control algorithm in a flexible environment with the available memory blocks, multipliers, and logic elements. However, the designer must follow specific rules and steps to make the design more manageable. A simple and intuitive design methodology ensures optimization and reusability of the available resources with proper sharing and streaming.
Furthermore, the architecture should be scalable enough to accommodate future modifications and constraints. The modular architecture of the FPGA with IP cores improves the control performance of the entire system with a shorter execution time. MATLAB/Simulink provides a friendly environment to generate the FPGA architecture using dedicated toolboxes such as HDL Coder for Altera and System Generator for Xilinx. However, the control performance may deteriorate due to an unoptimized architecture that consumes more FPGA resources. Therefore, in this article, Verilog HDL coding is adopted for the FPGA architecture. Initial system design followed by algorithm formulation, FPGA architecture optimization, and implementation forms the entire design process [
4,
18,
19]. The design process overview is given in
Figure 3 [
19].
3.1. Algorithm Design
Preliminary system design includes source and load considerations, selection of digital controllers, sensors, and finalizing the hardware specifications. The algorithm modification process to adapt to the available FPGA resources is very important in the design stage. First, the algorithm is split into modules directly executable by finite state machines (FSM). The module partitioning can be done based on the concepts of hierarchy and reusability, whereby the modules can be divided into smaller ones that are more manageable and reusable [
49]. The designer has the freedom to remodel the algorithm to reduce the number of operations to limited hardware resources. Some reusable modules for controlling electrical systems with different hierarchy levels are specified in [
18]. A continuous-time functional validation of the algorithm using MATLAB/Simulink tools is performed during this stage, making it ready for digital implementation.
The next important task is the choice of the fixed-point format. FPGAs usually handle fixed-point calculations effortlessly. Intel’s Generation 10 FPGAs such as the Arria®10 and Stratix®10 possess IEEE-754 single-precision floating-point units (FPUs) inside the DSP block. However, the sequential operation of the FPU may slow down the entire calculation process, and capital cost is a concern for the proposed system. MathWorks offers the Fixed-Point Designer™ tool, which provides data types and optimization tools for implementing fixed- and floating-point algorithms in embedded systems. The tool enables the designer to implement the data in floating point with the same efficiency and performance as fixed point. However, the result may consume more FPGA resources, and the design may not fit the target FPGA.
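To make the fixed-point discussion concrete, the sketch below shows a generic signed Q-format quantization with saturation, the kind of conversion that Fixed-Point Designer automates. The word lengths and helper names are illustrative, not taken from the paper:

```python
def to_fixed(x, frac_bits=12, word_bits=16):
    """Quantize a real value to a signed fixed-point integer with saturation."""
    scaled = int(round(x * (1 << frac_bits)))
    lo = -(1 << (word_bits - 1))
    hi = (1 << (word_bits - 1)) - 1
    return max(lo, min(hi, scaled))  # saturate instead of wrapping on overflow

def to_float(q, frac_bits=12):
    """Map the fixed-point integer back to a real value."""
    return q / (1 << frac_bits)
```

A 16-bit word with 12 fractional bits represents values in [−8, 8) with a resolution of 2^−12 ≈ 0.00024, which is the kind of trade-off the designer weighs against the dynamic range of the measured signals.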
3.2. Architecture Optimization and Scheduling
Optimization of the algorithm and architecture is another vital stage of the design process. The designer can opt for automatic code generation using an HDL coder, but this may result in a solution that is unoptimized with respect to the available FPGA resources. Therefore, for better resource allocation and architecture optimization, the designer performs the HDL coding manually with the help of the Algorithm Architecture Adequation (AAA) methodology [
50].
Application of the AAA methodology to FPGA processors aims to quickly prototype real-time applications with potential factorization. This optimization methodology takes care of the size and timing constraints of the algorithm. Potential factorization leads to the maximum number of operations with the minimum number of operators. The process consists of three stages: design of the data flow graph (DFG), data dependency validation, and design of the factorized DFG (FDFG) [
19]. However, a compromise between hardware resources and computation time must be made, as factorization leads to longer computation time and fewer consumed resources. Each elementary module consists of a data path and a control path coded in Verilog HDL, synchronized with a clock (CLK) signal. The data path consists of operators such as multipliers, adders, and registers, and the data transfer is managed by a global control unit (FSM). The module control units are triggered via a start signal and generate an end signal upon process completion. The architecture scheduling is done by the FSM, which sequences the operation of the different modules via start and end signals. As a result, the AAA methodology can achieve better time/area performance [
19]. The methodology is applied to sine wave generation, ABC-αβ transformation, and predictive control to reduce hardware resources. The repetitive patterns in the data flow graph can be optimized using FDFG, and a sample of the methodology is represented in
Figure 4.
System validation can be performed using both software simulation tools and hardware testing. The FPGA controller used in the proposed work is the Intel® MAX® 10 FPGA. These are low-cost, non-volatile, small-form-factor, single-chip programmable logic devices with advanced processing capabilities. The device highlights include an integrated analog-to-digital converter, DSP blocks, and Nios II Gen 2 embedded soft processor support. In addition, Intel FPGAs are equipped with highly efficient design tools such as the Intel Quartus® Prime Lite Edition, Nios® II Embedded Design Suite (EDS), and Platform Designer.
The functionality and performance of the Verilog-coded architecture are validated using the EDA tool ModelSim-Intel® FPGA Starter Edition software available with Quartus Prime. Furthermore, a relevant set of test bench inputs is provided to validate the architecture. In addition, Intel Quartus Prime provides System Console, a fast and efficient real-time debugging tool, to realize the communication interface between the host PC and the target platform. System Console helps debug the design efficiently while the design runs at full speed. The entire design and implementation process is summarized in
Figure 5.
4. FPGA Implementation Process
The basic procedure involved in the FPGA implementation of the control algorithm is depicted in
Figure 6.
FPGA design starts with design entry at the register-transfer level (RTL) using Verilog/VHDL in the Quartus II environment. The GUI-based Platform Designer tool or an HLS compiler can also be used to create the logic. Altera provides optimized and verified IP cores, as well as third-party IP cores, that can be integrated into the design to improve performance and efficiency. The IP Catalog and the parameter editor generate customized IP cores for a specific design. The parameter editor customizes and creates a Quartus IP (.qip) file representing the IP core in the design. The Platform Designer (Standard), a system integration tool available in Quartus®, integrates customized IP components into the FPGA design. The Platform Designer improves device performance with shorter design cycles and enables design reuse in a GUI.
Following design entry, the Assignment Editor and Pin Planner interfaces help constrain the design. The Assignment Editor permits the specification of different options and settings in the design logic and attempts to match the resource assignments with the available resources. Individual and group pin assignments are made using the Pin Planner. The design fitting (compiler) stage includes analysis and synthesis, fitter (place and route), assembler, timing analysis, and the EDA Netlist Writer. The analysis and synthesis stage checks the design source files for errors, builds the database, synthesizes and optimizes the design, and maps the design logic to the device resources. Fitting the design logic into fewer resources is the principal design challenge in an FPGA implementation. The fitter stage includes placement and routing: placement determines the optimum location for the design logic, and routing connects the nets between the logic. Quartus II provides powerful tools to view and analyze design and synthesis results through the RTL Viewer, State Machine Viewer, and Technology Map Viewer [
51]. In addition, the Chip Planner displays the utilization of the design resources. These design analysis tools can be used throughout the design, debug, and optimization stages [
51].
Synchronization and timing analysis are crucial for a successful FPGA design. The clock is the synchronizing signal, and timing analysis checks for timing violations relative to the clock. Quartus II is equipped with the Timing Analyzer tool to analyze the critical paths in the design and view violations using the industry-standard SDC (Synopsys Design Constraints) format. An .sdc file is created, which specifies the constraints and validates the timing performance of the design logic. After the compilation stage, design validation and timing simulation are performed using ModelSim. A test bench is created to specify the parameters of all the signals in the design. This tool helps test and understand the operation of the design logic.
In the configuration stage, the design is loaded into the FPGA using the Programmer, which prompts the user to specify the correct JTAG device (USB Blaster). The SRAM Object File (.sof) is transferred into the FPGA using the USB Blaster. The debugging tools provide concurrent verification of the design at system speed. The signals are routed to the debugging logic, and the debugging tools utilize a combination of logic, memory, and routing resources [
52]. The Signal Tap logic analyzer is used for design verification of the algorithm in hardware. The signals are routed to the Quartus environment through a JTAG interface for analysis. The performance of the ADC in the MAX® 10 device is analyzed using the ADC Toolkit available in System Console.
4.1. Complete Hardware Architecture of the FS-PDCC Controlled Three-Phase VSI
Figure 7 depicts the complete hardware architecture of the predictive current-controlled two-level three-phase VSI. The experimental setup consists of a three-phase voltage source inverter with RL load. It includes an ACS722 current sensor for sensing the load current and an ACPL-C870 voltage sensor for sensing the DC input voltage to the inverter. An Intel® MAX® 10 (10M08SAE144C8G) FPGA containing 8 K logic elements with a 50 MHz clock oscillator was used as the target device for the control implementation. The architecture consists of six functional blocks: the ADC IP core, the dual-port RAM IP core, the reference current generation block, the ABC to αβ conversion block, the prediction and optimization block, and the switching state generation block. The data flow between these blocks is controlled via a global control unit (FSM).
The finite state machine (FSM) controls the transition among a limited number of internal states, as determined by the current state and the external inputs [
53]. It is triggered via a start signal and controls all the modules over a sampling period Ts. A Start_ADC signal starts the ADC conversion, and the FSM waits for the End_ADC signal. Once the conversion is over, the prediction and optimization block is activated via Start_Pred, and the End_Pred signal indicates that the process is complete. The End signal marks process completion, and the optimized switching states are applied to the inverter. The system clock is 50 MHz (clock period 20 ns), and the controller performance is greatly influenced by the computation time. A signal latency is considered in each block. The finite state machine of the control algorithm is shown in
Figure 8.
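The handshake described above can be mirrored in software for clarity. This Python next-state function is a behavioral sketch of the global control unit; the state names follow the text, but the encoding is ours, not the Verilog FSM itself:

```python
# Behavioral model of the global control unit's next-state logic
IDLE, ADC, PREDICT, APPLY = range(4)

def fsm_next(state, start, end_adc, end_pred):
    """One transition of the FSM, driven by the start/end handshake signals."""
    if state == IDLE and start:
        return ADC        # assert Start_ADC, wait for End_ADC
    if state == ADC and end_adc:
        return PREDICT    # assert Start_Pred, wait for End_Pred
    if state == PREDICT and end_pred:
        return APPLY      # latch the optimal switching state
    if state == APPLY:
        return IDLE       # apply outputs, wait for the next sampling instant
    return state          # otherwise hold the current state
```

In hardware, each branch corresponds to one FSM state transition per clock edge, with the start/end signals generated by the respective modules.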
4.2. ADC IP Core Implementation
The ADC IP core converts the analog voltage and current sensor outputs to digital data for predictive control implementation in the FPGA. It consists of hard IP blocks and the soft Modular ADC Core Intel® FPGA IP for logic implementation. The ADC in the target FPGA is a 12-bit SAR ADC with a sampling rate of 1 MSPS. It has one dedicated analog input channel and eight dual-function channels with an input voltage range of 0–2.5 V. The voltage range can be extended to 3–3.3 V using the ADC prescaler function, and the maximum measurable voltage is full scale minus 1 LSB. The Quartus® software includes the Modular ADC Core Intel® FPGA IP to create, configure, and compile the ADC design. The Modular ADC Core Intel® FPGA IP is a soft controller that instantiates the on-chip ADC hard IP blocks, and a PLL provides a 50 MHz input clock to the ADC. Each ADC block can use internal or external voltage references [
54].
The ADC IP core design is performed using the Platform Designer interface in Quartus II. The Modular ADC Core Intel® FPGA IP controls the hard IP block in the ADC. Using the parameter editor in the Modular ADC Core Intel® FPGA IP, the ADC clock, sampling rate, analog channel selection, and channel sequencing are configured. The Modular ADC Core IP has four configuration alternatives for various ADC applications [
54]. Among them, the Standard Sequencer with External Sample Storage configuration is utilized in the proposed system. In this configuration, the ADC design exports the conversion data to the FPGA core for post-processing, and the ADC Toolkit monitors and analyzes the ADC data while the design is running. These data can be accessed through the System Console debugging tool, and the analog performance of each channel is verified [
55].
4.3. Dual-Port RAM IP Configuration
The 12-bit output data from the ADC block are provided to an internal memory block. Several IP cores are available in the Quartus environment to implement various memory modes. The proposed system uses a simple dual-port RAM IP core to perform simultaneous read and write operations. The simple dual-port RAM selected from the IP Catalog is customized using the parameter editor. The number of ports, memory size, input data bus width, memory block type, clocking method, and output file generation options are specified in the parameter editor, and a .qsys file is generated representing the IP core [
55]. The simple dual-port RAM IP core output is analyzed using the system debugging tool, the Signal Tap logic analyzer. The tool utilizes on-chip memory for functional verification of the design. The test nodes are sampled and displayed in the Quartus environment for analysis. The required resources can be estimated using the logic analyzer interface before it is compiled into the design. An instance is added in the signal configuration window with the sample depth and RAM type specified. The memory buffers the data and communicates it to the analyzer interface [
55].
4.4. Reference Current Generation Block
The three-phase sinusoidal reference currents are generated using single-port ROM memory IP cores. The IP Catalog provides the ROM:1-PORT IP core, and the parameters are customized using the parameter editor. It has one port for read-only operations, and the initial content of the memory is specified using a .mif file. Three 1-PORT ROM IP cores are generated, one for each of the three-phase reference currents iA*, iB*, and iC*, and instantiated in the top module. The ADC outputs 12-bit data for the actual load current, and the reference generator block likewise generates 4096 samples of sinusoidal reference output.
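The ROM contents for such a block are typically precomputed offline. The sketch below generates one period of a 12-bit, 4096-sample sine table offset to mid-scale, the kind of data a .mif initialization file would hold (the generator itself is our illustration, not the paper’s tooling):

```python
import math

def sine_rom(samples=4096, bits=12):
    """One period of a full-scale sine, offset to mid-scale unsigned integers."""
    full = (1 << bits) - 1       # 4095 for a 12-bit word
    mid = full / 2.0             # mid-scale offset so all samples are unsigned
    return [int(round(mid * (1.0 + math.sin(2 * math.pi * n / samples))))
            for n in range(samples)]
```

The three phase tables differ only by a 120° index offset, i.e. by starting the read address 4096/3 samples apart.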
4.5. ABC to αβ Conversion Block
The ABC to αβ conversion, also known as Clarke’s transformation, transforms balanced three-phase quantities into two-axis orthogonal reference quantities, simplifying the analysis of three-phase circuits. The transformation is given in Section 2. The conversion is coded in Verilog in fixed-point format. Both the reference and actual three-phase currents are converted to the αβ reference frame. The Nios II Floating-Point Hardware 2 (FPH2) component can be utilized for floating-point operations in MAX 10; however, only single-precision VHDL coding is supported in FPH2 [
56].
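Since the block works in fixed point, the transformation reduces to integer multiply-and-shift operations. The Python sketch below mimics that style with Q1.14 constants; the exact scaling and word lengths of the paper’s Verilog module are not given, so these values are illustrative:

```python
# Clarke transform with integer arithmetic, Q1.14 scaled constants (illustrative)
K_HALF = 1 << 13                 # 0.5       in Q1.14
K_SQRT3_2 = 14189                # sqrt(3)/2 in Q1.14
K_TWO_THIRD = 10923              # 2/3       in Q1.14

def clarke_fixed(i_a, i_b, i_c):
    """ABC to alpha-beta on integer samples using multiply-and-shift only."""
    alpha = (K_TWO_THIRD * (i_a - ((K_HALF * (i_b + i_c)) >> 14))) >> 14
    beta = (K_TWO_THIRD * ((K_SQRT3_2 * (i_b - i_c)) >> 14)) >> 14
    return alpha, beta
```

Each `>> 14` rescales a product back to the input word width, which in hardware maps to wiring rather than logic; the small truncation error it introduces is the usual fixed-point trade-off.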
4.6. Prediction and Optimization Block
The prediction and optimization block is the most computationally intensive stage during the implementation. A well-designed architecture with optimized use of resources (multipliers, multiplexers, adders, static RAM) and appropriate calculation time will reduce the computational burden of the system. For long prediction horizons (N > 1), the computational burden is addressed by a branch-and-bound algorithm [
57]. However, for a short prediction horizon (N = 1), the optimization problem can be solved via an exhaustive search enumerating all the possible switching states of the inverter [
58]. Resource sharing and streaming is another approach to ease the computational burden of the system. In the proposed system, only a short prediction horizon is evaluated using the exhaustive search approach, for which the parallel and pipelined architecture of the FPGA is well suited, forming the exhaustive search core. By utilizing the concurrent nature of the FPGA, the prediction of future states and the cost function calculation are performed in parallel, and the resources can be decoupled. The cost function evaluation is then pipelined, and cost function values are obtained for the different inputs. The search for the minimum cost is inherently sequential, since the running minimum is updated at every input. The optimum switching state corresponding to the minimum cost function is then applied to the inverter. This operation is performed sequentially with proper scheduling of the calculation core [
48]. The data flow among different modules is controlled by FSM shown in
Figure 8. The measurement, prediction, and switching state generation must be complete in the same sampling interval.
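The N = 1 exhaustive search described above can be sketched in C as follows (floating-point for readability; the actual FPGA design is fixed-point Verilog). The DC-link voltage, RL load parameters, sampling time, and the absolute-error cost function are illustrative assumptions; the paper’s exact parameter values and cost terms may differ.

```c
#include <math.h>

/* Illustrative parameters (assumed, not taken from the paper's Table 1). */
#define VDC 30.0    /* DC-link voltage [V]   */
#define RL  10.0    /* load resistance [ohm] */
#define LL  10e-3   /* load inductance [H]   */
#define TS  20e-6   /* sampling time [s]     */

/* alpha-beta voltage vector produced by switching state (Sa, Sb, Sc). */
static void inverter_voltage(int sa, int sb, int sc,
                             double *va, double *vb)
{
    *va = VDC * (2.0 * sa - sb - sc) / 3.0;
    *vb = VDC * (sb - sc) / sqrt(3.0);
}

/* One-step forward-Euler prediction of the RL load current and
   exhaustive search over all 8 switching states (N = 1 horizon).
   Returns the optimum state as a 3-bit code (Sa<<2 | Sb<<1 | Sc).   */
static int best_state(double ia, double ib,        /* measured i(k)  */
                      double ia_ref, double ib_ref)
{
    int best = 0;
    double gmin = 1e30;
    for (int s = 0; s < 8; s++) {
        int sa = (s >> 2) & 1, sb = (s >> 1) & 1, sc = s & 1;
        double va, vb;
        inverter_voltage(sa, sb, sc, &va, &vb);
        /* i(k+1) = (1 - Ts*R/L) * i(k) + (Ts/L) * v(k) */
        double ia_p = (1.0 - TS * RL / LL) * ia + (TS / LL) * va;
        double ib_p = (1.0 - TS * RL / LL) * ib + (TS / LL) * vb;
        /* cost: absolute tracking error in both axes */
        double g = fabs(ia_ref - ia_p) + fabs(ib_ref - ib_p);
        if (g < gmin) { gmin = g; best = s; }
    }
    return best;
}
```

On the FPGA, the eight predictions and cost evaluations inside this loop are the quantities computed in parallel and pipelined, while the final minimum selection is the sequential step noted above.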
5. Simulation Results of the FS-PDCC
The simulation of the FS-PDCC is performed in MATLAB/Simulink to verify the performance of the control algorithm. An output power of 200 W is considered for the design. The LC filter parameters are designed based on the load current rating and the cut-off frequency. The RL load parameters are fixed based on the three-phase VSI design. A DC input voltage of 30 V is used for the analysis. The parameters of the experimental setup are also used for the simulation analysis. The simulation results for a sampling time of
Ts = 20 μs are presented here. The simulation parameters are given in
Table 1.
Figure 9 shows the switching pulses generated by the proposed FS-PDCC technique to drive the MOSFETs. The optimum switching state is applied to the upper and lower switches of the three-phase inverter bridge circuit in a complementary manner. The three-phase output (phase-to-neutral) voltages of the proposed system are shown in
Figure 10.
Figure 11 shows the load current waveform of the FS-PDCC-controlled two-level three-phase VSI for a sampling time of
Ts = 20 μs.
Figure 12 shows the phase “A” load current waveform, with the actual current iA_act tracking the reference iA_ref for a sampling period of
Ts = 20 μs. The dynamic response of the load current under transient conditions is given in
Figure 13. A step-change in reference is provided, and the load current immediately tracks the reference value.
The FPGA enables control implementation at a very high sampling rate. The FPGA under consideration has a clock frequency of 50 MHz (a 20 ns clock period). Large sampling intervals produce high output ripple and, hence, poor output quality.
Figure 14 shows the load current waveforms for different sampling intervals. The waveform quality is much lower for a sampling interval of
Ts = 50 μs, as shown in
Figure 14a. For a small sampling interval of Ts = 5 μs, the load current exhibits better output quality with reduced ripple, as shown in Figure 14b. However, this also results in a high switching frequency, which can be addressed by adding a corresponding constraint to the cost function.
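Such a constraint is typically added as a commutation penalty term in the cost function. The sketch below shows one common form; the weighting factor lambda and the 3-bit state encoding are illustrative assumptions, not values from the paper.

```c
#include <math.h>

/* Augmented cost: current-tracking error plus a commutation penalty.
   lambda (assumed weighting factor) trades output ripple against
   switching frequency; s and s_prev are 3-bit switching states.     */
static double cost_with_penalty(double err_a, double err_b,
                                int s, int s_prev, double lambda)
{
    int d = s ^ s_prev;                              /* toggled legs */
    int n_comm = (d & 1) + ((d >> 1) & 1) + ((d >> 2) & 1);
    return fabs(err_a) + fabs(err_b) + lambda * n_comm;
}
```

Increasing lambda discourages states that commutate many inverter legs at once, lowering the average switching frequency at the cost of somewhat higher tracking error.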
Figure 15 depicts variation in load current THD (ITHD) for different sampling intervals, and it can be observed that THD is much lower for small sampling intervals but at the expense of more resources.
The resource utilization of the predictive control algorithm is another factor impeding the use of very small sampling intervals in the range of nanoseconds. At such rates, so many resources are consumed that the design will not fit the target device. The cost/performance trade-off is one of the primary objectives for the complex predictive current control discussed in this article. The Intel MAX 10 device is highly cost-effective compared with conventional counterparts from Xilinx. Hence, a sampling time of
Ts = 20 μs is used for the experimental analysis, and its load current spectrum is given in
Figure 16a.
Figure 16b shows the load current spectrum for
Ts = 5 μs.
Comparison with Conventional Control Techniques
Control strategies for power electronic converters include linear and non-linear techniques, among which sinusoidal pulse width modulation (SPWM) and hysteresis current control are the most prominent. Since the FS-PDCC is also a non-linear control technique, it is compared with hysteresis current control in MATLAB/Simulink to validate the predictive current control.
Figure 17a shows the load current waveform of a hysteresis-current-controlled three-phase VSI. It exhibits higher distortion than the load current waveform of the FS-PDCC-controlled three-phase VSI (Ts = 20 μs) in
Figure 17b. The comparison chart representing THD vs. sampling rate is given in
Figure 18, where it can be seen that the FS-PDCC achieves better output quality than hysteresis current control.
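For reference, the hysteresis current controller used in this comparison reduces, per phase, to a simple bang-bang comparator. The sketch below is a minimal illustration; the band width and per-phase structure are generic assumptions rather than the exact Simulink model used in the paper.

```c
/* Per-phase hysteresis comparator (illustrative): the upper switch
   turns on when the current falls below ref - band, turns off when
   it exceeds ref + band, and otherwise holds its previous state.   */
static int hysteresis_step(double i_meas, double i_ref,
                           double band, int s_prev)
{
    if (i_meas < i_ref - band) return 1;  /* push current up   */
    if (i_meas > i_ref + band) return 0;  /* pull current down */
    return s_prev;                        /* inside band: hold */
}
```

Unlike the FS-PDCC, this controller has no model of the load and no cost function, which is why its switching instants are irregular and its current distortion is higher, as Figure 17 and Figure 18 show.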