A Synchronous Distributed Control and Communication Network for High-Frequency
SiC-Based Modular Power Converters
Yu Rong
Thesis submitted to the faculty of the
Virginia Polytechnic Institute and State University
in partial fulfillment of the requirements for the degree of
Master of Science
in
Electrical Engineering
Jun Wang, Chair
Dushan Boroyevich, Co-Chair
Rolando Burgos
November 21st, 2019
Blacksburg, Virginia
Keywords: communication network, distributed control, synchronization, low latency,
modular power converters, SiC MOSFET
A Synchronous Distributed Control and Communication Network for High-Frequency
SiC-Based Modular Power Converters
Yu Rong
(ABSTRACT)
Numerous power electronics building blocks (PEBB) based power conversion systems
have been developed to explore modular design, scalable voltage and current ratings, low-
cost operations, etc. This thesis further extends the modular concept from the power stage
to the control system. The communication network in SiC-based modular power convert-
ers is becoming significant for distributed control architecture, with the requirements of
tight synchronization and low latency. The influence of the synchronization accuracy on
harmonics under the phase-shifted carrier pulse width modulation (PSC-PWM) is evalu-
ated. When the synchronization is accurate, the influence of an increase in harmonics can
be ignored. Thus, a synchronous distributed control and communication protocol with well-
performed synchronization of 25 ns accuracy is proposed and verified for a 120 kHz SiC-based
impedance measurement unit (IMU) with cascaded H-bridge PEBBs. An improved synchro-
nization method with additional analog circuits is further implemented and verified with
sub-ns synchronization accuracy.
A Synchronous Distributed Control and Communication Network for High-Frequency
SiC-Based Modular Power Converters
Yu Rong
(GENERAL AUDIENCE ABSTRACT)
The power electronics building block (PEBB) concept is proposed for medium-voltage
converter applications in order to realize the modular design of the power stage. Tradition-
ally, the central control architecture is popular in converter systems. The voltage and current
are sensed and then processed in one central controller. The control hardware interfaces and
software have to be customized for a specified number of power cells, and the scalability of
the controller is lost. Instead, in the distributed control architecture, a local controller in each
PEBB can communicate with the sensors, gate drivers, etc. A high-level controller collects
the information from each PEBB and conducts the control algorithm. In this way, the design
can be more modular, and the local controller can share the computation burden with the
high-level controller, which is good for scalability.
In such distributed control architecture, a synchronous communication system is required
to transmit data and command between the high-level controller and local controllers. A
power converter always requires a highly synchronized operation to turn on or turn off the
devices. In this work, a synchronous communication protocol is proposed and experimentally
validated on a SiC-based modular power converter.
Acknowledgments
First and foremost, I would like to express my deepest gratitude to my advisor, Dr. Jun
Wang, for his continuous support and guidance of my master’s study and research. He
helped me transition from an undergraduate student to a graduate student, and fostered my
abilities of logical thinking, brainstorming, exploring, and presenting. I cannot imagine a
better advisor and mentor, and I am grateful for his patience, motivation, enthusiasm, and
immense knowledge.
I would like to state my sincere appreciation to my co-advisor Dr. Dushan Boroyevich.
He has guided and encouraged me greatly, stressing the importance of being professional
and doing the right thing. His new ideas and thoughts have deeply inspired me. I would
also like to give a great amount of thanks to my committee member Dr. Rolando Burgos.
His insightful comments and suggestions have helped me immensely throughout my master’s
degree program in CPES.
I would like to give a special thanks to Dr. Zhiyu Shen for his extremely thorough
guidance. When problems arose, discussions with him were illuminating, and I could always
get feasible suggestions. I would not have been able to achieve any of my accomplishments
without his persistent help.
I am grateful to the following staff at CPES, both past and present: Ms. Marianne
Hawthorne, Ms. Linda Long, Ms. Lauren Shutt, Ms. Trish Rose, Ms. Na Ren, Ms. Yan
Sun, Ms. Audri Cunningham, Ms. Teresa Shaw, Mr. Dennis Grove, Mr. David Gilham, and
Mr. Matthew Scanland.
I would like to thank my other peers at CPES who helped me along the way: Dr. Sizhan
Zhou, Dr. Bo Wen, Dr. Boran Fan, Dr. Igor Cvetkovic, Ms. Qian Li, Mr. Yue Xu, Mr.
Jianghui Yu, Ms. Ye Tang, Ms. Jiewen Hu, Mr. Cong Tu, Ms. Tianyu Zhao, Mr. Bo Li, Ms.
Emma Raszmann, Ms. Grace Watt, Mr. Shuo Wang, Mr. Feiyang Zhu, Mr. Yinsong Cai,
Mr. Keyao Sun, Ms. Le Wang, Mr. Slavko Mocevic, Mr. Joseph Kozak, Mr. He Song, Ms.
Ning Yan, Mr. Xiang Lin, Mr. Joshua Stewart, Mr. Tam Nguyen, Mr. Vladimir Mitrovic,
and countless other students, visiting scholars, and faculty.
I am extremely grateful to my parents for their endless love, support, and encouragement.
Thank you for giving me the strength to follow my heart and chase my dreams. I would also
like to thank my boyfriend for being with me. I am grateful to have him by my side for his
love, understanding, and encouragement.
Contents
List of Figures vii
List of Tables x
1 Introduction 1
1.1 Research Motivations and Challenges . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Motivations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2 Communication Network for Modular Power Converters 5
2.1 Requirements of Communication for Modular Power Converters . . . . . . . 5
2.1.1 Transmission Medium . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.2 Communication Topology . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.3 Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1.4 Communication Bandwidth . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Review of Synchronization Methods . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.1 Power Electronics System Network (PESNet) . . . . . . . . . . . . . 9
2.2.2 Time-Stamping-Based Synchronization (TSBS) . . . . . . . . . . . . 11
2.2.3 Ethernet for Control Automation Technology (EtherCAT) . . . . . . 13
2.2.4 White Rabbit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3 Review of Communication Protocols Used in Power Converters . . . . . . . . 16
3 Proposed Communication Protocol for Power Converter 18
3.1 Protocol Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.2 Synchronization Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2.1 Synchronization Based on Precision Time Protocol (PTP) . . . . . . 21
3.2.2 Syntonization Based on Oversampling Clock and Data Recovery (CDR) 23
3.3 Protocol Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.3.1 Physical Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.3.2 Data Link Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.3.3 Network Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.3.4 Application Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.4 Protocol Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4 Experiment Verification 36
4.1 Principles of the Converter . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.2 Distributed Control Scheme and Communication Network . . . . . . . . . . 36
4.3 Measurement Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.3.1 Synchronization Performance . . . . . . . . . . . . . . . . . . . . . . 41
4.3.2 Closed-Loop Control Results . . . . . . . . . . . . . . . . . . . . . . 44
4.4 Improved Synchronization Performance with Additional Analog Circuits . . 53
4.5 The Influence of Synchronization Accuracy on Harmonics under PSC-PWM 56
5 Summary and Future Work 60
5.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
References 62
List of Figures
1.1 Conceptual power electronics systems reference model. . . . . . . . . . . . . 2
1.2 Distributed control and communication network. . . . . . . . . . . . . . . . 3
2.1 Central control network. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Synchronization method in PESNet 1.2. . . . . . . . . . . . . . . . . . . . . 10
2.3 Synchronization method in PESNet 2.2. . . . . . . . . . . . . . . . . . . . . 11
2.4 Propagation delay measurement in time-stamping-based synchronization. . . 12
2.5 Propagation delay measurement in EtherCAT. . . . . . . . . . . . . . . . . . 14
2.6 Propagation delay measurement in White Rabbit. . . . . . . . . . . . . . . . 16
3.1 Timing diagram of EtherCAT protocol. . . . . . . . . . . . . . . . . . . . . . 19
3.2 Timing diagram of PESNet 3.0 protocol. . . . . . . . . . . . . . . . . . . . . 20
3.3 Principle of PTP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.4 Synchronization based on PTP with syntonization. . . . . . . . . . . . . . . 24
3.5 Synchronization based on PTP without syntonization. . . . . . . . . . . . . 24
3.6 Sampling counter syntonization. . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.7 Line-topology communication network for four nodes. . . . . . . . . . . . . . 30
3.8 Time stamping state simulation. . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.9 Offset calculation state simulation. . . . . . . . . . . . . . . . . . . . . . . . 32
3.10 Communication implementation for interleaved PWM control. . . . . . . . . 33
3.11 Simulation for normal communication. . . . . . . . . . . . . . . . . . . . . . 34
3.12 Clock domain crossing interface of master. . . . . . . . . . . . . . . . . . . . 35
3.13 Clock domain crossing interface of slave. . . . . . . . . . . . . . . . . . . . . 35
4.1 PIU for shunt current injection. . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.2 PIU for series current injection. . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.3 Control block diagram of shunt injection mode. . . . . . . . . . . . . . . . . 39
4.4 Control block diagram of series injection mode. . . . . . . . . . . . . . . . . 40
4.5 Phase-shifted carrier pulse width modulation. . . . . . . . . . . . . . . . . . 40
4.6 PIU hardware setup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.7 Synchronization among four nodes. . . . . . . . . . . . . . . . . . . . . . . . 43
4.8 Synchronization performance among four nodes. . . . . . . . . . . . . . . . . 43
4.9 Interleaved PWM generation for three slaves. . . . . . . . . . . . . . . . . . 44
4.10 Distributed control and communication network for shunt injection mode. . 45
4.11 Closed-loop control enabled in shunt injection mode. . . . . . . . . . . . . . 46
4.12 Capacitor voltage balancing in shunt injection mode. . . . . . . . . . . . . . 47
4.13 200 Hz shunt current injection. . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.14 1 kHz shunt current injection. . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.15 Distributed control and communication network for series injection mode. . . 48
4.16 Closed-loop control enabled in series injection mode. . . . . . . . . . . . . . 49
4.17 Inductor current sharing in series injection mode. . . . . . . . . . . . . . . . 49
4.18 500 Hz series voltage injection. . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.19 1 kHz series voltage injection. . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.20 Fault response when a fault is detected in slave 1. . . . . . . . . . . . . . . . 51
4.21 Fault response when a fault is detected in slave 2. . . . . . . . . . . . . . . . 51
4.22 Fault response when a fault is detected in slave 3. . . . . . . . . . . . . . . . 52
4.23 Comparison between two syntonization methods. . . . . . . . . . . . . . . . 53
4.24 Simplified synchronization principle with additional analog circuits. . . . . . 54
4.25 Improved synchronization among three nodes. . . . . . . . . . . . . . . . . . 55
4.26 Improved synchronization performance among three nodes. . . . . . . . . . . 55
4.27 Single-phase 7-level cascaded H-bridge converter simulation. . . . . . . . . . 56
4.28 Simulation result of FFT with ideal synchronization. . . . . . . . . . . . . . 57
4.29 Simulation result of FFT with synchronization error from different carrier
frequencies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.30 Simulation result of FFT with synchronization error from the delay of carrier. 58
List of Tables
3.1 Packet from Master to Slave . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 Broadcast Section of the Packet from Master to Slave . . . . . . . . . . . . . 27
3.3 Complete Non-Periodic Message . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.4 Periodic Section of the Packet from Master to Slave . . . . . . . . . . . . . . 28
3.5 Packet from Slave to Master . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.6 Periodic Section of the Packet from Slave to Master . . . . . . . . . . . . . . 29
Chapter 1
Introduction
1.1 Research Motivations and Challenges
1.1.1 Motivations
Power electronics technology is commonly used in modern power systems for high-efficiency,
high-density and high-quality electrical power conversion. In order to reduce the cost and im-
prove the reliability, the power electronics building block (PEBB) concept has been proposed
and applied in various power converters [1].
A standardized hierarchical power electronics systems reference model is shown in Fig. 1.1
based on the open systems interconnection (OSI) seven-layer reference model [2]. In medium-
voltage (MV) applications such as a shipboard power conversion system, connecting more
PEBBs in the system makes the control a complex task.
In general, the central control system and the distributed control system are two typical
architectures used for power conversion controls [3]. The central control system has only one
controller for all of the control tasks, which requires very powerful computation capability
and a large number of interfaces [4], [5]. With the increased number of PEBBs, the cable
system will be very complicated. In addition, due to the uncontrollable latency between
the central controller and each PEBB, a large current ripple may appear as the switching
frequency increases, which is unacceptable for SiC applications. For distributed
control, each PEBB owns a less powerful individual controller that communicates with a
central controller. The PEBB controller can share the computational burden with the central
controller, which increases the scalability and modularity of the power system [6], [7], [8].
However, it also brings the synchronization requirement to all of the controllers.
Figure 1.1: Conceptual power electronics systems reference model.
1.1.2 Challenges
In order to realize the distributed control, a communication network, as shown in Fig. 1.2,
should be established. In the distributed control architecture, there is a local controller in
each PEBB, generating gate signals and collecting sensing information. For the system
operation, all of the controllers will be connected with a certain topology, such as star, ring,
bus, tree, mesh, etc., exchanging the sensing information to realize the control algorithm. A
suitable communication topology needs to be chosen, considering the number of the nodes,
the maximum latency of the network, and the control algorithm.
Another challenging issue in the distributed control and communication network is the
synchronization among all of the controllers. In addition to the reliable and correct data
transmission in each communication cycle, it is necessary to ensure the synchronous opera-
2
Chapter 1 1.2. Thesis Outline
tion of the devices, otherwise the converter’s operation will not follow the designed control
algorithm.
Figure 1.2: Distributed control and communication network.
1.2 Thesis Outline
The outline of the thesis is given as follows:
Chapter 2 discusses some requirements for the communication network in modular power
converters, including the transmission medium, communication network topology, synchro-
nization, and communication bandwidth. Several synchronization methods used in different
applications are reviewed and compared. Additionally, several communication protocols
used in power converters are analyzed and compared.
Chapter 3 proposes a communication protocol for modular power converters. The syn-
chronization is realized based on the precision time protocol (PTP) and oversampling clock
and data recovery (CDR). The protocol is presented in detail, including the physical layer,
data link layer, network layer, and application layer.
Chapter 4 describes the hardware setup for a modular power converter. The distributed
control and communication network is implemented. The measurement results verify the
proposed synchronization method and communication protocol from Chapter 3. Finally, the
influence of the synchronization accuracy on the converter is evaluated.
Chapter 5 presents a summary and future work.
Chapter 2
Communication Network for Modular Power
Converters
2.1 Requirements of Communication for Modular Power Converters
The distributed control and communication network comprises a central controller and
multiple PEBB controllers. They exchange the information of PWM references, commands,
fault conditions, voltage, current, temperature, etc., and must complete the process within
a time constrained by the control period. For the SiC-based modular converters, typically
switching at 10 kHz–20 kHz with interleaved carriers, the control period is limited to tens of
microseconds [9], [10]. Accordingly, the communication protocol for such applications must
be designed with fast communication speed and low latency. In addition, high-accuracy and
low-jitter synchronization among the controllers is a key requirement due to unequal latency
between any two of them.
The requirements of the communication network [11] for modular power converters are
herein discussed, with respect to transmission medium, communication topology, synchro-
nization, and communication bandwidth.
2.1.1 Transmission Medium
Due to the fast-switching SiC power devices (high dv/dt and di/dt) used in medium-voltage
power converters, optical fiber is the preferred transmission medium, owing to its superior
immunity to electromagnetic interference.
2.1.2 Communication Topology
For the distributed communication network, the communication network topology should
be chosen and designed. Several basic communication topologies are introduced below:
1) Star Topology
In a star topology, each PEBB controller is connected to a central controller. All PEBB
controllers send local information directly to the central controller. The central controller
collects all of the information and generates PWM references for all PEBBs [12]. The main
advantage of the star topology is its simple communication protocol. The primary
disadvantage is its poor scalability and modularity.
Adding more PEBBs to the system requires more interfaces on the central controller, which
is limited by the available I/Os on the central controller.
2) Bus Topology
In a bus topology, each node (the PEBB controller or the central controller) is connected
by the interface connector to a commonly shared central cable, the bus. All data in the
network is transmitted over this bus and is able to be received by all nodes in the network
simultaneously. It requires a complex communication protocol and special “T” connectors
for optical fibers [13].
3) Ring Topology
In a ring topology, each node requires only one transmitter and one receiver, which is a
modular design [14]. However, when the PEBB controller sends data to the central controller,
the data passes through each intermediate node on the ring until it reaches the central
controller, which leads to a long latency and limits the control bandwidth [15].
4) Tree Topology
A tree topology is a collection of star networks arranged in a hierarchy. The functionality
of the central controller can be assigned to some other nodes, and the communication protocol
should be designed to deal with the synchronization and packet routing.
2.1.3 Synchronization
For the distributed control and communication network, PWM signals are modulated in
each PEBB controller on its own FPGA. It is necessary to ensure the synchronous operation
to follow the designed modulation scheme. The required synchronization accuracy can be
set to be smaller than the time resolution of the PWM references used in each PEBB. The
time resolution is defined in (2.1):

∆t_resolution = T_sw / (2^n_PWM_bit − 1)    (2.1)

where T_sw is the switching cycle, and n_PWM_bit is the bit number of the PWM reference
data.
For example, if the switching frequency is 10 kHz, and the PWM reference is 12-bit data,
the desired synchronization accuracy should be better than 1/10 kHz/(2^12 − 1) = 24.4 ns.
For a system with a higher switching frequency or a special control algorithm like integrated
capacitor-blocked transistor (ICBT) control [16], the synchronization accuracy should be
even better.
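The bound in (2.1) and the numeric example above can be checked with a short script; the function name and parameters are illustrative, not part of the thesis.

```python
def pwm_time_resolution(f_sw_hz: float, n_pwm_bits: int) -> float:
    """Time resolution of an n-bit PWM reference at switching frequency f_sw, per Eq. (2.1)."""
    t_sw = 1.0 / f_sw_hz                 # switching cycle T_sw
    return t_sw / (2 ** n_pwm_bits - 1)  # Eq. (2.1)

# Example from the text: 10 kHz switching, 12-bit PWM reference
res = pwm_time_resolution(10e3, 12)
print(f"required sync accuracy < {res * 1e9:.1f} ns")  # ≈ 24.4 ns
```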
2.1.4 Communication Bandwidth
The required communication bandwidth (BW) for power converters can be estimated as
follows:
BW = n_PEBB × n_bit/PEBB × f_sw × (1 + k_oh)    (2.2)

where n_PEBB is the number of PEBBs in the system; n_bit/PEBB is the bit number per
PEBB; f_sw is the switching frequency; and k_oh is the overhead percentage of the packet.
For each PEBB, only short messages are transmitted: at least one dc capacitor voltage
and one ac current. The resolution of both is usually between 12 and 16 bits. Moreover, the
PEBB temperature (e.g., 8 bits) is also necessary to transmit to the central controller for
monitoring the thermal performance. There should also be some bits (e.g., 8 bits) indicating
the state of the PEBB, such as fault conditions, etc. In total, each PEBB needs about 48
bits. The central controller sends PWM references to each PEBB, and the resolution is
usually between 12 and 16 bits. Additionally, for the system operation, the central controller
needs some bits (e.g., 8 bits) to send different commands, such as address distribution,
start-up synchronization, normal operation, etc. Taking a 100-PEBB converter system as an
example, assuming the switching frequency is 100 kHz and the overhead percentage is 50%,
the communication bandwidth should be larger than 100 × 48 × 100 kHz × (1 + 50%) = 720 Mbit/s.
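Equation (2.2) and the 100-PEBB example can likewise be sketched; the function name is illustrative.

```python
def required_bandwidth(n_pebb, n_bit_per_pebb, f_sw_hz, k_oh):
    """Minimum communication bandwidth per Eq. (2.2), in bit/s."""
    return n_pebb * n_bit_per_pebb * f_sw_hz * (1 + k_oh)

# Example from the text: 100 PEBBs, 48 bits each, 100 kHz switching, 50% overhead
bw = required_bandwidth(100, 48, 100e3, 0.5)
print(f"BW > {bw / 1e6:.0f} Mbit/s")  # 720 Mbit/s
```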
2.2 Review of Synchronization Methods
The central control architecture with a star topology network is shown in Fig. 2.1. If the
length of all of the optical fiber cables is the same, and all of the PEBB nodes are identical
in terms of the signal transmission from the central controller, the synchronization among
all of the PEBBs is easy to realize. However, in the distributed control and communication
network shown in Fig. 1.2, with certain topologies other than the star topology, the latency
from the central node to different PEBB nodes is not identical, and the synchronization
is not naturally realized. In this chapter, several synchronization methods are reviewed and
analyzed.
Figure 2.1: Central control network.
2.2.1 Power Electronics System Network (PESNet)
Power electronics system network (PESNet) is the first approach for the distributed con-
trol and communication system in modular power converters. In PESNet 1.2 [13], the syn-
chronization method is to add fillers in the synchronization packet to compensate for the
propagation delay between adjacent nodes. Taking a four-node system as an example,
including one master and three slaves as shown in Fig. 2.2, a particular synchronization packet
is defined with a synchronization command followed by the address data in the same se-
quence as the physical connection. Between two adjacent address data, some filler bytes
are added to compensate for the physical delay between two adjacent nodes. The master
node sends the synchronization packet every switching cycle. Each slave node receives the
synchronization command first and waits until it receives its own address data to generate
the synchronization signal and update the new PWM reference. In order to guarantee a
good synchronization performance, the fillers need to be added to compensate for the delay
and make sure that each slave is able to receive its own address data at the same time.
In this method, 4B/5B data encoding is utilized for the communication channel, and the
shortest filler to add is 4 bits, which limits the achievable synchronization performance.
The propagation delay between nodes is measured and predetermined, and the modularity
and scalability are poor. For instance, if the cable is changed or more nodes are added to
the system, the data frame should be modified.
Figure 2.2: Synchronization method in PESNet 1.2.
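The filler-sizing idea can be illustrated with a small sketch. The 4-bit minimum filler follows from the 4B/5B encoding described above; the 125 Mbit/s line rate and the 150 ns hop delay are assumed values for illustration only, not figures from the thesis.

```python
LINE_RATE = 125e6        # bit/s on the optical link (assumption)
T_BIT = 1.0 / LINE_RATE  # 8 ns per bit at the assumed line rate
FILLER_BITS = 4          # shortest filler under 4B/5B encoding

def fillers_for_delay(hop_delay_s: float) -> tuple[int, float]:
    """Number of 4-bit fillers compensating a hop delay, and the residual error."""
    quantum = FILLER_BITS * T_BIT             # 32 ns of line time per filler
    n = round(hop_delay_s / quantum)
    residual = hop_delay_s - n * quantum      # bounded by about ±quantum/2
    return n, residual

n, err = fillers_for_delay(150e-9)            # e.g., an assumed 150 ns hop delay
print(n, f"{err * 1e9:.0f} ns")
```

The residual error illustrates why the 4-bit filler granularity bounds the achievable synchronization accuracy: delays that are not a multiple of one filler's line time cannot be fully compensated.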
An updated protocol is PESNet 2.2 [17], [18]. The principle of this synchronization
method is shown in Fig. 2.3. Instead of transmitting a synchronization packet with addresses
and fillers in the network, the local netclock concept is implemented. In each node, there is
a local time counter representing the local time. The synchronization packet is transmitted
with a predefined period, ∆t. During the synchronization process, the master node will send a
synchronization packet with its local netclock inside the packet. Each slave node will receive
the packet, adjust its local netclock to be the same as the received netclock data, and
forward the synchronization packet to the next node with the netclock data inside the packet
incrementally adjusted. Eventually, all of the nodes will be synchronized with the master
node. This netclock-based approach is more generic and convenient for handling the
synchronization issue. The synchronization accuracy of PESNet 2.2 is within 80 ns for a
three-PEBB system.
Figure 2.3: Synchronization method in PESNet 2.2.
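The netclock forwarding described above can be simulated in a few lines. The constant per-hop delay and the numeric values are assumptions for this sketch, not measurements from PESNet 2.2.

```python
HOP_DELAY = 100  # ns per hop, an assumed known constant for this sketch

def propagate_sync(master_netclock: int, n_slaves: int) -> list[int]:
    """Return the netclock value each slave adopts as the sync packet passes through."""
    clocks = []
    netclock_in_packet = master_netclock
    for _ in range(n_slaves):
        netclock_in_packet += HOP_DELAY    # packet ages by one hop delay
        clocks.append(netclock_in_packet)  # slave adopts the incrementally adjusted value
    return clocks

# Each slave's netclock matches the master's time at the instant the packet arrives there.
print(propagate_sync(1_000, 3))  # [1100, 1200, 1300]
```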
2.2.2 Time-Stamping-Based Synchronization (TSBS)
The time-stamping-based synchronization (TSBS) is proposed in [19]. The synchroniza-
tion can be realized in two steps. The first step is the propagation delay measurement shown
in Fig. 2.4. The master node initiates a packet transmission and stamps the transmission
time of the packet based on its local clock. Each slave receives and transmits the packet to
the next node, and stamps the reception and transmission time based on its local clock. Then
the slave node can calculate the pass-through time, ∆t_si, and send it forward in the packet.
Eventually, the master node will receive the packet with the pass-through time for all of the
slave nodes. The master stamps the reception time, and can calculate the total time from
the packet transmission to reception in the network, t_tot. Assuming the propagation delay,
t_avg_pro, between two adjacent nodes is identical, it can be calculated as

t_avg_pro = (t_tot − Σ_{i=1}^{n} ∆t_si) / (n + 1)    (2.3)
where n is the number of the slave nodes.
Figure 2.4: Propagation delay measurement in time-stamping-based synchronization.
The second step is the offset compensation to the reference clock. The propagation delay
t_avg_pro calculated in the first step will be sent to each slave node. Thus, each slave node is
able to calculate its delay from the master node, t_delay_si, and adjust its local time counter
to match the master’s local time counter.
The advantage of this method is that the synchronization process is decoupled from the
normal operation cycle. However, the assumption of the identical internode delay limits the
synchronization performance when the fiber cables between adjacent nodes are not the same
length.
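The delay estimate of Eq. (2.3) can be sketched directly: the master measures the total round time t_tot, subtracts each slave's pass-through time, and divides by the number of links (n slaves means n + 1 links). The numeric values below are hypothetical.

```python
def avg_propagation_delay(t_tot: float, passthrough_times: list) -> float:
    """Average per-link propagation delay per Eq. (2.3)."""
    n = len(passthrough_times)
    return (t_tot - sum(passthrough_times)) / (n + 1)

# Hypothetical numbers: 3 slaves, 100 ns per link, 500 ns pass-through inside each slave
t = avg_propagation_delay(t_tot=4 * 100e-9 + 3 * 500e-9,
                          passthrough_times=[500e-9] * 3)
print(f"{t * 1e9:.0f} ns")  # 100 ns
```

Note that when the cables differ in length, the true per-link delays deviate from this average, which is exactly the limitation pointed out above.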
2.2.3 Ethernet for Control Automation Technology (EtherCAT)
Ethernet for control automation technology (EtherCAT) implements its synchronization
mechanism, known as distributed clocks [20]. In order to implement the synchronization
method, there should be bidirectional communication links in the network, which is different
from the unidirectional links utilized in the previous methods.
The synchronization can be realized in two steps. The first step is the propagation
delay measurement shown in Fig. 2.5. The communication network is a line topology with
bidirectional communication links. During the propagation delay measurement process, the
master node will send a packet to the slave nodes on the processing path. At the last slave
node, the packet will be transmitted back from the slave nodes to the master node on the
forwarding path. During both paths, each slave stamps the reception time of the packet,
and the time duration between the two reception times can be calculated. For example, the
time durations for slave 1 and slave 2 are t_s1 and t_s2, respectively. Assuming the processing
time t_P and the forwarding time t_F are identical for all the slave nodes, and the propagation
delay t_pro between two adjacent nodes is the same on the processing path and the forwarding
path, the propagation delay of slave 1 and slave 2 from
packet reception to reception on the forwarding path can be calculated as
t_delay_1,2 = (1/2)(t_s1 − t_s2) + (1/2)(t_P − t_F)    (2.4)
Similarly, the propagation delay of any two adjacent nodes from packet reception to re-
ception on the forwarding path can be calculated. The second step is the offset compensation
to the reference clock. Each slave node is able to calculate the delay from the reference node,
such as the slave 1 node in the example, and adjusts its local time to be synchronized with
the reference clock.
It is claimed in [21] that the distributed clocks are synchronized within much less than
1 µs of each other. In the literature, the jitter of the synchronization signals is reported to
be ±20 ns [22] or 15 µs [23].
Figure 2.5: Propagation delay measurement in EtherCAT.
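Eq. (2.4) can be evaluated with a short sketch; the timestamp values below are hypothetical and only illustrate the calculation.

```python
def adjacent_link_delay(t_s1, t_s2, t_p, t_f):
    """Propagation delay between two adjacent slaves, per Eq. (2.4)."""
    return 0.5 * (t_s1 - t_s2) + 0.5 * (t_p - t_f)

# Hypothetical reception-to-reception durations (ns): slave 1 sees a longer gap
# than slave 2 because the packet travels one extra link out and back.
print(adjacent_link_delay(t_s1=1200, t_s2=1000, t_p=50, t_f=50))  # 100.0
```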
2.2.4 White Rabbit
White Rabbit was originally developed for particle accelerator facilities, and it can achieve
sub-ns synchronization [24]. As shown in Fig. 2.6, an analog phase-locked loop (PLL) can be
implemented to adjust the phase of the clock to improve the synchronization performance.
The synchronization method is based on precision time protocol (PTP). The offset between
two nodes can be calculated and compensated using four timestamps: the transmission and
reception timestamps at both nodes. However, there is still a phase difference between the two
clocks. The basic principle of the analog PLL implementation is that the phase difference
between two nodes can be measured and then compensated using a voltage-controlled crystal
oscillator.
The closed-loop PLL control in the slave node is shown in Fig. 2.6. The reference (REF)
clock is the FPGA clock, which drives all of the blocks related to synchronization. The RX
clock is the recovered clock from the received serial data. There are two PLL control loops.
The dual mixer time difference (DMTD) clock loop is used to lock the DMTD clock to the
REF clock with a programmable frequency offset, in order to calculate the phase difference
between the REF clock and the RX clock. The REF clock loop is used to lock the REF clock
to the RX clock and apply a programmable phase shift to compensate for the phase difference.
Inside the FPGA, there are control blocks for the closed-loop control of the PLL, including
the phase detector, frequency detector, PI blocks, etc. A digital-to-analog converter is used
to convert the digital output to an analog voltage signal, which is then applied to a voltage-controlled
crystal oscillator. The oscillator adjusts its frequency to compensate for the
phase difference [25].
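The DMTD measurement principle can be illustrated numerically: sampling two nominally equal-frequency clocks with a helper clock offset by a small programmable fraction stretches their phase difference in time, so it can be read with a simple counter. A minimal model, assuming unit-frequency square clocks and a hypothetical ratio n/(n+1):

```python
import math

def dmtd_measure(phi, n=1000):
    """Measure the phase offset phi (in cycles) between two unit-frequency
    square clocks by sampling both with a helper clock at n/(n+1) of their
    frequency.  The beat pattern repeats every n samples, so one helper
    tick resolves 1/n of a cycle.  A numerical sketch of the DMTD idea."""
    frac = lambda x: x - math.floor(x)
    th = (n + 1) / n                              # helper sampling period
    a = [1 if frac(k * th) < 0.5 else 0 for k in range(2 * n)]
    b = [1 if frac(k * th + phi) < 0.5 else 0 for k in range(2 * n)]
    def first_fall(s):                            # first 1 -> 0 edge
        return next(k for k in range(1, len(s)) if s[k - 1] == 1 and s[k] == 0)
    return ((first_fall(a) - first_fall(b)) % n) / n

print(dmtd_measure(0.25))  # 0.25
```

In the real White Rabbit implementation the phase detector and PI blocks run in the FPGA, and the measured phase error drives the voltage-controlled crystal oscillator rather than a software counter.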
Figure 2.6: Propagation delay measurement in White Rabbit.
2.3 Review of Communication Protocols Used in Power Converters
SiC devices can switch at 10 kHz–20 kHz in MV high-current applications [26]. In the
distributed control architecture, a short cycle time in the order of tens of microseconds is
necessary, which requires a communication protocol with fast communication speed and low
latency.
Several papers have investigated different communication protocols for power converter
systems. Controller Area Network (CAN) is a robust vehicle bus standard with the data
rate of 1 Mbps. In [7], a 6.6 kV, 2 MVA, three-phase six-layer cascaded H-bridge multilevel
inverter was demonstrated with a CAN communication protocol. The switching frequency
is 1 kHz, and the cycle time is 1 ms. The low data rate of the CAN network sets a limit
on the cycle time, which makes it unsuitable for high-frequency SiC-based applications. Process
Field Net (PROFINET) is an industry technical standard for data communication over
Industrial Ethernet. In [8], the worst-case cycle time for PROFINET is analyzed to be
8.349 ms, which is not suitable for power converter systems. Motion and Control Ring
Optical (MACRO) is also an Ethernet style protocol for motion control. Based on MACRO
and Fiber Distributed Data Interface (FDDI), PESNet is proposed as the communication
protocol in power converter systems. The data rate is 125 Mbps. In [8], the worst-case cycle
time for PESNet is estimated to be 85.24 µs, which can achieve a 10 kHz data sampling rate.
Chapter 3
Proposed Communication Protocol for Power Converter
3.1 Protocol Overview
In the proposed communication protocol PESNet 3.0, the master-slave communication
mode with bidirectional line topology is utilized in the distributed communication system.
There are three main steps to run the system. The first step is the start-up address dis-
tribution to arrange the address number for each slave. The second step is to establish
synchronization from the master to all of the slaves. The third step is the normal commu-
nication. During every communication cycle, slaves send voltage and current to the master
and generate gate signals based on the calculated PWM references received from the master.
Some non-periodic information, such as the synchronization command, stamped transmission
and reception times, temperature, and fault signals, is transmitted during normal
communication as well.
Among the communication protocols used for power converters, EtherCAT has received
particular attention due to its real-time capability, synchronization performance, fault diagnostics
[23], [27], [28], etc. EtherCAT is an Ethernet-based fieldbus system with a short
cycle time of less than 100 µs and a low synchronization jitter of less than 1 µs. A key
performance index of a control system is the minimum allowed cycle time: the higher the
sampling rate (and thus the control bandwidth), the higher the accuracy of the control system. The
comparison of the minimum cycle time between EtherCAT and PESNet 3.0 is analyzed as
follows.
The working principle of PESNet 3.0 is different from that of EtherCAT. Taking the
single-phase seven-level cascaded converter shown in Fig. 3.1 as an example, the master
sends references to all the PEBBs and the PEBBs sample the local voltage and current
values every 1/2 switching cycle with the EtherCAT protocol. However, in the PESNet 3.0
protocol shown in Fig. 3.2, each switching cycle is divided into six time slots. Each time slot is
assigned to the data exchange of one PEBB. Thus, the current will be sampled every 1/6
of the switching cycle. The inner loop control bandwidth is increased three times compared
with EtherCAT-based control.
Figure 3.1: Timing diagram of EtherCAT protocol.
From the frame perspective, EtherCAT uses the summing frame method in which one
single frame is transmitted from the master to slaves and finally returns to the master via a
line or ring topology. The slaves exchange the local sampling data with the reference from
the master on the fly. The minimum cycle time (TC,min) of EtherCAT can be estimated by
the summation of the frame time (TF) and the propagation delay (TP) [29]
TC,min = TF + TP (3.1)
where TF is the time to send all the bits of the EtherCAT frame over the network, and
TP is the time for this frame to travel across all the nodes in the network.

Figure 3.2: Timing diagram of PESNet 3.0 protocol.
Assuming the size of the telegram for each slave is the same, the frame time (TF) can be
estimated by
TF = (sH + N · sT) Tbyte   (3.2)
where sH is the size of all the header parts; N is the number of slaves; sT is the size of
the telegram for each slave; and Tbyte is the transmission time for one byte.
However, in PESNet 3.0, the frame only contains the data for one slave, and the frame
time (T′F) can be estimated by

T′F = (sH + sT) Tbyte   (3.3)
Assuming the same topology used for EtherCAT and PESNet 3.0, the propagation delay is
similar, but the frame time in PESNet 3.0 is shorter, approximately 1/N of the frame time in EtherCAT.
Thus, the minimum cycle time of PESNet 3.0 is less than that of EtherCAT.
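The comparison of (3.1)-(3.3) can be illustrated numerically. The byte counts and timing values below are hypothetical, not the actual EtherCAT or PESNet 3.0 frame sizes:

```python
def cycle_time_ethercat(s_h, s_t, n, t_byte, t_p):
    """Eqs. (3.1)-(3.2): one summing frame carries telegrams for all N slaves."""
    return (s_h + n * s_t) * t_byte + t_p

def cycle_time_pesnet3(s_h, s_t, t_byte, t_p):
    """Eq. (3.3): each PESNet 3.0 frame carries data for a single slave only."""
    return (s_h + s_t) * t_byte + t_p

# Hypothetical sizes: 10-byte header, 12-byte telegram per slave, 3 slaves,
# 80 ns per byte on the wire, 1 us total propagation delay.
t_byte, t_p, n = 80e-9, 1e-6, 3
print(cycle_time_ethercat(10, 12, n, t_byte, t_p))  # ~4.68e-06 s
print(cycle_time_pesnet3(10, 12, t_byte, t_p))      # ~2.76e-06 s
```

As the header shrinks relative to N telegrams, the PESNet 3.0 frame time approaches 1/N of the EtherCAT frame time.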
3.2 Synchronization Method
For the distributed control, each PEBB is controlled by an individual controller, so the
synchronization among all the controllers is essential for system operation. Based on
the literature review in chapter 2, it is standard and beneficial to realize the synchroniza-
tion based on IEEE 1588 precision time protocol (PTP). In this thesis, the synchronization
methods are the syntonization based on oversampling clock and data recovery (CDR) to
get a constant offset between two nodes, and the synchronization based on precision time
protocol (PTP) to compensate for the constant offset.
3.2.1 Synchronization Based on Precision Time Protocol (PTP)
The clocks operate with speed k and offset b with respect to a physical time scale. Hence,
an ideal clock would have k = 1 and b = 0. Considering one master and one slave, the local
time can be expressed as (3.4) and (3.5), respectively.
m(t) = km t + bm   (3.4)

s(t) = ks t + bs   (3.5)
The synchronization principle of PTP [30] is shown in Fig. 3.3. Assuming there is a
constant offset o between the local time of master m(t) and slave s(t),
o = s(t) − m(t)   (3.6)
Figure 3.3: Principle of PTP.
The synchronization steps are:
1) Master sends a synchronization packet to slaves.
2) Master records the transmission time T1 and the reception time T′2 of this packet based
on its local clock. The slave records the transmission time T2 and the reception time T′1 of this
packet based on its local clock.
T′1 − T1 = o + d   (3.7)

T′2 − T2 = −o + d   (3.8)
where o is the offset between master and slave, and d is the propagation delay between
master and slave, assuming the propagation delays on both communication links are the
same.
3) Master sends its transmission time and reception time to each slave.
4) Each slave calculates the offset from the master based on the transmission and
reception times of the slave and the master.
o = [(T′1 − T1) − (T′2 − T2)] / 2   (3.9)
5) Each slave adjusts its local clock to be synchronized with the master clock based on
the calculated offset.
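The offset computation of steps 1)-5) can be sketched as follows. The timestamp values are hypothetical tick counts chosen for illustration:

```python
def ptp_offset_and_delay(t1, t1p, t2, t2p):
    """Eqs. (3.7)-(3.9): T1 and T'2 are stamped by the master, T2 and T'1
    by the slave; the link delay d is assumed symmetric."""
    offset = ((t1p - t1) - (t2p - t2)) / 2   # eq. (3.9)
    delay = ((t1p - t1) + (t2p - t2)) / 2
    return offset, delay

# Hypothetical counts: the slave clock leads the master by 50 ticks and
# the one-way link delay is 20 ticks.
# Master sends at T1 = 1000 -> slave receives at T'1 = 1000 + 20 + 50.
# Slave replies at T2 = 1100 -> master receives at T'2 = 1100 + 20 - 50.
o, d = ptp_offset_and_delay(1000, 1070, 1100, 1070)
print(o, d)  # 50.0 20.0
```

Subtracting the computed offset from the slave's local time completes step 5).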
If the clock frequencies of the two nodes are assumed to be identical, i.e., km = ks, the
synchronization between the master and the slave can ideally be realized by one PTP execution,
as shown in Fig. 3.4. In practice, however, the clock frequencies of different nodes are not identical.
The PTP-based synchronization process must then be executed more frequently in order to
guarantee a small mismatch between two nodes, as shown in Fig. 3.5. Thus, if the clock
frequencies can be adjusted to be identical, which is called syntonization, the PTP execution
frequency can be reduced, which saves communication bandwidth [31].
3.2.2 Syntonization Based on Oversampling Clock and Data Recovery (CDR)
A clock oscillator with a certain frequency is utilized on the conventional digital control
board. However, the clock frequency for different control boards will inevitably have some
deviation. In this thesis, a sampling counter is created for each node, and the frequency of
the sampling counter is adjusted to be identical across all nodes.
Figure 3.4: Synchronization based on PTP with syntonization.
Figure 3.5: Synchronization based on PTP without syntonization.
Using five-times oversampling, the frequency of the local clock is five times the frequency of
the serial data. There is a sampling counter counting from “0” to “4” in each node. The CDR
block adjusts the local time counter based on the input data transition. For example, in an
ideal case, when a new bit is received by the slave, the value of the sampling counter should
be “0”. If the slave clock runs slightly faster than the master clock, as shown in Fig. 3.6,
the data transition of the input data is detected when the sampling counter counts to “1”;
the sampling counter in the slave will then count up to “5” as its final state. In this way,
the sampling counter is slowed down.
Compared with the conventional oversampling CDR, this improved method adds the capability
of local time counter adjustment through data recovery, which reduces the required
PTP execution frequency for synchronization.
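The counter adjustment can be sketched as a behavioural rule. This models only the fast-slave case described above (the symmetric slow-slave case is not detailed in the text), and it is not the VHDL implementation:

```python
def adjusted_final_count(transition_count):
    """Final state of the 5x-oversampling counter for one bit period.

    Ideally the input-data transition is detected at count 0 and the
    counter runs 0..4.  If the slave clock is slightly fast, the
    transition is seen at count 1; the counter then runs one state
    further, to 5, before wrapping, which slows the local timing down.
    Behavioural sketch of Fig. 3.6."""
    return 5 if transition_count == 1 else 4

print(adjusted_final_count(0))  # 4 (aligned: normal 0..4 count)
print(adjusted_final_count(1))  # 5 (fast slave: stretched bit period)
```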
Figure 3.6: Sampling counter syntonization.
3.3 Protocol Layers
The proposed protocol follows the ISO/OSI model [2], and is simplified to contain four
layers, including the physical layer, the data link layer, the network layer, and the application
layer. The lower three layers build the communication network, and the application layer is
flexible for the implementation of any control algorithm in power converters.
3.3.1 Physical Layer
In this work, the physical layer is 1 mm diameter plastic optical fiber (POF) for noise
immunity. The transmitter (AFBR-16xxZ) utilizes an integrated 650 nm LED and a driver
IC with TTL input logic. The receiver (AFBR-26x4Z) contains an integrated PIN diode and
digitalizing IC with TTL output logic. Both can support up to 50 MBd at a distance up to
50 m with 1 mm POF over the operating temperature range.
3.3.2 Data Link Layer
In this work, the 4B/5B encoding method is implemented to guarantee enough logic “1” bits. For
this method, the input is a 4-bit character and the output is a 5-bit character based on a
4B/5B data encoding table and command encoding table. In addition, the non-return-to-zero
inverted (NRZI) encoding method is implemented to guarantee enough data transition edges
for the clock and data recovery. For NRZI encoding, the input and output are both
serial data. The input logic “1” is represented by a transition of the physical level at the
output, and the input logic “0” is represented by no transition of the physical level at the
output.
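The NRZI rule just described can be sketched as follows (the 4B/5B stage is omitted, since its code tables are not reproduced here):

```python
def nrzi_encode(bits, level=0):
    """NRZI: an input '1' toggles the physical level, a '0' holds it."""
    out = []
    for b in bits:
        if b == 1:
            level ^= 1
        out.append(level)
    return out

def nrzi_decode(levels, prev=0):
    """Recover the bit stream: any level change decodes as '1'."""
    out = []
    for lv in levels:
        out.append(1 if lv != prev else 0)
        prev = lv
    return out

data = [1, 0, 1, 1, 0, 0, 1]
encoded = nrzi_encode(data)
print(encoded)                     # [1, 1, 0, 1, 1, 1, 0]
assert nrzi_decode(encoded) == data
```

Every '1' in the 4B/5B output thus produces an edge on the fiber, which is what the CDR block locks onto.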
3.3.3 Network Layer
In order to realize the high-frequency SiC application, the communication frequency
should also be very high. Thus, a short packet is designed to reduce the data transmission
time. The data transmission is based on the timing slots, which are synchronized among
all the nodes. Thus, the address information can be eliminated from the packet. During
each communication cycle, the PWM reference and sensing data for one slave node are
transmitted in the network.
The packet for the communication link from master to slave is defined in Table 3.1. The
start of the packet is one bit of “0”, and the end of the packet is nine bits of “1”.
Table 3.1: Packet from Master to Slave
Start of packet  Broadcast  Non-periodic  Periodic  End of packet
1 bit            4 bytes    1 byte        3 bytes   9 bits
The four-byte broadcast section contains five parts as shown in Table 3.2. The broadcast
section is defined to be recognized by all the slave nodes. One reset bit is used to reset the
timing slots for all the nodes. One bit of “PWM_EN” is used to enable or disable the PWM
gate signal generation in slave nodes. The action type indicates whether the slaves will wait
until the action time to send a packet to the master. The 20-bit action time is the start
time of the next communication cycle. One byte cyclic redundancy check (CRC) is added
to check the broadcast section.
Table 3.2: Broadcast Section of the Packet from Master to Slave
Reset  PWM_EN  Action type  Action time  CRC
1 bit  1 bit   2 bits       20 bits      8 bits
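The field layout of the broadcast section can be sketched as a packing routine. The bit ordering within the 32-bit word (reset in the MSB) is an assumption; the thesis fixes only the field widths:

```python
def pack_broadcast(reset, pwm_en, action_type, action_time, crc8):
    """Pack the 4-byte broadcast section of Table 3.2:
    1-bit reset | 1-bit PWM_EN | 2-bit action type |
    20-bit action time | 8-bit CRC = 32 bits."""
    assert action_type < 4 and action_time < 2**20 and crc8 < 256
    word = ((reset << 31) | (pwm_en << 30) | (action_type << 28)
            | (action_time << 8) | crc8)
    return word.to_bytes(4, "big")

pkt = pack_broadcast(reset=0, pwm_en=1, action_type=2,
                     action_time=0x12345, crc8=0xA5)
print(pkt.hex())  # 612345a5
```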
There is only one byte for the non-periodic message transmission in the packet, as shown
in Table 3.1. The non-periodic section is designed for the asynchronous data transmission,
such as the address distribution command, synchronization command, stamped transmission
and reception time, etc. Since the non-periodic message is not necessarily updated every
communication cycle, it can be transmitted slowly, in this case, one byte per communication
cycle. One complete non-periodic message is shown in Table 3.3. The destination indicates
the slave address that is expected to respond to the non-periodic message. The address could
be one particular slave node or all of the slave nodes. The total byte number equals “n”
and defines the length of the following non-periodic data. One byte CRC is added to check
the non-periodic message.
Table 3.3: Complete Non-Periodic Message
Destination  Total byte number  Non-periodic data  CRC
1 byte       1 byte             n bytes            1 byte
The periodic section contains one byte of command and two bytes of data as shown in
Table 3.4. In this thesis, the two bytes data contains 12 bits of PWM reference data and 4
bits of CRC.
Table 3.4: Periodic Section of the Packet from Master to Slave
Command  Data
1 byte   2 bytes
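The 4-bit CRC over the 12-bit PWM reference can be computed by plain polynomial long division. The generator polynomial used below (CRC-4-ITU, x^4 + x + 1) is an assumption, since the thesis does not specify which CRC-4 is used:

```python
def crc4(data12, poly=0x3):
    """Bitwise CRC-4 over a 12-bit word: shift in 4 zero bits, then
    divide MSB-first by the 5-bit divisor (implicit top bit | poly)."""
    assert 0 <= data12 < 4096
    reg = data12 << 4                     # append 4 zero bits
    for i in range(15, 3, -1):            # long division over the data bits
        if reg & (1 << i):
            reg ^= (0x10 | poly) << (i - 4)
    return reg & 0xF

print(crc4(0x001))  # 3
```

Packing the 12-bit reference into the upper bits of the two data bytes and the CRC into the lower 4 bits then fills the periodic section exactly.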
The packet for the communication link from slave to master is defined in Table 3.5. The
start of the packet is one bit of “0”, and the end of the packet is nine bits of “1”.
Table 3.5: Packet from Slave to Master
Start of packet  Non-periodic  Periodic  End of packet
1 bit            1 byte        5 bytes   9 bits
Similar to the packet from master to slave described above, only one byte in the packet
is for the non-periodic section. For instance, the device temperature can be sent from slave
to master in the non-periodic section.
There are five bytes in total for the periodic section. As shown in Table 3.6, there are
four bytes of data, two bytes for current sensing data and CRC, and two bytes for voltage
sensing data and CRC. One byte of state can be utilized to indicate the state of the slave node,
such as normal state, fault state, etc.
Table 3.6: Periodic Section of the Packet from Slave to Master

Data     State
4 bytes  1 byte
3.3.4 Application Layer
In this work, the data link layer and network layer are implemented on the FPGA to
establish the communication network. Based on the PWM reference and sensing data trans-
mitted in the network, the control algorithms can be implemented as the application layer,
such as the current loop control, voltage loop control, balancing control, etc. In this work,
the application layer is implemented on the DSP.
3.4 Protocol Implementation
In this section, the proposed communication protocol implementation on a four-node sys-
tem shown in Fig. 3.7 is developed and presented in detail. The distributed communication
system consists of one master and three slaves with bidirectional line topology. Besides the
communication network among the nodes, there is also communication inside the node. For
instance, the master calculates the PWM references, and the slaves sample the local
voltage and current as well as performing the PWM modulation.
In this network, on the communication link from master to slave, a slave cannot change
or insert data in the packet; it can only receive the data from the master. On the
communication link from slave to master, a slave can change or insert data, such as
the sensing data, in the packet depending on the timing slot. In general, there are three
main steps to run the system. The first step is start-up address distribution to arrange the
address number for each slave. The second step is start-up synchronization to synchronize
all of the slaves with the master.

Figure 3.7: Line-topology communication network for four nodes.
For the synchronization process, there are two states, including the time stamping state
and the offset calculation state.
1) Master sends the time stamping command to all the slaves and stamps the transmission
time.
2) Each slave receives the time stamping command on the master-to-slave link and stamps
the reception time.
3) Each slave transmits the time stamping command back on the slave-to-master link and
stamps the transmission time.
4) Master receives the time stamping command on the slave-to-master link and stamps
the reception time.
5) Master sends the offset calculation command and its stamped transmission and recep-
tion time.
6) Each slave receives the complete non-periodic message of master’s transmission and
reception time after several communication cycles, then calculates the offset from the master
node and compensates for the offset.
To simplify the synchronization simulation, one master and one slave with bidirectional
communication links are simulated in Quartus. The time stamping state simulation is shown
in Fig. 3.8. The command “00000010” is the time stamping command. The master and
slave stamp the transmission and reception times T1, T2, T′1, and T′2. The offset calculation
state simulation is shown in Fig. 3.9. The command “00000110” is the offset calculation
command. The master sends its transmission and reception time to the slave and the slave
calculates the offset “o”.
The third step is normal communication, where slaves send voltage and current to the
master and generate gate signals based on the calculated PWM references received from the
master. In the second step, the synchronization between master and slaves is realized. Based
on the synchronized timing among all the nodes, a synchronized periodic timing slot can be
created in each node. Each timing slot represents one communication cycle. As an example,
the communication implementation for the interleaved PWM control is shown in Fig. 3.10.
Figure 3.8: Time stamping state simulation.
Figure 3.9: Offset calculation state simulation.
A periodic timing slot from “1” to “6” is generated based on the synchronized timing. The
timing slot “1” is assigned to slave 1. At the beginning of the timing slot “1”, the slave starts
the A/D sensing of current and voltage. The sensing data of slave 1 and the PWM reference
for slave 1 from master are transmitted in this timing slot. At the end of the timing slot
“1”, the PWM reference can be updated to generate gate signals for slave 1. Similarly, the
timing slot “2” is assigned to slave 2, and the timing slot “3” is assigned to slave 3. In this
way, the slave address information is not necessarily transmitted in the packet. Each slave
updates the PWM reference at the end of its assigned timing slot, and the interleaved PWM
generation is realized. In addition, with the synchronized timing among all the nodes, it
is also possible to implement the non-interleaved PWM generation by defining the instant
when the PWM reference should be updated.
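The timing-slot arithmetic described above can be sketched numerically, using the 20 kHz switching frequency and six-slot division reported in chapter 4:

```python
def slot_schedule(t_sw, n_slots=6):
    """Start times of the timing slots inside one switching cycle
    (Fig. 3.10): slot k begins at k*t_sw/n_slots.  Slave i owns slot i,
    starts its A/D sensing at the slot start, and updates its PWM
    reference at the slot end."""
    return [k * t_sw / n_slots for k in range(n_slots)]

t_sw = 1 / 20e3                  # 20 kHz switching cycle
starts = slot_schedule(t_sw)     # 6 slots -> 120 kHz communication
print([round(s * 1e6, 2) for s in starts])
# [0.0, 8.33, 16.67, 25.0, 33.33, 41.67]  (microseconds)
```

Because every node derives the same slot boundaries from the synchronized time counter, no slave address needs to travel in the packet.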
Figure 3.10: Communication implementation for interleaved PWM control.
The communication network of one master and three slaves, as shown in Fig. 3.7, is sim-
ulated in ModelSim. The simulation result for normal communication is shown in Fig. 3.11.
The master node receives the sensing information from all of the slave nodes. The slaves
can receive the PWM references from the master and generate the interleaved PWM signals
based on the synchronization signal for PWM reference update.
Figure 3.11: Simulation for normal communication.

In this work, in order to realize the desired 200 MHz FPGA clock frequency, the FPGA
timing requirements are considered to guarantee correct FPGA operation. For example,
the clock domain is separated for the VHDL implementation. For the high-level packet
processing blocks with complex interfaces and logic calculations, it is better to run at a slow
clock domain. Some blocks related to the serial data transmission and time counter should
still be running at the fast clock domain to guarantee the data rate and the synchronization
accuracy. The clock domain crossing interfaces are set as shown in Fig. 3.12 and Fig. 3.13.
At the clock domain crossing interface, the mux-based synchronizer and handshaking-based
synchronizer are used to avoid the metastability issue [32], [33].
Figure 3.12: Clock domain crossing interface of master.
Figure 3.13: Clock domain crossing interface of slave.
Chapter 4
Experiment Verification
4.1 Principles of the Converter
Power electronics technology is commonly used in modern power systems for high-efficiency,
high-density, and high-quality electrical power conversion. However, small-signal stability
becomes a critical issue. The small-signal stability assessment can be performed using the
source and load impedances with the Generalized Nyquist Criterion (GNC) [34], [35], [36]. The
source and load impedances can be extracted by an impedance measurement unit (IMU)
that is configured in a “shunt” and a “series” injection mode, respectively. The IMU power
stage is a perturbation injection unit (PIU) that mainly consists of one single phase modular
converter with several PEBBs [37], [38]. The PEBB comprises an H-bridge built with
1.7 kV SiC MOSFETs switching at 10 kHz and above [39], [40], which significantly
improves the performance of the IMU [41].
In this work, there are three PEBBs in the PIU. In shunt injection mode, the PIU will
operate as a single-phase 7-level cascaded H-bridge converter and will inject a perturbation
current between two phases as shown in Fig. 4.1.
In series injection mode, the PIU forms a single-phase interleaved two-level converter and
injects a series voltage perturbation into the system as shown in Fig. 4.2.
4.2 Distributed Control Scheme and Communication Network
For the shunt injection mode, the control scheme shown in Fig. 4.3 is designed to regulate
the injected current, maintain the capacitor voltage of the PEBB, and keep the voltage
balancing for each PEBB. The voltage loop controls the average dc capacitor voltage to be the
reference value vdcref. The current loop reference ipref is generated by multiplying the output
of the voltage compensator with the output of the PLL. The injected current reference iper_ref is
added to the current loop reference. The voltage balancing control is also implemented so
that the capacitor voltage for each PEBB is tightly controlled to the reference value.

Figure 4.1: PIU for shunt current injection.
For the series injection mode, the control scheme shown in Fig. 4.4 is designed to regulate
the injected voltage vp, maintain the capacitor voltages of the PEBB vdc1, vdc2, and vdc3,
regulate the inductor currents iL1, iL2, and iL3, and keep the voltage balancing for each
PEBB.

Figure 4.2: PIU for series voltage injection.
In this work, the phase-shifted carrier pulse width modulation (PSC-PWM) is applied
for the converter. The PSC-PWM has the benefits of low THD, high equivalent switching
frequency and an equal distribution of the power and semiconductor stress. For the phase-
shifted PWM modulation, the modulating wave vm, carrier waves vcr1, vcr2, and vcr3, and
the gate signals vg1, vg2, and vg3 for the three switches in the three PEBBs are shown in
Fig. 4.5.

Figure 4.3: Control block diagram of shunt injection mode.
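The carrier comparison underlying PSC-PWM can be sketched as follows. This is a behavioural illustration of the comparison only; the exact carrier disposition and unipolar details follow Fig. 4.5:

```python
def psc_pwm(vm, t, f_c, n=3):
    """Phase-shifted carrier comparison: n triangular carriers shifted by
    1/n of the carrier period; gate i is high while the modulating value
    vm exceeds carrier i."""
    def tri(x):                       # triangle wave in [-1, 1], period 1
        x = x % 1.0
        return 4 * x - 1 if x < 0.5 else 3 - 4 * x
    carriers = [tri(f_c * t + i / n) for i in range(n)]
    return [1 if vm > c else 0 for c in carriers]

print(psc_pwm(0.0, 0.0, 10e3))  # [1, 0, 0]
```

Because the three gate patterns are staggered in time, the converter output sees an equivalent switching frequency of n times the device frequency, which is the THD benefit noted above.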
The communication protocol proposed in chapter 3 is implemented for the PIU. The line-
topology bidirectional communication network is shown in Fig. 3.7, including one master
node and three slave nodes. The closed-loop control algorithm is conducted in the master
controller, and the PWM reference data will be sent to all of the slave controllers through
the communication network. Each slave controller modulates the PWM locally based on
the received PWM reference data from the master and generates the gate signals for the
H-bridge PEBB.
Figure 4.4: Control block diagram of series injection mode.
Figure 4.5: Phase-shifted carrier pulse width modulation.
4.3 Measurement Results
The experiment setup is shown in Fig. 4.6. In each node, there is a controller with a fiber-optic
interface. The controller hardware is based on a TI TMS320 DSP and an Altera
MAX 10 FPGA. The closed-loop control algorithm is mainly implemented in the DSP,
and the communication network is mainly implemented in the FPGA. In addition to the
communication network between the master controller and the slave controllers, there is also
communication inside each node. For example, in the master node, the master controller
communicates with five contactors to change the circuit connection between shunt injection
mode and series injection mode. In the slave node, the slave controller communicates with
four gate drivers and two sensors, including one current sensor to measure the inductor
current of the PEBB and one voltage sensor to measure the dc capacitor voltage of the
PEBB. All of the sensing data will be transmitted to the master controller for the closed-
loop control.
4.3.1 Synchronization Performance
Since each PEBB is controlled by an individual controller, the synchronization among
all of the controllers needs to be realized for the converter operation. The synchronization
method is the syntonization based on the oversampling clock and data recovery and the syn-
chronization based on precision time protocol as described in chapter 3. For each controller,
the local time counter is a 20-bit register time_cnt(19:0), which is utilized to coordinate
the synchronous operation of different nodes. For instance, with the same time base, the
interleaved PWM generation can be realized in the three slaves.
A square wave based on the local time counter is shown in Fig. 4.7. The time scale is
2.0 µs/div. The signals align with each other, and the synchronization is realized. Keeping the
system running for around six hours, the synchronization accuracy is observed in Fig. 4.8.
Figure 4.6: PIU hardware setup.
The time scale is 5.0 ns/div, and the total number of sampling points is 730,744. The maximum
jitter of the system is 25 ns.
Figure 4.7: Synchronization among four nodes.
Figure 4.8: Synchronization performance among four nodes.
Figure 4.9: Interleaved PWM generation for three slaves.
Based on the synchronization, the interleaved PWM waveforms for the three slaves are
shown in Fig. 4.9. On the left is the trigger signal of a switching cycle for the three slaves,
and on the right is the PWM waveform. The switching frequency for each PEBB is 20 kHz,
and the communication frequency is 120 kHz.
4.3.2 Closed-Loop Control Results
The distributed control and communication network for shunt current injection mode is
shown in Fig. 4.10. The dc capacitor voltage is sensed by the isolated digital sensor in each
PEBB and received by the slave controller in each PEBB. The slave controllers then send the
capacitor voltage data to the master controller. In shunt injection mode, the three PEBBs
are connected in series, and the inductor current is the same for all of the PEBBs. In this
work, the inductor current is sensed by the isolated digital sensor in slave 1 and utilized as
the converter current ip for the closed-loop control. The system voltage vs is sensed by the
isolated digital sensor in the master. The closed-loop control calculation is implemented in
the master, and the PWM modulation is implemented in the slaves.
Figure 4.10: Distributed control and communication network for shunt injection mode.
As shown in Fig. 4.11, when the closed-loop control is enabled in the master node, the dc
capacitor voltage is charged to the reference value that is set to be 100 V. The dc capacitor
voltage balancing among the three PEBBs is also realized as shown in Fig. 4.12. All of the
dc capacitor voltages are maintained at 100 V.
The 200 Hz shunt current injection is shown in Fig. 4.13. The dc capacitor voltage is 1
kV. The system voltage is 60 Hz. The 1 kHz shunt current injection is shown in Fig. 4.14.
The distributed control and communication network for series voltage injection mode is
shown in Fig. 4.15. The dc capacitor voltage and inductor current are sensed by the isolated
digital sensor in each PEBB and received by the slave controller in each PEBB. The slave
controllers then send the capacitor voltage data and inductor current data to the master
controller. The series voltage vp and system current is are sensed by the isolated digital
sensor in the master. The closed-loop control calculation is implemented in the master, and the
PWM modulation is implemented in the slaves.
As shown in Fig. 4.16, when the closed-loop control is enabled in the master node, the
dc capacitor voltage is charged to the reference value that is set to be 150 V. The current
sharing among the three PEBBs is also realized, as shown in Fig. 4.17. The total PIU current
is 15 A peak, which is equally shared among the three PEBBs.

Figure 4.11: Closed-loop control enabled in shunt injection mode.
The 500 Hz series voltage injection is shown in Fig. 4.18. The dc capacitor voltage is 150
V. The system voltage is 60 Hz. The 1 kHz series voltage injection is shown in Fig. 4.19.
The fault response in the system is also verified. For example, when a gate driver fault is
detected in one slave node, the PWM modulation will be disabled immediately in the node,
and the PWM modulation in other nodes will also be disabled with the fault information
transmitted in the network. As shown in Fig. 4.20, when a fault is detected in slave 1, the
46
Chapter 4 4.3. Measurement Results
Figure 4.12: Capacitor voltage balancing in shunt injection mode.
Figure 4.13: 200 Hz shunt current injection.
slave 1 fault signal becomes “1”. The PWM modulation is disabled immediately in slave 1.
The fault information will be transmitted in the communication network to inform the other
two slaves to disable the PWM signals in the next communication cycle. The case is similar
when a fault is detected in slave 2 or slave 3 as shown in Fig. 4.21 and Fig. 4.22.

Figure 4.14: 1 kHz shunt current injection.

Figure 4.15: Distributed control and communication network for series injection mode.
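This two-step fault response (immediate local shutdown, network-propagated shutdown one communication cycle later) can be modeled with a short sketch; the cycle-level timing abstraction is an assumption for illustration only.

```python
def simulate_fault_response(n_slaves, faulty_slave, n_cycles=3):
    """Step a simple cycle-by-cycle model of the network. The faulty slave
    disables its own PWM at once (cycle 0); the fault word placed on the
    network during that cycle reaches the other slaves one communication
    cycle later (cycle 1), matching the behavior observed in Fig. 4.20-4.22.
    Returns the PWM state of every slave after each cycle."""
    pwm_on = [True] * n_slaves
    network_fault = False        # fault word visible on the network this cycle
    history = []
    for cycle in range(n_cycles):
        if cycle == 0:
            pwm_on[faulty_slave] = False      # immediate local shutdown
        if network_fault:
            pwm_on = [False] * n_slaves       # remote shutdown, next cycle
        history.append(tuple(pwm_on))
        network_fault = network_fault or (cycle == 0)
    return history
```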
Figure 4.16: Closed-loop control enabled in series injection mode.
Figure 4.17: Inductor current sharing in series injection mode.
Figure 4.18: 500 Hz series voltage injection.
Figure 4.19: 1 kHz series voltage injection.
Figure 4.20: Fault response when a fault is detected in slave 1.
Figure 4.21: Fault response when a fault is detected in slave 2.
Figure 4.22: Fault response when a fault is detected in slave 3.
4.4 Improved Synchronization Performance with Additional Analog Circuits
The system described above uses a purely digital syntonization method. To further improve
the synchronization performance, a combined digital and analog syntonization method can
be implemented. The two methods are compared in Fig. 4.23.
Figure 4.23: Comparison between two syntonization methods.
For the digital syntonization method, the key components on the controller are a crystal
oscillator (XO) and an FPGA. The syntonization accuracy is limited by the timestamp
resolution, which is typically set by the FPGA clock frequency used in the synchronization
process. For example, if the syntonization and synchronization adjustment run at
200 MHz, the synchronization accuracy between two nodes is limited to 5 ns. For
the analog and digital syntonization method, the additional key component on the controller
is a voltage controlled crystal oscillator (VCXO). The clock frequency of different controllers
can be tuned to be identical. According to [24], the synchronization accuracy between two
nodes can be less than 1 ns.
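The way the analog path closes the loop can be illustrated with a discrete-time sketch: the FPGA measures the accumulated phase error against the master clock, and a PI regulator steers the VCXO control voltage until the frequencies match. The servo rate, VCXO tuning gain, and PI gains below are assumed values for the sketch, not those of [24] or of this work.

```python
def syntonize(f_offset_hz=400.0, gain_hz_per_v=1000.0, dt=1e-3,
              kp=0.5, ki=0.05, n_steps=2000):
    """Model of digital + analog syntonization. The slave oscillator starts
    f_offset_hz away from the master; the VCXO pulls gain_hz_per_v per volt
    of control voltage. Returns (residual phase error in clock cycles,
    residual frequency error in Hz) after n_steps servo periods of dt."""
    phase_err = 0.0   # accumulated phase difference, in clock cycles
    integ = 0.0       # integral term of the PI regulator
    v_ctrl = 0.0      # VCXO control-voltage deviation from midpoint [V]
    for _ in range(n_steps):
        df = f_offset_hz + gain_hz_per_v * v_ctrl   # current frequency error
        phase_err += df * dt                        # phase integrates frequency
        integ += phase_err
        v_ctrl = -(kp * phase_err + ki * integ)     # PI steering of the VCXO
    return phase_err, f_offset_hz + gain_hz_per_v * v_ctrl
```

After a couple of seconds of simulated servo time, both the frequency and phase errors settle to essentially zero, which is the precondition for sub-ns offset correction.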
As shown in Fig. 2.6, the design in [24] adds two digital-to-analog converters (DAC),
two phase-locked loop (PLL) clock generators, one voltage controlled crystal oscillator
(VCXO), and one voltage controlled temperature compensated crystal oscillator (VCTCXO).
In this work, the only additional chip is a voltage controlled crystal oscillator
(VCXO). The synchronization principle is simplified as shown in Fig. 4.24. With the VCXO
on the controller, the phase and frequency of the oscillator for all of the slave controllers
can be adjusted to be identical. The precision time protocol (PTP) based synchronization
is still implemented to calculate and compensate for the offset between the slave controller
and the master controller.
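The offset calculation itself follows the standard IEEE 1588 two-step exchange. A minimal sketch, assuming hardware timestamps t1 through t4 and a symmetric link:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """IEEE 1588 exchange: the master sends Sync at t1 and the slave stamps
    its arrival at t2; the slave sends Delay_Req at t3 and the master stamps
    its arrival at t4. For a symmetric link, the slave-to-master clock offset
    and the one-way path delay follow directly."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# example: slave clock leads the master by 100 ns over a 50 ns symmetric link
off, d = ptp_offset_and_delay(t1=0.0, t2=150e-9, t3=1000e-9, t4=950e-9)
# off is about 100 ns, d about 50 ns; the slave subtracts off from its time
```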
Figure 4.24: Simplified synchronization principle with additional analog circuits.
In order to verify the synchronization, an experiment with three controllers in the line
topology is conducted, based on the digital and analog syntonization and PTP-based syn-
chronization. For each controller, the local time counter is a 32-bit register time_cnt(31 : 0),
which is utilized to coordinate the synchronous operation of different nodes.
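The role of time_cnt can be illustrated with the test-waveform generation used next: the output is a pure function of the shared counter value, so nodes whose counters agree toggle together. The 200 MHz clock and the waveform period below are assumptions chosen to match the scales discussed in this chapter.

```python
MASK32 = 0xFFFFFFFF  # time_cnt is the 32-bit register time_cnt(31:0)

def square_from_counter(time_cnt, period_cycles=40_000):
    """Square wave derived purely from the counter value. At an assumed
    200 MHz clock (5 ns per tick), period_cycles = 40000 gives a 200 us
    period. Synchronized counters therefore produce aligned edges on every
    node without any further handshaking."""
    return (time_cnt & MASK32) % period_cycles < period_cycles // 2

# a node whose counter is off by a few ticks shows its edge displaced by
# exactly that many clock periods
assert square_from_counter(19_999) != square_from_counter(19_999 + 7)
```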
A square wave based on the local time counter is shown in Fig. 4.25. The time scale
is 200 µs/div. The signals align with each other, showing that synchronization is achieved.
With the system running for around six hours, the synchronization jitter is observed in
Fig. 4.26. The time scale is 5 ns/div, and a total of 515,682 samples are captured. The
maximum jitter for the system is 1.6 ns.
Figure 4.25: Improved synchronization among three nodes.
Figure 4.26: Improved synchronization performance among three nodes.
4.5 The Influence of Synchronization Accuracy on Harmonics under PSC-PWM
In this work, a single-phase 7-level cascaded H-bridge converter, as shown in Fig. 4.27, is
simulated for the analysis of the influence of synchronization accuracy on harmonics under
PSC-PWM. In the simulation, the modulation index M is 0.9, the voltage reference frequency
fref is 1 kHz, and the carrier frequency fc is 20 kHz.
Figure 4.27: Single-phase 7-level cascaded H-bridge converter simulation.
In the distributed control and communication network, an individual controller is utilized
to control each H-bridge PEBB. Assuming the synchronization performance is ideal, the
harmonics of the PEBB output voltages v1, v2, and v3 and the total output voltage v in the
simulation are shown in Fig. 4.28. Under the PSC-PWM, the low order carrier sideband
harmonics are cancelled in the total output voltage v. The most significant sideband harmonic is at 6fc.
Figure 4.28: Simulation result of FFT with ideal synchronization.
Due to the synchronization error among different controllers, it can be assumed that
the carrier frequencies of the PEBBs are different. In the simulation, the carrier frequency
difference is set to be 40 Hz. The simulation result is shown in Fig. 4.29. Under the PSC-
PWM, the low order carrier sideband harmonics cannot be cancelled in the total output
voltage v. The most significant sideband harmonic is at 2fc.
Due to the synchronization error among different controllers, it can also be assumed that
there exists a delay in the carrier among controllers. In the simulation, the carrier delay is
set to be 1 µs. The simulation result is shown in Fig. 4.30. Under the PSC-PWM, the low
order carrier sideband harmonics cannot be cancelled in the total output voltage v. The
most significant sideband harmonic is at 2fc.
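The ideal and carrier-delay cases can be reproduced with a short simulation sketch. The cell count, modulation index, reference frequency, and carrier frequency follow the text; the unipolar per-cell modulation and the 200 MHz sampling rate are assumptions of the sketch.

```python
import numpy as np

def chb_psc_pwm(n_cells=3, m_idx=0.9, f_ref=1e3, f_c=20e3, f_s=200e6, delays=None):
    """One fundamental period of the 7-level CHB output under PSC-PWM.
    Each cell uses unipolar PWM with a triangular carrier shifted by
    1/(2*n_cells) of the carrier period per cell, so with ideal
    synchronization the carrier sidebands below 2*n_cells*f_c cancel in the
    total voltage. `delays` (seconds, per cell) injects a carrier delay to
    model a synchronization error."""
    delays = delays or [0.0] * n_cells
    n = int(round(f_s / f_ref))            # samples in one fundamental period
    t = np.arange(n) / f_s
    ref = m_idx * np.sin(2 * np.pi * f_ref * t)
    v = np.zeros_like(t)
    for i in range(n_cells):
        phase = (t - delays[i]) * f_c + i / (2 * n_cells)   # in carrier periods
        carrier = 4.0 * np.abs(phase % 1.0 - 0.5) - 1.0     # triangle in [-1, 1]
        # unipolar H-bridge: three-level cell output, normalized to Vdc = 1
        v += (ref > carrier).astype(float) - (-ref > carrier).astype(float)
    return v

def band_peak(v, f_s, center, halfwidth=10e3):
    """Largest FFT magnitude within +/- halfwidth (Hz) of `center`."""
    spec = 2.0 * np.abs(np.fft.rfft(v)) / len(v)
    f = np.fft.rfftfreq(len(v), d=1.0 / f_s)
    return spec[(f >= center - halfwidth) & (f <= center + halfwidth)].max()

v_ideal = chb_psc_pwm()
v_late = chb_psc_pwm(delays=[1e-6, 0.0, 0.0])   # 1 us carrier delay in one cell
# with ideal sync the 2*fc sideband group cancels and 6*fc dominates;
# the 1 us delay leaves a clearly visible residual group around 2*fc
```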
Figure 4.29: Simulation result of FFT with synchronization error from different carrier frequencies.
Figure 4.30: Simulation result of FFT with synchronization error from the delay of the carrier.
In summary, the synchronization error leads to additional low order carrier sideband
harmonics in the output voltage and an increase in the THD of the output voltage. When
the synchronization error is small, its influence on the harmonics can be ignored.
Chapter 5
Summary and Future Work
5.1 Summary
Numerous power electronics building blocks (PEBB) based power conversion systems
have been developed to explore modular design, scalable voltage and current ratings, low-
cost operation, etc. This modular concept can be further extended from the power stage to
the control system. The distributed control architecture shows a significant advantage for
larger converter systems, but also brings more challenges to the communication network in
terms of synchronization, latency, topology, etc.
In this work, a synchronous distributed control and communication protocol is designed
and verified for a PEBB-based power converter, and a new synchronization method is
implemented in the system. A communication frequency of 120 kHz is realized with the
reduced-size packet. The synchronization is well controlled, with a maximum synchronization
error of 25 ns among four nodes. An improved synchronization method is further implemented,
achieving a maximum synchronization error of 1.6 ns among three nodes.
5.2 Future Work
In order to extend to a larger converter system with more PEBBs, the network latency
needs to be further reduced to guarantee the desired control frequency. Gigabit
communication links can be utilized to reduce the data transmission time. Additionally,
more flexible communication topologies can be considered, such as 2D torus and
multiple-ring structures.
Fault-tolerant communication and dynamic reconfiguration of the network are another
target for future work. For example, when a communication link fails, the network
should be able to detect the failure and reconfigure itself dynamically.
Redundancy control will be further explored for the slave-slave communication with the
removal of the “master” node. The data can be transmitted and shared over the communica-
tion network, which provides more flexibility and redundancy for the control implementation.
References
[1] T. Ericsen, “Power electronic building blocks - a systematic approach to power electronics,”
in 2000 Power Engineering Society Summer Meeting (Cat. No.00CH37134), vol. 2,
July 2000, pp. 1216–1218.
[2] Information technology - Open Systems Interconnection - Basic Reference Model,
ISO/IEC 7498-1, 1994.
[3] K. Sharifabadi, R. Teodorescu, H. P. Nee, L. Harnefors, and S. Norrga, Design, Control
and Application of Modular Multilevel Converters for HVDC Transmission Systems.
Chichester, West Sussex, United Kingdom: John Wiley and Sons, 2016.
[4] J. Wang, “Switching-cycle control and sensing techniques for high-density SiC-based
modular converters,” Ph.D. dissertation, Virginia Polytechnic Institute and State Uni-
versity, 2017.
[5] B. Zhang, C. Zhao, C. Guo, X. Xiao, and L. Zhou, “Controller architecture design for
MMC-HVDC,” Advances in Electrical and Computer Engineering, vol. 14, no. 2, pp. 9–16,
2014.
[6] I. Milosavljevic, Zhihong Ye, D. Boroyevich, and C. Holton, “Analysis of converter
operation with phase-leg control in daisy-chained or ring-type structure,” in 30th Annual
IEEE Power Electronics Specialists Conference. Record. (Cat. No.99CH36321), vol. 2,
July 1999, pp. 1216–1221.
[7] Y. Park, J. Yoo, and S. Lee, “Practical implementation of PWM synchronization and
phase-shift method for cascaded H-bridge multilevel inverters based on a standard serial
communication protocol,” IEEE Transactions on Industry Applications, vol. 44, no. 2,
pp. 634–643, March 2008.
[8] C. L. Toh and L. E. Norum, “A performance analysis of three potential control network
for monitoring and control in power electronics converter,” in 2012 IEEE International
Conference on Industrial Technology, March 2012, pp. 224–229.
[9] D. Vaidya, S. Mukherjee, M. A. Zagrodnik, and P. Wang, “A review of communica-
tion protocols and topologies for power converters,” in IECON 2017 - 43rd Annual
Conference of the IEEE Industrial Electronics Society, Oct 2017, pp. 2233–2238.
[10] C. L. Toh and L. E. Norum, “Synchronization mechanisms for internal monitoring and
control in power electronics converter,” Journal of Electrical Engineering, vol. 14, no. 3,
pp. 1–8, 2014.
[11] T. P. Corrêa, L. Almeida, and F. J. Rodriguez, “Communication aspects in the dis-
tributed control architecture of a modular multilevel converter,” in 2018 IEEE Inter-
national Conference on Industrial Technology (ICIT), Feb 2018, pp. 640–645.
[12] Poh Chiang Loh, D. G. Holmes, and T. A. Lipo, “Implementation and control of
distributed PWM cascaded multilevel inverters with minimal harmonic distortion and
common-mode voltage,” IEEE Transactions on Power Electronics, vol. 20, no. 1, pp.
90–99, Jan 2005.
[13] I. Milosavljevic, “Power electronics system communications,” Master’s thesis, Virginia
Polytechnic Institute and State University, 1999.
[14] B. Fan, K. Wang, P. Wheeler, C. Gu, and Y. Li, “A branch current reallocation based
energy balancing strategy for the modular multilevel matrix converter operating around
equal frequency,” IEEE Transactions on Power Electronics, vol. 33, no. 2, pp. 1105–
1117, Feb 2018.
[15] H. Tu and S. Lukic, “A hybrid communication topology for modular multilevel con-
verter,” in 2018 IEEE Applied Power Electronics Conference and Exposition (APEC),
March 2018, pp. 3051–3056.
[16] E. Aeloiza, F. Canales, and R. Burgos, Power Converter Having Integrated Capacitor-
Blocked Transistor Cells, U.S. Patent, 9,525,348 B1, Dec 2016.
[17] G. Francis, “A synchronous distributed digital control architecture for high power con-
verters,” Master’s thesis, Virginia Polytechnic Institute and State University, 2004.
[18] J. H. Guo, “Distributed, modular, open control architecture for power conversion sys-
tems,” Ph.D. dissertation, Virginia Polytechnic Institute and State University, 2005.
[19] T. Laakkonen, T. Itkonen, J. Luukko, and J. Ahola, “Time-stamping-based synchro-
nization of power electronics building block systems,” in 2009 35th Annual Conference
of IEEE Industrial Electronics, Nov 2009, pp. 925–930.
[20] EtherCAT Slave Controller: Section I - Technology, EtherCAT Slave Controller
datasheet, Beckhoff, July 2014.
[21] EtherCAT Technology, EtherCAT – The Ethernet Fieldbus, 2018. [Online]. Available:
https://www.ethercat.org/download/documents/ETG_Brochure_EN.pdf
[22] C. L. Toh and L. E. Norum, “A high speed control network synchronization jitter
evaluation for embedded monitoring and control in modular multilevel converter,” in
2013 IEEE Grenoble Conference, June 2013, pp. 1–6.
[23] P. Dan Burlacu, L. Mathe, and R. Teodorescu, “Synchronization of the distributed PWM
carrier waves for modular multilevel converters,” in 2014 International Conference on
Optimization of Electrical and Electronic Equipment (OPTIM), May 2014, pp. 553–559.
[24] T. Włostowski, “Precise time and frequency transfer in a White Rabbit network,”
Master’s thesis, Warsaw University of Technology, 2011.
[25] P. Moreira, P. Alvarez, J. Serrano, and I. Darwazeh, “Sub-nanosecond digital phase
shifter for clock synchronization applications,” in 2012 IEEE International Frequency
Control Symposium Proceedings, May 2012, pp. 1–6.
[26] H. Wen, J. Gong, Y. Han, and J. Lai, “Characterization and evaluation of 3.3 kV 5 A SiC
MOSFET for solid-state transformer applications,” in 2018 Asian Conference on Energy,
Power and Transportation Electrification (ACEPT), Oct 2018, pp. 1–5.
[27] L. Mathe, P. D. Burlacu, and R. Teodorescu, “Control of a modular multilevel converter
with reduced internal data exchange,” IEEE Transactions on Industrial Informatics,
vol. 13, no. 1, pp. 248–257, Feb 2017.
[28] C. Carstensen, R. Christen, H. Vollenweider, R. Stark, and J. Biela, “A converter control
field bus protocol for power electronic systems with a synchronization accuracy of ±5ns,”
in 2015 17th European Conference on Power Electronics and Applications (EPE’15
ECCE-Europe), Sep. 2015, pp. 1–10.
[29] G. Cena, S. Scanzio, A. Valenzano, and C. Zunino, “A distribute-merge switch for
EtherCAT networks,” in 2010 IEEE International Workshop on Factory Communication
Systems Proceedings, May 2010, pp. 121–130.
[30] IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measure-
ment and Control Systems, IEEE Std. 1588-2008, Jul. 2008.
[31] Y. Rong, J. Wang, Z. Shen, R. Burgos, D. Boroyevich, and S. Zhou, “Distributed control
and communication system for PEBB-based modular power converters,” in 2019 IEEE
Electric Ship Technologies Symposium (ESTS), Aug 2019, pp. 627–633.
[32] C. E. Cummings, Clock Domain Crossing (CDC) Design Verification Techniques Using
System Verilog, Sunburst Design, Inc.
[33] S. Verma and A. S. Dabare, Understanding clock domain crossing issues, Atrenta, Inc.
[34] M. Jakšić, “Identification of small-signal dq impedances of power electronics converters
via single-phase wide-bandwidth injection,” Ph.D. dissertation, Virginia Polytechnic
Institute and State University, 2014.
[35] M. Jakšić, Z. Shen, I. Cvetković, D. Boroyevich, R. Burgos, C. DiMarino, and F. Chen,
“Medium-voltage impedance measurement unit for assessing the system stability of
electric ships,” IEEE Transactions on Energy Conversion, vol. 32, no. 2, pp. 829–841,
June 2017.
[36] B. Wen, R. Burgos, D. Boroyevich, P. Mattavelli, and Z. Shen, “AC stability analysis
and dq frame impedance specifications in power-electronics-based distributed power
systems,” IEEE Journal of Emerging and Selected Topics in Power Electronics, vol. 5,
no. 4, pp. 1455–1465, Dec 2017.
[37] Z. Shen, I. Cvetkovic, M. Jaksic, C. DiMarino, D. Boroyevich, R. Burgos, and F. Chen,
“Design of a modular and scalable small-signal dq impedance measurement unit for grid
applications utilizing 10 kV SiC MOSFETs,” in 2015 17th European Conference on Power
Electronics and Applications (EPE’15 ECCE-Europe), Sep. 2015, pp. 1–9.
[38] Z. Liu, I. Cvetkovic, Z. Shen, D. Boroyevich, R. Burgos, and J. Liu, “Analysis and
control of a transformerless series injector based on paralleled H-bridge converters for
measuring impedance of three-phase AC power systems,” in 2018 IEEE Applied Power
Electronics Conference and Exposition (APEC), March 2018, pp. 1841–1845.
[39] J. Wang, R. Burgos, D. Boroyevich, and Z. Liu, “Design and testing of 1 kV H-bridge
power electronics building block based on 1.7 kV SiC MOSFET module,” in 2018 Interna-
tional Power Electronics Conference (IPEC-Niigata 2018 -ECCE Asia), May 2018, pp.
3749–3756.
[40] J. Wang, Z. Shen, I. Cvetkovic, N. R. Mehrabadi, A. Marzoughi, S. Ohn, J. Yu, Y. Xu,
R. Burgos, and D. Boroyevich, “Power electronics building block (PEBB) design based
on 1.7 kV SiC MOSFET modules,” in 2017 IEEE Electric Ship Technologies Symposium
(ESTS), Aug 2017, pp. 612–619.
[41] I. Cvetkovic, Z. Shen, M. Jaksic, C. DiMarino, F. Chen, D. Boroyevich, and R. Burgos,
“Modular scalable medium-voltage impedance measurement unit using 10 kV SiC MOSFET
PEBBs,” in 2015 IEEE Electric Ship Technologies Symposium (ESTS), June 2015, pp.
326–331.