
ZigBee Test Strategies and Implementations

Menno Mennenga

Ventocom Requirement + Test Engineering

Arndtstr. 9, D – 01099 Dresden

Email: [email protected]

Phone: +49 173 565 8134

Abstract

Technologies for Wireless Sensor Networks (WSN) have generated considerable interest

among embedded developers. Freedom from wires promises easy deployment, reduced

maintenance, increased flexibility, additional comfort and reduced cost in gathering up-to-the-

second information. These advantages motivated engineers to develop a large number of

exciting products over the last years, based on open-standard technologies such as IEEE

802.15.4, ZigBee, WirelessHART, or 6LoWPAN.

WSN are, by definition, distributed, parallel networks of nodes communicating over wireless

links. Engineers dealing with such systems are aware that distributed development is a

complex task. This paper intends to unveil some of the complexity by giving insight into the

development of WSN applications by specially considering test activities. It identifies

challenges in implementing test systems for both ZigBee stacks and ZigBee applications by

introducing two case studies which demonstrate how these challenges can be effectively met.

1 Introduction

The Ventocom team has been helping to bring WSN technology into reality since 2004.

Initially, the focus was on implementing IEEE 802.15.4 (15.4) ICs and both 15.4 and ZigBee

protocol stacks. The expertise was later applied in supporting companies who integrated

15.4 and ZigBee communication capability into their products. Based on this experience, this

paper presents approaches for the testing of ZigBee stacks and applications. It is hoped that

it will be valuable to R&D managers and engineers alike in the continuing adoption of WSN

technology.

The paper is organized as follows. Section 2 introduces test activities in the frame of the

product life cycle, and it identifies the main challenges in test design for wireless, distributed

systems in the design phase. Section 3 includes two case studies, of a test system designed

for stack testing and a test system for application testing, respectively. The test execution

and the test process are described in detail, and similarities and differences are highlighted.

Section 4 concludes the paper.

2 Test in Perspective

Figure 1 shows the phases of the product life cycle model, i.e., the conception, design,

realization, and service phases. In each stage, different test activities are taking place. In the

design phase they include unit testing, integration testing and system testing of, for example,

the modules of a radio IC or the software components of microcontroller software. Other

tests include environment tests and electrical tests. In the realization stage, tests are used to

ensure product quality on the manufacturing line, and in the service phase, once a product

has been installed or is being used, a maintenance technician may perform tests and checks

for preventive maintenance and for defect location prior to repair. For the discussion in this

paper, we will focus on the tests performed during the design phase and we exclude any

other tests from our considerations.

Figure 1: Phases of the Product Life Cycle Model

The V-Model is often used to define the activities in the design phase. It prescribes a process

sketching out a top-down system design approach, an implementation phase, and a bottom-

up test process, the different test stages being component test, integration test and system

test.

Figure 2: Extended process model illustrating increased test complexity in the design phase

Of course, the V-Model is primarily a process template – taking it literally would mean that

development activities terminate once the system components have passed the three test

steps (unit, integration, system integration test) in a single sweep. Yet the development of

distributed networks such as a system of ZigBee nodes may require a more refined process

model. We suggest that a better understanding of the development process can be achieved

by introducing two levels in the process model – node level and network level – and by

replicating the system design and system architecture activities on both levels. This

amended model is shown in Figure 2. We think that this model works better in capturing the

different test stages that are involved in designing distributed, networked systems.

2.1 Network Level System Testing

Using the concept of node level and network level testing allows test experts to separate

functionality that can be tested on a node level from functionality that can only be

meaningfully tested on the network level. The software components of a ZigBee node

suitable for node level tests are, for example, the implementations of data processing and

signal conditioning algorithms. For these components, unit tests and integration tests do not

significantly differ from single-node development. But network nodes will also contain

communication components that can only be tested in conjunction with other nodes on the

network level.

The different test stages on the node level are well understood and have been practiced by

developers for many years. Yet some aspects of the network level test, especially for

wireless systems, are comparatively unfamiliar: the influence of the wireless channel and the

potentially large number of communication interfaces to be stimulated and observed. The

paper will therefore concentrate on the network-level system integration test for WSN

systems.

2.2 Network Level Test Automation

An additional aspect that is directly related to network-level system testing is test automation.

Both system complexity and the need for short release cycles require that the network-level

system tests run automatically.

Release management requires efficient testing: developers and managers track the progress

of implemented software features and of defects by freezing the development state at regular

intervals and generating a release configuration. Especially short release cycles call for

automated test systems as they offer the advantage of executing a large number of test

cases on a DUT quickly and reliably.

Another argument for test automation is sheer DUT complexity: the overall state space in a

network grows exponentially with the number of nodes. In our experience, it becomes difficult

to manually track all communication when the number of nodes reaches about 5. From that

point on, a test system capable of automatically executing test cases becomes more

effective than manual testing, even though a certain development effort has to be invested in

setting up the system.
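The exponential growth argument can be made concrete with a small back-of-the-envelope calculation. The per-node state count used below is purely illustrative and not a figure from the case studies:

```python
# Illustrative only: if each node's communication component can be in s
# local states, a network of n nodes has up to s**n combined states.

def combined_states(states_per_node: int, nodes: int) -> int:
    """Upper bound on the combined state space of a network of nodes."""
    return states_per_node ** nodes

# Even a modest (assumed) per-node state count explodes quickly:
for n in (2, 5, 10):
    print(n, combined_states(8, n))
```

At the five-node mark mentioned above, even eight states per node already yield tens of thousands of combined states, which illustrates why manual tracking breaks down.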

The next section will show (using two case studies) how automated network-level test

systems have been implemented. The first case study involves the testing of IEEE 802.15.4

and ZigBee stacks by the stack development team of an IC manufacturer. The second case

study describes a test system used in an industrial-product development project for a

logistics application with WSN communication features.

3 Test System Case Studies

3.1 Case Study: Test System for Stack Development

Several semiconductor manufacturers offer integrated circuits to build IEEE 802.15.4 or

ZigBee-based solutions. Their customers expect that ICs are complemented by development

tools, application notes, technical support and software, including MAC and ZigBee stacks.

Semiconductor manufacturers therefore develop stacks in-house or work with development

partners to realize this objective.

The first case study deals with the stack development at an IC manufacturer where a stack

development group works closely with the IC design team to build software tightly integrated

with the radio transceiver and microcontroller ICs. The case study focuses on the network-

level integration and system test stages. Previous tests on the IC level and basic paired-

communication tests will not be discussed.

3.1.1 Test Objective

The Device under Test (DUT) is the radio/controller/software stack system. The test objective

is to validate the stack software against the IEEE 802.15.4 and ZigBee specifications and to

verify the correct operation of the entire system implementation.

3.1.2 Test Process

Since the stack software architecture is typically layered and consists of physical layer

(lowest layer), MAC layer, network layer, profile layer, and application layer (highest layer),

the developer team started the test with the physical layer and progressed by integrating

additional layers until the complete system had been tested. For the purpose of the

discussion in this paper, we assume that the development team focuses on a particular level

instance.

The test process is shown in Figure 3. A “developer track” and a “tester track” had been

established – the developers took care of the stack software implementation while the tester

team took care of the verification task. The team used the open-standard documentation

(either IEEE 802.15.4 or ZigBee, depending on the focus) to derive both the system

specification and the test specification. Software implementation on the developer track and

test implementation on the tester track followed as the next steps.

Figure 3: Stack Development Test Process

Developers executed initial developer tests on their own test setups. Having sufficiently

tested their code, a release would be organized among the developer team. Then the test

experts ran the system test on their own setups and communicated defects back to the

developer team.

Developers and testers used compatible test systems which differed only in complexity (in

comparison to the developer test setups, the test track system included a larger number of

nodes, and more elaborate features to control the influence of the wireless channel). The test

system compatibility meant that test cases defined for the developer test system executed on

the tester system, and vice versa. This advantage led to considerable productivity gains in

the development process as defects could be comprehensively traced, and once-attained

knowledge (in the form of possible error scenarios, i.e., test cases) was preserved among the

entire team.

3.1.3 Test Interface

The devices under test (DUTs) were PCBs, each carrying a radio transceiver, a

microcontroller and the required active and passive components. Each PCB had an SMA

connector for RF input and output and a UART interface connected to the microcontroller.

Via the UART the test system had access to the software API of the DUT. Since several

PCBs were used together to verify the system on the network level (from 2 to several hundred),

the RF and UART connections of all the PCBs made up the complete test interface.

Observation points included the wireless channel (via RF measurement equipment and

Sniffer devices) as well as the UART interfaces of the DUTs. The test system allowed a

stimulation of the DUTs through a Sniffer device and the RF devices over the air and through

the UART interfaces of the DUTs.
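As an illustration only, access to a DUT's software API over the UART might be wrapped along the following lines. The class name, frame format, and the synthesized confirm primitive are our own assumptions, not the actual test system's protocol; only the MLME-SCAN request/confirm pairing follows the IEEE 802.15.4 service model:

```python
from dataclasses import dataclass, field

@dataclass
class UartDut:
    """Hypothetical DUT handle: stimulate via API calls sent over the
    UART, observe the responses. Frame format is made up."""
    port: str
    log: list = field(default_factory=list)

    def send_api_call(self, primitive: str, **params) -> str:
        # A real system would write a framed request to the UART and
        # wait for the DUT's answer; here we only record the call and
        # synthesize the matching confirm primitive.
        args = ", ".join(f"{k}={v}" for k, v in sorted(params.items()))
        self.log.append(f"{primitive}({args})")
        return primitive.replace(".request", ".confirm")

dut = UartDut(port="/dev/ttyUSB0")
confirm = dut.send_api_call("MLME-SCAN.request", channel=11)
```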

3.1.4 Test System

Figure 4: Test system used for stack development

The structure of the test system is shown in Figure 4. It included several components

observing or stimulating the wireless channel to set certain conditions of the environment and

to introduce certain error scenarios. This included the free parameterization of the link

budgets between the participating devices to simulate different network topologies without

having to physically move the devices. Additionally, interferer scenarios could be established,

including the presence of pulse and other interferers, for example in order to check and

mitigate the influence of WiFi devices.
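The link-budget parameterization can be sketched as follows. The transmit power, receiver sensitivity, and attenuation figures are hypothetical, chosen only to show how an attenuation matrix selects a topology without physically moving hardware:

```python
# Assumed figures, for illustration only:
TX_POWER_DBM = 0        # hypothetical transmit power
SENSITIVITY_DBM = -85   # hypothetical receiver sensitivity

def reachable(attenuation_db: float) -> bool:
    """A link is usable if the received power stays above sensitivity."""
    return TX_POWER_DBM - attenuation_db >= SENSITIVITY_DBM

# Star topology: node 0 hears nodes 1 and 2, while nodes 1 and 2 are
# isolated from each other by setting their mutual attenuation very high.
atten = {
    (0, 1): 60.0,
    (0, 2): 60.0,
    (1, 2): 120.0,
}
topology = {pair: reachable(a) for pair, a in atten.items()}
```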

A test bus connected the DUTs and the other test system components to the Test Execution

System (TES). The physical nature of the test bus had been abstracted such that RS-232

connections could be used in simpler setups, while Ethernet connections were used for more

complex configurations with up to several hundred nodes requiring a high-throughput interface.

Via the test bus, the TES issued stimulus instructions and received observation data from all

test system components (DUT interface, Sniffer, EMP generator, and other RF measurement

equipment) and compared them with the expected results (for some tests, the expectations

could even be formulated in a “fuzzy” manner).
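One way such "fuzzy" expectations might be expressed is sketched below; the function name and the three expectation forms (tolerance interval, set of acceptable outcomes, exact value) are our own illustration, not the actual TES mechanism:

```python
def matches(observed, expected) -> bool:
    """Compare an observation against an exact or fuzzy expectation."""
    if isinstance(expected, tuple) and len(expected) == 2:
        lo, hi = expected                  # tolerance interval
        return lo <= observed <= hi
    if isinstance(expected, (set, frozenset)):
        return observed in expected        # any of several outcomes
    return observed == expected            # strict comparison

# A round-trip time may pass anywhere within a window, while a
# status code must match exactly.
verdicts = [
    matches(12.4, (10.0, 15.0)),           # fuzzy: latency window
    matches("SUCCESS", {"SUCCESS", "NO_ACK"}),
    matches(0x42, 0x42),
]
```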

3.1.5 Test Execution and Defect Reporting

The TES executed a test run by first reading an XML file containing as parameters the

software type (the layer under test) and the software version under test. Accordingly, the

TES checked the corresponding code and the corresponding version out of the code

repository and distributed it via the test bus to the DUTs, which were automatically

programmed with the configured software version. Then the TES started the test execution

by fetching the test cases from the test case repository.

For each test case, the environment parameters of the channel model were adjusted, and

the test case executed by translating the test case description into stimulus data. Via the test

bus, the TES observed results and compared them with the expected results, which were

also defined by the test case. The test report was amended with the test verdict (PASS or a

FAIL) and, if the test case failed, with additional information to ease the tracking of the

defect.
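The run loop just described can be sketched as follows. The XML schema, test-case IDs, and the stub in place of real stimulus and observation are assumptions; the real TES checked firmware out of a repository and drove hardware over the test bus:

```python
import xml.etree.ElementTree as ET

# Hypothetical run configuration (schema is our own invention):
CONFIG = """<testrun software="mac_layer" version="1.4.2">
  <testcase id="TC-001" expected="SUCCESS"/>
  <testcase id="TC-002" expected="SUCCESS"/>
</testrun>"""

def run_test_case(case_id: str) -> str:
    """Placeholder for programming the DUTs, applying stimuli, and
    collecting observations over the test bus."""
    return "SUCCESS"

def execute(config_xml: str) -> list:
    root = ET.fromstring(config_xml)
    report = [f"software={root.get('software')} version={root.get('version')}"]
    for tc in root.iter("testcase"):
        observed = run_test_case(tc.get("id"))
        verdict = "PASS" if observed == tc.get("expected") else "FAIL"
        line = f"{tc.get('id')}: {verdict}"
        if verdict == "FAIL":
            # extra information to ease defect tracking
            line += f" (observed {observed})"
        report.append(line)
    return report

report = execute(CONFIG)
```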

3.2 Case Study: Test System for a Logistics Application

Preventive maintenance of logistics equipment helps to manage maintenance cost and avoid

equipment down-time. For electrically powered fork lift trucks, stringent battery maintenance

maximizes the battery lifetime and keeps the battery capacity at a high level, thus avoiding

ever shorter usage periods before recharging.

A fork lift truck manufacturer therefore developed a management system capable of tracking,

for each truck battery, a large number of parameters (such as charge cycles, complete

charge cycle, charged and extracted power and charge, among others). In addition, since

many different battery types may be used in a warehouse, charging stations should be able

to recognize the type of connected batteries to enable individual charging characteristics for

each battery type to maximize battery life and operation cycles - prior to charging, batteries

should communicate individual charging information to the connected charger.

The management system comprises three basic components: a charger device (CD), a

battery controller (BC), and an interface to the central controller (CC) of the fork lift truck. For

the reasons given above, the system components needed to exchange information, and for

cost and comfort reasons, a wireless connection was chosen.

3.2.1 Test Objective

The test objective was to verify the correct function of each device (BC, CD, and CC)

separately and to verify the correct communication between all devices via the wireless

network and the CANopen interface. Therefore, the tests included both device acceptance

tests and a network-level system integration test.

3.2.2 Test Process

Figure 5: Development and test process of the logistics application project

The test process is shown in Figure 5. Overall, three external suppliers and an internal

development team were responsible for developing the different devices of the battery

management system.

The fork lift truck manufacturer was responsible for the system specification. From a

functional point of view, it defined the use cases in the different stages of the product life

cycle (operation, installation, and maintenance), the device interfaces, and the algorithms for

sensor data acquisition and processing (as the basis for battery lifetime computations). In

addition, electrical, safety, and regulatory compliance requirements had been defined under

the guidance of the internal test and simulation experts.

Each supplier (be it internal or external) developed its own device and subjected it to its own

functional tests. At regular intervals, all suppliers submitted their deliveries to the test team.

The suppliers also submitted, if applicable, any documentation that summarized their own

test results for review.

The test team verified the correct communication of all devices by subjecting them to a network-

level system test. In addition, to cover gaps in the supplier tests or their test documentation,

the test team executed acceptance tests to verify the correct sensor data processing, signal

processing, and generation and processing of charging information for each device. These

test activities are summarized in the “integrator track” of Figure 5.

Missing information in the supplier test reports or hardware or software defects were logged

by the test team in the internal issue management system. On this basis, test reports were

generated and shared with the development teams to enable them to remove the defects, or

provide additional information as requested.

3.2.3 Test Interface

The devices under test (DUTs) were the battery controller (BC), the charger device (CD), and

the truck’s central controller (CC). Each device was capable of wireless communication using

a radio transceiver/microcontroller system and their associated components.

The RF output and the UART connecting the wireless communication to the rest of the

device could be observed at all devices. Any additional test interfaces were specific for each

device. They included the sensor inputs of the BC (battery voltage, battery current,

symmetric battery voltage, battery water level, and battery temperature), the power outputs

of the CD, and the CANopen interface of the CC, in addition to several other interface

components.

3.2.4 Test System

Figure 6: Test system for application development

For meaningful test results, complete battery charge and discharge cycles and different

failure and maintenance scenarios were tested (including software updates and event log

extraction). This required the replication of those fork lift truck components that are relevant

for the battery management system in these scenarios. Therefore the test setup included a

drive motor, a drive motor controller, an 800 Ah traction battery, a high-power current

source/sink and high-precision, high-power current measurement equipment as well as

assorted fuses, switches, and a CANopen interface.

A National Instruments real-time system interfaced with the test setup and the DUTs. It was

controlled from a PC that hosted a LabView application. The LabView application enabled

the tester to comfortably stimulate and observe the DUTs. TestBench from imbus AG was

used for test case definition, test result reporting and potentially test automation. The

generated test reports were the basis for defect messages entered into the issue

management system.

As mentioned previously, the test system had the flexibility to perform the network level

integration test as well as device acceptance tests that had not been executed or

documented by the suppliers. The test results established the basis of meaningful technical

discussions for the diverse supplier team and a quick resolution of open problems. Thus, the

chosen test approach not only led to a stable, customer-relevant battery management

system but also helped to attain this objective quickly and productively.

3.2.5 Test Execution and Defect Reporting

A test cycle included basic system operation tests and other scenarios such as the injection

and logging of various event conditions (including truck events and battery events – such as

over temperature, deep discharge, battery cell defect, over charge, over current and others)

and the testing of maintenance procedures (event logging, log book reading and reset, over-

the-air firmware upgrade). The test cases that made up the test cycle were run by the

National Instruments real-time computer. Via the test system interface, the real-time system

brought the test system into the corresponding operating states. The previously mentioned

interfaces were observed and the results compared with the expected results. Each test case

verdict (pass or fail) was logged with imbus TestBench.

The test cycle also included tests for communication-throughput performance and long-term

communication stability. Long-term stability tests were run in normal or maintenance

operation stages. Throughput tests were executed by gradually increasing the rate of data

requests (for example by reading the current battery voltage) via the CANopen interface,

which were bridged into the wireless network. The test passed if the wireless communication

reached a throughput threshold while continuing regular operation.
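The throughput ramp can be sketched as below. The threshold, step size, and the stand-in for the real observation are illustrative assumptions, not values from the project:

```python
THRESHOLD_REQ_PER_S = 50   # assumed pass threshold (requests per second)

def link_ok(rate: float) -> bool:
    """Stand-in for the real observation, e.g. battery-voltage reads
    over CANopen bridged into the wireless network. The saturation
    point below is hypothetical."""
    return rate <= 80

def throughput_test(start: float = 5, step: float = 5) -> bool:
    """Step the request rate up to the threshold; pass only if the
    link operates normally at every step along the way."""
    rate = start
    while rate <= THRESHOLD_REQ_PER_S:
        if not link_ok(rate):
            return False   # link broke before reaching the threshold
        rate += step
    return True            # threshold reached with regular operation

result = throughput_test()
```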

Device acceptance tests of the CD and the BC made up the third area of a test cycle. For

that purpose, charging and discharging cycles were emulated with the help of a current sink.

Precision high-power voltage and current measurement equipment was used to log current

and voltage reference data. In addition, power and charge reference values were computed

and compared against the long-term data recorded by the BC to verify the correctness of

sensors and sensor data computation in the BC. The reference data was also used to track

the battery charge characteristics, and therefore allowed the verification of the CD

implementation as well.
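The reference computation can be sketched as follows: the logged current is integrated over time to obtain charge in ampere-hours, and the battery controller's recorded value is accepted within a tolerance. The sample data and the 2 % tolerance are made up for illustration:

```python
def charge_ah(current_samples_a, dt_s: float) -> float:
    """Integrate current samples (rectangle rule) to charge in Ah."""
    return sum(i * dt_s for i in current_samples_a) / 3600.0

# One hour of a constant 100 A discharge, sampled once per second:
reference = charge_ah([100.0] * 3600, dt_s=1.0)

# Accept the BC's reading if it lies within an assumed 2 % tolerance.
bc_reading_ah = 99.1
ok = abs(bc_reading_ah - reference) <= 0.02 * reference
```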

3.3 Case Study Summary

The previous sections discussed two case studies on the testing of distributed systems with

wireless communication capabilities. A test system for stack development and a test system

for application development were introduced. Common to both test systems was a central

test execution engine which, from test case descriptions, stimulated the DUT, observed its

behavior, and compared the observations with the expected results to form a test verdict.

The stack test system was characterized by a potentially large number of DUTs sharing

similar interfaces, while the application test system interface was much more varied, including

analog, digital, high-power, and communication interfaces.

Less obviously, both case studies highlighted the need to precisely synchronize timing

information across the entire test interface to be able to order stimuli and observations

across the test interface. In the stack test, the timing was synchronized via the test bus; in

the application test system this task was performed internally by the NI real-time system. Had

it been necessary in the latter case to test a considerably larger number of interfaces, a

similar approach as for the stack test may have been required.
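Once all interfaces share a common time base, observations from different sources can be merged into a single ordered trace, which is what makes the ordering of stimuli and observations possible. The event tuples below are our own illustration:

```python
import heapq

# Hypothetical timestamped observations, each stream sorted by a
# shared time base (timestamp_us, source, event):
uart_events = [
    (100, "uart0", "DATA.request"),
    (450, "uart1", "DATA.indication"),
]
sniffer_events = [
    (220, "sniffer", "frame on air"),
]

# Merge the per-source streams into one chronologically ordered trace.
trace = list(heapq.merge(uart_events, sniffer_events))
```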

Both test setups included an environment model. Since the focus of the stack test was on the

capability of the stack software to react to a variety of disturbances in the channel, the

wireless channel model was very elaborate. The battery management test system focused

on a different environment model – the environment surrounding the battery management

system. The challenge here was to safely deal with voltages and currents very much outside

the CMOS voltage level of 1.8V.

4 Conclusion

The need for a differentiation between node-level and network-level tests for distributed,

networked systems was discussed, and the need for test automation was established. Two

case studies were introduced that discussed test systems for network-level tests in stack and

application development. It became clear that stack and application developers alike invest

considerable time and expertise in the design and the operation of test systems for wireless,

distributed systems. Also, test experts responsible for network-level testing of products that

involve wireless capability need to combine test knowledge, wireless expertise, and product

domain expertise, making test projects in these areas both demanding and challenging.

5 Acknowledgement

The author is indebted to Dipl.-Phys. (Univ.) Bernd Mattern for his highly competent guidance

in our co-operation. The author would like to thank him as well as Dr.-Ing. Attila Römer and

Dr.-Ing. Axel Wachtler for their helpful comments and suggestions in formulating this paper.