Automated Testing For Automotive Infotainment Systems: Ning Yin
Automated testing for automotive infotainment systems
NING YIN
Department of Computer Science and Engineering
Chalmers University of Technology
Abstract
With the development of the automotive industry, the complexity of infotainment
systems is increasing due to the growing number of electronic control units (ECUs).
In-vehicle infotainment (IVI) is gradually becoming one of the main features of high-
end vehicles. Automotive companies find it a challenge to test these complex
functions to ensure product quality before the start of production. There is
therefore a strong demand to carry out a high volume of infotainment tests through
test automation, in order to shorten test cycles, improve quality and save resources.
Acknowledgements
This thesis project was performed by Ning Yin from Chalmers University of Tech-
nology; it was carried out at ÅF in Trollhättan and Gothenburg. First of all, I
would like to thank my supervisor Ola Wennberg, Manager of Infotainment, SW
& HMI at the ÅF Automotive department, who gave me the chance to perform this
master's thesis. I am grateful for all the guidance and valuable information offered
by Dan Carlsson, Mikael Karlsson and Smitha Mohan from the ÅF infotainment team.
Second, I would like to take the opportunity to thank Johansson Andreas LV, Oskar
Andersson and their colleagues for helping me with the second task of this master's
thesis. Third, I want to express my thanks to my supervisor Lena Peterson at Chalmers:
thank you for helping me with the report writing and giving continuous feedback
throughout the project. Last, I am also thankful to my examiner Per Larsson-Edefors
for reviewing and giving feedback on my thesis report.
Contents

List of Acronyms

1 Introduction
1.1 Project Background
1.2 Motivation
1.3 Project Goals
1.4 Limitations
1.5 Ethical Aspects
1.6 Report Layout

3 Methodology
3.1 Research Questions
3.2 Research Method
3.3 Tools and resources

7 Conclusions
7.1 Achievements
7.2 Future Work

Bibliography
List of Acronyms
1
Introduction
test automation with the support of proprietary tools and scripts is highly demanded
by car manufacturers. An automated testing system would decrease the need for
human resources and the intervention of experts, as well as enhance efficiency and
performance and shorten lead times [6].
1.2 Motivation
Customer expectations of new entertainment applications and services in their
cars force the automotive infotainment industry to evolve continuously. Meanwhile,
the increasing complexity of in-vehicle electronics makes infotainment system integration
difficult. In today's premium vehicles, the infotainment system works as a distributed,
integrated system in which hardware and software components interact
by means of the in-vehicle network. Typical problems, such as interruption from
another feature, concurrency and consistency, usually occur in service interaction.
Functional testing, regression testing and robustness testing need to be performed
on these features to guard against erroneous behavior. Hence, new testing methods
should be adopted to replace old hardware or software test approaches with more flexible ones.
One need, for instance, is a testing method that supports multiple kinds of input/output,
such as GPS data from navigation and information from advanced driver assistance
systems (ADAS), within a single IVI system. Another example is finding a common
solution to the ever-increasing variety of infotainment standards and connectivity proto-
cols [7]. If a tester wants to check car performance while multiple tasks run
concurrently, e.g. a driver using GPS while connecting the phone to Bluetooth and
adjusting the cabin temperature at the same time, the testing environment needs to be
simulated with all these features involved. This means that the combination of the naviga-
tion system, Bluetooth connectivity and communication network must have correct
synchronization. Therefore, new tests should take this into account and analyze the
exact timing with which these tasks are carried out.
Another challenge lies in the test oracle problem, which is closely related to the
automatic generation of test cases [8]. When applying automated testing in an agile
project, effort is needed to react quickly to successive changes to the sys-
tem requirements. New test cases are invoked correspondingly and should be added
to the existing test automation in parallel with the development of the project. For
example, tests will fail if the order of two buttons is changed in the application while
no corresponding new test case has been generated. The test-generation problem
becomes more intractable when part of the testing environment is out of the engineers'
control or the testing environment is indeterminate [9].
The decision of which tests to automate should be made at the inception stage,
based on a comparison between the value of the automated tests and the effort to
produce them. The initial phase of assessing different testing goals within the in-
fotainment area is vital, since it indicates where testing is laborious and
time-consuming to perform manually. Other challenges, such as the architecture of the test
code, the test code language and the test environment specification, must also be considered
in this project [10].
• The first aim is to analyze a suitable area for applying automated testing to IVI systems.
The chosen area should be agreed on based on the motivation and rationale of the
domain. Afterwards, the area deemed suitable needs further investigation
regarding how automation should be applied, including test effort estimation.
• The second goal is to create a small, useful Python script which can be applied
to an ECU test in an automotive system. The validation of the script will be
achieved by comparing test results with the expected behavior of the application
in the vehicle test environment.
1.4 Limitations
In this project, the limitations are:
• In the first part of this thesis, the author does not have access to the real
infotainment systems, and hence this part of the work is a purely theoretical
treatment.
• By the time the author finishes the thesis, the script has not been performed on
the complete automotive system, due to the project progress at the company.
However, the test principle is the same, and the existing test results show
that the method can be applied to future ECU tests when the hardware is ready.
Chapter 3 describes the methodology used for conducting the thesis work and
outlines the research questions.
Chapter 4 demonstrates the testing framework for ECU diagnosis. A new test
cost estimation model is also presented in this chapter.
Chapter 6 analyzes the result achieved and answers the research questions from
Chapter 3.
Chapter 7 presents the conclusions drawn from this project and proposes areas
for future investigation.
2
Technical Background and
Current Situation
This chapter introduces background knowledge related to the thesis work. The
first section covers automotive electronics, especially the automotive infotainment
system; the relevant topics include diagnosis functionality, the various network
communication buses and the network gateway. Next comes the state of automated testing
technology. Understanding these two sections eases the implementation
of the automated test framework presented in a later chapter of this thesis.
the fuel level, door security, traffic information and weather forecasts are acquired
by sensors and shown on the display screen.
Generally, IVI systems rely on a human machine interface (HMI) to deliver
and process the services mentioned above. The usability demands on HMIs have increased
considerably with the growing feature set of IVI systems. The correctness of the user interface
is essential for a faultless experience of the IVI system. A typical HMI consists of
central touchscreens, a control unit, keypads, multi-function buttons, etc. Figure 2.1
shows the infotainment system of a Volvo S90.
Figure 2.1: Infotainment system of Volvo S90, where 1. Voice control unit 2.
Touch screen and 3. Keypads. Photograph taken by author.
• The user interface provides all input modalities, such as the touch screen and
haptic buttons, for entering commands from drivers. The output modalities like
• Applications can generally be divided into two groups. One is the head unit,
which offers the driver control over the vehicle's entertainment media [14]. It is usu-
ally integrated in the center of the dashboard. The other is the electronic instru-
ment cluster unit, which includes a set of instrumentation such as the speedometer,
odometer and other gauges.
In each part, different embedded ECUs interact via one or more communication
buses, such as media oriented systems transport (MOST) and controller area net-
work (CAN), shown in Fig. 2.3. The amplifiers increase the amplitude of the signals
provided by the head unit [16]. The head unit communicates with other functions
inside the vehicle through the CAN bus.
Figure 2.3: In-vehicle network around the head unit (MOST bus, amplifiers, external devices).
both low-speed and high-speed CAN bus and a local interconnect network (LIN) bus.
Rodelgo-Lacrus et al. [18] classified automotive networks into several classes based on data rate
and function. LIN is an example of a Class A network, used for simple data
transmission at speeds below 10 kbit/s. It is the cheapest among automotive
networks and is used when no high versatility is needed. Class B networks such as
low-speed CAN offer transmission rates up to 125 kbit/s. Together with high-speed
CAN, which belongs to Class C and operates from 125 kbit/s to 1 Mbit/s, it
composes the CAN bus network. CAN is also called a multi-master broadcast serial
bus, since each node can transmit and receive messages with fault tolerance. Class D
networks such as MOST are designed for speeds exceeding 1 Mbit/s, mainly serving
as gateways between subsystems and as carriers for audio, video and other media data.
In the following, a more detailed introduction to CAN, MOST and Ethernet
is presented.
CAN Network
CAN is an event-triggered bus system for real-time nodes such as temperature sensors
and driver door modules. It was first released in 1986 by Robert Bosch GmbH and
applied in vehicles; other application areas, for instance trams, undergrounds
and aircraft, adopted it afterwards. The CAN network
provides a single interface for ECUs instead of several input nodes on each
device. The broadcast communication feature of CAN makes every device linked to
the network notice the transmitted messages and decide if the message should be
accepted or neglected [20]. Therefore, additional nodes can be added without
changing the topology of the network. CAN offers non-interrupted transmission
of messages: the frame with the highest priority gets access to the CAN bus, and
messages are transmitted in broadcast. Another advantage of the CAN network is its
error-detection capability, supported by a cyclic redundancy check (CRC) that detects
global and local transmission errors. The topology of a CAN network with both a high-
speed and a low-speed bus is illustrated in Figure 2.4.
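As a concrete illustration of the CRC mechanism mentioned above, the following minimal Python sketch implements a bitwise CRC-15 with the CAN generator polynomial (0x4599). The frame content is made up for the example; a real controller computes this in hardware over the defined frame fields.

```python
def crc15(bits):
    """Bitwise CRC-15 over a sequence of 0/1 bits, using the CAN generator
    polynomial x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1 (0x4599)."""
    crc = 0
    for bit in bits:
        # XOR the incoming bit with the register's MSB, then shift and reduce
        feedback = ((crc >> 14) & 1) ^ bit
        crc = (crc << 1) & 0x7FFF
        if feedback:
            crc ^= 0x4599
    return crc

frame_bits = [1, 0, 1, 1, 0, 0, 1, 0] * 4   # made-up frame content
checksum = crc15(frame_bits)

# Any single-bit transmission error changes the CRC, so the receiver detects it.
corrupted = list(frame_bits)
corrupted[5] ^= 1
assert crc15(corrupted) != checksum
```

Because the generator polynomial has more than one term, every single-bit error is guaranteed to change the checksum, which is the "local transmission error" detection referred to above.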
Figure 2.4: CAN network topology with high-speed and low-speed buses and attached nodes.
A CAN message has four frame types: data frame, remote frame, error frame and
overload frame [19]. The data frame, as the name suggests, is used for transmitting data
to other nodes in the network. The remote frame is similar to the data frame, except
that the remote transmission request (RTR) bit is set and the data field is absent.
An error frame is generated when a node detects an error, triggering all other nodes
to send error frames as well. The purpose of the overload frame is to request more
time for a busy node, which causes extra delay between messages [21][22].
MOST Network
The MOST network is used for multimedia and infotainment applications such as video,
radio and GPS navigation in the car. The use of plastic optical fiber (POF)
cables offers better immunity against electromagnetic interference (EMI). MOST has
a ring topology which can manage up to 64 devices (nodes), and its plug-and-play
functionality eases the way multiple devices are connected in today's infotainment sys-
tems. MOST25, MOST50 and MOST150 are the three versions of the MOST network,
with bandwidths of 25 Mbit/s, 50 Mbit/s and 150 Mbit/s respectively. Not only is the
bandwidth improved compared to the first generation of MOST, but the frame length
is also increased to 3072 bits. Thus, MOST becomes more efficient when handling the
increasing streaming of audio and video data.
In a MOST ring, the maximum distance between two nodes is 20 meters. The com-
munication is one-way, as shown in Figure 2.5. A time master
sends MOST frames to the next node in the logical ring with a consistent frame rate
(44 kHz-48 kHz), and all the time slaves, which run at different sampling rates, synchronize
their operation with the frame preamble [23]. Data can be sent through the syn-
chronous, asynchronous and control channels, with different bandwidths [24]; the data
is called synchronous data, asynchronous data and control data correspondingly.
• Asynchronous data: transmission of multimedia data with high data rate and
error handling.
FlexRay
FlexRay supports versatile topologies such as the passive bus and active star types. Figure 2.6
shows two basic layouts of FlexRay. In Figure 2.6a a node can connect to one or
both of the channels: Node A, Node C and Node E are connected to both channels,
while Node B and Node D are only connected to either Channel A or Channel B. The
active star structure shown in Figure 2.6b is free from closed rings; the signal
received from one node can be transmitted to all other connected nodes. Similarly, a
node can link to any channel in the topology. FlexRay can be implemented
as a combination of communication bus systems, which improves the flexibility and
adaptability for more applications. Figure 2.7 is an example of a hybrid configuration
of FlexRay.
Ethernet
Ethernet is a high-speed system with a data rate 100 times higher than that of a
CAN bus, which is necessary for infotainment and active-safety applications. Low
cost and specific quality of service (QoS) are further motivations for Ethernet.
Ethernet is also widely used in diagnostics [27], using local area network (LAN)
technology.
Two common Ethernet topologies are depicted in Figure 2.8. In a bus-style config-
uration, all the nodes connected to the bus share one channel under the carrier-sense
multiple access with collision detection (CSMA/CD) method. In infotainment
systems, with the help of an Ethernet switch, messages from the head unit can be
broken down into small packets and sent to the target address. The transmission
is simultaneous and bidirectional. The example in Figure 2.9 shows two frames
in flight on the bus between the display and console nodes. No frame exists
between the head unit and the speaker node, since they are not involved in the transaction.
The switch function also makes Ethernet more flexible and scalable than other
network topologies.
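The switching behavior described above can be sketched as a small MAC-learning table. The node names and the fixed four-port switch below are illustrative assumptions, not part of the thesis setup.

```python
class LearningSwitch:
    """Toy Ethernet switch: learns which port each source address lives on,
    forwards known destinations point-to-point and floods unknown ones."""

    def __init__(self, n_ports=4):
        self.n_ports = n_ports
        self.mac_table = {}          # learned address -> port

    def receive(self, in_port, src, dst):
        self.mac_table[src] = in_port            # learn the sender's port
        if dst in self.mac_table:
            return [self.mac_table[dst]]         # direct, point-to-point path
        # unknown destination: flood every port except the one it came in on
        return [p for p in range(self.n_ports) if p != in_port]

switch = LearningSwitch()
# first frame from the display node: destination unknown, flooded to ports 1-3
assert switch.receive(0, "display", "console") == [1, 2, 3]
# the console replies from port 1, so its location is learned
assert switch.receive(1, "console", "display") == [0]
# from now on, display <-> console traffic never reaches the speaker's port
assert switch.receive(0, "display", "console") == [1]
```

This is why, in the Figure 2.9 example, no frame appears between the head unit and the speaker node: once addresses are learned, the switch confines each transaction to the two ports involved.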
Figure 2.8: Two common Ethernet topologies: bus and star based
Table 2.1 gives an overview of typical networks used in today's vehicles. The CAN
network has high flexibility: compared with Ethernet, its multi-master mechanism
enables ECUs to be added to the CAN network easily, without requiring a switch port
for each node. However, CAN has low performance due to its limited "short" messages
(maximum message size is 8 bytes) and low maximum speed.
Although MOST has long been regarded as a wise option for infotainment systems,
it is on the way out due to its lower flexibility and the expensive cost of optical fiber. The
ring topology of MOST leads to a broken connection if a problem is found in one
node. Volvo and Geely, for example, have reduced the number of MOST buses in
modern vehicles.
The main advantage of FlexRay is its built-in redundancy through two chan-
nels, while Ethernet needs an additional switch path (extra cost) to achieve the
same performance. However, the drawback of FlexRay is low versatility: it
lacks the bandwidth and protocols to support purposes other than X-by-wire
and safety-critical applications.
due to its high bandwidth. Moreover, the configuration is easy to change based
on different requirements. Nevertheless, the restriction that every node must
connect through a switch makes it less flexible and incurs significant costs for the
added switches in a complex network configuration.
From the comparison, it can be concluded that the benefits of Ethernet in the future
automotive electronics market outweigh those of the other automotive networks. The lack of
bandwidth in the CAN network can be remedied by Ethernet, which makes it more adapt-
able to the ever-increasing functionality and performance controlled by ECUs in future
vehicles. Furthermore, Ethernet enables emerging driving-related features in a car,
for example autonomous driving with real-time video cameras and ADAS. The
broadened applications supported by Ethernet are an attraction for consumers
and a battleground for car manufacturers as well. However, it is not easy to adopt
a new networking technology. Transitioning Ethernet into vehicles means the test
methods must also change accordingly. The existing data-analysis tools used
for the other networks cannot meet the requirements of Ethernet: its distinct
physical layer and IP protocol stack call for new test tools to verify the
correct integration between Ethernet and the other protocols.
Table 2.1: Comparison of in-vehicle networks

                    CAN                    MOST                  FlexRay                          Ethernet
Access scheduling   CSMA/CD                CSMA/CA               TDMA (static), FTDMA (dynamic)   CSMA/CD
Topology            Bus                    Ring/Star             Bus/Star/Hybrid                  Point-to-point/Star/Bus
Error detection     CRC, error counter,    16-bit CRC,           24-bit CRC,                      CRC,
                    bus-off state          Plug & Play feature   bus guardian schemes             bit error detection
Transfer mode       Asynchronous           Synchronous           Asynchronous, Synchronous        Asynchronous, Synchronous
Gateways are similar to routers but have a more complex configuration. A gate-
way can be applied to networks with more than one protocol technology; the
data frame format of one protocol needs to be translated into the other protocol before
it can be read. For a network using a single protocol, the role of the gateway is to
regulate traffic between multiple buses of the same protocol, for example when the
transfer speed differs or there is traffic congestion on a bus.
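The frame-format translation described above can be sketched as re-packing a CAN frame for another bus. The envelope layout here is entirely hypothetical, invented for illustration rather than taken from any standardized gateway format.

```python
def can_to_backbone(can_id, data):
    """Wrap a CAN frame (identifier + up to 8 data bytes) into a made-up
    backbone envelope: 4-byte identifier, 1-byte length, then the payload."""
    if len(data) > 8:
        raise ValueError("a CAN payload is at most 8 bytes")
    return can_id.to_bytes(4, "big") + len(data).to_bytes(1, "big") + bytes(data)

def backbone_to_can(frame):
    """Reverse translation back to (identifier, payload) for the CAN side."""
    can_id = int.from_bytes(frame[0:4], "big")
    length = frame[4]
    return can_id, frame[5:5 + length]

# a gateway must translate losslessly in both directions
envelope = can_to_backbone(0x1A4, b"\x01\x02\x03")
assert backbone_to_can(envelope) == (0x1A4, b"\x01\x02\x03")
```

The round-trip assertion captures the essential gateway requirement: whatever envelope format the backbone uses, the original frame must be recoverable on the far side.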
Figure 2.10 is an example of an automotive gateway. High- and low-speed
CAN, FlexRay, LIN and MOST can all be connected by a central gateway.
All protocol translation is performed by a single gateway ECU, so this ar-
chitecture has low fault tolerance: the communication breaks down if the
central gateway fails. A FlexRay or Ethernet backbone gateway can be used to share
the load of the central gateway. Figure 2.11 shows a combination of the ECUs and networks
mentioned in this chapter.
The basic functions of a gateway include diagnostics, routing and network manage-
ment. The main functions are described below:
• Message routing: the message routing ensures a message goes to the correct
network bus. For example, an engineer determines, by a certain algorithm, the path
of a message coming from the MOST bus to the CAN bus with frame identifier 4.
• Packet routing: the gateway sends a data packet to the destination node
using a routing table. The routing table mainly contains the network ID, the desired
Figure 2.11: Example in-vehicle network: a gateway linking the diagnostic port, Ethernet,
FlexRay (driver information system), USB/telematics, MOST (infotainment head unit,
audio, central screen display, speakers, video), LIN and high-speed CAN.
address and cost. Thus the gateway can keep a record and track
how the data packets are transferred [29].
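A routing table of the shape just described can be sketched as follows; the network IDs, addresses and costs are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Route:
    network_id: str      # bus the packet leaves on
    destination: int     # target node address
    cost: int            # routing cost used to rank alternative paths

def select_route(table, destination):
    """Return the cheapest route to 'destination', or None if unreachable."""
    candidates = [r for r in table if r.destination == destination]
    return min(candidates, key=lambda r: r.cost) if candidates else None

table = [
    Route("CAN-high", destination=0x12, cost=2),
    Route("MOST",     destination=0x12, cost=5),
]
# two buses reach node 0x12; the gateway picks the lower-cost path
assert select_route(table, 0x12).network_id == "CAN-high"
assert select_route(table, 0x99) is None
```

Keeping cost in the table is what lets the gateway prefer one bus over another when a destination is reachable by several paths.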
1. The current vehicle behavior is compared with the expected performance. When
any deviation or abnormal condition is detected, the observed discrepancies
are noted as symptoms.
3. If the fault is identified, the driver should be informed with an alert (e.g. warnings
shown on the central panel).
Diagnostic Protocol
In the international standard ISO 14229, the unified diagnostic services (UDS) protocol
is established on the fifth and seventh layers of the open systems interconnection
(OSI) model to satisfy the requirements of diagnostic services. This model allows a diagnostic
tester (client) to control diagnostic functions in an on-vehicle ECU (server) [33].
The action requested from the ECU is represented by a service identifier (SID) and a
sub-function ID. The diagnostic protocol uses a "request-response" model. As shown in
Figure 2.12, a request is composed of the SID, sub-function and request parameters.
The response differs depending on whether the request carries a sub-function; the detailed
requirements can be found in ISO 14229-1.
Figure 2.12: UDS request message format: service ID (SID), sub-function and request parameters.
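The request-response convention can be sketched in a few lines. Per ISO 14229, a positive response echoes the request SID plus 0x40, while a negative response is the byte 0x7F followed by the SID and a negative response code (NRC); the session-control example below is illustrative.

```python
def build_request(sid, sub_function=None, parameters=b""):
    """Assemble a UDS request: SID, optional sub-function, then parameters."""
    message = bytes([sid])
    if sub_function is not None:
        message += bytes([sub_function])
    return message + parameters

def classify_response(request_sid, response):
    """Interpret a UDS response relative to the request it answers."""
    if response[0] == request_sid + 0x40:
        return "positive"
    if response[0] == 0x7F and response[1] == request_sid:
        return "negative (NRC 0x%02X)" % response[2]
    return "unexpected"

# DiagnosticSessionControl (SID 0x10) with default-session sub-function 0x01
request = build_request(0x10, sub_function=0x01)
assert request == b"\x10\x01"
assert classify_response(0x10, b"\x50\x01") == "positive"          # 0x10 + 0x40
assert classify_response(0x10, b"\x7f\x10\x12") == "negative (NRC 0x12)"
```

A diagnostic test script built on this pattern only needs the SID table and the expected NRCs to check an ECU's responses automatically.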
saved from automated tests. Second, the project cycle should be long enough: the
time spent on determining test requirements, developing the test framework, and
building and debugging test scripts ought to be accounted for during project
development, and the benefit of test automation is generally not observed immediately.
Third, test scripts ought to be re-used. A low usage rate of test scripts means the large
cost put into script development is wasted, and test automation loses its purpose;
flexibility and compatibility should therefore be taken into account when
developing test scripts. Fourth, tests with a low cost-benefit ratio should not be automated.
The following guidance summarizes the typical test types suited to automated
and to manual testing.
• Monkey tests: random inputs are used during testing, and testers check
the performance of the system accordingly. Test automation is deemed a prof-
itable approach for the stochastic input data and enormous number of steps in these tests.
• API-based tests: an application program interface (API) specifies the interac-
tion methods among software components. A range of requests and extreme
inputs is used in testing to verify whether the responses from the software are correct.
Test automation is recognized as suitable for API-based tests; unit testing
and functional testing are involved in API-based tests.
• Maintenance, installation and setup tests: these tests usually require hu-
man intervention, when re-configuring the system architecture and installing soft-
ware/hardware is needed.
• Localization tests: if the test target is related to a specific language or culture,
only a specialist can judge whether a translation is reasonable and free from
cultural bias.
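A monkey test of the kind listed above can be sketched as a seeded random-input loop; the volume-clamping function stands in for the real system under test and is purely illustrative.

```python
import random

def clamp_volume(level):
    """Toy system under test: a volume setting clamped to the range 0-100."""
    return max(0, min(100, level))

def monkey_test(target, iterations=1000, seed=42):
    """Feed random (including out-of-range) inputs to 'target' and check
    that the documented invariant holds for every response."""
    rng = random.Random(seed)           # seeded, so failures are reproducible
    for _ in range(iterations):
        level = rng.randint(-1000, 1000)
        result = target(level)
        assert 0 <= result <= 100, "invariant violated for input %d" % level

monkey_test(clamp_volume)
```

Seeding the generator is what makes such stochastic tests practical in automation: a failing run can be replayed exactly from its seed.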
Although at first glance there are more areas where automated testing can be em-
ployed, manual testing still plays an important role when tests require human senses,
thinking and knowledge. Moreover, the goal of testing is to improve the qual-
ity of the product. An automated test only verifies the relation between the test result and
the expected outcome, without showing how to enhance accuracy and project quality.
Even when the test result matches the anticipated behavior, test cases can be improved
through human imagination and creativity and thus need further verification. Other
tests that have high maintenance cost and low utilization frequency should also be
performed manually.
However, automated testing has limitations for tests which only need a one-time effort
or involve human judgment, which means manual testing cannot be fully replaced by
automated testing [38]. Another drawback is the substantial cost of buying tools and
of investigating, at an early stage, how test automation can be done. Third, because
the performance of test automation relies largely on the quality of the scripts,
there are higher technical requirements on the engineers who develop the test cases and
framework [39]. The training time for new testers also adds cost to automated
testing.
Common test automation frameworks can be classified, by approach, into record-and-play-
back, data-driven and keyword-driven; there is also a newer software-testing
technology which relies on the model-based testing (MBT) frame-
work. The choice of test automation framework should be based on reusability,
maintainability, extensibility, repeatability and stability. Moreover, the framework
should be easy to understand for non-testers such as customers and business
stakeholders. An effective test framework leads to successful test automation;
otherwise it can cause deviation from the test objective.
Record and playback is the first generation of linear test automation framework,
based on the concept of simulation: it captures the user's actions on the PC
and replays them. The main problem is that it is difficult to accommodate any change
in the system, due to the strong dependency on the system environment. The framework
is therefore hard to maintain because of the large number of separate test scripts and
non-reusable modules, and test execution cannot be iterative.
The data-driven testing (DDT) framework is also known as the "table-driven" type of test.
Tests are executed in terms of data tables, which provide test input and output
values from data files; the test table is then loaded into variables in the driver test
scripts. DDT allows the same test to be executed multiple times with different data
sets. The creation of test cases is no longer dependent on the system and becomes
more flexible for fixing bugs. The concept of DDT is shown in Figure 2.13. However,
new driver test scripts are required for new kinds of tests to be understood, which
means adaptation is needed when either changing driver test scripts or introducing
new test data files.
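The table-driven execution loop can be sketched as below; the command strings and the toy system under test are illustrative stand-ins for a real data file and application.

```python
# rows of (test input, expected output), as they would be read from a data file
test_table = [
    ("volume_up",   "volume=51"),
    ("volume_down", "volume=49"),
    ("mute",        "volume=0"),
]

def system_under_test(command, volume=50):
    """Toy SUT: applies one command to a default volume of 50."""
    if command == "volume_up":
        volume += 1
    elif command == "volume_down":
        volume -= 1
    elif command == "mute":
        volume = 0
    return "volume=%d" % volume

def run_data_driven(table):
    """One generic driver script executes every row of the data table
    and compares the actual output with the expected output."""
    return [(cmd, system_under_test(cmd) == expected) for cmd, expected in table]

assert all(passed for _, passed in run_data_driven(test_table))
```

Adding a new row to the table adds a test case without touching the driver script, which is exactly the reuse property claimed for DDT above.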
Figure 2.13: Concept of data-driven testing: a data file supplies input and expected output
to driver scripts, which call test functions 1 to n from the test library against the SUT
and compare the actual output with the expected output.
Figure 2.14: Keyword-driven framework: user input (a test case list and test input data)
and an initialization script start the execution of a keyword-driven script, which draws on
a general function library and an application function library and produces the test result.
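A keyword-driven executor reduces to a lookup from keyword names to library functions; the keywords and the volume state below are invented for illustration.

```python
state = {"volume": 50}

def set_volume(value):
    state["volume"] = int(value)

def volume_should_be(expected):
    if state["volume"] != int(expected):
        raise AssertionError("volume is %d" % state["volume"])

# the function library maps human-readable keywords to implementations
keyword_library = {
    "Set Volume": set_volume,
    "Volume Should Be": volume_should_be,
}

def execute(test_case):
    """Run a test case written as (keyword, argument) rows."""
    for keyword, argument in test_case:
        keyword_library[keyword](argument)

execute([("Set Volume", "30"), ("Volume Should Be", "30")])
```

Because test cases are plain keyword tables, non-programmers can write them; only the library authors need to know the implementation behind each keyword.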
information is sent back to the implementation stage, as well as to the system model and
initial system-requirements stages, to find the reason for the failure.
Figure 2.15: Model-based testing flow: a system model is created, test suites are generated
from it and run against the implementation, with feedback to the model and requirements.
Test libraries work as a communication bridge between the framework and the appli-
cation under test (AUT). The interaction is handled by test library keywords, from
both standard and external test libraries, supported by Python or Java. Standard
libraries contain methods for dealing with the operating system, string manipulation and
verification, telnet communication, etc. [43]. In this way the framework can access the
system under test without knowing the details, which makes it easier for non-testers
to understand the tests. The communication can be direct, or indirect via
test tools. Robot Framework provides four built-in tools, i.e., Rebot, Testdoc, Lib-
doc and Tidy, to ease building tests.
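As an illustration of how such a test library looks from the Python side, Robot Framework exposes each public method of a library class as a keyword; the ECU-connection keywords below are hypothetical, not from an actual ÅF library.

```python
class EcuLibrary:
    """Minimal user keyword library. Robot Framework would expose the
    methods as the keywords 'Connect To Ecu' and 'Ecu Should Be Connected'."""

    def __init__(self):
        self.connected = False

    def connect_to_ecu(self):
        # a real library would open e.g. a telnet or CAN connection here
        self.connected = True

    def ecu_should_be_connected(self):
        if not self.connected:
            raise AssertionError("ECU is not connected")

library = EcuLibrary()
library.connect_to_ecu()
library.ecu_should_be_connected()
```

A tabular test case can then call `Connect To Ecu` and `Ecu Should Be Connected` without its author knowing any of this Python code, which is the bridge role described above.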
• Easy-to-use: Test cases can be created in the same style due to tabular syntax.
• Automatic report generation: The result and log are generated automatically
and provided in HTML format.
• High adaptability: Robot Framework is suitable not only for acceptance-
level testing but also caters to web testing, GUI testing, Telnet, etc.
3
Methodology
This chapter introduces the methodology used for conducting the research. The project
begins with a planning stage, followed by a literature-study phase which gives
sufficient background knowledge within infotainment-related areas. A detailed de-
scription of the research method and study method follows.
Since the existing studies and surveys on testing methods used in the automotive in-
dustry are limited, the following research questions are essential for future test
automation and ÅF's business:
The answer to this question could be helpful and interesting for the engineers
at ÅF. By analyzing the challenges behind automated testing of vehicle info-
tainment systems, the company gains enough information to decide whether
or not to expand its business in the market of interest.
• How do we decide whether to automate a test or run it manually within the con-
sidered systems?
Due to the wide coverage of in-vehicle systems, some tests require a test
automation method to enhance efficiency, while others need manual in-
tervention. It is a perennial challenge to find a balance between these two
approaches and to decide when to focus on test automation. A standard, in-
cluding cost-benefit analysis and quality assurance, needs to be established during
the project to answer this question.
• Planning. The initial planning stage determines the thesis research questions,
limitations, motivation and time plan.
• Evaluation. The study results are evaluated continuously along with the thesis project
to make sure the requirements are met.
4
Diagnosis Test Automation and
Automation Cost
In this chapter, the implementation of the first goal of this thesis is introduced. Info-
tainment ECU diagnosis testing is selected for test automation based on motivation
and feasibility, and a framework for how diagnosis function testing should be done is illus-
trated from a theoretical perspective. During the first phase of the thesis project,
estimating the cost of automated testing, especially in terms of money, is also one of the
concerns. Based on the literature review, a revised way of calculating the benefit of test
automation is provided in this chapter as well.
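As a baseline for such a calculation, a simple break-even model common in the test automation literature (the notation here is illustrative, not the revised model of this chapter) compares the cumulative cost of $n$ manual runs, each costing $V_m$, with the fixed automation investment $V_a$ plus a per-run execution cost $D_a$:

```latex
C_{\text{manual}}(n) = n \cdot V_m, \qquad
C_{\text{auto}}(n) = V_a + n \cdot D_a, \qquad
n^{*} = \frac{V_a}{V_m - D_a} \quad (D_a < V_m)
```

Automation pays off once the number of test runs exceeds the break-even point $n^{*}$; if $D_a \geq V_m$, automation never pays off under this model.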
In order to ensure ECUs are implemented with correct diagnosis functions, testers must
test diagnosis services during product development. However, current in-car di-
agnosis testing technology has technical bottlenecks in several respects, for example:
different diagnosis protocols and data formats among various ECUs, limited test coverage of
diagnosis testing, and low efficiency when testing large numbers of ECUs. Hence, au-
tomated diagnostic testing facilitates the improvement of diagnosis features among
on-vehicle ECUs.
The first step of an ECU diagnostic protocol test is gathering the ECU diagnosis speci-
fication, such as ECU configuration data or DTCs [46]. The data is fed to CANde-
laStudio, a tool provided by Vector for editing ECU diagnostic descriptions,
to generate .cdd files. The output files are added together to become a diagnosis
database. CANoe.Diva is an extension of CANoe which supports different in-
vehicle networks and is used for generating test cases based on a diagnosis database.
CANoe is used for generating the testing environment: testers can select the wanted test
cases in CANoe as well as determine the test flow. In the final step of a protocol test,
a report is produced automatically, with analysis and a comparison between the expected
result and the actual outcome. The test flow is shown in Figure 4.1.
[Figure 4.1: Diagnosis protocol test flow: ECU diagnosis specification → CANdelaStudio (.cdd files) → CANoe.Diva (test cases) → CANoe → result.]
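As a concrete illustration of the final protocol-test step, the comparison between expected and actual diagnostic responses can be sketched in Python. The service IDs follow the UDS convention (0x22 ReadDataByIdentifier, positive response 0x62, negative response 0x7F), but the stub ECU and the test cases below are illustrative stand-ins, not taken from a real CANoe setup:

```python
# Minimal sketch: run a list of diagnostic request/response checks and
# produce a pass/fail report, mirroring the last step of the test flow.

def run_diagnosis_tests(test_cases, send_request):
    """test_cases: list of (name, request, expected_response) tuples.
    send_request: callable that transmits a request and returns the response."""
    report = []
    for name, request, expected in test_cases:
        actual = send_request(request)
        report.append({"test": name,
                       "expected": expected,
                       "actual": actual,
                       "verdict": "PASS" if actual == expected else "FAIL"})
    return report

# Stand-in for the bus: a fake ECU that answers a ReadDataByIdentifier-style
# request (0x22) with a positive response (0x62) echoing the identifier.
def fake_ecu(request):
    if request[0] == 0x22:
        return [0x62] + request[1:] + [0x01]
    return [0x7F, request[0], 0x11]  # negative response: service not supported

cases = [("read DID 0xF190", [0x22, 0xF1, 0x90], [0x62, 0xF1, 0x90, 0x01]),
         ("unsupported service", [0x31, 0x00], [0x7F, 0x31, 0x11])]
report = run_diagnosis_tests(cases, fake_ecu)
```

In the real setup, `send_request` would be backed by CANoe rather than a stub, but the report structure stays the same.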
(test cases) can be defined as below; however, it is important to mention that ECU function test cases vary from ECU to ECU.
• Input/output DTC test: This test case evaluates whether an ECU sets the correct DTC when there is a short circuit to the power supply or to ground at an input/output node of the ECU, or when there is an open circuit or other damaged electrical wiring. The test also checks whether the function recovers according to the ECU specification after the faults have been fixed.
• Abnormal-state DTC set test: The purpose of this test is to inspect whether the ECU sets the DTCs required by the diagnosis strategy under abnormal working conditions (e.g. too low a supply voltage).
• State reading test: The aim of examining ECU state reading is to check that the recorded ECU function status is the same as the ECU's current working condition.
Figure 4.2 shows a conceptual framework for the diagnosis function test based on the four test cases above. The main facilities are: the PC, the VT system made by the company Vector, a CANcase XL, a programmable power supply and a direct-current (DC) power supply. The PC works as a control interface and runs software applications such as CANoe. The ECUs under test can be either virtual or real, by editing the configuration in CANoe. The VT system consists of different modules that simulate diverse digital and analogue inputs/outputs. CAN messages are generated and transmitted through the CANcase XL, which has two CAN controllers. A CAN message can use either an 11-bit or a 29-bit identifier [47].
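The two identifier formats can be illustrated with a small sketch. The helper below is hypothetical and only validates the identifier ranges (0–0x7FF for a standard 11-bit frame, 0–0x1FFFFFFF for an extended 29-bit frame, signalled by the IDE bit):

```python
# Illustrative sketch of the two CAN identifier formats mentioned above.

def make_can_id(identifier, extended=False):
    """Validate an identifier and return (identifier, IDE bit) as a pair."""
    limit = 0x1FFFFFFF if extended else 0x7FF
    if not 0 <= identifier <= limit:
        raise ValueError("identifier out of range for this frame format")
    return identifier, int(extended)

std = make_can_id(0x123)                      # 11-bit identifier, IDE = 0
ext = make_can_id(0x18DAF110, extended=True)  # 29-bit identifier, IDE = 1
```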
[Figure 4.2: Conceptual test setup: the PC is connected via the CANcase XL to the VT system (VT7001, VT6011, VT1004, VT2004, VT2516 and VT2816 modules on a VT8012 backplane), powered by a programmable power supply and a DC power supply.]
• VT7001: Power supply module. It has two output sections and can supply up to two ECUs at different voltage levels. The supply voltage for the ECU under test comes from the internal power supply of the VT7001. The VT7001 can also be used for current measurement.
• VT6011: Real-time module. The module has two USB 2.0 ports that can host the interface module VN2610. The VT6011 acts as a PC module in the VT system for real-time execution of CANoe [48]. The module is connected
• VT2816: General-purpose analog I/O module. The module provides 12 input channels for voltage measurement and 4 output channels. Eight of the twelve input channels can also be used for current measurement via a shunt.
• VT2516: Digital module. The module has 16 channels that can simulate digitally used I/Os of ECUs, as well as short circuits between an input and ground or the power supply.
As with the choice of test cases, VT system modules other than those mentioned above can be selected, depending on the particular ECU diagnosis function test.
For the test cases discussed before, the detailed implementation on the VT system is:
• Input/output DTC test: The module VT2004 can be used as an analog input, while the VT2516 can be used as an input for digital signals. Module VT1004 is used for the output load and can connect at most four ECU outputs. When faults are injected in the VT system, either on an input signal or on an output load, a DTC should be set according to the error type; otherwise, the diagnosis test fails.
• Abnormal-state DTC set test: The test modules are generally the same as in the first test, except that the programmable power supply is needed to change the voltage level of the VT7001. In a successful test, the correct DTC is set when the supply voltage is lower than normal.
• State reading test: As in the input/output DTC test, input signals are simulated so that the current state can be read. Moreover, a CAN message needs to be generated by the CANcase XL if the output is triggered by a CAN network signal. Analog inputs are emulated by the VT2004 and digital inputs by the VT2516. Both the VT2816 and VT2516 modules can be used as an output of an ECU.
1. Choose suitable VT system modules and connect each module to the inputs/outputs of the ECU.
5. Analyze the report from step 3 and find the causes of any faults. In this step, the test cases and the test environment can be revised.
In his “Test Automation ROI” paper, Dion Johnson proposed three ways of calculating ROI [49]. In “Simple ROI”, the calculation is expressed in terms of monetary savings. It takes fixed costs such as tools, training and machines into account and converts the time spent on automated testing into money. The merit of the “Simple ROI” calculation is that it makes the project investment more intuitive to upper-level management. However, the method assumes that automated tests can completely replace manual tests, which oversimplifies the common case where testing is done with a combination of manual and automated tests. In contrast, “Efficiency ROI” only considers the time investment and calculates the benefit from that to assess test efficiency. This way of calculating is only suitable when the test tools have been used long enough that their cost can be neglected. “Efficiency ROI” allows testers to estimate the project ‘budget’ and to present the benefits of test automation in terms of days. Nevertheless,
the method is based on the assumption that a full regression can be performed during the test cycles even without test automation, which is rarely true. Finally, Johnson introduced a “Risk Reduction ROI”, which assesses the risk of not having test automation and calculates the loss caused by that risk. For example, automated testing saves a large amount of execution time, leaving more time for analysis and test-framework development. This can help to increase the test coverage and thus reduces the risk of project failure. One disadvantage of this model is that it is hard for testers to estimate how much money would be lost. Moreover, since it does not compare manual and automated testing, it cannot answer why automated testing should be chosen instead of manual testing.
$$E_n = \frac{A_a}{A_m} = \frac{V_a + n \times D_a}{V_m + n \times D_m} \qquad (4.1)$$

$$E_n' = \frac{A_a}{A_m} = \frac{V_a + n_1 \times D_a}{V_m + n_2 \times D_m} \qquad (4.2)$$

$$\mathit{ROI}_{automation} = \frac{B_a}{C_a} \qquad (4.3)$$

$$\mathit{ROI}_{automation}' = \frac{\Delta B_a}{\Delta C_a} \qquad (4.4)$$
Douglas Hoffman once introduced four equations for calculating ROI, listed in equations 4.1 to 4.4 [50]. However, each equation has several drawbacks, so none is accurate enough to show the cost benefit of test automation. In equations 4.1 and 4.2, Aa stands for the automated cost and Am for the manual cost. The cost of preparation before manual testing is expressed as Vm, while Va denotes the implementation cost of the automated tests. The analysis work after automated testing is represented by Da, and Dm denotes the execution of the manual tests. In equations 4.1 and 4.2, test automation is regarded as suitable to implement when the ratio is smaller than one. The difference between the two equations is the number of test runs for automated and manual testing, which is the same in equation 4.1 but different in equation 4.2. Nevertheless, the assumption of equal run counts is often not applicable in real situations. Moreover, overheads such as hardware and software expenditure and the maintenance cost of test automation are not included in either equation. Equations 4.3 and 4.4 are calculated in basically the same way, as the ratio of the benefit of automation to the cost of automation. The improvement in equation 4.4 is that it uses the added benefit and the added cost of automation over manual testing instead. The shortcoming of both equations lies in their practicality: it is hard to compute the benefit in absolute figures. All four equations are too general to show what should be considered when calculating the cost benefit. In addition, when two projects show the same ROI figure, a hasty decision to automate both can be made without taking time into account. Therefore, the time factor should be included to make the calculation more reliable.
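As a rough illustration of the ratios in equations 4.1 and 4.3, they can be written out in Python. All unit costs below are invented for the example, not measured values:

```python
# Sketch of Hoffman's ratios. E_n below 1 suggests that automation is
# cheaper than manual testing after n runs; above 1, it is still dearer.

def efficiency_ratio(v_a, d_a, v_m, d_m, n):
    """Equation 4.1: E_n = (Va + n*Da) / (Vm + n*Dm)."""
    return (v_a + n * d_a) / (v_m + n * d_m)

def roi(benefit, cost):
    """Equation 4.3: benefit of automation over its cost."""
    return benefit / cost

# Hypothetical numbers: automation has a high setup cost (Va) but a low
# per-run cost (Da) compared to manual testing.
e_after_2 = efficiency_ratio(v_a=100, d_a=2, v_m=10, d_m=8, n=2)    # > 1
e_after_50 = efficiency_ratio(v_a=100, d_a=2, v_m=10, d_m=8, n=50)  # < 1
```

With these invented numbers the ratio only drops below one after many runs, which is exactly the break-even behavior the text criticizes the equations for hiding when time is ignored.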
$$\mathit{ROI}_{automation} = \frac{\mathit{Gain}}{\mathit{Investment}} = \frac{C_m - C_a}{C_a} \qquad (4.5)$$
In the equation, Cm represents the cost of manual testing and Ca the cost of test automation. The unit of both factors is man-hours. The manual testing cost is composed of the manual test case creation cost, Cmcc, and the manual test execution cost, Cme, as described by equation 4.6. The overall execution cost consists of the expenditure for the first test round and the subsequent regression testing. The automated testing cost contains more elements:
Chs: The fixed cost of the hardware and software used for automated testing. Machine, tool license and acquisition costs are counted in this item. The unit is man-hours. The depreciation factor of Chs is denoted k.
Ct: The cost spent on training. When a new test automation platform, tools, framework etc. are introduced, testers need to spend time mastering these new technologies. For an existing and familiar test environment, this term can be ignored. The unit is man-hours.
Cafc: The research, design and creation cost of the first-time test automation framework development. This item generally accounts for the bulk of the expenditure during the initial stage of automated testing, while it decreases in subsequent executions. The unit is man-hours.
Cacc: The cost of creating test cases in automated testing. It depends on the number of automated test cases, Nac, and the average development cost of an individual automated test case, Cacca. During test case development, testers need to write test scripts and debug them. The unit is man-hours.
Cae: The overall execution cost of the automated tests. Generally, test automation is introduced after several iterations of manual testing and is applied from round M. Therefore, the execution cost of the automated tests is calculated from round M to the final test round N, rather than from the first test. The number of automated test rounds used for calculating the ROI can thus be expressed as N − M + 1.
The number of test cases differs between manual and automated testing due to the capabilities of the two test methods. Before exploring the equations in detail, three assumptions underlie this calculation model [51]:
• First, the number of errors is exponentially suppressed as the number of test rounds increases. Suppose there are 40 checkpoints and the error rate is 50%; consequently, 20 errors are found and need to be fixed after the first test. In the second test, 10 errors (40 × 0.5 × 0.5) are found out of the 20. After the third test, only 5 errors remain. Testing continues until all errors are fixed. If d% is the average error probability, then the number of faults is d% × Np for Np checkpoints, and the number of error-prone points after i test rounds is Np × d%^i, where i is the test index. The illustration is shown in Figure 4.3.
• Second, it is assumed that the same number of errors, d% × Np, is found by a manual test and by an automated test. This assumption holds even though the time spent on a manual test is longer than on an automated test.
• Third, since analysis after test execution is needed for both test methods, the average cost of analyzing a fault after a test, Cepa_sa, is the same. Combined with the previous assumptions, the overall cost of analyzing erroneous checkpoints, Cepa, is the same for both test methods.
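The first assumption can be sketched directly, reproducing the 40-checkpoint, 50% example from the text:

```python
# With Np checkpoints and error rate d, the number of error-prone points
# remaining after i test iterations is Np * d**i.

def remaining_errors(n_checkpoints, error_rate, iterations):
    """Error-prone checkpoints left after each of the first `iterations` tests."""
    return [round(n_checkpoints * error_rate ** i)
            for i in range(1, iterations + 1)]

series = remaining_errors(40, 0.5, 3)  # 20 after the 1st test, then 10, then 5
```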
[Figure 4.3: Error-prone checkpoints after the i-th test iteration: d%·Np remain error-prone after the first test, d%²·Np after the second, and d%^i·Np after the i-th.]
In equation 4.6, the manual test case creation cost, Cmcc, is determined by the number of test cases, Nmc, and the average design cost of a test case, Cmcca. The manual test execution cost, Cme, depends on the number of test rounds, the single-round execution cost, Cmse, and the single-round maintenance cost, Cmsm. For simplicity, the problem analysis cost, Cepa, is also counted in the term Cme. It is calculated
by multiplying the number of faulty checkpoints by the average unit analysis cost, Cepa_sa. After each test round, the manual test cases need to be maintained due to requirement changes and modifications in the test. Thus, the maintenance cost is the product of the number of test cases, Nmc, and the average single-test-case maintenance cost, Cmsm_sa. Similarly, the single-round manual test execution cost is calculated by multiplying Nmc by the average single-test-case execution cost, Cmse_sa.
$$C_{mcc} = (\text{number of test cases}) \times (\text{average single test case design cost}) = N_{mc} \cdot C_{mcca} \qquad (4.8)$$
In order to compare with the cost of automated testing, the manual test execution cost, Cme, is calculated over the same period as the automated tests. It can be expressed as:
After replacing equation 4.6 with equations 4.8 and 4.9, the cost of manual testing can be expressed as:
To expand the automated test cost in equation 4.7, the detailed forms of the test case creation cost, Cacc, and the test case execution cost, Cae, need to be investigated. The automated test case creation cost can be calculated from the average unit design cost of a test case, Cacca, and the number of test cases, Nac. The total cost of automated test case execution is derived from five variables: the number of test cases, the single-round automated framework maintenance cost, Cafsm, the single-round automated test case maintenance cost, Cacsm, the single-round automated test case execution cost, Cacse, and the overall problem analysis cost, Cepa.
$$C_{acc} = (\text{number of automated test cases}) \times (\text{average single test case design cost}) = N_{ac} \cdot C_{acca} \qquad (4.11)$$

and the expense of automated test execution from the Mth to the Nth round is indicated as:
where Cacsm_sa is the average maintenance cost of an individual automated test case and Cacse_sa is the average execution cost of each automated test case.
Final expression
The expansion of equation 4.5, considering the manual testing time factor, hm, and the automated testing time factor, ha, is shown in equation 4.13, which can be regarded as the final form of the new ROI calculation described above. It shows the key factors that influence the computation of the test automation and manual test costs. The bigger the ROI value, the higher the benefit obtained from test automation. To make the value large, the numerator needs to be positive and as large as possible, while the denominator should be as small as possible. The prerequisite is that none of the costs associated with manual testing may be increased, which is reasonable in industry. Tables 4.1, 4.2 and 4.3 list the notation used in this calculation model.
$$\mathit{ROI} = \frac{C_m - C_a}{C_a} = \frac{(C_{mcc} + C_{me}) \cdot h_m - (C_{hs} \cdot k + C_t + C_{afc} + C_{acc} + C_{ae}) \cdot h_a}{(C_{hs} \cdot k + C_t + C_{afc} + C_{acc} + C_{ae}) \cdot h_a}$$

$$= \frac{\left[C_{mcc} \cdot h_m - (C_{hs} \cdot k + C_t + C_{afc} + C_{acc}) \cdot h_a\right] + (C_{me} \cdot h_m - C_{ae} \cdot h_a)}{(C_{hs} \cdot k + C_t + C_{afc} + C_{acc} + C_{ae}) \cdot h_a}$$

$$= \frac{\left[C_{mcc} \cdot h_m - (C_{hs} \cdot k + C_t + C_{afc} + C_{acc}) \cdot h_a\right] + (N - M + 1) \cdot \left[(C_{msm} + C_{mse}) \cdot h_m - (C_{afsm} + C_{acsm} + C_{acse}) \cdot h_a\right]}{\left[(C_{hs} \cdot k + C_t + C_{afc} + C_{acc}) + (N - M + 1) \cdot (C_{afsm} + C_{acsm} + C_{acse})\right] \cdot h_a + N_p \cdot C_{epa\_sa} \cdot \dfrac{d\%^{M} \left(1 - d\%^{\,N - M + 1}\right)}{1 - d\%}} \qquad (4.13)$$
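Equation 4.13 can be transcribed as a function, as sketched below. The grouping follows equation 4.13: the analysis cost Cepa is the same for both methods, so it cancels in the numerator and remains only in the denominator. All numeric inputs used in the example calls are invented for illustration:

```python
# Sketch of the ROI model of equation 4.13; all costs are in man-hours,
# d is the error rate as a fraction (0..1), notation as in tables 4.1-4.3.

def roi_model(C_mcc, C_msm, C_mse, h_m,
              C_hs, k, C_t, C_afc, C_acc,
              C_afsm, C_acsm, C_acse, h_a,
              N, M, Np, C_epa_sa, d):
    runs = N - M + 1
    manual = (C_mcc + runs * (C_msm + C_mse)) * h_m
    automated = ((C_hs * k + C_t + C_afc + C_acc)
                 + runs * (C_afsm + C_acsm + C_acse)) * h_a
    analysis = Np * C_epa_sa * d**M * (1 - d**runs) / (1 - d)
    return (manual - automated) / (automated + analysis)

# Few automated rounds: the up-front costs dominate and ROI is negative.
few = roi_model(20, 1, 4, 1.0, 40, 0.5, 8, 30, 15, 1, 1, 0.5, 1.0,
                N=12, M=3, Np=40, C_epa_sa=0.2, d=0.5)
# Many rounds: the up-front cost is amortized and ROI turns positive.
many = roi_model(20, 1, 4, 1.0, 40, 0.5, 8, 30, 15, 1, 1, 0.5, 1.0,
                 N=100, M=3, Np=40, C_epa_sa=0.2, d=0.5)
```

This reproduces the qualitative conclusion drawn below: a low number of automated rounds gives a low (here negative) ROI, while a large N − M + 1 makes automation pay off.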
Notation   Denoting
Cm         Manual test overall cost
Ca         Automated test overall cost
N          Number of overall test rounds
M          Test round at which automation starts
Np         Number of check points
d%         Error rate
Cepa_sa    Average cost of analyzing a single erroneous checkpoint
Cepa       Total erroneous-checkpoint analysis cost
Table 4.1: Summary of general cost notation
Discussion
To enlarge the numerator, the four elements related to test automation (i.e. Chs, Ct, Cafc, Cacc) need to be decreased. That means testers need to use resources more efficiently and use a familiar automated environment as much as possible to reduce the up-front cost. In addition, the framework creation cost can be reduced by avoiding an overly complicated test framework. The test cases are supposed to be simple but effective, to reduce the script debugging time. On the other hand, the three contributors Cafsm, Cacsm and Cacse indicate that the development of the test
Notation   Denoting
Nac        Number of automated test cases
Chs        Hardware and software equipment cost
Ct         Automation training cost
Cafc       Automated test framework creation cost
Cacc       Automated test case creation cost
Cacca      Average creation cost of a single automated test case
Cae        Automated test execution cost
Cafsm      Automated test framework single-round maintenance cost
Cacsm      Automated test case single-round maintenance cost
Cacse      Automated test case single-round execution cost
Cacsm_sa   Average maintenance cost of a single automated test case
Cacse_sa   Average execution cost of a single automated test case
Table 4.2: Summary of automated test cost notation
Notation   Denoting
Nmc        Number of manual test cases
Cmcc       Manual test case creation cost
Cmcca      Average creation cost of a single manual test case
Cme        Manual test execution cost
Cmsm       Manual test single-round maintenance cost
Cmsm_sa    Average maintenance cost of a single manual test case
Cmse       Manual test single-round execution cost
Cmse_sa    Average execution cost of a single manual test case
Table 4.3: Summary of manual test cost notation
framework and the test cases should be flexible, so that they are easy to change when modifications are required. Executing test cases in batches can reduce the expenditure on test case execution in a single round. Moreover, increasing the number of automated test rounds (N − M + 1) is another important way of increasing the numerator. It can be concluded that if the number of test executions is low, test automation is not recommended because of the low resulting ROI.
For the denominator, the first two terms are reduced by the strategies mentioned above. The analysis cost in the last term shows that automated testing cannot be started too early: too small a value of M incurs too much cost for problem analysis.
Because manual testing takes longer than test automation to finish the same number of tests, hm is larger than ha. Therefore, the time factors result in an even larger numerator and a smaller denominator, which makes the ROI factor bigger. Although it cannot be seen directly from the equation that automated testing runs more tests per day, since the time spent on the same number of tests is shorter with test automation, it can be inferred that automated testing outweighs manual testing in terms of the number of tests executed.
With the new ROI equation, although it is sometimes hard for testers to know each unit cost, testers benefit in several respects. First, the equation shows that testers can make a trade-off between the running hours of test automation and manual testing. Second, managers can build a better budget plan and divide the monthly budget more reasonably, since the formula gives the overall expected investment in terms of one-time costs and continuous input. In this way, the project manager is able to distribute resources rationally over the different periods of the project. The ROI value reflects the task scheduling, so the test leader can give feedback on or adjust the testing cycle and weigh the amount of time. In addition, the ROI calculation gives the bottom line for obtaining a benefit, so engineers can have more confidence in the testing strategy.
5 Automated Test for an ECU
The DRU enable is a relay output from the computer module. If the electronic unit activates this signal, the relay shorts the two output pins connected to one of the connectors. The presence check input of the computer module serves as a detection input for a specific IRV camera connected to the electronic unit. One of the two different cameras has a jumper inside that pulls the presence check signal low to indicate a connection. If the other camera, which lacks the jumper, is connected, the presence signal returns 1.
Instead of testing the function of the DRU enable and presence check inside the electronic unit, data acquisition (DAQ) is used to test the relay itself without access to the real hardware. The test principle is the same as for the real electronic unit, as shown in Figure 5.2. The DAQ board setup can be divided into two parts, switch control and switch check, which correspond to the DRU enable and presence check functions. The PC works as a test controller that sends switch commands to the DAQ board. A Python script running on the PC sends commands and reads data automatically from the DAQ board to check the status of the relay. The test cases are shown in Table 5.1.
[Figure 5.2: Test setup: the PC (Python) is connected by a USB cable to the DAQ board (LabJack U3-HV); DAC0 drives the relay for switch control, and the FIO4 digital input reads back the relay status for the switch check.]
The DAQ used in this project is a LabJack U3-HV. The equipment and its pin layout are shown in Figure 5.3. The LabJack U3-HV has 8 flexible I/Os (FIO), 4 analog inputs (AIN) and 2 analog outputs, called DAC0 and DAC1. Each analog output can be set to a voltage of up to 4.95 V. The first four FIOs can only be configured as analog inputs. FIO4 is selected as a digital input pin to check the relay status. DAC0 is set as an analog output with a voltage of either 5 V or 0 V, which corresponds to closing or opening the relay. The VS terminal is connected to the positive control of the relay to supply the operating voltage of the PCB. The physical connection is shown in Figure 5.4. The LabJack U3-HV is connected to the computer by a USB cable. A simple circuit diagram is depicted in Figure 5.5. The signal relay used is from TE Connectivity and has a 3 V coil voltage. It provides high dielectric and surge capability, depending on the contacts.
5.3.1 Tools
The software tools needed for test automation are Git, Gerrit, Jenkins, Python and JIRA. During an agile software development project, developers need to make changes for each build. With Git, a version control system, these changes are recorded and the files are saved in a version database [52]. The developers can recall a specific version at any time on any computer, which also prevents scripts from being lost. Gerrit is a code review system built on top of Git [53]. The function of Gerrit is to review code files before they are committed. All committed files are saved in a central source repository, which is regarded as the authoritative copy of the project content [54]. An illustration of how Gerrit works is shown in Figure 5.6. Two developers, called Pumbaa and Timon, can each fetch the code from the authoritative repository separately and edit
[Figure 5.5: Circuit diagram: the DAC1 output of the LabJack U3-HV (5 V, supplied from USB) drives the TE AXICOM IM relay (3 V DC coil) through resistors R1–R6; VS and GND complete the coil circuit.]
the scripts on their local computers. After making changes, they push the modified scripts, which are stored temporarily until another developer (the reviewer) reviews the code. The reviewer must leave a verdict, either approve or reject, for the pending changes to be submitted to the final repository. Jenkins is a build management system used for continuous integration [55]. The main function of Jenkins is to trigger test builds automatically and deliver the test report. JIRA is a task management system, a development tool used for planning, tracking, testing and issue management [56].
The flowchart of the test automation is shown in Figure 5.7. The developer first gets a task from the JIRA system and fetches the source code from the Gerrit repository. The developer then updates the code on the local computer and pushes it to Gerrit. Afterwards, another software developer, acting as a reviewer, fetches the code from Gerrit, reviews it and leaves a verdict. In parallel, the code is also sent to Jenkins, which returns a test verdict. If and only if both verdicts are positive, the code can be merged into the Gerrit central repository and becomes public for everyone. The verdict mechanism is depicted in Figure 5.8.
The Python script for the test cases described in Table 5.1 is executed in a test rig, and the Robot Framework test automation framework discussed in section 2.2.3.1 is applied as a bridge between Jenkins and the Python script. The relation between Jenkins, the Python script and the Robot Framework is displayed in Figure 5.9. After the test is triggered in Jenkins, it commands the Robot Framework to run the tagged test cases on the selected test application. The Robot Framework executes the Python script based on the keywords defined in the robot test data file. Afterwards, feedback is returned from the Python script and the Robot Framework, and the test results are automatically presented as an HTML report by Jenkins. Finally, the report is reviewed by the testers to check the outcome.
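As an illustration of the keyword mechanism, a hypothetical Python keyword library of the kind Robot Framework can load is sketched below: each public method becomes a keyword ("Close Relay", "Relay Should Be") that the robot test data file can call. The class and method names are invented for illustration, not taken from the project's actual scripts:

```python
# A minimal keyword library: Robot Framework maps method names to
# keywords, so `close_relay` is callable as "Close Relay" in a .robot file.

class RelayKeywords:
    def __init__(self):
        self._state = "open"

    def close_relay(self):
        self._state = "closed"

    def open_relay(self):
        self._state = "open"

    def relay_should_be(self, expected):
        # Verification keyword: raising makes the Robot test case fail.
        if self._state != expected:
            raise AssertionError(f"relay is {self._state}, expected {expected}")

lib = RelayKeywords()
lib.close_relay()
lib.relay_should_be("closed")
```

This keyword layer is what makes the test data readable to non-programmers, as noted below, while the Python methods hide the DAQ details.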
[Figure 5.7 shows the procedure: the developer fetches a file from the Gerrit repository and pushes changes; the reviewer leaves an approve/reject verdict while Jenkins triggers the build on the test rig, which returns a success/fail test verdict for the electronic unit; the change is merged only if both verdicts are positive.]
Figure 5.7: Electronic unit test automation procedure using Gerrit, Jenkins, JIRA
[Figure 5.8: Verdict mechanism: a change is submitted and merged only when the review verdict is Approved and the test verdict is PASS.]
[Figure 5.9: Jenkins triggers the Robot Framework, which in turn triggers the Python library (script); feedback is returned along the same chain.]
With the keywords defined in the robot file, it is straightforward for testers and other people who are not programmers to understand the functionality without knowing the details of the scripts. The revision control system Git makes it possible to track all changes, perform merges and revert changes that go wrong. The introduction of Jenkins allows new tests to be integrated into a project's continuous integration pipeline. The comprehensive reports generated by Jenkins simplify the analysis and allow stakeholders and testers to quickly understand the results. Jenkins also saves the whole execution history, providing a better view of the product testing process.
6 Analysis and Discussion
This chapter discusses the outcomes mentioned in the previous chapter and answers
the research questions raised in section 3.1.
The diverse functionality of infotainment ECUs is the main reason that automated test execution is hard to realize. An ECU function can either be implemented independently or be interrelated with other ECUs to provide more features, which challenges the integration testing. Another difficulty lies in simulating complex user interfaces such as mobile devices and the center display console.
In this project, the implementation of the ECU diagnosis test is one example of test cases suitable for test automation. This is because the input and output signals can be simulated with the help of software tools, and the test cases can easily be decided. The execution requires neither human cognition nor creativity. However, this does not hold for other test areas within infotainment systems, for example Apple CarPlay compatibility tests, Bluetooth device compatibility tests, media system tests, hands-free performance tests, over-the-air update tests etc. The reason for the first two is that there are different infotainment platforms, networks and software releases for different target markets. In addition, the essence of an excellent infotainment system is a good user experience, which can sometimes only be assessed by testing the infotainment system manually. Examples of this are the media system test and the hands-free performance test. HMI, sound and display are the elements required for a media system test. In order to define the types of HMI errors that test automation should handle, it is necessary to find all HMI errors occurring in real situations. However, it is not easy to locate faults in the HMI; for instance, just for menu navigation there are probably hundreds of errors caused by inconsistency, language problems, wrong pop-up dialog windows, overlapping text, an erroneous next-level menu etc.
Based on the answer to research question 1 and the literature study mentioned in the previous chapter, we can conclude that if a test is needed frequently and does not require human intervention, test automation is recommended. However, it is also important to make sure that the project time frame is long enough to develop the test automation framework and scripts. Secondly, if a test is run frequently but needs human interaction, a combination of automated and manual testing is recommended. The starting time of the automated tests can be decided by calculating the factor M, the starting point of test automation, in the new ROI calculation model of equation 4.13. In contrast, if a test is run infrequently and needs human interaction, it is generally not advisable to automate it.
7 Conclusions
This chapter lists the achievements of this thesis project and the future work that can be done.
7.1 Achievements
At the beginning of the thesis project, two objectives were set:
This thesis first considered suitable areas for test automation within the infotainment field. The ECU diagnosis test was investigated further, and a theoretical implementation of test automation was demonstrated. However, due to the lack of supporting equipment, it cannot be concluded that the test framework is valid in a real situation.
In this thesis, a new, more detailed ROI calculation model for the cost-benefit factor is provided, based on previous research results. From the feedback and evaluation of the ÅF engineers, it can be concluded that the model is good at showing all the elements that need to be considered, but it is not very easy to apply in industry. The challenge is that it is quite hard to define the cost of each unit in the final equation, for example the average development cost of a single manual test case. Nevertheless, the calculation model is effective if the unit prices can be determined to some extent.
First, the theoretical test framework can be applied to a real infotainment ECU test for validation purposes. The VT system modules need to be changed for different infotainment ECUs. The defined test cases need to be translated into the CANoe programming language, known as CAPL. The test environment configuration needs to be built in CANoe to emulate the ECU's working situation. The testers need to decide whether to use real ECUs or ECUs simulated by CANoe.
Second, the new ROI calculation model can be used in real projects to calculate the cost-benefit factor. Based on the ROI factor and the analysis discussed in section 6.2, testers can choose between manual and automated testing. In addition, since the ROI calculation plays an important role in deciding on the feasibility of test automation, its accuracy should be improved in future work. Future research can focus on the factors that influence the accuracy of the ROI calculation, the accuracy of the investment and benefit estimates separately, and a framework to estimate the error of ROI calculations.
Third, the simple test framework developed in Section 5.2 can be used for a real hardware ECU test, and the real outcome can be compared with the model-based testing method.