Please use this identifier to cite or link to this item: http://dx.doi.org/10.14279/depositonce-9856.2
For citation please use:
Main Title: What Am I Testing and Where? Comparing Testing Procedures based on Lightweight Requirements Annotations
Author(s): Pudlitz, Florian
Brokhausen, Florian
Vogelsang, Andreas
Type: Article
Language Code: en
Abstract: [Context] The testing of software-intensive systems is performed in different test stages, each having a large number of test cases. These test cases are commonly derived from requirements. Each test stage exhibits specific demands and constraints with respect to its degree of detail and what can be tested. Therefore, specific test suites are defined for each test stage. In this paper, the focus is on the domain of embedded systems, where, among others, typical test stages are Software- and Hardware-in-the-Loop. [Objective] Monitoring and controlling which requirements are verified in which detail and in which test stage is a challenge for engineers. However, this information is necessary to assure a certain test coverage, to minimize redundant testing procedures, and to avoid inconsistencies between test stages. In addition, engineers are reluctant to state their requirements in terms of structured languages or models that would facilitate the relation of requirements to test executions. [Method] With our approach, we close the gap between requirements specifications and test executions. Previously, we have proposed a lightweight markup language for requirements which provides a set of annotations that can be applied to natural language requirements. The annotations are mapped to events and signals in test executions. As a result, meaningful insights from a set of test executions can be directly related to artifacts in the requirements specification. In this paper, we use the markup language to compare different test stages with one another. [Results] We annotate 443 natural language requirements of a driver assistance system by means of our lightweight markup language. The annotations are then linked to 1300 test executions from a simulation environment and 53 test executions from test drives with human drivers.
Based on the annotations, we are able to analyze how similar the test stages are and how well test stages and test cases are aligned with the requirements. Further, we highlight the general applicability of our approach through this extensive experimental evaluation. [Conclusion] With our approach, the results of several test levels are linked to the requirements and enable the evaluation of complex test executions. By this means, practitioners can easily evaluate how well a system performs with regard to its specification and, additionally, can reason about the expressiveness of the applied test stage.
URI: https://depositonce.tu-berlin.de/handle/11303/10966.2
http://dx.doi.org/10.14279/depositonce-9856.2
Issue Date: 6-May-2020
Date Available: 3-Apr-2020
7-May-2020
DDC Class: 004 Datenverarbeitung; Informatik
Subject(s): markup language
requirements modeling
simulation
test stage evaluation
test stage comparison
Sponsor/Funder: TU Berlin, Open-Access-Mittel - 2020
License: https://creativecommons.org/licenses/by/4.0/
Journal Title: Empirical Software Engineering
Publisher: Springer
Publisher Place: Dordrecht [u.a.]
Publisher DOI: 10.1007/s10664-020-09815-w
EISSN: 1573-7616
ISSN: 1382-3256
Appears in Collections:FG IT-basierte Fahrzeuginnovationen » Publications

Version History
Version Item Date Summary
2 10.14279/depositonce-9856.2 2020-05-07 16:25:47.925 Published article
1 10.14279/depositonce-9856 2020-04-03 08:52:16.0

This item is licensed under a Creative Commons License.