By Hugo de Kock and Jim Ross
Model-Based Design and the agile practices of behavior-driven development and test-driven development play an important role in modern, software-intensive, large-scale development projects. This white paper illustrates a combined approach based on best practices established through years of work with engineering organizations.
Introduction
How often do you have the correct and complete requirements at the start of a project? Have you ever successfully verified that your models and code meet the given requirements, even with 100% structural coverage, but the resulting system did not deliver on expectations? This white paper uses several “Benefit Hypotheses” as found in the Scaled Agile Framework (SAFe) [1]. In short, a Benefit Hypothesis specifies a proposed measurable benefit: specific language is used to construct an assertion or claim that a practice will deliver an outcome based on stated criteria. In the following text, the Benefit Hypotheses for Model-Based Design, Behavior-Driven Development (BDD), and Test-Driven Development (TDD) provide insight into why a combined approach is so useful.
Model-Based Design
Modeling products such as Simulink® and Stateflow® are well known for Model-Based Design and are referenced in standards such as ISO® 26262 for safety-critical system development. Model-Based Design enables engineers to develop complex systems by working at a higher level of abstraction compared to code. The visual nature of models and state machines makes it easier to express and understand complex relationships and flow of data. You can use models as executable specifications to enable early validation of requirements through system simulation, then generate documentation and production code from those models automatically. The growth in complexity of today’s systems reinforces the importance of early virtual development including system simulation.
Benefit Hypothesis: I believe that Model-Based Design with system simulation and production code generation will result in increased productivity and efficiency in large-scale software projects, as measured by correctness of requirements and quality of generated production code.
Behavior-Driven Development (BDD)
In our experience as former developers and product owners, requirements at the start of a project are often incomplete, incorrect, or captured without clear acceptance criteria. Requirements with these issues are difficult to implement and difficult to verify. We believe requirements elicitation is a team effort and the responsibility of many roles; done well, it leads to a clear understanding of what needs to be done, why it should be done, and when it will be good enough. This is an iterative process, and the best practices are to think of the end user’s use cases, to keep your customer involved, and to demonstrate working prototypes (e.g., using system simulations) at short intervals. It is usually hard to know what behavior is needed until you see some examples in action. We believe that BDD is an effective way to establish a baseline of requirements with acceptance criteria.
In addition to early requirements iteration and validation, BDD provides the basis for ongoing, continuous validation. System test cases created for BDD can be automated and used throughout the product lifecycle to ensure that changes or new requirements do not cause issues with existing functionality.
The Scaled Agile Framework (SAFe) is a leading framework that is being adopted in various industries, both in small companies and large enterprises. According to SAFe, “Behavior-Driven Development (BDD) is a test-first, agile testing practice that provides built-in quality by defining (and potentially automating) tests before, or as part of, specifying system behavior. BDD is a collaborative process that creates a shared understanding of requirements between the business and the agile teams. Without focusing on internal implementation, BDD tests are business-facing scenarios that attempt to describe the behavior of a story, feature, or capability from a user’s perspective.” [1]
In BDD, the focus is on getting the requirements right and ensuring that the design meets those requirements, which is an iterative process. Combining BDD with the system simulation capabilities of Model-Based Design provides a method to create virtual prototypes very early on and explore system behavior along with fulfillment of acceptance criteria from an end user’s point of view.
BDD requires modeling the functionality of both the controller (software) and the plant (non-software) and simulating them together to understand the complete behavior of the system.
Benefit Hypothesis: I believe that Behavior-Driven Development (BDD) in combination with the system simulation capabilities of Model-Based Design will result in built-in quality, as measured by the efficient elicitation of correct system requirements and early validation of correct system behavior.
Test-Driven Development (TDD)
In the context of agile development, TDD is a philosophy and practice that involves building and executing tests before implementing a part of a system [2], [3]. While the TDD practice might be better known for handwritten code, following Extreme Programming (XP) practices, it is also highly applicable for Model-Based Design, following, for example, Extreme Modeling (XM) practices [3].
According to SAFe, “The TDD practice was designed primarily to operate in the context of unit tests, which are developer-written tests. These are a form of ‘white-box testing,’ because they test the internals of the system and focus on structural coverage. A rich set of unit tests ensure that refactoring efforts do not introduce new errors, allowing developers to continuously improve their designs. Refactoring builds quality by allowing designs to emerge over time, supporting the solution’s changing requirements.” [2]
Applying TDD with Model-Based Design is different from applying TDD to handwritten code, mainly because the model can be used as an executable specification. Thus, the model allows early verification with test-vector generation, along with the recognized benefits of code and documentation generation. Combining TDD with Model-Based Design is recognized as a powerful method to achieve reliable software-intensive systems [4]. A common obstacle to TDD is the effort of creating and maintaining a mocking environment for the unit under test; Simulink provides an environment that simplifies creating and maintaining the test harnesses needed to perform this testing.
Benefit Hypothesis: I believe that Test-Driven Development (TDD) in combination with the executable specification (simulation), model-level verification, and code generation capabilities of Model-Based Design will result in built-in quality, as measured by verified correct behavior of models and generated code at the unit level.
The Role of Requirements
This white paper emphasizes the value of BDD and TDD for Model-Based Design. However, neither of these practices can be successfully utilized without having good requirements.
The Importance of Good Requirements
The starting point for both BDD and TDD is a requirement with acceptance criteria. For BDD, this means starting with an understanding of the desired functionality or system behavior. For TDD, this often means a more detailed specification derived from the system behavior requirement in the context of the chosen solution.
In both cases, a good requirement must be testable, meaning that it is possible to define inputs and corresponding expected results. And to perform the test, it must be possible to measure the response and determine if that response meets the expectation.
How Can I Write Good Requirements?
Some agile best practices for writing requirements are summarized in the table below. It can also be useful to create use case diagrams [5] as a visual overview and then elaborate on each use case using the formulations below. Having acceptance criteria for each requirement is very important, because they form the stepping stones for defining tests. Without clear acceptance criteria, it is not possible to write any tests and therefore not possible to practice BDD or TDD.
| Formulation | Template | Purpose |
| --- | --- | --- |
| Benefit Hypothesis | I believe that <capability>, Will produce <outcome>, As measured by <metric>. | To communicate business and customer relevance (WHY?) |
| Customer Stories | As a <role>, I want to <need>, So that <goal>. | To give context (WHAT?) |
| Acceptance Criteria | Given <precondition>, When <trigger action>, Then <expected result>. | To communicate quality assurance criteria (WHEN WILL IT BE GOOD ENOUGH?) |
Are We Sure BDD and TDD Are Worth It?
According to SAFe, “TDD creates a large set of developer-level tests, which allows quality assurance and test personnel to focus on other testing challenges. Instead of spending time finding and reporting code-level bugs, they can focus on more complex behaviors and interactions between components. TDD, along with BDD, is part of the ‘test-first’ approach achieving built-in quality. Writing tests first creates a more balanced testing portfolio with many fast, automated development tests and fewer slow, manual, end-to-end tests.” [1]
From a tooling perspective, when the developer uses the same environment and language for writing tests and practicing Model-Based Design, the mental barrier to practicing BDD and TDD is lower. The developer does not need to learn another language or switch between environments just to write test cases. If a tester or system engineer who focuses on other types and levels of testing also uses the same tooling and environment, the communication efficiency between the different roles can increase tremendously. If a fault is detected on the tester’s side, the developer can readily understand the test cases and start debugging directly.
Benefit Hypothesis: I believe that using a single development environment and language for Model-Based Design, including testing, will result in efficient communication between the various roles (e.g., system engineer, developer, tester), faster elimination of incorrect requirements and software bugs, and improved development flow, as measured by the number of test cases and assessments reused at the various stages of system and unit testing (e.g., desktop simulation, software-in-the-loop, processor-in-the-loop, and hardware-in-the-loop testing).
How Do BDD and TDD Work with Model-Based Design?
BDD and TDD both follow the “test-first” approach. Firstly, before being able to test anything, it is very important to have clear requirements, and more specifically, clear acceptance criteria. Secondly, you need to have a minimum architecture or design, such as a subsystem with inputs, outputs, possibly a function call, and some system parameters. Only then is it possible to start building test harnesses and test cases with assessments.
The table below provides a simple example to clarify the need to start with requirements, including clear acceptance criteria.
| Requirement | Issue |
| --- | --- |
| Software will detect low battery voltage. | This requirement is vague and incomplete with no clear acceptance criteria. |
| Software will detect battery voltage below 4 V and notify the user. | The requirement now has clear acceptance criteria, but it is still incomplete. |
| As an electronic device user, I want to know when my device battery voltage is low, So that I can plug it into a charger. Given the device is switched on, When the battery voltage drops below 4 V for longer than 3 s, Then the software shall notify the user. | This requirement appears complete with clear acceptance criteria. |
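To show how such an acceptance criterion can become executable, the sketch below checks logged simulation results against the Given/When/Then condition. The model name (BatterySystemModel), the logged signal names (batteryVoltage, userNotified), and the 0.1 s reaction-time allowance are assumptions made for illustration only.

```matlab
% Minimal sketch: evaluate the acceptance criterion
%   Given the device is switched on,
%   When the battery voltage drops below 4 V for longer than 3 s,
%   Then the software shall notify the user.
% Assumes the hypothetical system model logs 'batteryVoltage' and
% 'userNotified' via signal logging.

simOut = sim('BatterySystemModel');                  % hypothetical model
v = simOut.logsout.get('batteryVoltage').Values;     % voltage timeseries [V]
n = simOut.logsout.get('userNotified').Values;       % notification flag

% Resample onto a uniform 10 ms grid to simplify the duration check
t    = (v.Time(1):0.01:v.Time(end))';
volt = interp1(v.Time, double(v.Data), t);
note = interp1(n.Time, double(n.Data), t, 'previous');

low = volt < 4;            % below the 4 V threshold
run = 0;                   % consecutive low-voltage samples
violated = false;
for k = 1:numel(t)
    if low(k), run = run + 1; else, run = 0; end
    % 3 s = 300 samples; allow 0.1 s (10 samples) of reaction time
    if run > 310 && note(k) < 0.5
        violated = true;
    end
end
assert(~violated, 'No user notification after battery voltage was low for 3 s');
```

In the workflow described below, such checks are typically captured as reusable assessments inside a test harness (e.g., verify statements in a Test Assessment block or a Requirements Table block) rather than as post-processing scripts, so they can run automatically with every simulation.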
A graphical representation of the BDD and TDD lifecycles, with several adaptations to incorporate the benefits of Model-Based Design and best practices, is shown in Figure 1.
The next sections of this white paper elaborate on the activities and purpose in the flowchart shown in Figure 1. Each of these sections will identify the typical roles involved, with (R) designating the “Responsible” role. Detailed activities for each phase will be listed along with applicable MATLAB® and Simulink products that can be used to carry out the tasks.
Phase A: Analyze System Requirements
The first phase is to define the system requirements. Often, a good place to start is to identify stakeholder needs and customer-centric use cases. From this discovery phase, formal system requirements including acceptance criteria can be captured. Finally, review the system requirements and use case diagrams with the stakeholders.
| Typical Roles Involved | Details/Remarks | Applicable Products |
| --- | --- | --- |
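As a sketch of how such a requirement with acceptance criteria could be captured programmatically, the example below uses the Requirements Toolbox API (slreq.new and add); the requirement set name, ID, and text are hypothetical, and in practice requirements are usually authored interactively in the Requirements Editor.

```matlab
% Sketch: capture a system requirement, including acceptance criteria,
% in a requirement set. Names and IDs are illustrative.
rs = slreq.new('BatterySystemReqs');          % creates BatterySystemReqs.slreqx

add(rs, 'Id', 'SYS-001', ...
    'Summary', 'Low battery notification', ...
    'Description', sprintf(['As an electronic device user, I want to know ' ...
        'when my device battery voltage is low, so that I can plug it into ' ...
        'a charger.\nGiven the device is switched on, when the battery ' ...
        'voltage drops below 4 V for longer than 3 s, then the software ' ...
        'shall notify the user.']));

save(rs);   % persist the requirement set for later linking and traceability
```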
Phase B: Define System Architecture
When the requirements and acceptance criteria are clear, then proceed to system architecture analysis. The system should be broken down into clearly defined components and interfaces. Note that for large or complex systems, this can be an iterative process with high-level components representing subsystems of the overall product undergoing a similar architecture definition with internal components and interfaces. At each layer, allocate requirements to each component.
Create models for the system based on the architecture. At each layer of the system architecture, create skeleton models (or reuse existing models) and connect them per the architecture. The resulting system model and set of component models will be used for BDD and TDD in subsequent phases.
| Typical Roles Involved | Details/Remarks | Applicable Products |
| --- | --- | --- |
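The sketch below illustrates the skeleton-model idea using standard model-construction commands (new_system, add_block, add_line); all model, block, and signal names are hypothetical, and in larger projects the architecture would more typically be captured in a dedicated architecture tool (e.g., System Composer) and componentized via referenced models.

```matlab
% Sketch: create a skeleton component model with a defined interface and
% reference it from a system integration model. All names are illustrative.
new_system('BatteryMonitor');                                   % skeleton component
add_block('simulink/Sources/In1',         'BatteryMonitor/batteryVoltage');
add_block('simulink/Sinks/Out1',          'BatteryMonitor/userNotified');
add_block('simulink/Math Operations/Gain','BatteryMonitor/placeholderLogic');
add_line('BatteryMonitor', 'batteryVoltage/1',  'placeholderLogic/1');
add_line('BatteryMonitor', 'placeholderLogic/1','userNotified/1');
save_system('BatteryMonitor');

new_system('BatterySystemModel');                               % system model
add_block('simulink/Ports & Subsystems/Model', 'BatterySystemModel/Controller', ...
    'ModelName', 'BatteryMonitor');                             % reference the component
save_system('BatterySystemModel');
```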
Phase C: Write System Test Cases
When the minimal needed architecture is ready, proceed to writing system-level test cases. The aim is to create system-level test cases and assessments that will fail due to lacking functionality in the components. Our experience shows that writing the system test cases in this phase serves a dual purpose. The first, hopefully obvious purpose is that having these test cases available enables BDD. The second purpose is that writing a system test case often uncovers additional issues or gaps in the requirements—essentially becoming a second, often more detailed review of the system requirements. From a BDD perspective, this is particularly true when it comes to acceptance criteria.
One of the steps required for this phase is to create the plant and environment models required to simulate system functionality. Steps in this phase include creating a test harness for the system model and defining input sequences to execute the use cases captured in Phase A: Analyze System Requirements. Use the acceptance criteria to define executable assessments that can be repeated. At this point, all nontrivial assessments should fail.
| Typical Roles Involved | Details/Remarks | Applicable Products |
| --- | --- | --- |
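A minimal sketch of this step with the Simulink Test programmatic interface is shown below; the harness, test file, and test case names are hypothetical, and the plant model, input scenarios, and assessments still need to be added to the harness before the test cases can fail meaningfully.

```matlab
% Sketch: create a test harness on the system model and register a
% system-level (BDD) test case in the Test Manager. Names are illustrative.
sltest.harness.create('BatterySystemModel', 'Name', 'SystemUseCaseHarness');

tf = sltest.testmanager.TestFile('SystemTests');     % creates SystemTests.mldatx
ts = getTestSuites(tf);
tc = createTestCase(ts, 'simulation', 'LowBatteryNotification');
setProperty(tc, 'Model', 'BatterySystemModel');
setProperty(tc, 'HarnessName', 'SystemUseCaseHarness');
saveToFile(tf);

% Once input scenarios and assessments have been added to the harness,
% running the test file should produce failing assessments, because the
% components contain no functionality yet.
results = sltest.testmanager.run;
```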
Phase D: Write Component Test Cases
Once the system-level test cases are complete, it is time to begin the actual test-driven development phase. As with the system test cases, writing test cases for components serves two purposes: enabling TDD and reviewing the requirements allocated to the components. Writing a test case, like modeling or writing code, requires a deeper look at the requirements and often uncovers gaps and issues that would otherwise be missed. Automating a test case ensures that the acceptance criteria are complete and testable.
| Typical Roles Involved | Details/Remarks | Applicable Products |
| --- | --- | --- |
Tip: For logic-driven software requirements, use the Requirements Table block [6] from Requirements Toolbox™ since it can be reused as executable assessments in the test harness. Otherwise, for time- or value-based requirements, use blocks from the Model Verification and Simulink Design Verifier™ libraries to create reusable automated assessments in test harnesses.
Tip: Use the rich features of Simscape™ and System Identification Toolbox™ to create plant models that could be used in test harnesses as test doubles.
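For teams that prefer scripted unit tests, the sketch below shows a MATLAB-based component test in the test-first spirit; the test class, harness name, and logged signal are hypothetical, and the same intent can equally be expressed as Test Manager test cases or as Requirements Table assessments, as described in the tips above.

```matlab
% Sketch: a class-based unit test (saved as BatteryMonitorTest.m) that
% simulates a component test harness and checks the allocated requirement.
% It is expected to fail until the component design is completed in Phase E.
classdef BatteryMonitorTest < matlab.unittest.TestCase
    methods (Test)
        function notifiesUserWhenVoltageStaysLow(testCase)
            simOut = sim('BatteryMonitor_Harness');    % hypothetical harness
            n = simOut.logsout.get('userNotified').Values;
            testCase.verifyTrue(any(n.Data > 0.5), ...
                'Component never notified the user of low battery voltage');
        end
    end
end
```

Running `runtests('BatteryMonitorTest')` at this stage should report a failure, which is the expected starting point for TDD.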
Phase E: Design Detailed Component
When all the component test cases fail, as expected at this point, proceed to the detailed component designs and add the necessary functionality for each component. Multiple developers can work on multiple components during this phase. Pair programming, an agile practice in which developers work in pairs with quick peer reviews, can be used here to increase quality.
| Typical Roles Involved | Details/Remarks | Applicable Products |
| --- | --- | --- |
Phase F: Simulate and Test System
When all the components have just enough functionality to pass their acceptance criteria, proceed to the next phase: system simulation. Since the system model was created by integrating the skeleton models in Phase B, the completion of the detailed component design phase means the system model should now provide a useful simulation of the system behavior. If a system assessment fails, it is easy to conclude that there is a problem with the model implementing the desired functionality. While this may be the case, a failed BDD test case may also indicate an issue with the test case itself or even an issue with the requirement. In fact, as stated earlier, the value of BDD is fully realized through the ability to uncover requirement problems early in development.
| Typical Roles Involved | Details/Remarks | Applicable Products |
| --- | --- | --- |
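Continuing the earlier sketches, executing the system-level test cases and generating a report for stakeholder review could look like this; the file names are hypothetical.

```matlab
% Sketch: run the system-level (BDD) test cases from Phase C and produce a
% report for review. A failing assessment may indicate a problem in the
% model, in the test case, or in the requirement itself.
sltest.testmanager.load('SystemTests.mldatx');
results = sltest.testmanager.run;
sltest.testmanager.report(results, 'SystemTestReport.pdf');
```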
Phase G: Perform Rapid Prototyping with Real Hardware
In Phase F, model-in-the-loop (MIL) system simulation can give confidence and early feedback to stakeholders about the intended system functionality. It is an opportunity to perform sanity checks and possibly correct incomplete or incorrect requirements or acceptance criteria. Advantages of MIL system simulation using plant models include consistent availability and, with some upfront planning, support for variants to enable testing of an entire product line. However, plant models are usually a simplification of reality, and it is sometimes difficult or time-consuming to capture relevant system behavior that impacts the solution. These limitations can be minimized using rapid prototyping: in other words, using bypassing techniques, with or without external real-time hardware, to test the intended components within the real system in real time [8].
| Typical Roles Involved | Details/Remarks | Applicable Products |
| --- | --- | --- |
| Developer (R) | | |
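As a heavily simplified sketch, the commands below build and run the hypothetical controller model on a Speedgoat real-time target using the Simulink Real-Time API; the target name is an assumption, the model is assumed to be configured for the real-time target, and the signal bypassing and calibration setup via XCP [8] is not shown.

```matlab
% Sketch: run the controller on real-time hardware against the real plant.
% Target and model names are illustrative.
tg = slrealtime('TargetPC1');    % connection to the real-time target
connect(tg);

slbuild('BatteryMonitor');       % build the real-time application
load(tg, 'BatteryMonitor');      % download it to the target
start(tg);                       % execute in real time on real hardware
% ... exercise the use cases and collect measurements ...
stop(tg);
```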
Phase H: Implement Component
Having gained confidence in the component prototypes and integrated system behavior through Phases E, F, and G, you can now elaborate the models for implementation via code generation. Having a set of automated tests provides the freedom to refactor the model for long-term maintainability. Implementation data types and storage classes can be defined to generate code appropriate for the production-intent microprocessor. After requirements validation in Phases F and G, ensure that the requirements and models are appropriately linked to meet traceability needs. Finally, enhance the testing to achieve coverage goals.
| Typical Roles Involved | Details/Remarks | Applicable Products |
| --- | --- | --- |
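The sketch below illustrates two of these steps, generating code for the elaborated component and measuring model coverage achieved by the existing unit tests with Simulink Coverage commands; in practice, coverage is often collected directly from requirements-based tests in the Test Manager [7]. The model and harness names are hypothetical.

```matlab
% Sketch: generate production code for the component (Embedded Coder) and
% measure model coverage achieved by the unit test harness.
slbuild('BatteryMonitor');                      % code generation

cvt = cvtest('BatteryMonitor_Harness');         % coverage test on the harness
cvt.settings.decision  = 1;                     % decision coverage
cvt.settings.condition = 1;                     % condition coverage
cvt.settings.mcdc      = 1;                     % MC/DC coverage
covData = cvsim(cvt);                           % simulate and collect coverage
cvhtml('BatteryMonitorCoverage', covData);      % report progress toward goals
```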
Tip: When the project is at a more mature phase, shift some of the manual activities from Phase H to Phase E and automate all the activities in Phases E and F: model checking, unit and system testing, and code generation using a continuous integration platform [9].
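A minimal sketch of such a continuous integration entry point, assuming the unit tests are MATLAB-based test classes in a hypothetical tests folder and that the CI server consumes JUnit-style XML results (the plugins shown are standard matlab.unittest plugins):

```matlab
% Sketch: CI script that runs the test suite and publishes results in a
% format CI platforms (e.g., Jenkins, GitLab CI) can display.
import matlab.unittest.TestRunner
import matlab.unittest.plugins.XMLPlugin

suite  = testsuite('tests');                                 % hypothetical folder
runner = TestRunner.withTextOutput;
runner.addPlugin(XMLPlugin.producingJUnitFormat('test-results.xml'));

results = runner.run(suite);
assert(all(~[results.Failed]), 'CI build failed: one or more tests failed');
```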
Phase I: Run System SIL Tests
When all the component models have been refined for code generation, verified against the component acceptance criteria, and verified for equivalence (MIL vs. software-in-the-loop (SIL) vs. processor-in-the-loop (PIL)), proceed to the next phase, in which the system-level integration of the components can be tested in SIL mode against the system-level acceptance criteria. In addition to testing against requirements, SIL test results can be compared to the MIL test results from Phase F.
| Typical Roles Involved | Details/Remarks | Applicable Products |
| --- | --- | --- |
| Tester (R) | | |
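A minimal sketch of a MIL-versus-SIL comparison on the hypothetical system model is shown below; it assumes the model is configured for top-model SIL simulation with Embedded Coder, that the same signals are logged in both modes, and that bit-exact equality is the acceptance threshold. In practice, equivalence test cases in Simulink Test are a more convenient way to automate this comparison.

```matlab
% Sketch: run the same scenario in normal (MIL) and SIL mode and compare
% the logged outputs. Signal names are illustrative; the SIL simulation
% mode requires Embedded Coder and a suitable model configuration.
mdl = 'BatterySystemModel';

set_param(mdl, 'SimulationMode', 'normal');
milOut = sim(mdl);

set_param(mdl, 'SimulationMode', 'Software-in-the-loop (SIL)');
silOut = sim(mdl);

mil = milOut.logsout.get('userNotified').Values;
sil = silOut.logsout.get('userNotified').Values;
assert(isequal(mil.Data, sil.Data), ...
    'MIL vs. SIL equivalence check failed for userNotified');
```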
Phase J: Run System HIL Tests
After delivering generated code for each component (or software composition), the software can be integrated and compiled for the target hardware. The target hardware (e.g., an electronic control unit) can then run in real time in an environment where the rest of the system is simulated in real time [10]. Converting existing MIL tests for use in hardware-in-the-loop (HIL) testing provides equivalence testing capability. Be sure to test additional aspects of the software that could not be tested in MIL or SIL. For example, fault codes, communication messages, and resource utilization can be evaluated in HIL testing. This may require additional test cases or assessments.
| Typical Roles Involved | Details/Remarks | Applicable Products |
| --- | --- | --- |
| Tester (R) | | |
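As a sketch of the MIL-to-HIL equivalence idea, the script below compares a signal logged during a HIL run (exported from the real-time target, here assumed to be a MAT-file) against the MIL baseline within a tolerance; the signal names, file name, and the 1 percent tolerance are assumptions.

```matlab
% Sketch: compare a HIL-logged signal against the MIL baseline, allowing a
% small tolerance for real-time effects. Names and tolerance are illustrative.
milOut = sim('BatterySystemModel');                      % MIL baseline
mil = milOut.logsout.get('batteryVoltage').Values;

hil  = load('hil_run.mat');                              % exported HIL data
hilV = interp1(hil.t, hil.batteryVoltage, mil.Time);     % align time bases

relErr = max(abs(hilV - mil.Data) ./ max(abs(mil.Data), eps));
assert(relErr < 0.01, 'HIL vs. MIL mismatch exceeds the 1 percent tolerance');
```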
Conclusion
We believe that Behavior-Driven Development, Test-Driven Development, and Model-Based Design fit very well together. Used in combination, they will help you efficiently develop and deliver high-quality products. MathWorks provides a complete toolchain that enables BDD and TDD with Model-Based Design in a single tooling environment. MathWorks and Speedgoat are partner companies providing a complete solution for rapid prototyping and HIL testing.
Benefit Hypothesis: I believe that practicing BDD and TDD within the Model-Based Design environment of MATLAB and Simulink will result in a large set of test cases at the component and system levels, which will enable projects to upgrade to newer versions of the toolchain on a regular basis (in small increments), giving access to a state-of-the-art development environment, as measured by the time it takes to upgrade the toolchain.
References
[1] “Behavior-Driven Development.” Scaled Agile Framework. https://scaledagileframework.com/behavior-driven-development/. Accessed August 2022.
[2] “Test-Driven Development.” Scaled Agile Framework. https://scaledagileframework.com/test-driven-development/. Accessed August 2022.
[3] Kolahdouz-Rahimi, Shekoufeh, et al. “eXtreme Modeling: an approach to agile model-based development.” Journal of Computing and Security 6, no. 2 (July 2019): 42–52.
[4] Fu, Yujian, et al. “Model-Based Test-Driven Cyber-Physical System Design.” SoutheastCon 2018, IEEE, 2018, pp. 1–6. https://doi.org/10.1109/SECON.2018.8479080.
[5] Jacobson, Ivar, et al. Use-Case 2.0: The Guide to Succeeding with Use Cases. Ivar Jacobson International, 2011.
[6] “Use a Requirements Table Block to Create Formal Requirements.” MathWorks. https://nl.mathworks.com/help/slrequirements/ug/use-requirements-table-block.html. Accessed 2022.
[7] “Assess Coverage Results from Requirements-Based Tests.” MathWorks. https://nl.mathworks.com/help/slcoverage/ug/assess-coverage-results-from-requirements-based-tests.html. Accessed 2022.
[8] Romero Cumbreras, Pablo, and Tjorben Gross. “Model-based calibration testing and ECU bypassing with XCP using Simulink Real-Time and Speedgoat target hardware.” MathWorks, filmed September 26, 2018. Video, 26:26.
[9] “Continuous Integration (CI).” MathWorks. https://nl.mathworks.com/help/matlab/continuous-integration.html. Accessed 2022.
[10] “Hardware-in-the-Loop.” Speedgoat. https://www.speedgoat.com/solutions/testing-workflows/hardware-in-the-loop-testing. Accessed 2022.
Published 2023