padv.builtin.task.RunTestsPerModel Class
Namespace: padv.builtin.task
Superclasses: padv.Task
Task for running test cases associated with each model using Simulink Test
Description
This class requires CI/CD Automation for Simulink Check.
The padv.builtin.task.RunTestsPerModel class provides a task that can run the test cases associated with your models using Simulink® Test™. You can add the task to your process model by using the addTask method. After you add the task to your process model, you can run the task from the Process Advisor app or by using the runprocess function. The task runs each test case for each model in your project, and certain tests can generate code. You can control whether Simulink Test or the MATLAB® Unit Test framework executes the test cases by using the task property UseMATLABUnitTest.
The Process Advisor app shows the names of the models that have test cases under Run Tests in the Tasks column. If you want to see the names of both the models and the associated test cases, use the padv.builtin.task.RunTestsPerTestCase task instead.
To view the source code for this built-in task, in the MATLAB Command Window, enter:
open padv.builtin.task.RunTestsPerModel
The padv.builtin.task.RunTestsPerModel class is a handle class.
Note
When you run the task, the task runs each test case individually and only executes test-case level callbacks. The task does not execute test-file level callbacks or test-suite level callbacks.
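For example, a minimal sketch of adding and running the task, assuming pm is the padv.ProcessModel object defined in your processmodel.m file and that runprocess accepts a Tasks name-value argument:

% In processmodel.m, add the built-in task to the process model
pm.addTask(padv.builtin.task.RunTestsPerModel);
% Then, from the MATLAB Command Window, run only this task
runprocess(Tasks = "padv.builtin.task.RunTestsPerModel");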
Creation
Description
task = padv.builtin.task.RunTestsPerModel() creates a task for running the test cases associated with your models using Simulink Test.
task = padv.builtin.task.RunTestsPerModel(Name=Value) sets certain properties using one or more name-value arguments. For example, task = padv.builtin.task.RunTestsPerModel(Name = "MyRunTestsTask") creates a task with the specified name.
You can use this syntax to set property values for Name, Title, InputQueries, IterationQuery, InputDependencyQuery, Licenses, LaunchToolAction, and LaunchToolText.
The padv.builtin.task.RunTestsPerModel class also has other properties, but you cannot set those properties during task creation.
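For example, a short sketch of creating a renamed task instance and registering it, assuming pm is the padv.ProcessModel object in your process model file; the name and title values are illustrative:

% Create the task with a custom identifier and display title
task = padv.builtin.task.RunTestsPerModel(Name = "MyRunTestsTask", Title = "Run My Tests");
pm.addTask(task);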
Properties
The RunTestsPerModel class inherits properties from padv.Task. The properties listed in Specialized Inherited Properties are padv.Task properties that the RunTestsPerModel task overrides. The task also has properties for specifying Test Execution Options.
Specialized Inherited Properties
Name — Unique identifier for task in process
"padv.builtin.task.RunTestsPerModel" (default) | string
Unique identifier for task in process, specified as a string.
Example: "TestMyModels"
Data Types: string
Title — Human-readable name that appears in Process Advisor app
"Run Tests" (default) | string
Human-readable name that appears in the Process Advisor app, specified as a string.
Example: "Run My Tests"
Data Types: string
DescriptionText — Task description
"This task uses Simulink Test to run the test cases associated with your model. The task runs the test cases on a model-by-model basis. Certain tests may generate code." (default) | string
Task description, specified as a string.
When you point to a task in Process Advisor and click the information icon, the tooltip shows the task description.
Example: "This task uses Simulink Test to run the test cases associated with your model. The task runs the test cases on a model-by-model basis. Certain tests may generate code."
Data Types: string
DescriptionCSH — Path to task documentation
path to RunTestsPerModel documentation (default) | string
Path to task documentation, specified as a string.
When you point to a task in Process Advisor, click the ellipsis (...), and click Help, Process Advisor opens the task documentation.
Example: fullfile(pwd,"taskHelpFiles","myTaskDocumentation.pdf")
Data Types: string
RequiredIterationArtifactType — Artifact type that task can run on
"sl_model_file" (default) | ...
Type of artifact that the task can run on, specified as one or more of the values listed in this table. To specify multiple values, use an array.
Category | Artifact Type | Description
---|---|---
MATLAB | "m_class" | MATLAB class
 | "m_file" | MATLAB file
 | "m_func" | MATLAB function
 | "m_method" | MATLAB class method
 | "m_property" | MATLAB class property
Model Advisor | "ma_config_file" | Model Advisor configuration file
 | "ma_justification_file" | Model Advisor justification file
Process Advisor | "padv_dep_artifacts" | Related artifacts that current artifact depends on
 | "padv_output_file" | Process Advisor output file
Project | "project" | Current project file
Requirements | "mwreq_item" | Requirement (since R2024b)
 | | Requirement (for R2024a and earlier)
 | "sl_req_file" | Requirement file
 | "sl_req_table" | Requirements Table
Stateflow® | "sf_chart" | Stateflow chart
 | "sf_graphical_fcn" | Stateflow graphical function
 | "sf_group" | Stateflow group
 | "sf_state" | Stateflow state
 | "sf_state_transition_chart" | Stateflow state transition chart
 | "sf_truth_table" | Stateflow truth table
Simulink | "sl_block_diagram" | Block diagram
 | "sl_data_dictionary_file" | Data dictionary file
 | "sl_embedded_matlab_fcn" | MATLAB function
 | "sl_library_file" | Library file
 | "sl_model_file" | Simulink model file
 | "sl_protected_model_file" | Protected Simulink model file
 | "sl_subsystem" | Subsystem
 | "sl_subsystem_file" | Subsystem file
System Composer™ | "zc_block_diagram" | System Composer architecture
 | "zc_component" | System Composer architecture component
 | "zc_file" | System Composer architecture file
Tests | "harness_info_file" | Harness info file
 | "sl_harness_block_diagram" | Harness block diagram
 | "sl_harness_file" | Test harness file
 | "sl_test_case" | Simulink Test case
 | "sl_test_case_result" | Simulink Test case result
 | "sl_test_file" | Simulink Test file
 | "sl_test_iteration" | Simulink Test iteration
 | "sl_test_iteration_result" | Simulink Test iteration result
 | "sl_test_report_file" | Simulink Test result report
 | "sl_test_result_file" | Simulink Test result file
 | "sl_test_resultset" | Simulink Test result set
 | "sl_test_seq" | Test Sequence
 | "sl_test_suite" | Simulink Test suite
 | "sl_test_suite_result" | Simulink Test suite result
Example: "sl_model_file"
Example: ["sl_model_file "zc_file"]
IterationQuery — Find artifacts that task iterates over
padv.builtin.query.FindModelsWithTestCases (default) | padv.Query object | name of padv.Query object
Query that finds the artifacts that the task iterates over, specified as a padv.Query object or the name of a padv.Query object. When you specify IterationQuery, the task runs one time for each artifact returned by the query. In the Process Advisor app, the artifacts returned by IterationQuery appear under the task title.
For more information about task iterations, see Overview of Process Model.
Example: padv.builtin.query.FindModelsWithTestCases(ExcludePath = "Control")
InputDependencyQuery — Find artifact dependencies for task inputs
padv.builtin.query.GetDependentArtifacts (default) | padv.Query object | name of padv.Query object
Query that finds artifact dependencies for task inputs, specified as a padv.Query object or the name of a padv.Query object.
The build system runs the query specified by InputDependencyQuery to find the dependencies for the task inputs, since those dependencies can impact whether task results are up to date. For more information, see Overview of Process Model.
Example: padv.builtin.query.GetDependentArtifacts
Licenses — List of additional licenses that task requires
"simulink_test" (default) | string
List of additional licenses that the task requires, specified as a string.
Data Types: string
LaunchToolAction — Function that launches tool
@launchToolAction (default) | function handle
Function that launches a tool, specified as a function handle.
When you point to a task in the Process Advisor app, you can click the ellipsis (...) to see more options. For built-in tasks, you have the option to launch a tool associated with the task.
For the RunTestsPerModel task, you can launch the Simulink Test Manager.
Data Types: function_handle
LaunchToolText — Description of action that LaunchToolAction property performs
"Open Test Manager" (default) | string
Description of the action that the LaunchToolAction property performs, specified as a string.
Data Types: string
CISupportOutputsByTask — Type of CI-compatible result files that task generates when run
"JUnit" (default) | ""
Type of CI-compatible result files that the task itself generates when run, specified as either:
"JUnit" — JUnit-style XML report for task results.
"" — None. The build system generates a JUnit-style XML report for the task instead.
InputQueries — Inputs to task
[padv.builtin.query.GetIterationArtifact, padv.builtin.query.FindTestCasesForModel] (default) | padv.Query object | name of padv.Query object | array of padv.Query objects
Inputs to the task, specified as:
a padv.Query object
the name of a padv.Query object
an array of padv.Query objects
an array of names of padv.Query objects
By default, the task RunTestsPerModel gets the current model by using the built-in query padv.builtin.query.GetIterationArtifact and finds the tests associated with that model by using the built-in query padv.builtin.query.FindTestCasesForModel.
OutputDirectory — Location for standard task outputs
fullfile('$DEFAULTOUTPUTDIR$', '$ITERATIONARTIFACT$','test_results') (default) | string
Location for standard task outputs, specified as a string.
The built-in tasks use tokens, like $DEFAULTOUTPUTDIR$, as placeholders for dynamic path resolution during run-time. For more information, see Dynamically Resolve Paths with Tokens.
Data Types: string
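For example, a sketch that redirects the standard task outputs to a hypothetical my_test_results folder, keeping the tokens so that the path still resolves per iteration artifact at run-time:

% Redirect task outputs; tokens resolve dynamically when the task runs
runTestsTask.OutputDirectory = fullfile('$DEFAULTOUTPUTDIR$','$ITERATIONARTIFACT$','my_test_results');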
Author — Name of report author
"Process Advisor" (default) | string
Name of the report author, specified as a string.
Data Types: string
IncludeComparisonSignalPlots — Include signal comparison plots in report
false or 0 (default) | true or 1
Include the signal comparison plots in the report, specified as a numeric or logical 1 (true) or 0 (false).
When true, the report includes the signal comparison plots defined under baseline criteria, equivalence criteria, or assessments using the verify operator in the test case.
Example: true
Data Types: logical
IncludeCoverageResult — Include coverage metrics in report
true or 1 (default) | false or 0
Include coverage metrics that the test collects during test execution in the report, specified as a numeric or logical 1 (true) or 0 (false).
Example: false
Data Types: logical
IncludeErrorMessages — Include error messages from test case simulations in report
true or 1 (default) | false or 0
Include error messages from test case simulations in the report, specified as a numeric or logical 1 (true) or 0 (false).
Example: false
Data Types: logical
IncludeMATLABFigures — Include figures in report
false or 0 (default) | true or 1
Include the figures opened from a callback script, custom criteria, or by the model in the report, specified as a numeric or logical 1 (true) or 0 (false).
Example: true
Data Types: logical
IncludeMLVersion — Include MATLAB version information in report
true or 1 (default) | false or 0
Include the version of MATLAB that ran the test cases in the report, specified as a numeric or logical 1 (true) or 0 (false).
Example: false
Data Types: logical
IncludeSimulationMetadata — Include simulation metadata in report
false or 0 (default) | true or 1
Include simulation metadata for each test case or iteration in the report, specified as a numeric or logical 1 (true) or 0 (false).
Example: true
Data Types: logical
IncludeSimulationSignalPlots — Include simulation output plots for each signal in report
true or 1 (default) | false or 0
Include the simulation output plots for each signal in the report, specified as a numeric or logical 1 (true) or 0 (false).
Example: true
Data Types: logical
IncludeTestRequirement — Include test requirement link in report
true or 1 (default) | false or 0
Include the test requirement link, defined under Requirements in the test case, in the report, specified as a numeric or logical 1 (true) or 0 (false).
Example: false
Data Types: logical
IncludeTestResults — Include test results in report
0 (default) | 1 | 2
Include all or a subset of test results in the report, specified as either:
0 — Passed and failed results
1 — Only passed results
2 — Only failed results
Example: 2
LaunchReport — Open generated report
false or 0 (default) | true or 1
Open the generated report, specified as a numeric or logical 1 (true) or 0 (false).
Example: true
Data Types: logical
NumPlotColumnsPerPage — Number of columns of plots to include on report pages
1 (default) | 2 | 3 | 4
Number of columns of plots to include on report pages, specified as an integer 1, 2, 3, or 4.
Example: 4
NumPlotRowsPerPage — Number of rows of plots to include on report pages
2 (default) | 1 | 3 | 4
Number of rows of plots to include on report pages, specified as an integer 1, 2, 3, or 4.
Example: 4
ReportFormat — Format for generated report
"pdf" (default) | "docx" | "zip"
Format for the generated report, specified as either:
"pdf" — PDF format
"docx" — Microsoft® Word document format
"zip" — Zipped file that contains an HTML file, images, style sheet, and JavaScript® files for an HTML report
Example: "zip"
ReportPath — Path to generated report
string(fullfile('$DEFAULTOUTPUTDIR$', '$ITERATIONARTIFACT$','test_results')) (default) | string
Path to the generated report, specified as a string.
The built-in tasks use tokens, like $DEFAULTOUTPUTDIR$, as placeholders for dynamic path resolution during run-time. For more information, see Dynamically Resolve Paths with Tokens.
Data Types: string
ReportName — File name for generated report
"$ITERATIONARTIFACT$_Test" (default) | string
File name for the generated report, specified as a string.
The built-in tasks use tokens, like $ITERATIONARTIFACT$, as placeholders for dynamic path resolution during run-time. For more information, see Dynamically Resolve Paths with Tokens.
Data Types: string
ReportTitle — Title of report
"$ITERATIONARTIFACT$ REPORT" (default) | string
Title of the report, specified as a string.
The built-in tasks use tokens, like $ITERATIONARTIFACT$, as placeholders for dynamic path resolution during run-time. For more information, see Dynamically Resolve Paths with Tokens.
Data Types: string
ResultFileName — Name of test result file
"$ITERATIONARTIFACT$_ResultFile" (default) | string
Name of the test result file, specified as a string.
The built-in tasks use tokens, like $ITERATIONARTIFACT$, as placeholders for dynamic path resolution during run-time. For more information, see Dynamically Resolve Paths with Tokens.
Data Types: string
SaveResultsAfterRun — Save test results to file after execution
true or 1 (default) | false or 0
Save the test results to a file after execution, specified as a numeric or logical 1 (true) or 0 (false).
Example: false
Data Types: logical
SimulationMode — Simulation mode for running tests
"" (default) | "Normal" | "Accelerator" | "Rapid Accelerator" | "Software-in-the-Loop" | "Processor-in-the-Loop"
Since R2023a
Simulation mode for running tests, specified as "Normal", "Accelerator", "Rapid Accelerator", "Software-in-the-Loop", or "Processor-in-the-Loop".
By default, the property is empty (""), which means the built-in task uses the simulation mode that you define in the test itself. If you specify a value other than "", the built-in task overrides the simulation mode set in Simulink Test Manager. You do not need to update the test parameters or settings to run the test in the new mode.
Example: "Software-in-the-Loop"
UseMATLABUnitTest — Use MATLAB Unit Test framework to execute test cases
false (0) (default) | true (1)
Use MATLAB Unit Test framework to execute test cases, specified as either:
true (1) — The task runs your test cases by using the MATLAB Unit Test framework to create a test runner, create a suite of tests from your test file, and run the tests. If you use the pipeline generator, padv.pipeline.generatePipeline, and your pipeline generator options specify the GenerateJUnitForProcess property as true (1), the task uses the MATLAB unit test XML plugin to produce JUnit-style XML format test results that integrate into CI platforms.
false (0) — The task runs your test cases by using Simulink Test. Starting in R2023a, if you specify the task property SimulationMode, the task overrides the test simulation mode without you having to change the test definition.
Example: true
Data Types: logical
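For example, a sketch that switches a task instance to the MATLAB Unit Test framework; because UseMATLABUnitTest cannot be set during creation, set it on the object afterward (the runTestsTask variable name is illustrative):

% Create the task, then opt in to the MATLAB Unit Test framework
runTestsTask = padv.builtin.task.RunTestsPerModel;
runTestsTask.UseMATLABUnitTest = true;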
Methods
Specialized Public Methods
This class overrides the following inherited methods.
Method | Description
---|---
run | Run test cases for each model using Simulink Test. You do not need to manually invoke this method; when you run a task using the Process Advisor app or the runprocess function, the build system runs the task for you. Method signature: function taskResult = run(obj, input) ... end
dryRun | Dry run the task to validate task inputs and generate representative task outputs without actually running the task. Method signature: function taskResult = dryRun(obj, input) ... end
launchToolAction | Launch the Simulink Test Manager. Process Advisor uses this method when you open the tool associated with a task.
Examples
Add Task to Run Tests for Each Model
Add a task to your process that can run test cases for each model using Simulink Test.
Open the process model for your project. If you do not have a process model, open the Process Advisor app to automatically create a process model.
In the process model file, add the RunTestsPerModel task to your process model by using the addTask method.
runTestsPerModelTask = pm.addTask(padv.builtin.task.RunTestsPerModel);
You can reconfigure the task behavior by using the task properties. For example, to generate a zipped HTML report file instead of a PDF:
runTestsPerModelTask.ReportFormat = "zip";
If you want to use the MergeTestResults task to merge the test results, you need to reconfigure the input queries for the MergeTestResults task to get the outputs from the RunTestsPerModel task. By default, the MergeTestResults task only gets the current model and the outputs from the task RunTestsPerTestCase.
%% Merge Test Results from Running Tests per Model
mergeTestTask = pm.addTask(padv.builtin.task.MergeTestResults(...
    InputQueries = [...
        padv.builtin.query.GetIterationArtifact,...
        padv.builtin.query.GetOutputsOfDependentTask(Task = runTestsPerModelTask)]));
Since the MergeTestResults task now depends on outputs from the RunTestsPerModel task, you also need to explicitly specify those dependencies in the process model.
mergeTestTask.dependsOn(runTestsPerModelTask);
Run Tests in Multiple Simulation Modes
Since R2023a
Suppose that you want to have one instance of the RunTestsPerModel task that runs normal mode tests and another instance that runs software-in-the-loop (SIL) tests. You can create multiple instances of the task inside your process model and then use the SimulationMode property to override the simulation mode set in Simulink Test Manager.
Inside your process model, create multiple instances of the RunTestsPerModel task. When you create multiple instances of a task, you must specify a unique name for each task object. For example:
milTask = pm.addTask(padv.builtin.task.RunTestsPerModel(...
    Name = "RunTestsNormalMode"));
silTask = pm.addTask(padv.builtin.task.RunTestsPerModel(...
    Name = "RunTestsSILMode"));
The build system uses the Name property as the unique identifier for the task.
Reconfigure the task instances to run tests in different simulation modes. You can run tests in different simulation modes without having to change the test definition by using the SimulationMode property to override the mode. For example:
milTask.SimulationMode = "Normal";
silTask.SimulationMode = "Software-in-the-Loop";
To prevent task outputs from overwriting each other, reconfigure the names and locations of the task outputs by using the associated task properties. For example:
% Define the test results location (matches the default OutputDirectory of the task)
defaultTestResultPath = fullfile('$DEFAULTOUTPUTDIR$','$ITERATIONARTIFACT$','test_results');
% Specify normal mode outputs
milTask.OutputDirectory = defaultTestResultPath;
milTask.ReportName = '$ITERATIONARTIFACT$_Normal_Test';
milTask.ResultFileName = '$ITERATIONARTIFACT$_Normal_ResultFile';
% Specify SIL mode outputs
silTask.OutputDirectory = defaultTestResultPath;
silTask.ReportName = '$ITERATIONARTIFACT$_SIL_Test';
silTask.ResultFileName = '$ITERATIONARTIFACT$_SIL_ResultFile';
The built-in tasks use tokens, like $ITERATIONARTIFACT$, as placeholders for dynamic path resolution during run-time. For more information, see Dynamically Resolve Paths with Tokens.
By default, the MergeTestResults task only gets the current model and the outputs from the task padv.builtin.task.RunTestsPerTestCase.
If you want to merge the test results from these two task instances using the MergeTestResults task, you need to reconfigure the input queries for the MergeTestResults task to get the outputs from those task instances. For example:
%% Merge Test Results (Normal and SIL)
mergeTestTask = pm.addTask(padv.builtin.task.MergeTestResults(...
    InputQueries = [...
        padv.builtin.query.GetIterationArtifact,...
        padv.builtin.query.GetOutputsOfDependentTask(Task = "RunTestsNormalMode"),...
        padv.builtin.query.GetOutputsOfDependentTask(Task = "RunTestsSILMode")]));
Since the MergeTestResults task now depends on outputs from the two RunTestsPerModel task instances, you need to explicitly specify those dependencies in the process model.
mergeTestTask.dependsOn(milTask);
mergeTestTask.dependsOn(silTask);