Model Verification and Validation : Dynamic Model Analysis (Simulation)


Following the static analysis of the model, it is necessary to dynamically exercise the model to verify and validate that the system functions correctly. These tests typically start at decomposition level 2, when behavior (e.g. by means of Statecharts, Mini-Specs, Truth Tables) is captured.

Test vectors and test scenarios are derived directly from the written requirements. If additional requirements (“derived requirements”) are formulated during modeling, they must be documented, and the model must be checked against them.
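As an illustration of how a written requirement can be turned into a traceable test vector, the following Python sketch pairs a stimulus with its expected response; the requirement ID, signal names, and values are hypothetical and not taken from any particular project:

```python
from dataclasses import dataclass


@dataclass
class TestVector:
    """One stimulus/expected-response pair traced back to a requirement."""
    requirement_id: str  # hypothetical requirement ID scheme, e.g. "REQ-042"
    stimuli: dict        # input signal -> value to inject into the model
    expected: dict       # output signal -> expected value after simulation


def check(vector: TestVector, observed: dict) -> bool:
    """Compare the model's observed outputs against the expected response."""
    return all(observed.get(sig) == val for sig, val in vector.expected.items())


# Hypothetical example: an interval-wiper requirement
tv = TestVector("REQ-042",
                stimuli={"WIPER_SWITCH": "INTERVAL"},
                expected={"WIPER_MOTOR": "ON_SLOW"})
print(check(tv, {"WIPER_MOTOR": "ON_SLOW"}))  # True
```

Keeping the requirement ID in each vector preserves the traceability between requirements and tests that the process above relies on.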

The model can be exercised in an interactive mode in which the user injects stimuli into the model, controls the simulation of the system functions, and observes the model's response. In interactive mode, the code generated from the model is interpreted code, which allows the system to be debugged by stepping through it forwards and backwards.

In addition, the model can be executed in a batch mode by means of a script written in the Simulation Control Language (SCL), which is based on the Rational Statemate Action Language used to describe the behavior of the model. A simulation script can also be generated from a playback file recorded while interactively simulating the model. Such a script can serve as a baseline test that is easily modified to quickly create a set of tests, which later become part of a suite of regression tests.

Note: SCL scripts are test programs that are re-usable only within the Rational Statemate environment. For later re-use of test scenarios outside the Rational Statemate tool (e.g. in MicroC), test vectors should be generated from the SCL scripts.
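The idea of exporting a recorded scenario into tool-neutral test vectors can be sketched as follows; this is not Statemate functionality but an illustrative Python conversion, with a hypothetical trace format of (time, signal, value) change records flattened into one CSV row per time step:

```python
import csv
import io


def trace_to_vectors(trace_rows, signals):
    """Flatten a recorded trace of (time, signal, value) change records
    into per-timestep test vectors with one CSV column per signal."""
    steps = {}
    for t, sig, val in trace_rows:
        steps.setdefault(t, {})[sig] = val

    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["time"] + signals)
    last = {s: "" for s in signals}      # hold the last value between changes
    for t in sorted(steps):
        last.update(steps[t])
        writer.writerow([t] + [last[s] for s in signals])
    return out.getvalue()


# Hypothetical recorded scenario with two signals
trace = [(0, "IGNITION", 1), (5, "WIPER_MOTOR", "ON_SLOW")]
print(trace_to_vectors(trace, ["IGNITION", "WIPER_MOTOR"]))
```

A flat, columnar format of this kind is what downstream test equipment typically consumes, independently of the tool that produced the scenario.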

While executing the model in either interactive or batch mode, a waveform display of the model elements can be used to capture the history of changes in the model. These changes can also be captured textually in a trace file. After completion of the simulation, the trace file can be displayed as a waveform for easier analysis. An important use of the trace files and/or Waveform Viewer is to check test coverage. Where trace files become very large, a testbench chart should be used to record only the model information needed for the coverage analysis.
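The coverage check described above can be illustrated with a minimal Python sketch (not a Statemate feature): given the sequence of statechart states visited in a trace and the set of all defined states, it reports the coverage ratio and the states never reached. The state names are hypothetical:

```python
def state_coverage(trace_states, all_states):
    """Return the fraction of defined statechart states visited in a trace,
    plus a sorted list of the states that were never reached."""
    visited = set(trace_states) & set(all_states)
    missing = sorted(set(all_states) - visited)
    return len(visited) / len(all_states), missing


# Hypothetical trace: the ERROR state was never exercised
cov, missing = state_coverage(["IDLE", "RUN", "IDLE"],
                              ["IDLE", "RUN", "ERROR"])
print(cov, missing)
```

Unvisited states point directly at the boundary-case and failure-mode scenarios that still need to be written.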

Additionally, testbench charts and graphical panels are needed for model verification and validation.

Testbench charts are used to accomplish the following tasks:

Graphical panels are used to visualize the functionality of the model without having to look at the charts that describe the system's behavior. They should be considered another view of the SUD, used to communicate system understanding at a higher level of abstraction. This is particularly useful when talking with marketing departments, suppliers, or managers. Panels are often used as a Graphical User Interface (GUI) to the system, but they are also valuable as a graphical testing interface to the system.

Note: Graphical panels in Rational Statemate should not be viewed as photo-realistic. They are primarily engineering views. Much time can be wasted trying to produce a GUI with a high degree of realism. If photo-realism is a requirement, then a dedicated tool should be used.

The dynamic analysis starts with testing the normal operation of the system to ensure that the model functions as expected. Then boundary cases and failure mode operations should be analyzed. Ideally, the designer of the model should perform the normal-mode tests, while another engineer is tasked with the verification and validation of boundary cases and failure mode operations.

It is often helpful to hold a Peer Review in order to assure the designer that the system requirements were correctly interpreted.

In order to perform boundary case and failure mode analysis, the respective boundary cases and system failures must first be identified. A testbench chart should be created to check for incorrect operation in these modes. While the use of a testbench is not mandatory, it enables the checking of boundary cases and system failures under all operational aspects. These testbenches can also be re-used during system integration.
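The kind of check such a testbench performs can be sketched in Python (an illustrative monitor, not Statemate code): it watches a sampled output signal and reports the first sample that leaves its legal range. The signal samples and bounds are hypothetical:

```python
def boundary_monitor(samples, low, high):
    """Scan (time, value) samples of an output signal and return the first
    sample that leaves the legal range [low, high], or None if the signal
    stayed within bounds for the whole run."""
    for t, v in samples:
        if not (low <= v <= high):
            return (t, v)
    return None


# Hypothetical run: the output exceeds its limit of 10 at time step 1
violation = boundary_monitor([(0, 5), (1, 12)], low=0, high=10)
print(violation)  # (1, 12)
```

Running such a monitor alongside every scenario, rather than only in dedicated boundary tests, is what lets a testbench catch out-of-range behavior under all operational aspects.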

Testbench charts might also be used to record the inputs and outputs of each module/sub-system in a format needed later in the test equipment.

The following figure shows an example from an automotive application:

Test Pattern Generation and Re-Use of Tests (“Unit Tests” Automotive)


In a first step, the SUD is executed in playback mode, and trace files / waveforms are generated for coverage analysis. Once a scenario is approved for later re-use, the respective SCL file is re-played to record data via the testbench chart for later re-use in the HP ECUTEST test hardware. The recorded test vectors become part of the model documentation (i.e. the SRS).