Smart Impact app
The Smart Impact app may be used to assess the impact of support packs or custom development changes. It identifies an optimal, or most-at-risk, set of executables which, when tested, will exercise each of the changing objects. The app returns most-at-risk objects that are in the same application area as the impacting changing objects.
The Smart Impact app writes an entry to the Data > Impact Analyses folder, so that its results are available to other analyses.
The depth used by the app to search for referenced objects may be set by an Administrator in the Configuration - Impact Analysis screen’s ImpactAnalysisDepth field. If you don't set this field, the app uses 10 as the default value.
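To illustrate what the search depth controls, a depth-limited traversal over an object dependency graph might look like the following sketch. The graph and function names are illustrative only, not LiveCompare's implementation.

```python
from collections import deque

def find_referenced_objects(graph, start, max_depth=10):
    """Breadth-first search over an object dependency graph, stopping
    at max_depth (mirrors the idea behind ImpactAnalysisDepth)."""
    seen = {start: 0}          # object -> depth at which it was found
    queue = deque([start])
    while queue:
        obj = queue.popleft()
        depth = seen[obj]
        if depth == max_depth:
            continue           # don't expand beyond the configured depth
        for ref in graph.get(obj, []):
            if ref not in seen:
                seen[ref] = depth + 1
                queue.append(ref)
    return seen

# Hypothetical dependency chain: A -> B -> C -> D
graph = {"A": ["B"], "B": ["C"], "C": ["D"]}
print(find_referenced_objects(graph, "A", max_depth=2))  # D is beyond depth 2
```

A larger depth finds more distant dependencies at the cost of a longer search, which is why the default is capped at 10.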
The Smart Impact app may be integrated with a Test Repository to identify hits, gaps and known gaps.
- Hits are most-at-risk object names for which test assets have been found.
- Gaps are most-at-risk object names for which there are no available test assets.
- Known gaps are most-at-risk objects that aren’t expected to have tests in the specified Test Repository. These are set in the Pipeline’s Known Test Gaps External Data Source.
If required, the gap objects may be used to create test requirements and test execution lists in the specified Test Repository.
If a Comparison system is specified, the app may be configured to compare impactful changing objects and table data on the Analysis and Comparison systems.
The app is provided with one or more of the following on the SAP system to be analyzed.
- A set of transports
- A set of objects
- A set of ChaRM change requests
These provide the Smart Impact app with a set of changing objects. The app identifies an optimal or most-at-risk set of used executables which when tested will exercise each of the changing objects. It also identifies the screens, users and roles impacted by the changing objects. The analysis is driven by the changing objects and by a set of used objects obtained from a Performance History system. Typically, this will be your production system. Note that table content changes aren’t processed when identifying the most-at-risk objects.
- Only changing Function (FUNC) objects that are remote-enabled are considered used.
- Only changing Class (CLAS) objects that call Web Services are considered used.
By default, the Smart Impact app reports the following object types as used, impacted and most-at-risk.
| Object type | Description |
|---|---|
| WAPA | BSP Application |
| IWSV | Gateway Service |
| PROG | Program |
| FUNC | Function |
| TCOD | Transaction |
However, a user with Administrator permissions may customize these defaults as follows:
- Select the Administration > Configuration > Impact Analysis folder in the LiveCompare hierarchy.
- In the TypesToFind section, deselect the object types to be excluded from the used, impacted and most-at-risk results. Note that if no items are selected, all the above object types will be used as the default value.
- Click Save to save your changes.
These changes affect all subsequent runs of the Smart Impact app and Smart Impact Analysis workflow, but they don't affect any existing app or workflow results.
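The defaulting rule for TypesToFind can be sketched as follows (an illustration of the behavior described above, not LiveCompare's code):

```python
# The five object types reported by default, per the table above.
DEFAULT_TYPES = {"WAPA", "IWSV", "PROG", "FUNC", "TCOD"}

def effective_types(selected):
    """If no object types are selected in TypesToFind, all of the
    default object types are used; otherwise only the selection."""
    return set(selected) if selected else set(DEFAULT_TYPES)

print(effective_types([]))              # empty selection -> all five defaults
print(effective_types(["PROG", "TCOD"]))
```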
The app produces a Dashboard report and an associated Excel report.
DevOps categories
Testing
Parallel impact analysis
You can run the Smart Impact app in parallel with other impact analysis apps and workflows.
Prerequisites
We recommend that you configure the Daily FOL workflow to run each night. This workflow maintains a database of SAP object dependencies for your Analysis system.
The Smart Impact app uses a Pipeline to identify:
- The Analysis, Comparison, Usage and SAP Solution Manager systems.
- An External Data Source with TYPE and NAME columns containing a list of business critical objects. These objects are identified as most-at-risk if they are impacted by one or more changing objects. If the Business Critical Objects field in the Pipeline is not set, the Smart Impact app uses the Business Critical Objects External Data Source.
- One or more Most-at-risk Search Test Repositories that will be searched to find test assets that match the most-at-risk executables.
- A Most-at-Risk Gaps Test Repository, in which test requirements are to be created for most-at-risk objects that don't have matching test assets in the specified search Test Repositories.
- One or more Most-at-risk Hits Execution Test Repositories, in which tests that match most-at-risk objects are to be created and executed.
Tester Business Critical
If the Pipeline’s Tester Business Critical checkbox is checked, the Pipeline’s Search Test Repositories are searched to find used objects that have associated tests. These objects are treated as business critical, so that impacted objects that have associated tests will be identified as most-at-risk.
Disable Table Content Analysis
If the Pipeline’s Disable Table Content Analysis checkbox is checked, table content changes will be excluded from the changing objects used in this app. If it is unchecked, table content changes will be included.
Before the Smart Impact app is run, you must create a Pipeline that includes the RFC Destinations and Test Repositories to be used in the analysis.
If a support pack or transport has not been applied to the Analysis system, it must be disassembled before the Smart Impact app can analyze it. This can be done in SAP by running the SAINT transaction and selecting Disassemble OCS Package from the Utilities menu. Alternatively, the support pack or transport may be disassembled in LiveCompare using the Package Disassembler app.
The app requires that SAP’s Where Used indexes be up to date on the Analysis system.
A LiveCompare Editor will need to run the Create Object Links Cache workflow from the Prerequisites templates folder to create an object links cache database for the Analysis System in the Pipeline. A system’s object links cache database should be no older than 7 days; its run date may be checked in the RFC Destination’s OLC tab. The Create Object Links Cache workflow may be run incrementally to update the object links cache database with any recent object dependency changes, and to refresh its run date.
A LiveCompare Editor will need to make sure that performance history data is available on the Usage System specified in the Pipeline. Select the RFC Destination in the LiveCompare hierarchy and click the PHD tab. Select the source for performance history data, set a collection schedule, click Save, and then click Update Cache. See the Retrieve performance history data help topic for details.
If you plan to use the app to identify test hits and gaps in a Test Repository, a LiveCompare Editor must first run the Create Test Repository Cache workflow from the Prerequisites templates folder to populate the Test Repository’s cache.
If you plan to use the app to identify impacted IDocs, a LiveCompare Editor must run the Cache IDoc Impact Data workflow from the Prerequisites templates folder in order to populate the IDoc Impact Cache External Data Source.
If required, the Business Critical Objects External Data Source should be populated with a set of business critical objects that are included in the set of most-at-risk executables if they are impacted by one or more changing objects. The External Data Source is populated from a .CSV file with TYPE and NAME columns. Use the External Data Source’s ‘Replace Data File’ option in the LiveCompare studio to upload your own .CSV file. Note that the Business Critical Objects External Data Source is not used if a different External Data Source is specified in the Pipeline’s Business Critical Objects field.
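For example, a minimal .CSV file for the Business Critical Objects External Data Source might look like this (the object names are illustrative; VA01 is a standard transaction, and ZSD_ORDER_REPORT is a hypothetical custom program):

```csv
TYPE,NAME
TCOD,VA01
PROG,ZSD_ORDER_REPORT
```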
- The ChangingObjectsToIgnore External Data Source removes tables whose names begin with ENH or AGR from the set of changing objects.
- The TransportsToIgnore External Data Source contains regular expressions which are used to filter out transports containing custom objects.
- The External Data Source contains used objects that are to be ignored during the analysis.
If required, these External Data Sources may be edited in the LiveCompare studio using the ‘Replace Data File’ option.
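The filtering behavior of these data sources can be sketched as follows. The regular expression and the object names are illustrative assumptions, not the shipped defaults.

```python
import re

# Illustrative stand-ins for the ChangingObjectsToIgnore and
# TransportsToIgnore data sources described above.
IGNORED_TABLE_PREFIXES = ("ENH", "AGR")
TRANSPORT_IGNORE_PATTERNS = [re.compile(r"^DEVK9\d+$")]  # example pattern only

def filter_changing_tables(tables):
    """Drop tables whose names begin with ENH or AGR."""
    return [t for t in tables if not t.startswith(IGNORED_TABLE_PREFIXES)]

def filter_transports(transports):
    """Drop transports that match any ignore pattern."""
    return [t for t in transports
            if not any(p.match(t) for p in TRANSPORT_IGNORE_PATTERNS)]

print(filter_changing_tables(["ENHLOG", "AGR_USERS", "VBAK"]))  # ['VBAK']
print(filter_transports(["DEVK900123", "PRDK000001"]))
```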
Express and Standard modes
The Smart Impact app runs in either Express or Standard mode, depending on whether its changing objects all exist in the database of SAP object dependencies created by the Create Object Links Cache workflow.
- If the Smart Impact app’s changing objects are all in the dependencies database, it runs in Express mode. The app’s Find Object Links (Read Only) action reads the dependencies database directly and doesn’t need to connect to SAP to find any object dependencies.
- If the Smart Impact app’s changing objects aren’t all in the dependencies database, it runs in Standard mode. The app’s Find Object Links (Read Only) action connects to SAP to find the missing dependencies.
The Smart Impact app typically runs much faster in Express mode.
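The mode decision described above can be sketched as a simple cache-membership check (object names are hypothetical):

```python
def choose_mode(changing_objects, dependency_cache):
    """Express mode if every changing object is already in the
    dependencies database; otherwise Standard mode, which must
    connect to SAP for the missing dependencies."""
    missing = [o for o in changing_objects if o not in dependency_cache]
    return ("Express", []) if not missing else ("Standard", missing)

cache = {"ZPROG_A", "ZFUNC_B"}
print(choose_mode(["ZPROG_A", "ZFUNC_B"], cache))  # ('Express', [])
print(choose_mode(["ZPROG_A", "ZFUNC_C"], cache))  # Standard: ZFUNC_C missing
```

This is why keeping the object links cache fresh (no older than 7 days) matters: the fewer objects missing from the cache, the more often the app can stay in the faster Express mode.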
Run the app
To run the Smart Impact app, select the app from the Apps screen and create an app variant. Complete the Variant screen as follows:
- Set the Pipeline field to the Pipeline that contains the RFC Destinations, Test Repositories and Business Critical Objects External Data Source to be used by the app.
- If required, edit the Transports table to provide a list of transports to be analyzed.
- If required, edit the ChaRM Change Requests string list to provide a list of ChaRM change requests to be analyzed.
- If required, edit the Objects table to provide a list of objects to be analyzed.
- Set the Compare ABAP? switch to specify whether the most-at-risk objects will be compared on the Analysis and Comparison systems.
- Set the Compare Data? switch to specify whether the table keys for changing tables will be compared on the Analysis and Comparison systems. If this switch is set, up to 20 table keys will be compared for each changing table.
- If required, edit the Execution List Configuration table to provide one or more configuration entries to be associated with the test execution in Tosca Test Repositories, for example /Configurations/Environment1/TDS.
- Turn on the Cross Reference switch to generate a Cross Reference spreadsheet in the app’s Function Details and Testing Details reports. The Cross Reference spreadsheet lists all the impacted or most-at-risk executables for each impactful changing object. Note that generating this spreadsheet may significantly increase the app’s run time if there are many objects to analyze.
Click Run. When the variant has completed, its results may be accessed from the App Cockpit screen.
App results
The Smart Impact app generates the following reports:
Smart Impact Analysis Dashboard
The Smart Impact app generates a Dashboard which includes the following charts:
- The Used, Impacted & Most-at-risk column chart provides a summary of the number of custom and standard used, impacted and most-at-risk objects.
- The Most-at-risk & Test Coverage doughnut chart provides a summary of the number of hits, gaps and known gaps found for the most-at-risk objects in the specified Test Repository.
- The Changing Object Summary doughnut chart summarizes the changing objects by their change type.
- The Most-at-risk & Test Coverage by Type column chart summarizes by type the most-at-risk objects, and the objects with hits in the specified Test Repository.
- The Top 5 Application Areas bar chart lists the top 5 Application Areas, in terms of the number of most-at-risk objects in each Application Area.
- The All, Covering and Optimal Tests column chart lists the number of found tests in each Application Area, the number of tests that cover at least one most-at-risk object, and the optimal number of tests that cover each of the most-at-risk objects.
- Dashboard tiles display the date of the analysis, the name of the Analysis system, the name of the Performance History system including the date range for which performance history data was obtained, and the name of the Test Repository that was searched to obtain matching test assets.
The Dashboard’s Additional Resources section includes links to the following Excel reports:
Function Details report
The Function Details Excel report includes the following spreadsheets:
Dashboard
The Dashboard spreadsheet includes the following charts:
- The Used, Impacted & Most-at-risk column chart provides a summary of the number of custom and standard used, impacted and most-at-risk objects.
- The Most-at-risk & Test Coverage doughnut chart provides a summary of the number of hits, gaps and known gaps found for the most-at-risk objects in the specified Test Repository.
- The Changed Objects Summary doughnut chart summarizes the changed objects by their change type.
- The Most-at-risk & Test Coverage by Type column chart summarizes by type the most-at-risk objects, and the objects with hits in the specified Test Repository.
- The Top 5 Application Areas bar chart lists the top 5 Application Areas, in terms of the number of most-at-risk objects in each Application Area.
- The All, Covering and Optimal Tests column chart lists the number of found tests in each Application Area, the number of tests that cover at least one most-at-risk object, and the optimal number of tests that cover each of the most-at-risk objects.
- Dashboard tiles display the date of the analysis, the name of the Analysis system, the name of the Performance History system including the date range for which performance history data was obtained, and the name of the Test Repository that was searched to obtain matching test assets. The Dashboard spreadsheet also shows the number of change IDs and changed objects.
Home
The Home spreadsheet provides a summary view of the tests found during the analysis, grouped by Application Area. It has the following columns:
APP_AREA
The name of the Application Area in which the objects were found. (None) is used for objects that do not have an Application Area.
NOT_IMPACTED
The number of used objects in the Application Area that aren’t impacted by a changing object.
IMPACTED
The number of used objects in the Application Area that are impacted by a changing object, but not most-at-risk.
MOST_AT_RISK
The number of used objects in the Application Area that are impacted and most-at-risk; these are recommended for testing.
TEST_HITS
The number of most-at-risk objects in the Application Area that are covered by at least one test in the Pipeline’s Most-at-risk Test Repository.
TEST_GAPS
The number of most-at-risk objects in the Application Area that aren’t covered by any tests in the Pipeline’s Most-at-risk Test Repository.
IMPACTFUL_CHANGES
A count of the distinct impacting objects for each Application Area’s most-at-risk objects.
App Area Details
This spreadsheet lists the most-at-risk, impacted and not impacted objects, grouped by Application Area.
It includes:
- Impacted objects with one or more impactful changes (these objects are marked as most-at-risk).
- Impacted objects with no impactful changes.
- Used objects with no impactful changes.
The spreadsheet has the following columns:
APP_AREA
The name of the Application Area in which the objects were found. (None) is used for objects that do not have an Application Area.
TYPE
The type of an object.
NAME
The name of the object.
STATUS
The status of the object, either Most-at-risk, Impacted or Not Impacted.
RISK
The risk value of the object, either H for high risk, M for medium risk, or L for low risk. The risk values are based on the depth of the impact and frequency of use of the object.
IMPACTFUL_OBJECTS
This column displays the number of changing objects that impact the object. New most-at-risk objects (that have no impactful objects and a usage count of 0) are set to have themselves as a single impactful object. Select a hyperlink in this column to display the impacting objects in the Impactful Objects spreadsheet.
DESCRIPTION
The description for the object in the NAME column.
TESTS
The number of optimal tests chosen for the object in the NAME column in the specified Most-at-Risk Search Test Repository. Select a hyperlink in this column to display the matching tests in the Test Hit Details spreadsheet.
USAGE
The usage count for the object in the NAME column, according to the data obtained from your Pipeline’s Usage system.
USERS
The number of users of the object in the NAME column. Select a hyperlink in this column to display the users in the Impacted Users spreadsheet.
CUSTOM
This column has the value Y for custom used, impacted and most-at-risk objects.
BUSINESS_CRITICAL
This column has the value Y for objects included in the Business Critical Objects External Data Source.
Impactful Objects
This spreadsheet lists the changing objects introduced by the transports, ChaRM change requests or objects analyzed by the Smart Impact app or workflow. It has the following columns:
CHANGE_ID
The transport or ChaRM change request that includes the impacting object. This column has the value Objects if you specified a list of objects.
CHILD_TYPE
The type of the impacting changing object.
CHILD_NAME
The name of the impacting changing object. Select a hyperlink to display comparison details for the selected object.
CHANGE_STATE
If you specified a Comparison system and set the Compare ABAP? switch, this column lists the comparison status for the object on the Analysis and Comparison systems specified in your Pipeline. Select a hyperlink in this column to display comparison details for the selected object.
DEPTH
The search depth at which the used impacted object was found.
TYPE
The type of a used impacted object.
NAME
The name of the used impacted object.
DYNP
The number of impacted screens for each used impacted object. Select a hyperlink in this column to display the CHILD_NAME object in the Impacted DYNPs spreadsheet.
Cross Reference
This spreadsheet lists all the impacted or most-at-risk executables for each impactful changing object. LiveCompare populates it if you set the Cross Reference switch in the Smart Impact app variant or workflow. The Cross Reference spreadsheet is empty if there are no impacted objects, or if all most-at-risk objects are ‘New’. This spreadsheet has the following columns:
APP_AREA
The Application Area of the object in the NAME column.
TYPE
The type of an impacted or most-at-risk executable.
NAME
The name of the impacted or most-at-risk executable.
USAGE
The usage count for the impacted or most-at-risk executable.
DEPTH
The search depth at which LiveCompare found the object in the CHILD_TYPE and CHILD_NAME column.
CHILD_TYPE
The type of a changing object that impacts the impacted or most-at-risk executable.
CHILD_NAME
The name of the impactful changing object.
Impacted DYNPs
This spreadsheet lists the details for impacted screens. It has the following columns.
CHILD_TYPE
The type of a changing object.
CHILD_NAME
The name of the changing object.
NAME
The name of a used impacted object. Select a hyperlink in this column to display tests that include the object in the Test Hit Details spreadsheet.
DYNP_PROG
The used impacted object’s associated screen’s program.
DYNP_NUM
The used impacted object’s associated screen’s number.
DTXT
The used impacted object’s associated screen’s description.
Impacted Users
This spreadsheet lists the users who ran each impacted object. It has the following columns:
TYPE
The type of an impacted object.
NAME
The name of the impacted object.
COUNT
The usage count for the impacted object according to the data obtained from your Pipeline’s Usage system.
ACCOUNT
The user of the impacted object according to the data obtained from your Pipeline’s Usage system.
Test Hits & Gaps
This spreadsheet indicates whether each most-at-risk object is a Hit, Gap or Known gap in the specified Most-at-Risk Search Test Repository.
- Hits are most-at-risk object names for which test assets have been found.
- Gaps are most-at-risk object names for which there are no available test assets.
- Known gaps are most-at-risk objects that aren’t expected to have tests in the specified Test Repository. You set these in your Pipeline’s Known Test Gaps External Data Source.
The spreadsheet has the following columns:
NAME
The name of a most-at-risk object.
TEST_COVERAGE
The most-at-risk object’s test coverage, either Hit, Gap or KnownGap.
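The Hit/Gap/KnownGap classification described above can be sketched as a simple rule. This is an illustration; in particular, the precedence of Hit over KnownGap when both apply is an assumption, not something the documentation states.

```python
def classify_coverage(obj, objects_with_tests, known_gaps):
    """Classify a most-at-risk object's test coverage.
    objects_with_tests: objects for which test assets were found.
    known_gaps: objects listed in the Known Test Gaps data source."""
    if obj in objects_with_tests:
        return "Hit"                       # a matching test asset exists
    if obj in known_gaps:
        return "KnownGap"                  # no test expected for this object
    return "Gap"                           # untested and unexpected

print(classify_coverage("VA01", {"VA01"}, set()))        # Hit
print(classify_coverage("VA02", set(), {"VA02"}))        # KnownGap
print(classify_coverage("ME21N", set(), set()))          # Gap
```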
Test Hit Details
This spreadsheet includes details for the Hits in your Pipeline’s Most-at-Risk Search Test Repository. A Hit is a test asset that matched at least one most-at-risk object. If you access the spreadsheet from a hyperlink, it displays the details for the linked test asset. The spreadsheet has the following columns:
APP_AREA
The tested object’s Application Area.
TEST_REPOSITORY
The Most-at-Risk Search Test Repository that contains a matching test asset.
TEST_REPOSITORY_TYPE
The Test Repository’s type.
TESTED_OBJECT
The name of the most-at-risk object that matched a test asset.
CONFIDENCE
For Tosca, qTest, ALM and SAP Solution Manager Test Repositories, the percentage of search terms for which LiveCompare found a matching test asset. If LiveCompare matches a search term with a technical name in a test, the CONFIDENCE value is set to 100.
COMMON_TERMS
For Tosca, qTest, ALM and SAP Solution Manager Test Repositories, a list of all the SEARCH_TERMS matched in the test asset.
TEST_ID
The ID of a test that covers the tested object.
TEST_NAME
The name of a test that covers the tested object.
TEST_LIST_ID
For Tosca, qTest and ALM Test Repositories, the ID of an execution list or test set.
TEST_LIST_NAME
For Tosca, qTest and ALM Test Repositories, the name of an execution list or test set.
TEST_PATH
The test asset’s path. Note that if a matched token contains path separators, these will be escaped and stored as \/ in the test path.
TEST_LIST_PATH
For Tosca, qTest and ALM Test Repositories, the path of an execution list or test set.
RANK
The test’s rank, either H (High), M (Medium) or L (Low), based on how recently it was last run, its passes and fails, the number of runs per day, and the number of test steps. You should prioritize more highly ranked tests over tests with a lower rank.
WORKSTATE
For Tosca Test Repositories, this column stores the workstate associated with the test asset.
TEST_URL
The test asset’s URL.
TEST_TYPE
This column isn't used.
HAS_DATA
This column is set to Y for Tosca test cases that match affected data, or to <blank> for test cases that do not match affected data. Affected data is defined by the key fields of table rows that are different, in the Analysis system only (added data), or in the Comparison system only (deleted data). If any conversion exit routines are available for the key field values on the Analysis system, LiveCompare applies these to the key field values before searching for matching test cases. LiveCompare doesn’t apply conversion exit routines if there are no key field values.
STATUS
This column has the value Covering if the test covers the tested object, or Optimal if LiveCompare identifies the test as optimal. LiveCompare identifies tests as optimal based on the number of most-at-risk objects they cover, and the usage counts of the most-at-risk objects.
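The documentation states that optimal tests are chosen from the number of most-at-risk objects they cover and those objects' usage counts. One plausible way to sketch such a selection is a usage-weighted greedy set cover; this is an illustration under that assumption, not LiveCompare's actual algorithm, and all test and object names are hypothetical.

```python
def pick_optimal_tests(test_coverage, usage):
    """Greedy weighted set cover: repeatedly pick the test whose
    still-uncovered objects have the highest total usage count.
    test_coverage: {test_name: set of most-at-risk objects it covers}
    usage: {object_name: usage count}"""
    uncovered = set(usage)
    optimal = []
    while uncovered:
        best = max(test_coverage,
                   key=lambda t: sum(usage[o] for o in test_coverage[t] & uncovered))
        gained = test_coverage[best] & uncovered
        if not gained:
            break              # remaining objects are test gaps
        optimal.append(best)
        uncovered -= gained
    return optimal, uncovered  # optimal tests, objects left uncovered

tests = {"T1": {"VA01", "VA02"}, "T2": {"VA02"}, "T3": {"ME21N"}}
usage = {"VA01": 500, "VA02": 120, "ME21N": 40}
print(pick_optimal_tests(tests, usage))  # (['T1', 'T3'], set())
```

Under this sketch, T2 is Covering but not Optimal, because T1 already covers VA02 along with the heavily used VA01.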
Test Data
If you have specified a Comparison system and set the Compare Data? switch, this spreadsheet lists the SAP tables referenced in tests found for the most-at-risk executables in the specified Most-at-Risk Search Test Repositories. The spreadsheet has the following columns:
TEST_REPOSITORY_TYPE
The Most-at-Risk Search Test Repository’s type.
TEST_REPOSITORY
The Test Repository’s name.
TEST_NAME
The name of a test that matches a most-at-risk executable.
TABLE_NAME
The name of an SAP table referenced by the test.
TEST_ID
The ID of the matching test.
Help
This spreadsheet provides help for each of the spreadsheet reports.
Testing Details report
The Testing Details Excel report includes the following spreadsheets:
Dashboard
The Dashboard spreadsheet includes the following charts:
- The Used, Impacted & Most-at-risk column chart provides a summary of the number of custom and standard used, impacted and most-at-risk objects.
- The Most-at-risk & Test Coverage doughnut chart provides a summary of the number of hits, gaps and known gaps found for the most-at-risk objects in the specified Test Repository.
- The Changing Object Summary doughnut chart summarizes the changed objects by their change type.
- The Most-at-risk & Test Coverage by Type column chart summarizes by type the most-at-risk objects, and the objects with hits in the specified Test Repository.
- The Top 5 Application Areas bar chart lists the top 5 Application Areas, in terms of the number of most-at-risk objects in each Application Area.
- The All, Covering and Optimal Tests column chart lists the number of found tests in each Application Area, the number of tests that cover at least one most-at-risk object, and the optimal number of tests that cover each of the most-at-risk objects.
- Dashboard tiles display the date of the analysis, the name of the Analysis system, the name of the Performance History system including the date range for which performance history data was obtained, and the name of the Test Repository that was searched to obtain matching test assets. The Dashboard spreadsheet also shows the number of change IDs and changed objects.
Home
The Home spreadsheet provides a summary view of the tests found during the analysis, grouped by Application Area. It has the following columns:
APP_AREA
The Application Area name. Select a hyperlink in this column to display the Application Area in the App Area Details spreadsheet.
ALL
The number of tests found for the Application Area.
COVERING
The total number of tests that cover at least one most-at-risk object in the Application Area, including tests identified as optimal.
OPTIMAL
The number of tests identified as optimal for the Application Area. LiveCompare identifies tests as optimal based on the number of most-at-risk objects they cover, and the usage counts of the most-at-risk objects.
TEST_GAPS
The number of most-at-risk objects that don’t have tests in specified Most-at-Risk Search Test Repositories.
App Area Details
This spreadsheet lists the most-at-risk objects that have matching tests in the specified Most-at-risk Search Test Repository, grouped by the most-at-risk objects’ Application Area. The spreadsheet has the following columns:
APP_AREA
The name of the Application Area in which the objects were found. (None) is used for objects that do not have an Application Area.
TEST_REPOSITORY_TYPE
The type of the Test Repository where LiveCompare found a matching test.
TEST_REPOSITORY
The name of the Test Repository.
TEST_NAME
The name of the matching test.
STATUS
This column has the value Covering if the test covers the tested object, or Optimal if LiveCompare identifies the test as optimal. LiveCompare identifies tests as optimal based on the number of most-at-risk objects they cover, and the usage counts of the most-at-risk objects.
RISK
The risk value of the tested object, either H for high risk, M for medium risk, or L for low risk. The risk values are based on the depth of the impact and frequency of use of the object.
TEST_DATA
The number of SAP tables referenced by the matching test, as shown in the Test Data spreadsheet.
TESTED_OBJECTS
The number of objects covered by the test. Select a hyperlink in this column to display the objects in the Test Hit Details spreadsheet.
TEST_PATH
The matching test’s path.
TEST_ID
The matching test’s ID.
Test Data
If you have specified a Comparison system and set the Compare Data? switch, this spreadsheet lists the SAP tables referenced in tests found for the most-at-risk executables in the specified Most-at-Risk Search Test Repositories. The spreadsheet has the following columns:
TEST_REPOSITORY_TYPE
The Most-at-Risk Search Test Repository’s type.
TEST_REPOSITORY
The Test Repository’s name.
TEST_NAME
The name of a test that matches a most-at-risk executable.
TABLE_NAME
The name of an SAP table referenced by the test.
TEST_ID
The ID of the matching test.
Test Hit Details
This spreadsheet includes details for the Hits in your Pipeline’s Most-at-Risk Search Test Repository. A Hit is a test asset that matched at least one most-at-risk object. If you access the spreadsheet from a hyperlink, it displays the details for the linked test asset. The spreadsheet has the following columns:
APP_AREA
The tested object’s Application Area.
TEST_REPOSITORY_TYPE
The type of a Most-at-risk Search Test Repository that contains a matching test asset.
TEST_REPOSITORY_NAME
The name of the Test Repository.
TEST_NAME
The name of a test that covers the tested object.
STATUS
This column has the value Covering if the test covers the tested object, or Optimal if LiveCompare identifies the test as optimal. LiveCompare identifies tests as optimal based on the number of most-at-risk objects they cover, and the usage counts of the most-at-risk objects.
RANK
The test’s rank, either H (High), M (Medium) or L (Low), based on how recently it was last run, its passes and fails, the number of runs per day, and the number of test steps. You should prioritize more highly ranked tests over tests with a lower rank.
TESTED_OBJECT
The name of the most-at-risk object that matched a test asset.
RISK
The risk value of the tested object, either H for high risk, M for medium risk, or L for low risk. The risk values are based on the depth of the impact and frequency of use of the object.
CHANGED_OBJECTS
The number of changed objects that the test covers. Select a hyperlink in this column to display the changed objects in the Changes spreadsheet.
TEST_PATH
The test asset’s path. Note that if a matched token contains path separators, these will be escaped and stored as \/ in the test path.
TEST_ID
The ID of a test that covers the tested object.
TEST_LIST_PATH
For Tosca, qTest and ALM Test Repositories, the path of an execution list or test set.
TEST_LIST_NAME
For Tosca, qTest and ALM Test Repositories, the name of an execution list or test set.
TEST_LIST_ID
For Tosca, qTest and ALM Test Repositories, the ID of an execution list or test set.
CONFIDENCE
For Tosca, qTest, ALM and SAP Solution Manager Test Repositories, the percentage of search terms for which LiveCompare found a matching test asset. If LiveCompare matches a search term with a technical name in a test, the CONFIDENCE value is set to 100.
COMMON_TERMS
For Tosca, qTest, ALM and SAP Solution Manager Test Repositories, a list of all the SEARCH_TERMS matched in the test asset.
WORKSTATE
For Tosca Test Repositories, this column stores the workstate associated with the test asset.
TEST_URL
The test asset’s URL.
TEST_TYPE
This column isn't used.
HAS_DATA
This column is set to Y for Tosca test cases that match affected data, or to <blank> for test cases that do not match affected data. Affected data is defined by the key fields of table rows that are different, in the Analysis system only (added data), or in the Comparison system only (deleted data). If any conversion exit routines are available for the key field values on the Analysis system, LiveCompare applies these to the key field values before searching for matching test cases. LiveCompare doesn’t apply conversion exit routines if there are no key field values.
Test Hits & Gaps
This spreadsheet indicates whether each most-at-risk object is a Hit, Gap or Known gap in the specified Most-at-Risk Search Test Repository.
- Hits are most-at-risk object names for which test assets have been found.
- Gaps are most-at-risk object names for which there are no available test assets.
- Known gaps are most-at-risk objects that aren’t expected to have tests in the specified Test Repository. You set these in your Pipeline’s Known Test Gaps External Data Source.
The spreadsheet has the following columns:
NAME
The name of a most-at-risk object.
TEST_COVERAGE
The most-at-risk object’s test coverage, either Hit, Gap or KnownGap.
Cross Reference
This spreadsheet lists all the impacted or most-at-risk executables for each impactful changing object. LiveCompare populates it if you set the Cross Reference switch in the Smart Impact app variant or workflow. The Cross Reference spreadsheet is empty if there are no impacted objects, or if all most-at-risk objects are ‘New’. This spreadsheet has the following columns:
APP_AREA
The Application Area of the object in the NAME column.
TYPE
The type of an impacted or most-at-risk executable.
NAME
The name of the impacted or most-at-risk executable.
USAGE
The usage count for the impacted or most-at-risk executable.
DEPTH
The search depth at which LiveCompare found the object in the CHILD_TYPE and CHILD_NAME column.
CHILD_TYPE
The type of a changing object that impacts the impacted or most-at-risk executable.
CHILD_NAME
The name of the impactful changing object.
Changes
This spreadsheet lists the changing objects introduced by the transports, ChaRM change requests or objects analyzed by the Smart Impact app or workflow. The spreadsheet has the following columns:
CHANGE_ID
The transport or ChaRM change request that includes the impacting object. This column has the value Objects if you specified a list of objects.
CHILD_TYPE
The type of the impacting changing object.
CHILD_NAME
The name of the impacting changing object. Select a hyperlink to display comparison details for the selected object.
CHANGE_STATE
If you specified a Comparison system and set the Compare ABAP? switch, this column lists the comparison status for the object on the Analysis and Comparison systems specified in your Pipeline. Select a hyperlink in this column to display comparison details for the selected object.
DEPTH
The search depth at which the used impacted object was found.
TYPE
The type of a used impacted object.
NAME
The name of the used impacted object.
DYNP
The number of impacted screens for each used impacted object.
Help
This spreadsheet provides help for each of the spreadsheet reports.
Analysis Input Data
This Excel report contains a copy of the input parameters used to produce the app’s Dashboard report. The value of each input parameter is stored in a separate worksheet, which is named after the parameter whose value it contains.