Smart Impact Learning app

The Smart Impact Learning app simulates a run of the Smart Impact app by generating reports from sample data stored in External Data Sources. It runs in only a few minutes, and may be used in the following ways:

  • For demonstration purposes: to demonstrate to customers and prospects the reports they would expect to receive from the Smart Impact app.

  • For training purposes: to train consultants on how to run the Smart Impact app and navigate through each of its reports.

The Smart Impact Learning app doesn't require connections to any SAP systems or test repositories, and it may be run on servers where the SAP RFC libraries normally required have not been installed. This option is set during the installation of LiveCompare by selecting ‘Skip. Not using LiveCompare with SAP’ in the SAP RFC Library screen.

DevOps categories

Testing

Prerequisites

Install the Smart Impact Learning app package on your LiveCompare server by running LiveCompareApps.exe from the LiveCompare distribution directory. Click through each of the Wizard screens to complete the installation.

Learning resources

LiveCompare includes the following resources that are used by the Smart Impact Learning app:

  • Dummy RFC Destinations named Learning-DEV, Learning-PHD and Learning-QAS. These are stored in the Learning RFC Destinations folder.

  • A dummy Test Repository named Learning.

  • An External Data Source named Smart Impact Learning.

  • A dummy Pipeline named Learning. This is stored in the Learning Pipelines folder, and is configured as follows:

Field                                    Value
Name                                     Learning
Description                              LiveCompare Learning Pipeline
Tester Business Critical                 Not selected
Analysis System                          Learning-DEV
Comparison System                        Learning-QAS
Usage System                             Learning-PHD
Most-at-Risk Search Test Repositories    Learning

The resources in the Learning Pipeline may not be added to any other Pipelines, as they are designed exclusively for use by the Smart Impact Learning app. Additionally, the Learning Pipeline is not processed when the Commit Configuration process is run for the Pipelines in the Guided Configuration’s Pipelines tab.

Run the app

To run the Smart Impact Learning app, follow these steps:

  1. Select the Smart Impact Learning app from the Apps screen’s Testing tab.

  2. In the App Cockpit screen, click New Variant and create a new variant for the app.

  3. In the My Variants table, click the Run Variant button to run the variant.

App results

The Smart Impact Learning app generates the following reports:

Smart Impact Analysis Dashboard

The Smart Impact Learning app generates a Dashboard which includes the following charts:

  • The Used, Impacted & Most-at-risk column chart provides a summary of the number of custom and standard used, impacted and most-at-risk objects.

  • The Most-at-risk & Test Coverage doughnut chart provides a summary of the number of hits and gaps found for the most-at-risk objects in the sample Test Repository.

  • The Changing Object Summary doughnut chart summarizes the changing objects by their change type.

  • The Most-at-risk & Test Coverage by Type column chart summarizes by type the most-at-risk objects, and the objects with hits in the specified Test Repository.

  • The Top 5 Application Areas bar chart lists the top 5 Application Areas, in terms of the number of most-at-risk objects in each Application Area.

  • The All, Covering and Optimal Tests column chart lists the number of found tests in each Application Area, the number of tests that cover at least one most-at-risk object, and the optimal number of tests that cover each of the most-at-risk objects.

  • Dashboard tiles display the date of the analysis, the name of the Analysis system, the name of the Performance History system including a sample date range for which performance history data was obtained, and the name of the Pipeline used in the analysis.

The Dashboard’s Additional Resources section includes links to the following Excel reports:

Function Details report

The Function Details Excel report includes the following spreadsheets:

Dashboard

The Dashboard spreadsheet includes the following charts:

  • The Used, Impacted & Most-at-risk column chart provides a summary of the number of custom and standard used, impacted and most-at-risk objects.

  • The Most-at-risk & Test Coverage doughnut chart provides a summary of the number of hits, gaps and known gaps found for the most-at-risk objects in the specified Test Repository.

  • The Changed Objects Summary doughnut chart summarizes the changed objects by their change type.

  • The Most-at-risk & Test Coverage by Type column chart summarizes by type the most-at-risk objects, and the objects with hits in the specified Test Repository.

  • The Top 5 Application Areas bar chart lists the top 5 Application Areas, in terms of the number of most-at-risk objects in each Application Area.

  • The All, Covering and Optimal Tests column chart lists the number of found tests in each Application Area, the number of tests that cover at least one most-at-risk object, and the optimal number of tests that cover each of the most-at-risk objects.

  • Dashboard tiles display the date of the analysis, the name of the Analysis system, the name of the Performance History system including the date range for which performance history data was obtained, and the name of the Test Repository that was searched to obtain matching test assets. The Dashboard spreadsheet also shows the number of change IDs and changed objects.

Home

The Home spreadsheet provides a summary view of the used, impacted and most-at-risk objects found during the analysis, grouped by Application Area. It has the following columns:

APP_AREA

The name of the Application Area in which the objects were found. (None) is used for objects that do not have an Application Area.

NOT_IMPACTED

The number of used objects in the Application Area that aren’t impacted by a changing object.

IMPACTED

The number of used objects in the Application Area that are impacted by a changing object, but not most-at-risk.

MOST_AT_RISK

The number of used objects in the Application Area that are impacted and most-at-risk; these are recommended for testing.

TEST_HITS

The number of most-at-risk objects in the Application Area that are covered by at least one test in the Pipeline’s Most-at-risk Test Repository.

TEST_GAPS

The number of most-at-risk objects in the Application Area that aren’t covered by any tests in the Pipeline’s Most-at-risk Test Repository.

IMPACTFUL_CHANGES

A count of the distinct impacting objects for each Application Area’s most-at-risk objects.
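
If you save the Function Details report locally, the Home spreadsheet can also be post-processed outside LiveCompare. The following minimal sketch assumes a hypothetical local copy named Function_Details.xlsx, a worksheet named Home as described above, and an environment with pandas and an Excel reader installed; it derives a test-coverage percentage for each Application Area from the TEST_HITS and TEST_GAPS columns.

    import pandas as pd

    # Hypothetical path to a locally saved copy of the Function Details report.
    home = pd.read_excel("Function_Details.xlsx", sheet_name="Home")

    # Coverage = hits / (hits + gaps) per Application Area; avoid dividing by zero.
    totals = home["TEST_HITS"] + home["TEST_GAPS"]
    home["COVERAGE_PCT"] = (home["TEST_HITS"] / totals.where(totals > 0) * 100).round(1)

    # Show the Application Areas with the most uncovered most-at-risk objects first.
    print(home.sort_values("TEST_GAPS", ascending=False)[
        ["APP_AREA", "MOST_AT_RISK", "TEST_HITS", "TEST_GAPS", "COVERAGE_PCT"]
    ])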

App Area Details

This spreadsheet lists the most-at-risk, impacted and not impacted objects, grouped by Application Area.

It includes:

  • Impacted objects with one or more impactful changes (these objects are marked as most-at-risk).

  • Impacted objects with no impactful changes.

  • Used objects with no impactful changes.

The spreadsheet has the following columns:

APP_AREA

The name of the Application Area in which the objects were found. (None) is used for objects that do not have an Application Area.

TYPE

The type of an object.

NAME

The name of the object.

STATUS

The status of the object, either Most-at-risk, Impacted or Not Impacted.

RISK

The risk value of the object, either H for high risk, M for medium risk, or L for low risk. The risk values are based on the depth of the impact and frequency of use of the object.

IMPACTFUL_OBJECTS

This column displays the number of changing objects that impact the object. New most-at-risk objects (that have no impactful objects and a usage count of 0) are set to have themselves as a single impactful object. Select a hyperlink in this column to display the impacting objects in the Impactful Objects spreadsheet.

DESCRIPTION

The description for the object in the NAME column.

TESTS

The number of optimal tests chosen for the object in the NAME column in the specified Most-at-Risk Search Test Repository. Select a hyperlink in this column to display the matching tests in the Test Hit Details spreadsheet.

USAGE

The usage count for the object in the NAME column, according to the data obtained from your Pipeline’s Usage system.

USERS

The number of users of the object in the NAME column. Select a hyperlink in this column to display the users in the Impacted Users spreadsheet.

CUSTOM

This column has the value Y for custom used, impacted and most-at-risk objects.

BUSINESS_CRITICAL

This column has the value Y for objects included in the Business Critical Objects External Data Source.

Impactful Objects

This spreadsheet lists the changing objects introduced by the transports, ChaRM change requests or objects analyzed by the Smart Impact app or workflow. It has the following columns:

CHANGE_ID

The transport or ChaRM change request that includes the impacting object. This column has the value Objects if you specified a list of objects.

CHILD_TYPE

The type of the impacting changing object.

CHILD_NAME

The name of the impacting changing object. Select a hyperlink to display comparison details for the selected object.

CHANGE_STATE

If you specified a Comparison system and set the Compare ABAP? switch, this column lists the comparison status for the object on the Analysis and Comparison systems specified in your Pipeline. Select a hyperlink in this column to display comparison details for the selected object.

DEPTH

The search depth at which the used impacted object was found.

TYPE

The type of a used impacted object.

NAME

The name of the used impacted object.

DYNP

The number of impacted screens for each used impacted object. Select a hyperlink in this column to display the CHILD_NAME object in the Impacted DYNPs spreadsheet.

Cross Reference

This spreadsheet lists all the impacted or most-at-risk executables for each impactful changing object. LiveCompare populates it if you set the Cross Reference switch in the Smart Impact app variant or workflow. The Cross Reference spreadsheet is empty if there are no impacted objects, or if all most-at-risk objects are New. This spreadsheet has the following columns:

APP_AREA

The Application Area of the object in the NAME column.

TYPE

The type of an impacted or most-at-risk executable.

NAME

The name of the impacted or most-at-risk executable.

USAGE

The usage count for the impacted or most-at-risk executable.

DEPTH

The search depth at which LiveCompare found the object in the CHILD_TYPE and CHILD_NAME columns.

CHILD_TYPE

The type of a changing object that impacts the impacted or most-at-risk executable.

CHILD_NAME

The name of the impactful changing object.

Impacted DYNPs

This spreadsheet lists the details for impacted screens. It has the following columns.

CHILD_TYPE

The type of a changing object.

CHILD_NAME

The name of the changing object.

NAME

The name of a used impacted object. Select a hyperlink in this column to display tests that include the object in the Test Hit Details spreadsheet.

DYNP_PROG

The used impacted object’s associated screen’s program.

DYNP_NUM

The used impacted object’s associated screen’s number.

DTXT

The used impacted object’s associated screen’s description.

Impacted Users

This spreadsheet lists the users who ran each impacted object. It has the following columns:

TYPE

The type of an impacted object.

NAME

The name of the impacted object.

COUNT

The usage count for the impacted object according to the data obtained from your Pipeline’s Usage system.

ACCOUNT

The user of the impacted object according to the data obtained from your Pipeline’s Usage system.

Test Hits & Gaps

This spreadsheet indicates whether each most-at-risk object is a Hit, Gap or Known gap in the specified Most-at-Risk Search Test Repository.

  • Hits are most-at-risk object names for which test assets have been found.

  • Gaps are most-at-risk object names for which there are no available test assets.

  • Known gaps are most-at-risk objects that aren’t expected to have tests in the specified Test Repository. You set these in your Pipeline’s Known Test Gaps External Data Source.

The spreadsheet has the following columns:

NAME

The name of a most-at-risk object.

TEST_COVERAGE

The most-at-risk object’s test coverage, either Hit, Gap or KnownGap.
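
To act on this spreadsheet outside LiveCompare, a tally of the TEST_COVERAGE column is often all that is needed. The minimal sketch below assumes a hypothetical local copy of the report named Function_Details.xlsx and a worksheet carrying the name shown above; it counts hits, gaps and known gaps, and lists the most-at-risk objects that still need tests.

    import pandas as pd

    # Hypothetical path to a locally saved copy of the Function Details report.
    sheet = pd.read_excel("Function_Details.xlsx", sheet_name="Test Hits & Gaps")

    # Count each coverage category (Hit, Gap, KnownGap).
    print(sheet["TEST_COVERAGE"].value_counts())

    # Most-at-risk objects with true gaps (known gaps are not expected to have tests).
    untested = sheet.loc[sheet["TEST_COVERAGE"] == "Gap", "NAME"].tolist()
    print("Untested most-at-risk objects:", untested)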

Test Hit Details

This spreadsheet includes details for the Hits in your Pipeline’s Most-at-Risk Search Test Repository. A Hit is a test asset that matched at least one most-at-risk object. If you access the spreadsheet from a hyperlink, it displays the details for the linked test asset. The spreadsheet has the following columns:

APP_AREA

The tested object’s Application Area.

TEST_REPOSITORY

The Most-at-Risk Search Test Repository that contains a matching test asset.

TEST_REPOSITORY_TYPE

The Test Repository’s type.

TESTED_OBJECT

The name of the most-at-risk object that matched a test asset.

CONFIDENCE

For Tosca, qTest, ALM and SAP Solution Manager Test Repositories, the percentage of search terms for which LiveCompare found a matching test asset. If LiveCompare matches a search term with a technical name in a test, the CONFIDENCE value is set to 100.

COMMON_TERMS

For Tosca, qTest, ALM and SAP Solution Manager Test Repositories, a list of all the SEARCH_TERMS matched in the test asset.

TEST_ID

The ID of a test that covers the tested object.

TEST_NAME

The name of a test that covers the tested object.

TEST_LIST_ID

For Tosca, qTest and ALM Test Repositories, the ID of an execution list or test set.

TEST_LIST_NAME

For Tosca, qTest and ALM Test Repositories, the name of an execution list or test set.

TEST_PATH

The test asset’s path. Note that if a matched token contains path separators, these will be escaped and stored as \/ in the test path.

TEST_LIST_PATH

For Tosca, qTest and ALM Test Repositories, the path of an execution list or test set.

RANK

The test’s rank, either H (High), M (Medium) or L (Low), based on how recently it was last run, its passes and fails, the number of runs per day, and the number of test steps. You should prioritize more highly ranked tests over tests with a lower rank.

WORKSTATE

For Tosca Test Repositories, this column stores the workstate associated with the test asset.

TEST_URL

The test asset’s URL.

TEST_TYPE

This column isn't used.

HAS_DATA

This column is set to Y for Tosca test cases that match affected data, or to <blank> for test cases that do not match affected data. Affected data is defined by the key fields of table rows that are different, in the Analysis system only (added data), or in the Comparison system only (deleted data). If any conversion exit routines are available for the key field values on the Analysis system, LiveCompare applies these to the key field values before searching for matching test cases. LiveCompare doesn’t apply conversion exit routines if there are no key field values.

STATUS

This column has the value Covering if the test covers the tested object, or Optimal if LiveCompare identifies the test as optimal. LiveCompare identifies tests as optimal based on the number of most-at-risk objects they cover, and the usage counts of the most-at-risk objects.
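
Purely as an illustration of the covering-versus-optimal distinction, and not LiveCompare’s actual selection logic, the toy sketch below uses invented test names, object names and usage counts, and repeatedly chooses the test that covers the largest remaining amount of most-at-risk usage.

    # Toy illustration only; every name and number here is invented.
    tests = {                                       # test -> most-at-risk objects it covers
        "Order entry regression": {"VA01", "VA02"},
        "Billing smoke test": {"VF01"},
        "Sales end-to-end": {"VA01", "VA02", "VF01"},
    }
    usage = {"VA01": 500, "VA02": 120, "VF01": 80}  # usage count per object

    uncovered = set(usage)
    optimal = []
    while uncovered:
        # Pick the test covering the largest remaining usage; stop when nothing new is covered.
        best = max(tests, key=lambda t: sum(usage[o] for o in tests[t] & uncovered))
        gained = tests[best] & uncovered
        if not gained:
            break
        optimal.append(best)
        uncovered -= gained

    covering = [t for t, objects in tests.items() if objects & set(usage)]
    print("Covering tests:", covering)   # every test that hits at least one most-at-risk object
    print("Optimal subset:", optimal)    # here, only "Sales end-to-end"

In the report itself, Covering and Optimal are simply the two values shown in this STATUS column and summarized in the All, Covering and Optimal Tests chart.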

Test Data

If you have specified a Comparison system and set the Compare Data? switch, this spreadsheet lists the SAP tables referenced in tests found for the most-at-risk executables in the specified Most-at-Risk Search Test Repositories. The spreadsheet has the following columns:

TEST_REPOSITORY_TYPE

The Most-at-Risk Search Test Repository’s type.

TEST_REPOSITORY

The Test Repository’s name.

TEST_NAME

The name of a test that matches a most-at-risk executable.

TABLE_NAME

The name of an SAP table referenced by the test.

TEST_ID

The ID of the matching test.

Help

This spreadsheet provides help for each of the spreadsheet reports.

Testing Details report

The Testing Details Excel report includes the following spreadsheets:

Dashboard

The Dashboard spreadsheet includes the following charts:

  • The Used, Impacted & Most-at-risk column chart provides a summary of the number of custom and standard used, impacted and most-at-risk objects.

  • The Most-at-risk & Test Coverage doughnut chart provides a summary of the number of hits, gaps and known gaps found for the most-at-risk objects in the specified Test Repository.

  • The Changing Object Summary doughnut chart summarizes the changed objects by their change type.

  • The Most-at-risk & Test Coverage by Type column chart summarizes by type the most-at-risk objects, and the objects with hits in the specified Test Repository.

  • The Top 5 Application Areas bar chart lists the top 5 Application Areas, in terms of the number of most-at-risk objects in each Application Area.

  • The All, Covering and Optimal Tests column chart lists the number of found tests in each Application Area, the number of tests that cover at least one most-at-risk object, and the optimal number of tests that cover each of the most-at-risk objects.

  • Dashboard tiles display the date of the analysis, the name of the Analysis system, the name of the Performance History system including the date range for which performance history data was obtained, and the name of the Test Repository that was searched to obtain matching test assets. The Dashboard spreadsheet also shows the number of change IDs and changed objects.

Home

The Home spreadsheet provides a summary view of the tests found during the analysis, grouped by Application Area. It has the following columns:

APP_AREA

The Application Area name. Select a hyperlink in this column to display the Application Area in the App Area Details spreadsheet.

ALL

The number of tests found for the Application Area.

COVERING

The total number of tests that cover at least one most-at-risk object in the Application Area, including tests identified as optimal.

OPTIMAL

The number of tests identified as optimal for the Application Area. LiveCompare identifies tests as optimal based on the number of most-at-risk objects they cover, and the usage counts of the most-at-risk objects.

TEST_GAPS

The number of most-at-risk objects that don’t have tests in the specified Most-at-Risk Search Test Repositories.

App Area Details

This spreadsheet lists the most-at-risk objects that have matching tests in the specified Most-at-risk Search Test Repository, grouped by the most-at-risk objects’ Application Area. The spreadsheet has the following columns:

APP_AREA

The name of the Application Area in which the objects were found. (None) is used for objects that do not have an Application Area.

TEST_REPOSITORY_TYPE

The type of the Test Repository where LiveCompare found a matching test.

TEST_REPOSITORY

The name of the Test Repository.

TEST_NAME

The name of the matching test.

STATUS

This column has the value Covering if the test covers the tested object, or Optimal if LiveCompare identifies the test as optimal. LiveCompare identifies tests as optimal based on the number of most-at-risk objects they cover, and the usage counts of the most-at-risk objects.

RISK

The risk value of the tested object, either H for high risk, M for medium risk, or L for low risk. The risk values are based on the depth of the impact and frequency of use of the object.

TEST_DATA

The number of SAP tables referenced by the matching test, as shown in the Test Data spreadsheet.

TESTED_OBJECTS

The number of objects covered by the test. Select a hyperlink in this column to display the objects in the Test Hit Details spreadsheet.

TEST_PATH

The matching test’s path.

TEST_ID

The matching test’s ID.

Test Data

If you have specified a Comparison system and set the Compare Data? switch, this spreadsheet lists the SAP tables referenced in tests found for the most-at-risk executables in the specified Most-at-Risk Search Test Repositories. The spreadsheet has the following columns:

TEST_REPOSITORY_TYPE

The Most-at-Risk Search Test Repository’s type.

TEST_REPOSITORY

The Test Repository’s name.

TEST_NAME

The name of a test that matches a most-at-risk executable.

TABLE_NAME

The name of an SAP table referenced by the test.

TEST_ID

The ID of the matching test.

Test Hit Details

This spreadsheet includes details for the Hits in your Pipeline’s Most-at-Risk Search Test Repository. A Hit is a test asset that matched at least one most-at-risk object. If you access the spreadsheet from a hyperlink, it displays the details for the linked test asset. The spreadsheet has the following columns:

APP_AREA

The tested object’s Application Area.

TEST_REPOSITORY_TYPE

The type of a Most-at-risk Search Test Repository that contains a matching test asset.

TEST_REPOSITORY_NAME

The name of the Test Repository.

TEST_NAME

The name of a test that covers the tested object.

STATUS

This column has the value Covering if the test covers the tested object, or Optimal if LiveCompare identifies the test as optimal. LiveCompare identifies tests as optimal based on the number of most-at-risk objects they cover, and the usage counts of the most-at-risk objects.

RANK

The test’s rank, either H (High), M (Medium) or L (Low), based on how recently it was last run, its passes and fails, the number of runs per day, and the number of test steps. You should prioritize more highly ranked tests over tests with a lower rank.

TESTED_OBJECT

The name of the most-at-risk object that matched a test asset.

RISK

The risk value of the tested object, either H for high risk, M for medium risk, or L for low risk. The risk values are based on the depth of the impact and frequency of use of the object.

CHANGED_OBJECTS

The number of changed objects that the test covers. Select a hyperlink in this column to display the changed objects in the Changes spreadsheet.

TEST_PATH

The test asset’s path. Note that if a matched token contains path separators, these will be escaped and stored as \/ in the test path.

TEST_ID

The ID of a test that covers the tested object.

TEST_LIST_PATH

For Tosca, qTest and ALM Test Repositories, the path of an execution list or test set.

TEST_LIST_NAME

For Tosca, qTest and ALM Test Repositories, the name of an execution list or test set.

TEST_LIST_ID

For Tosca, qTest and ALM Test Repositories, the ID of an execution list or test set.

CONFIDENCE

For Tosca, qTest, ALM and SAP Solution Manager Test Repositories, the percentage of search terms for which LiveCompare found a matching test asset. If LiveCompare matches a search term with a technical name in a test, the CONFIDENCE value is set to 100.

COMMON_TERMS

For Tosca, qTest, ALM and SAP Solution Manager Test Repositories, a list of all the SEARCH_TERMS matched in the test asset.

WORKSTATE

For Tosca Test Repositories, this column stores the workstate associated with the test asset.

TEST_URL

The test asset’s URL.

TEST_TYPE

This column isn't used.

HAS_DATA

This column is set to Y for Tosca test cases that match affected data, or to <blank> for test cases that do not match affected data. Affected data is defined by the key fields of table rows that are different, in the Analysis system only (added data), or in the Comparison system only (deleted data). If any conversion exit routines are available for the key field values on the Analysis system, LiveCompare applies these to the key field values before searching for matching test cases. LiveCompare doesn’t apply conversion exit routines if there are no key field values.

Test Hits & Gaps

This spreadsheet indicates whether each most-at-risk object is a Hit, Gap or Known gap in the specified Most-at-Risk Search Test Repository.

  • Hits are most-at-risk object names for which test assets have been found.

  • Gaps are most-at-risk object names for which there are no available test assets.

  • Known gaps are most-at-risk objects that aren’t expected to have tests in the specified Test Repository. You set these in your Pipeline’s Known Test Gaps External Data Source.

The spreadsheet has the following columns:

NAME

The name of a most-at-risk object.

TEST_COVERAGE

The most-at-risk object’s test coverage, either Hit, Gap or KnownGap.

Cross Reference

This spreadsheet lists all the impacted or most-at-risk executables for each impactful changing object. LiveCompare populates it if you set the Cross Reference switch in the Smart Impact app variant or workflow. The Cross Reference spreadsheet is empty if there are no impacted objects, or if all most-at-risk objects are New. This spreadsheet has the following columns:

APP_AREA

The Application Area of the object in the NAME column.

TYPE

The type of an impacted or most-at-risk executable.

NAME

The name of the impacted or most-at-risk executable.

USAGE

The usage count for the impacted or most-at-risk executable.

DEPTH

The search depth at which LiveCompare found the object in the CHILD_TYPE and CHILD_NAME columns.

CHILD_TYPE

The type of a changing object that impacts the impacted or most-at-risk executable.

CHILD_NAME

The name of the impactful changing object.

Changes

This spreadsheet lists the changing objects introduced by the transports, ChaRM change requests or objects analyzed by the Smart Impact app or workflow. The spreadsheet has the following columns:

CHANGE_ID

The transport or ChaRM change request that includes the impacting object. This column has the value Objects if you specified a list of objects.

CHILD_TYPE

The type of the impacting changing object.

CHILD_NAME

The name of the impacting changing object. Select a hyperlink to display comparison details for the selected object.

CHANGE_STATE

If you specified a Comparison system and set the Compare ABAP? switch, this column lists the comparison status for the object on the Analysis and Comparison systems specified in your Pipeline. Select a hyperlink in this column to display comparison details for the selected object.

DEPTH

The search depth at which the used impacted object was found.

TYPE

The type of a used impacted object.

NAME

The name of the used impacted object.

DYNP

The number of impacted screens for each used impacted object.

Help

This spreadsheet provides help for each of the spreadsheet reports.

Analysis Input Data

This Excel report contains a copy of the input parameters used to produce the app’s Dashboard report. The value of each input parameter is stored in a separate worksheet, which is named after the parameter whose value it contains.
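
Because each parameter occupies its own worksheet, the report is straightforward to read programmatically. As a minimal sketch, assuming a hypothetical local copy named Analysis_Input_Data.xlsx and an environment with pandas and an Excel reader installed, the snippet below loads every worksheet into a dictionary keyed by parameter name.

    import pandas as pd

    # Hypothetical path to a locally saved copy of the Analysis Input Data report.
    # sheet_name=None returns every worksheet; each one holds a single input parameter.
    sheets = pd.read_excel("Analysis_Input_Data.xlsx", sheet_name=None)

    for parameter, values in sheets.items():
        print(f"{parameter}: {len(values)} row(s)")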

Related topics

Standard apps

Dashboard screen