Best practices | TestCases | Content

A TestCase is a series of steps that Tosca performs in your application under test. Let's talk about a few ground rules for the content of your TestCases.

DO create TestCases that have a defined outcome

Chances are, your QA team has spent a lot of time coming up with quality and success criteria for your application. Make sure your TestCases offer definitive proof that your application actually meets these criteria.

The best way to do this is to design TestCases with verifications. They make it easy to evaluate whether your application actually shows the expected outcome. For more information, check out "Best practices | TestCases | Verifications".

DO create self-contained TestCases

Self-contained TestCases are TestCases that don't require other TestCases to run first. Fully independent TestCases let you test faster, with more reliable results. You can run any TestCase, any time, without having to worry about false positives, breaking other tests, or forgetting important connected flows.

Each TestCase should run from a defined start point to a defined end point. For self-contained TestCases, make sure that the end point of your application after a TestCase is the same as its start point.

If you use standardized, identical start and end points, you can combine TestCases into extended sequences, without having to worry about transitional steps or connected flows. Let's look at this example: 

  1. The start point of TestCase1 is the main menu of your application.

  2. TestCase1 then navigates across your application to test a specific workflow.

  3. TestCase1 goes back to the main menu. That's the end point.

  4. The start point of TestCase2 is also the main menu. It can pick up right where TestCase1 left off.

If you don't have step 3, you have to integrate the transitional steps from where TestCase1 ends to where TestCase2 should start into TestCase2, which means you can never run TestCase2 independently.

Or you have to create a "filler" TestCase with these steps and then run this filler TestCase between TestCase1 and TestCase2. However, fillers are not a good idea.
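The same principle applies outside Tosca, in any test framework. Here's a minimal sketch in Python, where a hypothetical `App` class stands in for your application under test; only the structure matters. Because both tests start and end at the main menu, either one can run first, alone, or in any combination:

```python
# Sketch of self-contained tests that share the same start and end point.
# App and its methods are hypothetical stand-ins for the application
# under test.

class App:
    def __init__(self):
        self.current_screen = "main_menu"

    def open_orders(self):
        self.current_screen = "orders"

    def create_order(self, item):
        assert self.current_screen == "orders"
        self.last_order = item

    def back_to_main_menu(self):
        self.current_screen = "main_menu"


def test_case_1_create_order():
    app = App()                                  # start point: main menu
    assert app.current_screen == "main_menu"
    app.open_orders()                            # navigate to the workflow
    app.create_order("notebook")                 # the actual test steps
    app.back_to_main_menu()                      # step 3: return to the start point
    assert app.current_screen == "main_menu"     # end point == start point


def test_case_2_browse_orders():
    app = App()                                  # also starts at the main menu,
    assert app.current_screen == "main_menu"     # so it can run any time
    app.open_orders()
    app.back_to_main_menu()
    assert app.current_screen == "main_menu"
```

If `test_case_1_create_order` ended on the orders screen instead, `test_case_2_browse_orders` would have to begin with transitional steps, and the two tests could no longer run independently.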

DON'T create "filler" TestCases

Typically, filler TestCases happen because of scenarios like the one we've discussed in the previous section about self-contained TestCases. They're TestCases with only one or two preparatory or transitional TestSteps. A large number of these tiny TestCases makes it difficult to track what you have, what you still need, and what belongs together. Plus, it creates extra maintenance effort.

Avoid this situation so that you only spend your time on TestCases that actually test something.

DO create useful TestCases

Your goal is a self-contained TestCase with a defined outcome that tests something worthwhile. Anything else provides too little value for the time you invest.

Useful TestCases contain a full sequence:

  • All actions that represent a realistic user workflow, from a defined start point to a defined end point. Start point and end point should be the same, so that your TestCase is self-contained.

    For example: opening an application, signing in, navigating the user interface, entering data, signing out, and closing the application.

  • The expected reaction of the application under test, so that you can verify the defined outcome along the way.

    For example, does the application only activate the Next button once all required fields are filled out? Does it calculate correct results for the given data? 
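To illustrate the second point outside Tosca, here's a hedged Python sketch of the Next-button check. The `OrderForm` class and its fields are hypothetical; the point is that the test verifies the expected reaction at every step of the workflow, not just at the end:

```python
# Hypothetical form model: the Next button should only be enabled
# once every required field has a value.

class OrderForm:
    REQUIRED_FIELDS = ("name", "address")

    def __init__(self):
        self.values = {}

    def fill(self, field, value):
        self.values[field] = value

    def next_enabled(self):
        # Enabled only when all required fields have a non-empty value.
        return all(self.values.get(f) for f in self.REQUIRED_FIELDS)


def test_next_button_requires_all_fields():
    form = OrderForm()
    assert not form.next_enabled()        # verify: disabled at the start
    form.fill("name", "Ada")
    assert not form.next_enabled()        # verify: still disabled halfway
    form.fill("address", "12 Main St")
    assert form.next_enabled()            # verify: enabled once complete
```

Each assertion corresponds to one verification in the TestCase, so a failure tells you exactly which expected reaction the application missed.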

What's next

If you haven't yet, check out our other best practices articles.