Create a new test case
A test case is a series of actions that you want to perform in your application under test. It verifies a specific workflow and helps you ensure that your users have a good experience after you release.
Creating a new test case is easy:
1. In the menu bar on the left, select Create test case.
2. Give your new test case a unique name that describes the purpose of the test case. If you test multiple applications, we recommend that you also add the application name. This makes it easier to identify the right test case in a search or to analyze and report results.
Best practices for test case content
You've just created a blank slate, full of possibilities. Before you start adding things to it, let's talk about some golden rules for test case content:

Verify the expected outcome
Chances are, your QA team has spent a lot of time coming up with quality and success criteria for your application. Make sure your test cases offer definitive proof that your application actually meets these criteria.
The best way to do this is to add verifications to your test case, to evaluate whether your application shows the expected outcome.
If you only check your test step results, you'll only know whether Tosca Cloud successfully performed all steps of your test case. However, to determine whether the test case actually has the expected outcome, you need to add verifications.
Let's say you're testing an online shopping process: adding an item to the cart. Tosca Cloud searches for the item, selects it from the search results, and adds it to the cart. So far, so good. Now think about the expected outcome: there should be one item in the cart, and the cart should show the correct total price.
Tosca Cloud passed all test steps, but this only tells you that the process of adding something to the cart works. You still don't know whether your application successfully completes the process and displays everything in the cart. That's exactly where your verifications come in.
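The difference between passing steps and verifying the outcome can be sketched in plain Python. The Cart class below is a hypothetical stand-in for your application under test, not a Tosca Cloud API: the first lines mirror the test steps, and only the final assertions act as verifications of the expected outcome.

```python
class Cart:
    """Hypothetical stand-in for the shopping cart in the application under test."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    @property
    def total(self):
        return sum(price for _, price in self.items)


# Test steps: search for the item, select it, add it to the cart.
# If these lines run without an exception, all steps "passed".
cart = Cart()
cart.add("USB cable", 9.99)

# Verifications: check the expected outcome, not just that the steps ran.
assert len(cart.items) == 1, "cart should contain exactly one item"
assert cart.total == 9.99, "cart should show the correct total price"
```

If the application added the item twice or miscalculated the total, every step would still pass; only the verifications would catch the defect.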

Keep your test cases self-contained
Self-contained test cases don't require other test cases to run first. Fully independent test cases give you more reliable results: you can run any test case, any time, without having to worry about false positives, breaking other tests, or forgetting important connected flows.
Plus, if your playlist contains only self-contained test cases, you can use Parallel mode, which is the fastest way to run your tests.
Each test case should run from a defined start point to a defined end point. For self-contained test cases, make sure that your application ends each test case at the same point where it started.
If you use standardized, identical start and end points, you can combine test cases into extended sequences, without having to worry about transitional steps or connected flows. Let's take a look at this example:
1. The start point of test case A is the main menu of your application.
2. Test case A then navigates across your application to test a specific workflow.
3. Test case A goes back to the main menu. That's the end point.
4. The start point of test case B is also the main menu. It can pick up right where test case A left off.
If you skip step 3, you have to build the transitional steps from the point where test case A ends to the point where test case B starts into test case B itself. This means you can never run test case B independently.
Or, you have to create a "filler" test case with these steps and then run this filler test case between test cases A and B. However, fillers are not a good idea.

Avoid filler test cases
Typically, filler test cases happen because of scenarios like the one we discussed in the previous section about self-contained test cases. They're test cases with only one or two preparatory or transitional test steps. A large number of these tiny test cases makes it difficult to keep track of what you have, what you still need, and what belongs together. Plus, it creates extra maintenance effort.
Avoid this situation so that you spend your time only on test cases that actually test something.
Your goal is a self-contained test case with a defined outcome that tests something worthwhile. Anything else returns too little value for the time you invest.
Useful test cases contain a full sequence:
- All actions that represent a realistic user workflow, from a defined start point to a defined end point. The start point and end point should be the same, so that your test case is self-contained. For example: opening the application, signing in, navigating the user interface, entering data, signing out, and closing the application.
- The expected reactions of the application under test, so that you can verify the defined outcome along the way. For example: does the application activate the Next button only once all required fields are filled out? Does it calculate correct results for the given data?
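The Next-button check above can also be sketched as verifications woven into a sequence. Again, SignInForm and its members are hypothetical stand-ins for the application under test, not Tosca Cloud APIs; the point is that a verification follows each meaningful action.

```python
class SignInForm:
    """Hypothetical stand-in for a sign-in form in the application under test."""

    def __init__(self):
        self.fields = {"username": "", "password": ""}

    def fill(self, field, value):
        self.fields[field] = value

    @property
    def next_enabled(self):
        # The Next button activates only when every required field has a value.
        return all(self.fields.values())


form = SignInForm()
assert not form.next_enabled      # verify: Next is inactive while the form is empty

form.fill("username", "jane.doe")
assert not form.next_enabled      # verify: still inactive, one required field missing

form.fill("password", "s3cret")
assert form.next_enabled          # verify: active once all required fields are filled
```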
What's next
Now that you've created a test case, it's time to start designing it. Build an automated test sequence that tells you how release-ready your application really is.