Best practices for MCP prompting
With the qTest MCP, you can integrate your LLM with tools that let you review and manage your testing process with natural language. For example, you can tell the AI to figure out what defects you should create from a failed test run, and it will do the legwork for you.
Effective prompting is key to getting the most out of your qTest MCP integration. This topic provides proven patterns, examples, and best practices for natural language interactions with your test management platform, along with any required information you may need to include in your prompts.
Prompt information
As you work with the MCP, organize your prompts like a funnel. Start with the broadest information, so the AI can reference it throughout your session. However, to avoid expensive, resource-intensive queries, you should also know exactly where you want to start.
For example, when you start a session, first you need to access the project you want to work in. Just tell the AI, "Give me the details about project [ID number]." Then, the AI continues to reference this project in further interactions.
This can extend to requirements and test cases. If you ask the AI, "Give me all the test cases in project [ID number]," the AI will spend time and resources gathering all test cases. However, if you know what requirement you want to work with, you can ask the AI to "Give me all test cases attached to requirement [ID number]".
Locate ID numbers
Currently, you must reference project and object ID numbers to pull up specific objects with MCP tools. To find these, open up the object you want to work with in qTest, and take a look at the URL.
Your project ID is the first number in the URL, located after /p/. It specifies the project you're working with in your qTest instance. Your object ID is the second number, located after id=. It specifies the object you currently have open, whether it's a requirement, module, test case, or something else.
For example, let's take a look at the following project URL: https://[sample].qtestnet.com/p/115973/portal/project#id=27374334
In this case, the project ID is 115973, and the object ID is 27374334. This example shows a requirement ID, but the object ID appears in the same location for objects of all types.
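If you find yourself copying IDs out of URLs often, the pattern is simple enough to script. The following is a minimal illustration (it is not part of the qTest MCP itself) that pulls both IDs out of a URL using the /p/ and id= markers described above:

```python
import re

def extract_qtest_ids(url: str) -> dict:
    """Extract the project ID (after /p/) and object ID (after id=)
    from a qTest URL. Returns None for any ID that is not present."""
    project = re.search(r"/p/(\d+)", url)
    obj = re.search(r"id=(\d+)", url)
    return {
        "project_id": project.group(1) if project else None,
        "object_id": obj.group(1) if obj else None,
    }

# The example URL from above:
url = "https://[sample].qtestnet.com/p/115973/portal/project#id=27374334"
print(extract_qtest_ids(url))
# → {'project_id': '115973', 'object_id': '27374334'}
```

The same two regular expressions work for any object type, since the object ID always follows id= regardless of whether the object is a requirement, module, or test case.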
Available MCP actions
Once connected, you can use the following actions to interact with qTest.
If you are directly referencing a new project, or you want to pull up a specific object, you need to give the MCP the project or object ID number. If you've already referenced the project or object during your session, you don't need to include the ID number again.
Keep in mind that these prompts are examples. You can adjust and build on them to better meet your needs.
Action | Example prompt | Required input
---|---|---
Retrieve a list of projects | Can you show me all projects attached to this qTest instance? | None
Retrieve a project | Can you show me details for project 115973? | Project ID
Retrieve a list of requirements from a project | What requirements are in project 115973? What requirements are in this project? | Project ID
Get details for a requirement | Can you show me the details for requirement 27374334? | Object ID
Retrieve a list of test cases from a project | Show me test cases from project 115973. Show me a list of open test cases in this project. | Project ID
Retrieve detailed information for a specific test case | Can you show me details for test case 27374326? | Object ID
Create a new test case | Create a list of test cases that covers this requirement. Create a test case that covers the login requirements we just talked about. | None
Update a test case | Can you create more detailed test steps for test case 27374326? Can you update the description for this test case to better match the requirement? | Object ID
Create a new test case in a module hierarchy | Create a test case that addresses this requirement and add it to a new module named Homepage. | None
Create a new module | Create a new module named Homepage for project 115973. Create a new module named Homepage. | Project ID
Search for modules or submodules in a qTest project, including by case-sensitive name | Can you list the modules in project 115973? Can you list the modules in this project? Can you show me the Homepage module? | Project ID
Link test cases to a requirement | Link these test cases to requirement 27374334. Link test case 27374326 to requirement 27374334. Link this test case to the requirement we just looked at. | Object ID
Provide summaries of test cases linked to a specific requirement | What test cases are linked to requirement 27374334? What test cases are linked to this requirement? | Object ID
List test runs for a test case | Can you show me the test runs associated with test case 27374326? Can you show me the test runs attached to this test case? | Object ID
Retrieve test logs for a specific test run with detailed analysis, such as pass or fail | Show me all test logs for test run 27374375. Show me all test logs for this test run. | Object ID
Create a defect and link it to a test log | Review test run 27374375 and create a list of defects that cover the failures in the test logs. OK, now create a list of defects that cover the failures in the test logs. | Object ID
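Under the hood, each of these actions resolves to an MCP tool call: the LLM translates your natural language prompt into a JSON-RPC 2.0 tools/call request against the MCP server, which is why the IDs matter — they become the tool's arguments. The sketch below shows the general shape of such a request per the MCP specification; the tool name "get_project" and its argument names are hypothetical, since the qTest MCP server defines its own tool catalog:

```python
import json

# A sketch of the JSON-RPC 2.0 message an MCP client sends when the LLM
# invokes a tool. "get_project" and "projectId" are hypothetical names;
# the actual qTest MCP tool catalog may differ.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",          # standard MCP method for invoking a tool
    "params": {
        "name": "get_project",       # hypothetical tool name
        "arguments": {"projectId": 115973},
    },
}
print(json.dumps(request, indent=2))
```

You never write these requests yourself — the AI builds them from your prompt — but seeing the shape explains why a prompt without an ID can fail: there is nothing to put in the arguments field unless the session already has that context.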
Core principles
Follow these fundamental principles to create effective MCP prompts and avoid common pitfalls:

Use precise terminology and avoid ambiguous language. Specify exactly what you want to achieve and include relevant context and constraints.
Good examples:
- "Create a test plan with test cases that cover gaps in requirement [ID number]"
- "Show me the test logs for the latest test run in test case [ID number]"
- "Find all failed test runs in module [ID number]"
Avoid vague requests:
- "Create test case"
- "Show test logs"
- "Find failed test runs"

Make sure you provide the AI with enough context to complete the task. For example, unless you've already talked with the AI about a project, don't assume it knows about that project.
Include relevant information to help the AI understand your context, such as:
- Project IDs
- Module IDs
- Requirement IDs
- Test case IDs
Good examples:
- "What requirements are in project [ID number]?"
- "Please find all test cases attached to requirement [ID number]"
- "Create test cases that fill gaps for requirement [ID number]"
Avoid ambiguity:
- "Find all test cases"
- "What test cases do I need?"

Start broad, then narrow down based on results. This conversation flow helps you discover and focus on what matters most:
- "Show me recent test cases from requirement [ID number]" → Get an overview
- "Focus on the failed test logs from last week" → Narrow the scope
- "Create defects for the failures of the test logs" → Get actionable insights

Avoid overwhelming complexity and break complex tasks into clear, logical steps. That way, you don't overwhelm your AI with everything, everywhere, all at once.
For example:
- Instead of: "Analyze everything and tell me all the problems and solutions and comparisons and trends"
- Use: "Start by showing me which test runs failed this week, then we'll dig deeper"
Let's get more specific. To get the most control over your workspace, you may also find it helpful to ask the AI to generate objects for you to preview before finalizing them. For example, once the AI provides a test plan, you can review the test cases and specify which ones it should add.
- Instead of: "Create and add test cases to cover gaps in requirement [ID number]"
- Use: "Please help me create a test plan. I need:
  1. A list of reusable test cases from requirement [ID number]
  2. Identification of any testing gaps that currently exist in the requirement
  3. A list of test cases that cover these testing gaps"
Then, once you've reviewed the AI's output, you can follow this prompt with:
"Please create these test cases and attach them to requirement [ID number]"
Language guidelines
Follow these language guidelines for effective communication:

- Write as you would speak to a knowledgeable colleague.
- Avoid overly technical jargon unless necessary.
- Use conversational connectors ("Then", "Next", "Also").

Good examples:
- "Can you help me understand why our test run [ID number] failed?"
- "Let's dig deeper into those error patterns."
- "What test cases would you recommend for improving response times?"

Make your goals explicit rather than leaving them ambiguous:
- Instead of: "Create some test cases"
- Use: "Create 5 test cases that cover security testing for requirement [ID number]"
Project operations
Use these prompt patterns to navigate and learn more about your project:

- "Tell me about project [ID number]."
- "Show the details for test case [ID number]"
- "Show me all test cases attached to requirement [ID number]"
  With previous context, you can use a shorter prompt: "List all test cases"
- "Find all test runs related to test case [ID number]"
  With previous context, you can use a shorter prompt: "List all associated test runs"
- "What are the details of test logs attached to test case [ID number]?"
  With previous context, you can use a shorter prompt: "List all associated test logs"
Common workflow patterns
Use these workflow patterns for typical test case and defect creation scenarios:

- Discovery: "Show me all test cases attached to requirement [ID number]"
- Generate: "OK, now create a test plan for this requirement that focuses on any gaps in test coverage. Get ready to add test cases, but don't create the test cases yet"
- Finalize: "OK, please create these test cases and attach them to requirement [ID number]"

- Historical data: "Can you locate the requirement in qTest linked to the Jira story [name]?"
- Generate: "OK, I'd like to create a test plan for this requirement and get ready to add test cases. There's a requirement called 'verify navigation bar' that you can reference with existing test cases. Please propose a test case plan that reuses existing test cases. Don't create the test cases yet, though"
- Finalize: "OK, please create the new test cases"

- Discovery: "Show me recent test runs from test case [ID number]"
- Narrow down: "Focus on the failed test logs from last week"
- Generate: "OK, now give me a list of defects that cover the failures of the test logs. Don't create the defects yet, though"
- Finalize: "OK, please create the new defects"
Effective prompt patterns
Use these proven patterns to structure your prompts for better results:

Structure: [Question] + [Context] + [Desired Action]
Example:
"What's causing the high test failure rate [Question]
in the last three test runs from module [ID number] [Context]?
Please analyze the patterns and suggest specific defects [Action]"

Structure: Compare [Item A] with [Item B] focusing on [Specific Metrics]
Example:
"Compare the test results between:
- Test run 1 [ID number] (baseline)
- Test run 2 [ID number] (new feature)
Focus on the checkout and payment transactions"
Advanced techniques
Use these advanced techniques for more sophisticated interactions:

Frame requests from specific perspectives:
"As a QA Engineer preparing to create test cases for Requirement A [ID number 1], review Requirement B [ID number 2] and tell me what test cases we can reuse in Requirement A"

Set clear boundaries and requirements:
"Generate test cases for requirement [ID number]
that:
- Cover gaps in existing test scenarios for the requirement
- Do not repeat existing test scenarios for the requirement
- Do not cover performance testing
- Do not cover security testing"
What's next?
Now that you understand MCP prompting best practices, you can connect qTest to the MCP.