Best practices for MCP prompting
Effective prompting is key to getting the most out of your NeoLoad Web MCP integration. This topic provides proven patterns, examples, and best practices for natural language interactions with your performance testing platform.
Whether you're a performance engineer streamlining your workflow, a DevOps professional integrating testing into CI/CD pipelines, or a team lead needing quick performance insights, these prompting techniques will help you interact more effectively with NeoLoad Web through AI assistants.
Core principles
Follow these fundamental principles to create effective MCP prompts:
Use precise terminology and avoid ambiguous language. Specify exactly what you want to achieve and include relevant context and constraints.
Good examples:
- "Show me the performance metrics for the latest run of the Open-telem-demo test"
- "Execute the API load test in the MCP-Demo workspace"
Avoid vague requests:
- "Check the test"
- "Run something"
Include relevant information to help the AI understand your context, such as:
- Workspace names or IDs
- Test names and their purposes
- Time frames for analysis
- Expected outcomes or thresholds
Good examples:
- "Why did the overnight regression test fail? I'm looking at result ID abc-123 from the QA workspace"
- "Compare the response times between yesterday's baseline test and today's feature branch test for the checkout API"
Start broad, then narrow down based on results. This conversation flow helps you discover and focus on what matters most:
- "Show me recent test results" → Get overview
- "Focus on the failed tests from last week" → Narrow scope
- "What caused the failures in the payment service tests?" → Specific analysis
- "Show me the error patterns and recommend fixes" → Actionable insights
Break complex tasks into clear, logical steps rather than overwhelming the AI with everything at once.
Instead of:
"I need to know everything about performance issues and create a report with recommendations and comparison data"
Use:
"Please help me create a performance analysis report. I need:
1. Performance metrics for test result XYZ
2. Comparison with the previous baseline
3. Identification of any bottlenecks
4. Recommendations for optimization
5. Format as an executive summary"
Effective prompt patterns
Use these proven patterns to structure your prompts for better results:
Structure: [Question] + [Context] + [Desired Action]
Example:
"What's causing the high response times [Question]
in our e-commerce API tests from the last 3 runs [Context]?
Please analyze the patterns and suggest specific optimizations [Action]."
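If you generate such prompts from a script rather than typing them into a chat window (for example, in a CI job that asks an AI assistant to review a run), the same structure can be templated. The sketch below is illustrative only; the helper name and how you hand the prompt to your assistant depend on your own tooling.

```python
# Minimal sketch: template the [Question] + [Context] + [Action] structure.
# The helper name is illustrative; how you send the prompt to your AI
# assistant depends on your own tooling.
def build_prompt(question: str, context: str, action: str) -> str:
    """Assemble the three parts into one explicit prompt."""
    return f"{question} {context}? {action}."

print(build_prompt(
    question="What's causing the high response times",
    context="in our e-commerce API tests from the last 3 runs",
    action="Please analyze the patterns and suggest specific optimizations",
))
```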
Structure: Compare [Item A] with [Item B] focusing on [Specific Metrics]
Example:
"Compare the throughput and error rates between:
- Test result abc-123 (baseline)
- Test result def-456 (new feature)
Focus on the checkout and payment transactions."
Structure: [Problem Statement] + [Context] + [Investigation Request]
Example:
"Our API response times increased by 40% after the last deployment.
Looking at workspace 'Production-Tests', test 'API-Regression'.
Can you analyze the recent results and identify what changed?"
Structure: Create [Report Type] for [Audience] including [Specific Elements]
Example:
"Create an executive performance summary for the development team including:
- Key metrics from this week's tests
- Trend analysis vs. last month
- Top 3 performance concerns
- Recommended actions with priorities"
Common workflow patterns
Use these workflow patterns for typical performance testing scenarios:
- Discovery: "Show me all workspaces"
- Selection: "List tests in the [Workspace name]"
- Execution: "Run the [Test name] test with name 'API Performance Test - June 18'"
- Monitoring: "What's the status of the running test?"
- Analysis: "Analyze the results once the test completes"
- Historical Data: "Show me the last 3 test results for [test name] from workspace [workspace name]"
- Analysis: "Compare response times across these results"
- Trend Identification: "Are there any performance degradation trends?"
Workspace operations
Use these patterns for workspace and test management:
"Show me all available workspaces"
"Find the workspace named 'MCP-demo'"
"Show the details for workspace [workspace name]"
"Show me all tests in the [workspace name] workspace"
With previous context, you can use shorter prompts:
"List all tests"
"Find all tests related to [test name] in workspace [workspace name]"
"What are the details of the [test name] test in workspace [workspace name]?"
Running tests
Use these patterns for test execution:
"Run test [test name] from the workspace [workspace name]"
With test ID: "Run the test [test-id] with the name 'Performance Test - [Date]'"
"What's the status of the running test?"
"Stop the currently running test"
Zone management
Use these patterns for managing your load testing zones:
"Show me all available zones"
"List only the cloud zones"
"What dynamic zones do we have configured?"
"Create a static zone named 'Production-EU'"
"Create a dynamic zone called 'Load-Test-US' using provider [provider-id] with MEDIUM sizing"
"Set up a new dynamic zone with custom resources: 16GB RAM and 8 CPUs for controller, 8GB RAM and 4 CPUs for load generators"
"Enable cloud zone in AWS_US_EAST_1"
"Update the resource sizing for zone [zone-id] to LARGE"
"Change the controller resources to 32GB memory and 16 vCPUs for zone [zone-id]"
"Rename the static zone 'Test-Zone' to 'QA-Environment'"
"Disable the cloud zone [zone-id]"
"Delete the unused zone named 'Old-Test-Zone'"
Advanced result analysis
Use these patterns for detailed performance analysis:
"Get the request performance values for test result [result-id]"
"Show me transaction metrics for the latest test run"
"Analyze the request patterns in interval [interval-id] of result [result-id]"
"Generate intervals for test result [result-id] to analyze performance trends"
"List all intervals for my latest test and identify performance degradation periods"
"Compare request values across different intervals in result [result-id]"
"Show me all elements from test result [result-id]"
"Get the events from the failed test run, sorted by offset"
"Get the last 100 events from test [result-id] to understand the failure sequence"
Infrastructure and webhook management
Use these patterns for infrastructure and webhook operations:
"List all available infrastructure providers"
"Which infrastructure providers are available for dynamic zones?"
"Check if provider [provider-id] is available"
"Verify connectivity to our primary infrastructure provider"
"Test the connection to all configured providers and report their status"
"Show me all webhooks configured for workspace [workspace-id]"
"List the webhook endpoints for our production workspace"
Advanced prompt examples
Use these complex workflows for sophisticated testing scenarios:
"I need to set up a complete testing infrastructure. Please:
1. List current zones and their types
2. Create a new dynamic zone called 'Performance-Test-2024' with LARGE sizing
3. Enable cloud zones in AWS_US_WEST_2 and AWS_EU_CENTRAL_1
4. Verify the infrastructure provider connectivity
5. Confirm all zones are properly configured"
"Analyze the performance regression in our API tests:
1. Get result details for test [result-id]
2. Generate intervals for trend analysis
3. Get request and transaction values for each interval
4. Compare with the baseline result [baseline-id]
5. Identify specific requests or transactions causing degradation
6. Check the event logs for any errors during degraded periods"
"Help me optimize our zone resources:
1. List all dynamic zones and their current sizing
2. For zone [zone-id], show recent test results
3. Based on the load patterns, recommend appropriate sizing
4. Update the zone to use custom sizing with your recommendations
5. Document the changes for the team"
Resource sizing and sorting patterns
Use these patterns for resource configuration and data organization:
Fixed sizing options:
- SMALL: Controller: 3GB RAM, 2 vCPUs | Load Generator: 1.5GB RAM, 1 vCPU
- MEDIUM: Controller: 8GB RAM, 4 vCPUs | Load Generator: 3GB RAM, 2 vCPUs
- LARGE: Controller: 16GB RAM, 8 vCPUs | Load Generator: 4GB RAM, 2 vCPUs
Custom sizing pattern:
"Configure zone [zone-id] with custom resources:
- Controller: [2-128]GB memory, [1.5-128] vCPUs
- Load Generator: [1.5-64]GB memory, [1-64] vCPUs"
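If you script zone changes, it can help to sanity-check a requested custom sizing against the ranges above before sending the prompt. The sketch below only encodes the limits quoted in this topic; the helper itself is illustrative and not part of the NeoLoad Web API.

```python
# Minimal sketch: validate custom sizing against the ranges quoted above.
# The helper and value layout are illustrative, not a NeoLoad API.
CUSTOM_SIZING_LIMITS = {
    "controller": {"memory_gb": (2, 128), "vcpus": (1.5, 128)},
    "load_generator": {"memory_gb": (1.5, 64), "vcpus": (1, 64)},
}


def validate_sizing(role: str, memory_gb: float, vcpus: float) -> None:
    """Raise ValueError if the requested resources fall outside the allowed range."""
    limits = CUSTOM_SIZING_LIMITS[role]
    lo, hi = limits["memory_gb"]
    if not lo <= memory_gb <= hi:
        raise ValueError(f"{role} memory must be between {lo} and {hi} GB")
    lo, hi = limits["vcpus"]
    if not lo <= vcpus <= hi:
        raise ValueError(f"{role} vCPUs must be between {lo} and {hi}")


validate_sizing("controller", memory_gb=16, vcpus=8)      # within range
validate_sizing("load_generator", memory_gb=8, vcpus=4)   # within range
```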
For test results:
"Show me the last 50 test results sorted by start date (newest first)"
Sort options: duration, startDate, status, qualityStatus, name, project
For events and logs:
"Get test events sorted by offset in descending order"
Sort options: code, offset, fullName, source, date
For request and transaction values:
"List transaction values sorted by average duration"
Sort options: name, averageDuration, errorRate, maximumDuration, minimumDuration
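When these prompts are generated programmatically, keeping the valid sort fields in one place avoids typos that would otherwise fall back to default ordering. A small sketch using only the field lists quoted above (the helper itself is illustrative):

```python
# Sort fields quoted in this topic, grouped by resource type.
SORT_FIELDS = {
    "test_results": {"duration", "startDate", "status", "qualityStatus", "name", "project"},
    "events": {"code", "offset", "fullName", "source", "date"},
    "values": {"name", "averageDuration", "errorRate", "maximumDuration", "minimumDuration"},
}


def sort_clause(resource: str, field: str, descending: bool = True) -> str:
    """Build the sorting phrase for a prompt, validating the field first."""
    if field not in SORT_FIELDS[resource]:
        raise ValueError(f"Unknown sort field '{field}' for {resource}")
    order = "descending" if descending else "ascending"
    return f"sorted by {field} in {order} order"


print(f"Get test events {sort_clause('events', 'offset')}")
```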
Best practices for advanced features
Follow these guidelines when using advanced MCP features:
Zone type selection:
- Use Static Zones for stable, pre-configured infrastructure
- Use Dynamic Zones for scalable, on-demand testing
- Use Cloud Zones for geographic distribution and cloud-native testing
Resource sizing guidelines:
- Start with SMALL for initial tests and proof-of-concepts
- Use MEDIUM for standard load testing scenarios
- Choose LARGE for high-volume or complex testing
- Use custom sizing for specialized requirements
Zone lifecycle management:
- Regularly review and turn off unused zones
- Name zones descriptively (e.g., "Prod-EU-LoadTest", "QA-API-Testing")
- Document zone configurations and purposes
Interval analysis:
- Generate intervals to identify performance trends over test duration
- Compare intervals to pinpoint when degradation occurs
- Correlate interval data with system events
Request and transaction analysis:
- Focus on high-impact transactions first
- Compare percentile values (50th, 90th, 95th, 99th), not just averages
- Investigate both error rates and response times
Event investigation:
- Sort events by offset to understand the chronological sequence
- Filter events by error code to identify patterns
- Correlate events with performance metrics
Dashboard discovery:
"Show me all available dashboards in my default workspace"
Dashboard creation:
"Create a new dashboard called 'Q4 Performance Analysis' in workspace xyz"
Tile configuration:
"Add a tile to dashboard abc with the title 'Response Times' at position (0,0) spanning 2 columns and 2 rows"
Data visualization:
"Add average duration data for all requests from my last test result to the tile"
Language guidelines
Follow these language guidelines for effective communication:
- Write as you would speak to a knowledgeable colleague
- Avoid overly technical jargon unless necessary
- Use conversational connectors ("Then", "Next", "Also")
Good examples:
- "Can you help me understand why our load test failed?"
- "I'm seeing some concerning trends in our API performance..."
- "Let's dig deeper into those error patterns."
- "What would you recommend for improving response times?"
Make your goals explicit rather than leaving them ambiguous:
- Avoid: "Do something with the data"
- Use: "I want to identify performance bottlenecks"
Common mistakes to avoid
Avoid these common pitfalls that can reduce the effectiveness of your prompts:
- Avoid: "Check performance"
  Use: "Analyze response time trends for the last 5 test runs"
- Avoid: "Why is it slow?"
  Use: "Why are the API response times in test result abc-123 three times slower than our baseline?"
- Avoid: "Analyze everything and tell me all the problems and solutions and comparisons and trends"
  Use: "Show me the tests that failed this week, and then help me investigate further"
- Avoid: "Fix the issue from yesterday"
  Use: "Help me troubleshoot the timeout errors in yesterday's load test for the payment API"
Advanced techniques
Use these advanced techniques for more sophisticated interactions:
Guide the AI through your reasoning process step by step:
"I'm investigating a performance regression. Let's work through this step by step:
1. First, show me the recent test results for the user-service.
2. Then identify which metrics degraded compared to last week.
3. Next, analyze the error logs for those degraded tests.
4. Finally, correlate any errors with the performance drops."
Frame requests from specific perspectives:
"As a DevOps engineer preparing for a production release, analyze the staging environment performance tests and tell me if we're ready to deploy based on our SLA requirements."
Set clear boundaries and requirements:
"Generate a performance report that:
- Takes no more than 5 minutes to read
- Focuses only on critical issues (>10% degradation)
- Includes specific remediation steps
- Uses non-technical language for management review"
What's next?
Now that you understand MCP prompting best practices, you can:
- Set up your MCP connection if you haven't already.