Create comparison dashboards
Comparison dashboards help you analyze performance differences between multiple test results. By comparing up to 4 test results side by side, you can identify trends, validate improvements, and track performance regressions across your testing cycles.
Create a comparison dashboard
To create a comparison dashboard, follow these steps:
- In the Dashboards view, select + Create dashboard.
- In the Create dashboard dialog, give your dashboard a name.
- Select Comparison dashboard.
- Add up to 4 test results to compare from the Test result dropdown. This dropdown contains all running and finished test results from your account:
  - To filter results, start typing in the dropdown menu.
  - To remove a selected test result, select Remove next to it.
- Select Create to generate your comparison dashboard.
To learn more about managing your dashboards, check out Manage your dashboards.
Customize your dashboard
NeoLoad Web generates a pre-configured dashboard that compares the selected test results side by side. However, you can customize the dashboard as needed. Use these features to adjust the layout and content:
To add a tile, follow these steps:
- Select + Add tile in the top right corner or drag a tile from the sidebar.
- Choose the tile type:
  - Series: Shows result metrics in a line graph.
  - Table: Displays result metrics in a structured table.
  - Text: Adds notes or explanations using the rich text editor.
  - Widget: Includes a predefined single result set.
- Configure the tile in the right panel.
- Select Save to add the tile to your dashboard.
To remove a tile, select Remove in the top right corner of the tile.
To rearrange the tiles, hover over the heading of the tile and drag it to your preferred location on the dashboard.
You can also build custom dashboards from scratch and organize the layout and content as you need. Check out Create custom dashboards to learn more.
Comparison dashboard types
Use different comparison approaches based on the data you want to analyze:
Compare performance metrics between different test executions to track improvements or identify regressions:
- Before and after: Compare performance before and after code changes or infrastructure updates.
- Baseline comparison: Measure current performance against established baseline metrics.
- Configuration testing: Compare performance across different system configurations or test scenarios.
Analyze how performance changes over different time periods:
- Trend analysis: Track performance evolution across weeks, months, or release cycles.
- Seasonal patterns: Identify performance variations based on usage patterns or system load.
- Release impact: Compare performance before and after major releases or updates.
Compare performance across different environments or deployment configurations:
- Development vs. production: Validate that performance characteristics remain consistent across environments.
- Infrastructure comparison: Analyze performance differences between cloud providers, regions, or hardware configurations.
- Load balancer comparison: Compare performance across different load distribution strategies.
Key comparison metrics
Not all metrics are equally valuable for comparison analysis. Focusing on the wrong metrics can lead to misleading conclusions or missed performance issues. Choose metrics that directly impact user experience and your business objectives:
Response times directly affect user satisfaction and conversion rates. Even small increases in response time can significantly impact business metrics. Compare these response time indicators to identify performance changes that matter to your users:
- Average response time: Compare typical response times across test runs.
- Percentile values: Analyze 95th and 99th percentile response times to understand performance consistency (see the sketch after this list).
- Response time distribution: Compare how response times spread across different performance ranges.
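To make the percentile comparison concrete, here is a minimal sketch of how the 95th and 99th percentiles can be computed from raw per-request response times. It assumes you have already exported the timings from each run (for example, as a CSV) into Python lists; the sample values and names are illustrative, not part of NeoLoad Web.

```python
import numpy as np

# Hypothetical response-time samples (in milliseconds) exported from two test runs.
baseline_ms = [120, 135, 128, 142, 460, 131, 125, 139, 133, 127]
candidate_ms = [118, 122, 119, 125, 510, 121, 117, 124, 120, 118]

def summarize(samples):
    """Return the average, 95th, and 99th percentile of a list of response times."""
    arr = np.asarray(samples, dtype=float)
    return {
        "avg": arr.mean(),
        "p95": np.percentile(arr, 95),
        "p99": np.percentile(arr, 99),
    }

for name, samples in [("baseline", baseline_ms), ("candidate", candidate_ms)]:
    stats = summarize(samples)
    print(f"{name}: avg={stats['avg']:.1f} ms, p95={stats['p95']:.1f} ms, p99={stats['p99']:.1f} ms")
```

A run with a similar average but a much higher p99 is usually the one hiding a tail-latency problem, which is why the percentile comparison matters.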
Throughput metrics reveal whether your application can handle expected user loads and if infrastructure changes affect processing capacity. Track these indicators to identify bottlenecks impacting real users:
- Requests per second: Compare system throughput capabilities across different conditions (see the sketch after this list).
- Concurrent users: Analyze how systems handle different user load levels.
- Transaction rates: Compare business transaction processing rates across test scenarios.
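If you want to sanity-check throughput numbers outside the dashboard, the arithmetic behind these indicators is straightforward. The sketch below assumes you have aggregate request and transaction counts plus the run duration for each test; all values and field names are hypothetical.

```python
# Hypothetical aggregate counters exported for two runs of equal duration.
runs = {
    "run_a": {"requests": 1_800_000, "transactions": 90_000, "duration_s": 3_600},
    "run_b": {"requests": 2_050_000, "transactions": 98_500, "duration_s": 3_600},
}

rates = {}
for name, c in runs.items():
    rates[name] = {
        "rps": c["requests"] / c["duration_s"],      # requests per second
        "tps": c["transactions"] / c["duration_s"],  # business transactions per second
    }
    print(f"{name}: {rates[name]['rps']:.1f} req/s, {rates[name]['tps']:.2f} transactions/s")

# Relative throughput change between the two runs.
change = (rates["run_b"]["rps"] - rates["run_a"]["rps"]) / rates["run_a"]["rps"]
print(f"run_b vs run_a: {change:+.1%} requests per second")
```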
Error rates often reveal problems that performance metrics alone miss. A system might maintain good response times while failing requests, creating a poor user experience. Track these reliability indicators to ensure your application not only performs well but works correctly:
- Error rates: Compare error percentages to identify stability improvements or regressions (see the sketch after this list).
- Success rates: Track how successfully systems handle requests across different conditions.
- Timeout occurrences: Compare timeout frequencies to assess system reliability.
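Here is a minimal sketch of how these reliability indicators can be derived and compared, assuming you can export total, failed, and timed-out request counts for each run; the counter values and names are illustrative.

```python
# Hypothetical request counters per run: total, failed, and timed-out requests.
runs = {
    "baseline":  {"total": 500_000, "errors": 1_250, "timeouts": 300},
    "candidate": {"total": 500_000, "errors": 4_900, "timeouts": 1_100},
}

for name, c in runs.items():
    error_rate = 100 * c["errors"] / c["total"]      # percentage of failed requests
    timeout_rate = 100 * c["timeouts"] / c["total"]  # percentage of timed-out requests
    success_rate = 100 - error_rate
    print(f"{name}: {success_rate:.2f}% success, "
          f"{error_rate:.2f}% errors, {timeout_rate:.2f}% timeouts")
```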
Analysis workflows
Ad hoc comparison analysis often misses critical insights or leads to incorrect conclusions. The following workflows help you approach comparison analysis systematically, so you catch performance regressions before they impact users and confirm that improvements actually work:
Performance regressions can slip into production if not caught early. This workflow helps you systematically identify when new code or infrastructure changes have negatively impacted performance, so you can address issues before they affect users:
- Create a comparison dashboard with current test results and established baseline performance.
- Identify metrics that show significant degradation from baseline values (a scripted version of this check is sketched after these steps).
- Analyze the magnitude of performance changes to assess business impact.
- Document findings and recommend corrective actions for performance issues.
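The degradation check in this workflow can also be scripted once the key metrics of both runs are exported. The sketch below is one possible approach, not a NeoLoad feature: the metric names, baseline values, and tolerance thresholds are assumptions you would replace with your own.

```python
# Hypothetical exported metrics for the baseline and the current run.
baseline = {"p95_ms": 410.0, "avg_ms": 180.0, "error_rate_pct": 0.4, "rps": 520.0}
current  = {"p95_ms": 505.0, "avg_ms": 195.0, "error_rate_pct": 0.9, "rps": 498.0}

# Allowed relative degradation per metric before it is flagged as a regression.
# For "rps" lower is worse; for the other metrics higher is worse.
tolerances = {"p95_ms": 0.10, "avg_ms": 0.10, "error_rate_pct": 0.50, "rps": 0.05}
lower_is_better = {"p95_ms", "avg_ms", "error_rate_pct"}

for metric, tolerance in tolerances.items():
    base, cur = baseline[metric], current[metric]
    change = (cur - base) / base
    degraded = change > tolerance if metric in lower_is_better else -change > tolerance
    status = "REGRESSION" if degraded else "ok"
    print(f"{metric}: {base} -> {cur} ({change:+.1%}) {status}")
```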
Performance optimizations sometimes fail to deliver the expected benefits. A disciplined before-and-after comparison shows which optimization efforts actually pay off. Use this validation process to demonstrate real improvement and justify optimization investments (a statistical check is sketched after these steps):
- Establish baseline metrics before implementing performance improvements.
- Run tests after implementing changes using identical test conditions.
- Create a comparison dashboard with before and after test results.
- Validate that improvements don't negatively impact other performance areas.
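One way to check that an apparent improvement is more than normal run-to-run variation is a non-parametric test on the raw response-time samples. This is a sketch under the assumption that you can export per-request timings for both runs; the sample data is illustrative.

```python
from scipy.stats import mannwhitneyu

# Hypothetical per-request response times (ms) sampled before and after an optimization.
before_ms = [210, 225, 218, 240, 232, 221, 215, 228, 236, 219, 224, 230]
after_ms  = [190, 198, 185, 205, 192, 188, 201, 187, 195, 193, 199, 186]

# One-sided test: are response times after the change stochastically lower than before?
stat, p_value = mannwhitneyu(after_ms, before_ms, alternative="less")

if p_value < 0.05:
    print(f"Improvement looks significant (p = {p_value:.4f}).")
else:
    print(f"Difference could be normal variation (p = {p_value:.4f}).")
```

A Mann-Whitney U test is used here because response-time distributions are rarely normal; a low p-value suggests the shift is unlikely to be noise alone.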
To make strategic decisions about architecture, infrastructure, and optimization priorities, you need to understand long-term performance trends. Multi-version analysis reveals whether your application performance is improving, declining, or remaining stable as your codebase evolves:
- Select up to 4 test results from different application versions or configurations.
- Create a comparison dashboard to analyze performance evolution over time.
- Identify performance trends and patterns across multiple releases (see the sketch after these steps).
- Make data-driven decisions about performance optimization priorities.
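The kind of trend calculation this workflow supports can be approximated in a few lines once one headline metric per release is exported. The release labels and values below are hypothetical.

```python
# Hypothetical 95th-percentile response times (ms) for four consecutive releases.
releases = [("v1.4", 380.0), ("v1.5", 365.0), ("v1.6", 410.0), ("v1.7", 455.0)]

first_name, first_value = releases[0]
print("release  p95_ms   change vs previous")
print(f"{first_name:7s} {first_value:7.1f}   (baseline)")

# Version-over-version relative change of the headline metric.
for (prev_name, prev_value), (name, value) in zip(releases, releases[1:]):
    change = (value - prev_value) / prev_value
    print(f"{name:7s} {value:7.1f}   {change:+.1%}")
```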
Best practices
Follow these guidelines to create effective comparison dashboards:
- Use consistent test conditions: Ensure test results being compared use similar load patterns, duration, and environment conditions.
- Select meaningful comparisons: Choose test results that provide valuable insights for your analysis objectives.
- Focus on key metrics: Concentrate on performance indicators that matter most to your business goals.
- Document context: Record why specific comparisons are meaningful and what changes occurred between test runs.
- Validate statistical significance: Ensure observed differences represent meaningful performance changes rather than normal variation.
- Share insights: Use comparison dashboards to communicate performance findings with stakeholders and team members.
What's next?
Now that you understand comparison dashboards, here's what you can do next:
- Learn about specific tile types and how to configure them.
- Use custom dashboards to build personalized dashboards from scratch.
- Manage your dashboards to keep them organized and relevant.