Best of 2021 – Best Practices for Application Performance Testing

As we close out 2021, we at DevOps.com wanted to highlight the most popular articles of the year. Here is the 18th issue in our Best of 2021 series.

When performed correctly, software application performance testing determines whether a system meets certain acceptable standards for both responsiveness and robustness. Before you start testing, there are some best practices to remember.

Start by defining test plans that include load testing, stress testing, availability testing, component testing, and isolation testing. Align these plans with clear goals, acceptable metrics and thresholds, and a plan for resolving any performance issues you uncover. Ensure that you can triage performance issues in your test environment: analyze the factors affecting application performance by checking system behavior under load, not just the indicators of poor performance reported by the load testing tool. Application performance management (APM) tools, used in test environments that simulate production, provide much deeper insight into application behavior, as well as overall performance under stress or load.

10 Performance Testing Best Practices

1. Test early and often

Performance testing is often an afterthought, done hastily late in the development cycle or only in response to user complaints. Instead, you need to be proactive. Take an agile approach that uses iterative testing throughout the entire development lifecycle. Specifically, provide the ability to run performance ‘unit’ testing as part of the development process – then repeat the same tests on a larger scale at later stages of application readiness. Use performance testing tools as part of an automated pass/fail pipeline, where code that passes moves on through the pipeline and code that fails is returned to the developer.
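For example, a performance ‘unit’ test can be as simple as a timed assertion that fails the build when a code path exceeds its latency budget. The sketch below is a minimal illustration in Python; the create_order function and the 200ms budget are hypothetical placeholders, so adapt the workload and threshold to your own code.

```python
# Minimal sketch of a performance "unit" test that can gate a CI pipeline.
# create_order and the 200 ms budget are hypothetical placeholders.
import time

def create_order(items):
    # Stand-in for the code path being gated.
    return {"items": items, "total": sum(items)}

def test_create_order_latency_budget():
    start = time.perf_counter()
    for _ in range(1_000):
        create_order([9.99, 4.50, 12.00])
    avg_ms = (time.perf_counter() - start) * 1000 / 1_000  # average per call
    # Fail the pipeline if the average call exceeds the agreed budget.
    assert avg_ms < 200, f"average latency {avg_ms:.2f} ms exceeds 200 ms budget"
```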

2. Consider the users, not just the servers

Performance tests often focus only on the performance of the servers and clusters that run the software. Don’t forget that people use software, and performance tests should measure the human element as well. For example, benchmarking the performance of aggregated servers may yield satisfactory results, but users on a single problematic server may experience an unsatisfactory result. Tests should take user experience into account, and UI timing should also be logged along with server metrics.
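One way to capture that human element is to record browser-side timings alongside server metrics. The following sketch uses Selenium and the browser’s Navigation Timing API; the URL is a placeholder, and the two metrics shown are just examples of what you might log.

```python
# Minimal sketch of logging client-side (UI) timing alongside server metrics,
# using Selenium and the browser's Navigation Timing API. The URL is hypothetical.
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com/app")  # placeholder application URL

# Timings as experienced by the user, in milliseconds.
nav = driver.execute_script(
    "const t = performance.getEntriesByType('navigation')[0];"
    "return {loadMs: t.loadEventEnd - t.startTime,"
    "        ttfbMs: t.responseStart - t.requestStart};"
)
print(f"page load: {nav['loadMs']:.0f} ms, time to first byte: {nav['ttfbMs']:.0f} ms")
driver.quit()
```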

3. Understand the definitions of performance testing

It is critical to have a common definition of the types of performance tests to perform on your applications, such as:

  • Single-user tests. Testing with one active user yields the best possible performance, and the response times can be used as baseline measurements.
  • Load tests. Understand the behavior of the system under an average load, including the expected number of concurrent users executing a specified number of transactions within an average hour. (A minimal load-test sketch follows this list.)
  • Peak load tests. Understand system behavior under the heaviest expected usage, with higher concurrent user counts and transaction rates.
  • Endurance (soak) tests. Determine the longevity of components and whether the system can withstand average-to-peak load over a predetermined period. Monitor memory usage to detect potential leaks.
  • Stress tests. Understand the upper limits of capacity within a system by intentionally pushing it to the breaking point.
  • High availability tests. Check how the system behaves during failure conditions while under load. There are many operational use cases to cover, such as a graceful failure of network equipment or a rolling server restart.
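
To make these definitions concrete, here is a minimal load-test sketch using Locust. The endpoints, user counts and think times are illustrative assumptions; the same script can be rerun with higher user counts and spawn rates for peak load and stress runs, or with a longer run time for an endurance test.

```python
# Minimal load-test sketch using Locust (https://locust.io).
# The /dashboard and /login endpoints are hypothetical.
# Example run (load test):   locust -f loadtest.py --users 100 --spawn-rate 10 --run-time 30m
# Raise --users/--spawn-rate for peak load and stress runs.
from locust import HttpUser, task, between

class TypicalUser(HttpUser):
    wait_time = between(1, 5)  # think time between transactions

    @task(3)
    def view_dashboard(self):
        self.client.get("/dashboard")

    @task(1)
    def log_in(self):
        self.client.post("/login", json={"user": "demo", "password": "demo"})
```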

5. Build a complete performance model

Measuring the performance of your application includes understanding the capacity of your system. This includes charting the steady state in terms of concurrent users, concurrent requests, average user session lengths, and server utilization during peak periods of the day. In addition, you should define performance goals, such as maximum response times, system scalability, user satisfaction scores, acceptable performance measures, and the maximum capacity for all of these metrics.
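One way to keep such a model actionable is to record it as data that your test suite and reports can consume. The sketch below is a hypothetical example of that idea; every name and number in it is a placeholder, not a recommendation.

```python
# Minimal sketch of a performance model captured as data the test suite can read.
# All figures are hypothetical placeholders.
performance_model = {
    "steady_state": {
        "concurrent_users": 500,
        "requests_per_second": 1200,
        "avg_session_minutes": 12,
        "peak_hour_cpu_percent": 65,
    },
    "goals": {
        "max_response_time_ms": 800,
        "p90_response_time_ms": 500,
        "error_rate_percent": 0.1,
        "max_supported_users": 2000,
    },
}
```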

6. Determine baselines for important system functions

In most cases, the performance of quality assurance systems does not match that of production systems. Having baseline performance measurements for each system gives you reasonable goals for each test environment. These baselines provide an important starting point for response time goals, especially if there are no previous metrics, without having to guess at them or base them on the performance of other applications.
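As an illustration, a baseline can be recorded per environment and each test run compared against it with an agreed tolerance. The metric names and the 10% tolerance in the sketch below are assumptions for the example.

```python
# Minimal sketch of flagging regressions against a per-environment baseline.
# Metric names and the 10% tolerance are illustrative assumptions.
TOLERANCE = 0.10  # allow 10% drift before flagging a regression

# Baseline captured once per environment (QA numbers will differ from production).
qa_baseline_ms = {"login_p90": 300, "checkout_p90": 450, "search_avg": 120}

def find_regressions(current_ms: dict, baseline_ms: dict) -> list[str]:
    regressions = []
    for metric, measured in current_ms.items():
        allowed = baseline_ms.get(metric, float("inf")) * (1 + TOLERANCE)
        if measured > allowed:
            regressions.append(f"{metric}: {measured:.0f} ms exceeds allowed {allowed:.0f} ms")
    return regressions

print(find_regressions({"login_p90": 290, "checkout_p90": 520, "search_avg": 118}, qa_baseline_ms))
# -> ['checkout_p90: 520 ms exceeds allowed 495 ms']
```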

7. Conduct individual and system-wide performance tests

Modern applications involve many individual and interconnected systems, including databases, application servers, web services, legacy systems and so on. The performance of all these systems must be tested both individually and collectively. This helps expose weaknesses, highlight interdependencies and identify which systems need to be isolated for further performance tuning.

8. Measure averages, but include outliers

When testing performance, you need to know the average response time, but this measurement can be misleading on its own. Be sure to include other metrics, such as the 90th percentile or standard deviation, to get a better view of system performance.

KPIs can be measured by looking at the mean and standard deviation. For example, set a performance goal at the mean response time plus one standard deviation. In many systems, using this improved metric in the pass/fail criteria of the test matches the actual user experience more accurately. Systems showing a high standard deviation can then be tuned to reduce response time variability and improve the overall user experience.
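As a quick illustration, the sketch below computes the mean, standard deviation and 90th percentile from a set of sample response times and applies a mean-plus-one-standard-deviation pass/fail check; the sample values and the 500ms goal are made up for the example.

```python
# Minimal sketch of judging a run on more than the average: 90th percentile
# and a mean-plus-one-standard-deviation check. Sample data and the 500 ms
# goal are illustrative assumptions.
import statistics

response_times_ms = [180, 210, 195, 220, 480, 205, 190, 900, 215, 200]

mean = statistics.mean(response_times_ms)
stdev = statistics.stdev(response_times_ms)
p90 = statistics.quantiles(response_times_ms, n=10)[-1]  # 90th percentile

threshold_ms = 500                        # agreed performance goal
passes = (mean + stdev) <= threshold_ms   # pass/fail on mean + 1 standard deviation

print(f"mean={mean:.0f} ms  stdev={stdev:.0f} ms  p90={p90:.0f} ms  pass={passes}")
```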

9. Report and analyze results frequently

The design and implementation of performance testing is important, but test reports are, too. Reports communicate the results of your app’s behavior to everyone in your organization, especially project owners and developers. Consistently analyzing and reporting results helps identify future updates and fixes. Remember to be mindful of your audience and tailor reports accordingly. Developer reports should differ from those sent to project owners, managers, corporate executives and even clients, if applicable.

10. Triage performance problems

Presenting performance test results is fine, but the results alone, especially when they show a failure, are not enough. The next step should be to triage code/application and system performance issues, involving all parties: developers, testers and the relevant operations personnel. Application monitoring tools can provide clarity on how effective that triage is.

Additionally, avoid throwing your software ‘over the wall’ to a separate testing organization, and make sure that QA environments match production as closely as possible. As with any profession, your efforts are only as good as the tools you use. Ensure that a mix of manual and automated testing is applied across all systems.
