JAXenter: What is the ideal testing strategy for companies? How do they sometimes fall short?
Mark Price: Ideally, a company should have different levels of testing. Starting with unit testing and moving up to integration and acceptance testing, each level of testing is broader in scope than the previous one. If system performance is a business goal, then it makes sense to test it alongside a product’s functional features.
All these tests should be part of a continuous integration strategy, so that failures and regressions can be caught quickly. One of the most important things is making the state of tests visible, so that trends can be identified and everyone is aware when something is broken.
JAXenter: What is a performance test harness? Why do companies need to have this as a part of their testing strategy?
Mark Price: A performance test harness is simply another type of test, one that can be used to model user behaviour. It is typically focused on measuring how well the system performs under varying levels of load. Reporting of performance tests may differ from that of functional tests; for instance, it may be difficult to define what constitutes a test failure.
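As a toy illustration (not from the interview), the core of such a harness is a loop that models user behaviour, weighting request types to mirror real usage and pacing requests toward a target rate. The request mix and the `send_request` stub below are hypothetical:

```python
import random
import time

# Hypothetical request mix: weights approximating the distribution of
# request types seen in production.
REQUEST_MIX = {"login": 0.1, "quote": 0.6, "order": 0.3}

def send_request(request_type):
    """Stand-in for the real system call; returns latency in seconds.
    A real harness would issue the request over the wire here."""
    start = time.perf_counter()
    return time.perf_counter() - start

def run_load(requests_per_second, duration_seconds):
    """Issue requests at a fixed target rate and collect their latencies."""
    types, weights = zip(*REQUEST_MIX.items())
    interval = 1.0 / requests_per_second
    latencies = []
    for _ in range(int(requests_per_second * duration_seconds)):
        request_type = random.choices(types, weights)[0]
        latencies.append(send_request(request_type))
        time.sleep(interval)  # naive pacing; real harnesses correct for drift
    return latencies
```

Varying `requests_per_second` between runs is what lets the harness measure behaviour "under varying levels of load".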
No business wants their product to fail when it suddenly becomes popular, so as part of capacity planning it is important to know how much load the system can handle. Having a test harness in place allows us to inform the business of how popular the product can become before there are going to be scaling issues. It also provides a platform for analysing and improving system performance in a safe environment, where experiments can be performed without worrying about affecting production systems.
JAXenter: How can we determine whether the tests accurately reflect what is actually happening?
Mark Price: This is probably the starting point for building a test harness. First we need to understand how systems are being used in production, so some form of traffic analysis is required (e.g. looking at requests per second, the distribution of request types, etc.). While the test harness is being developed and run, we can analyse the traffic coming from the harness to make sure that it has the same “shape” as that which is seen in production. It’s also possible to automate this as an extra validation step to ensure that production traffic loads are not diverging from the test model.
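One simple way to sketch that shape comparison (the function names and the 5% tolerance here are illustrative, not from the interview) is to compare the proportion of each request type in a production traffic sample against a harness sample:

```python
from collections import Counter

def traffic_shape(requests):
    """Proportion of each request type in a traffic sample."""
    counts = Counter(requests)
    total = sum(counts.values())
    return {rtype: count / total for rtype, count in counts.items()}

def shapes_match(production, harness, tolerance=0.05):
    """True if, for every request type, the harness proportion is
    within `tolerance` of the production proportion."""
    prod_shape = traffic_shape(production)
    harness_shape = traffic_shape(harness)
    all_types = set(prod_shape) | set(harness_shape)
    return all(
        abs(prod_shape.get(t, 0.0) - harness_shape.get(t, 0.0)) <= tolerance
        for t in all_types
    )
```

Run as an automated validation step, the same check flags production traffic drifting away from the test model.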
JAXenter: What’s the biggest challenge in testing? How do you mitigate it?
Mark Price: Performance regressions are one of the hardest things to track down. Due to the duration of most performance tests (they may run for several minutes, or even hours), there is usually a lot of change incorporated into each run. When a regression occurs, it is very useful to use something like ‘git bisect’ to track down the change that caused it. Sometimes, a regression is caused by a configuration change at the system level, so it is also important to have a record of any changes made external to the actual application (e.g. OS updates).
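git bisect can automate this hunt when the performance check is scripted: `git bisect run` treats exit code 0 as "good", exit codes 1–127 (except 125) as "bad", and 125 as "skip this commit". A minimal sketch of the decision logic, with an illustrative baseline throughput and tolerance:

```python
# Illustrative numbers: a baseline throughput recorded before the
# regression appeared, and how far below it we tolerate before
# flagging a commit as bad.
BASELINE_THROUGHPUT = 50_000  # requests per second
TOLERANCE = 0.95              # bad if below 95% of baseline

def bisect_exit_code(measured_throughput, baseline=BASELINE_THROUGHPUT,
                     tolerance=TOLERANCE):
    """Map a measured throughput to the exit code `git bisect run`
    expects: 0 marks the commit good, 1 marks it bad."""
    return 0 if measured_throughput >= baseline * tolerance else 1

# A real bisect script would build the commit, run the performance
# test, parse its throughput, then call:
#   sys.exit(bisect_exit_code(measured_throughput))
# and be driven with:
#   git bisect start <bad-commit> <good-commit>
#   git bisect run ./perf_bisect.py
```

Because each performance run is long, an automated bisect like this is usually left to run unattended rather than stepped through by hand.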
JAXenter: What can attendees expect from your workshop?
Mark Price: We will cover more detail about the whys and whats of performance testing, and look at techniques to ensure that we are accurately measuring system performance. Really trusting your tools (in this case a test harness) involves understanding how they work at a very low level. The workshop covers the use of profilers and other monitoring tools, along with low-latency coding techniques that will result in a test harness that can measure system performance down to the microsecond.
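As a rough sketch of what microsecond-resolution measurement looks like (plain Python, not the workshop's material; real low-latency harnesses avoid allocation in the measurement loop and typically record into something like an HdrHistogram), a harness can sample a nanosecond clock around each operation and report percentile latencies:

```python
import time

def percentile(sorted_values, p):
    """Nearest-rank percentile of a pre-sorted list."""
    index = min(len(sorted_values) - 1, int(len(sorted_values) * p / 100))
    return sorted_values[index]

def measure(operation, iterations=10_000):
    """Time an operation with a nanosecond-resolution clock and report
    percentile latencies in microseconds."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter_ns()
        operation()
        samples.append(time.perf_counter_ns() - start)
    samples.sort()
    return {p: percentile(samples, p) / 1_000 for p in (50, 99, 99.9)}
```

Reporting the high percentiles (99th, 99.9th) rather than just an average is what exposes the latency outliers that averages hide.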