Performance Testing
Performance testing is fundamental for organisations of all sizes: without effective performance testing, costs can spiral.
In 2009, Amazon found that every 100ms of latency cost them 1% of sales, and for a company of Amazon's scale, that translates to millions of pounds in lost revenue. Google similarly found that an extra 0.5 seconds in search page generation time dropped traffic by 20%, which ultimately cost them revenue in the form of advertising reach and analytics. Time literally is money.
What is performance testing?
Customer expectations grow with every technological release, and with the recent arrival of 5G, the bandwidth bottleneck is moving more and more out of the network and into the hands of the business.
To offer usable systems, it is essential to understand the systems we provide and what they are capable of, and that is what performance testing is for.
Purpose of performance testing
To implement performance testing effectively, we need to understand the purpose of the testing, which boils down to the following:
- Build confidence and ensure usability.
- Determine response times and user experiences.
- Discover load limitations.
- Detect memory leaks and/or bottlenecks.
- Understand and improve the experience.
Our approach to performance testing
Performance testing shares similarities with automated testing, in that you are automating inputs into a system under test and monitoring the outputs. The difference lies in which outputs we monitor.
Automated testing creates a simulated instance of a web browser or application and sends inputs to simulate a realistic scenario. With performance testing, we only send requests to web servers or REST endpoints and monitor how long it takes for the first byte, and then the full response, to come back.
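As an illustration, below is a minimal sketch of that request-and-measure loop using Gatling's Scala DSL (one example of the Scala-based tooling mentioned later, not necessarily the right tool for every system); the base URL and endpoint are hypothetical placeholders.

```scala
import scala.concurrent.duration._

import io.gatling.core.Predef._
import io.gatling.http.Predef._

// Minimal sketch: send HTTP requests at a hypothetical REST endpoint and
// let Gatling record how long each response takes to come back.
class BasicLoadSimulation extends Simulation {

  val httpProtocol = http.baseUrl("https://example.com") // hypothetical system under test

  val scn = scenario("Browse products")
    .exec(
      http("List products")
        .get("/api/products") // hypothetical endpoint
        .check(status.is(200))
    )
    .pause(1) // one second of simulated think time

  setUp(
    scn.inject(atOnceUsers(10)) // a small smoke-test load
  ).protocols(httpProtocol)
}
```

Running this produces a report of response-time distributions per request, which is the raw material for the metrics discussed below.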
To identify the best solution for a given system, we look at these key components:
- Tech Stack (Is this an API endpoint, a web server or a desktop app?)
  - Some tools are better suited than others to particular systems.
- Testing Requirements (How complex are the user flows?)
  - The more complex the flows, the more planning time is needed to optimise how the tests are built.
- Budget (What are the costs of each tool and the associated work?)
  - Simulating particularly large loads of virtual users requires more expensive architecture.
- Skill (What is the skill level in the organisation for the available tools?)
  - Some tools drive tests with code in languages such as Scala, and these skills may already exist in the business; having those team members adopt performance testing can increase productivity in building upon it.
- User Metrics (What does a normal load look like, and what is the maximum load?)
  - We need to understand what we are trying to simulate.
- Business Metrics (What does ‘good’ look like, and what are the acceptance criteria for a usable system?)
  - Every business has a different acceptable response time depending on its services; the sketch after this list shows how these last two components can feed a load model.
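To make the last two components concrete, here is a hedged sketch of how assumed user metrics (a normal load of 50 users per second, peaking at 150) and an assumed business metric (95th-percentile response time under one second) might translate into a Gatling load model. Every figure is an illustrative assumption, not a recommendation.

```scala
import scala.concurrent.duration._

import io.gatling.core.Predef._
import io.gatling.http.Predef._

// Sketch: turning user metrics (normal and peak load) and a business
// metric (acceptable response time) into an injection profile and
// pass/fail assertions. All figures are assumed for illustration.
class LoadModelSimulation extends Simulation {

  val httpProtocol = http.baseUrl("https://example.com") // hypothetical system under test

  val scn = scenario("Typical user journey")
    .exec(http("Home page").get("/").check(status.is(200)))

  setUp(
    scn.inject(
      rampUsersPerSec(1).to(50).during(5.minutes),   // ramp up to assumed normal load
      constantUsersPerSec(50).during(20.minutes),    // hold at normal load
      rampUsersPerSec(50).to(150).during(5.minutes)  // push towards assumed peak
    )
  ).protocols(httpProtocol)
    .assertions(
      global.responseTime.percentile3.lt(1000), // 95th percentile (Gatling's default for percentile3) under 1s
      global.failedRequests.percent.lt(1)       // fewer than 1% failed requests
    )
}
```

The assertions turn the business metric into an automatic pass or fail, so the test can gate a delivery pipeline rather than rely on someone reading the report.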