Our approach to performance testing
Performance testing shares similarities with automated testing, in that you are automating inputs into a system under test and monitoring the outputs. The difference lies in which outputs we monitor.
Automated testing creates a simulated instance of a web browser or application and sends inputs to simulate a realistic scenario. With performance testing, we only send requests to web servers or REST endpoints and monitor how long it takes for the first byte, and then the full response, to come back.
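As a minimal sketch of the idea, the snippet below times both the first byte and the full response of a single HTTP request using only the Python standard library. It spins up a throwaway local server purely so the example runs anywhere; in a real engagement the URL would be your own staging environment, and a dedicated tool would fire many such requests.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Throwaway local endpoint so the example runs without external network.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "5")
        self.end_headers()
        self.wfile.write(b"hello")

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

start = time.perf_counter()
with urllib.request.urlopen(url, timeout=10) as resp:
    first = resp.read(1)                  # blocks until the first body byte
    ttfb = time.perf_counter() - start    # approximate time to first byte
    body = first + resp.read()            # drain the remainder of the body
    total = time.perf_counter() - start   # full response time

server.shutdown()
print(f"TTFB: {ttfb * 1000:.2f} ms, full response: {total * 1000:.2f} ms")
```

Note that `urlopen` returns once the response headers have arrived, so the measured TTFB is an approximation; dedicated load-testing tools measure this at the socket level.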
We look at these key components to identify the best testing solution for each client:
- Tech Stack (Is this an API endpoint, a web server or a desktop app?)
  - Some tools are better suited than others to certain systems.
- Testing Requirements (How complex are the user flows?)
  - More complex user flows need more planning time to optimise how the tests are built.
- Budget (What does each tool cost, including the associated work?)
  - Particularly large loads of virtual users require more expensive infrastructure.
- Skill (What is the skill level in the organisation for the available tools?)
  - Some tools are scripted in languages such as Scala, so these skills may already exist in the business; having those team members adopt performance testing can increase productivity in building upon it.
- User Metrics (What does a normal load look like, and what is the maximum load?)
  - We need to understand what we are trying to simulate.
- Business Metrics (What does 'good' look like? What are the acceptance criteria for a usable system?)
  - Every business has a different acceptable response time, depending on its services.
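To show how user metrics and business metrics come together, here is a small sketch of a load check: a hypothetical "normal load" of concurrent virtual users hits an endpoint, and the 95th-percentile latency is compared against a hypothetical acceptance threshold. The virtual-user count and threshold are illustrative assumptions, as is the throwaway local server standing in for the system under test.

```python
import statistics
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, HTTPServer

# Local stand-in for the system under test (a real test would target
# your own staging environment, never production without agreement).
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

VIRTUAL_USERS = 20         # "normal load" -- hypothetical figure
MAX_ACCEPTABLE_MS = 500.0  # business acceptance criterion -- hypothetical

def one_request(_):
    """One virtual user: time a single full request in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000

# Fire all virtual users concurrently and collect their latencies.
with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    latencies = list(pool.map(one_request, range(VIRTUAL_USERS)))

server.shutdown()
p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th-percentile latency
print(f"p95 latency: {p95:.2f} ms, pass: {p95 <= MAX_ACCEPTABLE_MS}")
```

Real tools such as Gatling or k6 add ramp-up profiles, realistic pacing between requests and richer reporting, but the pass/fail shape is the same: measured latency under a defined load, judged against a business-agreed threshold.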
Get in touch with us today to discuss your requirements.
+44 (0)114 399 2820