The University of Kent needed to validate the performance and scalability of its Attendance Registration System for students, together with its integration with the university's Student Management System, Tribal Group's SITS. The selected solution was Simac Presto – performance testing was needed to ensure the application could handle the large expected volumes of users simultaneously accessing the APIs without impacting system responsiveness. In addition to testing Presto directly, the integration and data transfer between Presto and the Student Records System (Tribal SITS) also needed to be tested.
Background to API Performance Testing
API performance testing assesses the responsiveness, scalability, reliability, and speed of an API under load. It helps ensure that APIs can handle expected and unexpected user loads, as well as meet performance standards and response-time targets.
Key aspects of API performance testing include the following (a short sketch of how these metrics can be derived from raw test results follows the list):
- Latency: Time taken for a request to travel from the client to the server and back
- Throughput: The number of transactions processed within a given time
- Scalability: How well the API handles increasing user loads
- Response Time: The time taken by the API to respond to a request
- Error Rate: Percentage of failed requests during testing
- CPU and Memory Usage: Resource consumption of the system under test while the load is applied
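To illustrate how several of these metrics relate to one another, the minimal sketch below computes latency percentiles, throughput and error rate from a list of recorded requests. It is an explanatory example only; the data structure and figures are hypothetical and are not taken from the University of Kent tests.

```python
from dataclasses import dataclass
from statistics import quantiles

@dataclass
class RequestResult:
    elapsed_ms: float   # round-trip latency in milliseconds
    ok: bool            # True if the request completed successfully

def summarise(results: list[RequestResult], window_seconds: float) -> dict:
    """Derive headline performance metrics from raw request results."""
    latencies = sorted(r.elapsed_ms for r in results)
    failed = sum(1 for r in results if not r.ok)
    cuts = quantiles(latencies, n=100)          # percentile cut points
    return {
        "requests": len(results),
        "throughput_per_sec": len(results) / window_seconds,
        "latency_p50_ms": cuts[49],
        "latency_p95_ms": cuts[94],
        "error_rate_pct": 100.0 * failed / len(results),
    }

# Three hypothetical samples captured over a 2-second window
sample = [
    RequestResult(120.0, True),
    RequestResult(340.0, True),
    RequestResult(5000.0, False),   # e.g. a timed-out request
]
print(summarise(sample, window_seconds=2.0))
```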
API-based solutions often face performance challenges, such as connection timeouts with large payloads, increased response times and error rates under high loads, and uncertainty around optimal configuration settings. Addressing these issues is crucial to ensuring system reliability and scalability.
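One common mitigation when testing against such challenges is to set explicit connection and read timeouts on the client, so that slow responses to large payloads are recorded as errors rather than leaving the test hanging. The sketch below shows this pattern with the Python requests library; the endpoint URL and payload are placeholders, not the actual Presto or StuTalk interfaces.

```python
import requests

# Hypothetical endpoint and payload, for illustration only.
ENDPOINT = "https://api.example.ac.uk/attendance/events"
payload = {"student_id": "12345", "event": "check-in"}

try:
    # (connect timeout, read timeout) in seconds: fail fast instead of hanging
    response = requests.post(ENDPOINT, json=payload, timeout=(5, 30))
    response.raise_for_status()
    print("OK", response.status_code, response.elapsed.total_seconds())
except requests.exceptions.Timeout:
    print("Request timed out - record as an error in the test results")
except requests.exceptions.RequestException as exc:
    print("Request failed:", exc)
```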
Solution
To address the performance challenges, a comprehensive load testing approach was recommended to the university. The first stage was to work with the technical teams from UoK and Presto to gain a detailed understanding of the scope of testing, including the details of each of the APIs – endpoints, data types, formats and frequency – for both Presto (POST) and StuTalk (GET and POST). StuTalk is a proprietary integration framework designed by the vendor, Tribal Group, exclusively for use with SITS. Once the architecture of the in-scope items was understood, the detailed scope was documented in a test plan, covering volumes, data and dependencies, and the test profiles. Our technical team walked all parties through the plan to gather feedback, make changes and gain consensus before embarking on detailed test preparation. JMeter was then used to re-create user actions on the APIs so that they could be played back at volume using the test data provided.
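The load scripts themselves were built in JMeter, but the pattern they implement can be sketched in a few lines of Python: read a row of test data, build the request a real user or device would send, and fire it at the API. Everything below – the URLs, field names and CSV layout – is illustrative, not the real Presto or StuTalk definitions.

```python
import csv
import requests

# Illustrative endpoints only - not the real Presto / StuTalk URLs.
PRESTO_POST = "https://presto.example.ac.uk/api/attendance"
STUTALK_GET = "https://sits.example.ac.uk/stutalk/api/students/{student_id}"

def replay_from_csv(path: str) -> None:
    """Replay one scripted 'user action' per row of supplied test data."""
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            # POST an attendance event to Presto (hypothetical payload shape)
            requests.post(PRESTO_POST, json={
                "student_id": row["student_id"],
                "session_id": row["session_id"],
            }, timeout=30)
            # GET the matching student record back via StuTalk
            requests.get(STUTALK_GET.format(student_id=row["student_id"]),
                         timeout=30)

# replay_from_csv("test_data.csv")  # e.g. a CSV with student_id,session_id columns
```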
Once scripting was complete, scenarios were run against the applications, consisting of combinations of APIs triggered at specific volumes to simulate normal and peak expected traffic loads. Testing simulated workloads ranging from 1,000 to 3,000 API requests per hour to assess the performance limits of the APIs and the overall system. As the tests were executed, our tools captured key statistics on system resource utilisation, including CPU and memory usage, to compare against response times and identify potential performance bottlenecks.
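In JMeter this pacing is normally handled by the tool's built-in timers; the sketch below simply shows the underlying idea of spacing requests evenly to hit a target hourly rate while sampling CPU and memory. The psutil sampling is an assumption about one way such figures could be gathered, not a description of the monitoring actually used on this engagement.

```python
import time
import psutil  # third-party package used here to sample CPU / memory

def run_at_hourly_rate(send_request, requests_per_hour: int, duration_s: int) -> None:
    """Fire send_request() at an even pace to approximate a target hourly rate."""
    interval = 3600.0 / requests_per_hour          # seconds between requests
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        started = time.monotonic()
        send_request()
        # Sample resource utilisation alongside each request
        print(f"cpu={psutil.cpu_percent()}% mem={psutil.virtual_memory().percent}%")
        # Sleep off the remainder of the interval to keep the pace steady
        time.sleep(max(0.0, interval - (time.monotonic() - started)))

# Example: simulate 2,000 requests/hour for one minute with a stub request
# run_at_hourly_rate(lambda: None, requests_per_hour=2000, duration_s=60)
```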
Results
The performance tests conducted on the Presto and StuTalk APIs revealed important insights into their capacity and reliability under varying workloads. For the Presto APIs, the system could reliably process up to 2,000 API requests per hour, maintaining stable CPU and memory usage without encountering errors. Beyond this limit, the system experienced significant performance degradation, with connection timeouts and increased CPU load. Based on these results, the recommended configuration for optimal performance is to handle up to 2,000 requests per hour, at a maximum rate of 42 requests per second.
Similarly, the StuTalk API demonstrated reliable performance with up to 1,200 API requests per hour. However, exceeding this limit led to performance issues, including connection timeouts and system failures. It was recommended that, for best results, the system be configured to handle a maximum of 20 requests per second, ensuring efficient and reliable operation.
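One practical way to respect these recommendations on the calling side is a simple client-side rate limiter that caps outgoing requests at the agreed maximum per second (42 req/s for Presto and 20 req/s for StuTalk in the results above). The sketch below is an illustrative pacing limiter, not part of either product's actual configuration.

```python
import threading
import time

class RateLimiter:
    """Cap outgoing calls at a maximum number of requests per second."""

    def __init__(self, max_per_second: float):
        self._interval = 1.0 / max_per_second
        self._lock = threading.Lock()
        self._next_allowed = time.monotonic()

    def wait(self) -> None:
        """Block until the next request is allowed to be sent."""
        with self._lock:
            now = time.monotonic()
            wait_for = max(0.0, self._next_allowed - now)
            self._next_allowed = max(now, self._next_allowed) + self._interval
        if wait_for:
            time.sleep(wait_for)

# Caps taken from the recommendations above; the limiter itself is illustrative.
presto_limiter = RateLimiter(max_per_second=42)
stutalk_limiter = RateLimiter(max_per_second=20)

# presto_limiter.wait()   # call before each Presto request
# stutalk_limiter.wait()  # call before each StuTalk request
```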
These tests highlight the key benefits of performance testing: they allow precise configuration to optimise system reliability, prevent bottlenecks, and ensure smooth API operation even under high loads. Once the scripts and test harnesses have been developed, tests can be re-run quickly, a common practice that allows technical teams to make changes to code or infrastructure and then repeat the tests. Performance tuning in this way not only increases confidence, but also allows software applications to be optimised for maximum efficiency and throughput in production.
Once projects are completed, our team provide all developed test assets, so our clients can re-run tests on demand.