A leading global law firm was implementing Aderant Expert Billing, a leading Practice Management System (PMS), and needed to ensure its global infrastructure could support the expected load once employees began using the application, both via a Citrix desktop and directly, from the firm's data centre in the Cayman Islands. The performance test focused on three main areas, selected as the high-volume, high-priority activities most likely to generate load on the initial release of the application.
- Expert WIP Aware (web application: matters, time entries, create pre-bill)
- Expert / Paperless Billing (desktop application: bill edits, mark bill as complete)
- Expert Assistant (desktop application: billing approvals workflow)
During the Test Planning phase, the number of records for each activity under test was assessed and agreed with the client, so that the simulation would be as realistic as possible. Peak volumes were used to drive these numbers, so that month-end activity was represented. Data could then be carefully prepared for each user journey and linked together so that the logical flow of matters, time and bills was preserved.
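To illustrate the kind of linked data preparation this involves, the sketch below generates matters, time entries and pre-bills whose identifiers chain together so that each scripted user journey finds consistent records. All names, field layouts and volumes here are hypothetical, not the client's actual Aderant data model.

```python
import random

def build_linked_test_data(n_matters=50, entries_per_matter=10):
    """Generate matters, time entries and bills whose IDs reference each
    other, so each journey (WIP review -> pre-bill -> approval) has valid,
    logically consistent data. Purely illustrative structures."""
    matters, time_entries, bills = [], [], []
    for m in range(1, n_matters + 1):
        matter_id = f"M{m:05d}"
        matters.append({"matter_id": matter_id})
        entry_ids = []
        for t in range(entries_per_matter):
            entry_id = f"T{m:05d}-{t:03d}"
            entry_ids.append(entry_id)
            time_entries.append({
                "entry_id": entry_id,
                "matter_id": matter_id,           # link back to the matter
                "hours": round(random.uniform(0.25, 8.0), 2),
            })
        # One draft pre-bill per matter, referencing all of its time entries
        bills.append({
            "bill_id": f"B{m:05d}",
            "matter_id": matter_id,
            "entry_ids": entry_ids,
            "status": "draft",
        })
    return matters, time_entries, bills
```

In practice such generated records would be loaded into the test environment ahead of execution, so that each virtual user can pick up a unique, unconsumed set of matters and bills.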
As well as testing the application directly, there was also a need to generate load via Citrix, to ensure the capacity of the Citrix server farm was sufficient to service the required load. As the tests needed to be executed via web, desktop and Citrix, an enterprise-grade performance tool was required, capable of recording and executing tests across different protocols. After a short PoC of the tool against Aderant in the customer's environment, OpenText LoadRunner was selected for this purpose. An additional advantage of LoadRunner was its licensing flexibility, which allowed the rental of the multi-protocol licences needed for the tests. The necessary hardware, including the Controller machine and Load Generators, was then set up within the customer's network to allow full access to the applications under test. It is vital that right-sized hardware is specified up front, so that sufficient resources are available to run the tests – the Citrix protocol in particular demands significant CPU and RAM.
Following agreement of the Test Plan and installation of our tooling, scripting was carried out using both web and Citrix protocols. Due to the nature of the different protocols, it was necessary to repeat the scripting for web and Citrix separately. Our team provided a list of test data requirements to support the scripts, which was delivered by the client's technical team in parallel with the scripting effort. During initial scripting there were some issues recording Citrix scripts, the root cause of which was that ActiveX controls were disabled in the customer's standard Windows build. Once these were enabled, the Citrix ICA protocol recorded network traffic successfully.
As the test environment was situated within a production cluster, the decision was taken to execute the tests over a weekend, to minimise the risk of any failure affecting other systems. It was agreed to test up to 400 concurrent users, generating up to 4,500 bills in the approvals workflow per test. The tests were designed against the following profiles:
- Normal Load: normal level of activity, transactions, and concurrent users, over 1 hour
- Peak Load: peak levels of activity, transactions, and concurrent users, over 1 hour
- Soak: normal to peak levels of activity, over 3 hours, to identify memory leaks or system degradation, after extended use of the application
- Stress: identifying the breaking point of the system by increasing the load until the system becomes unresponsive or falls outside of acceptable boundaries
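The agreed volumes translate directly into per-user pacing: 4,500 bills across 400 users in an hour is 11.25 bills per user, or roughly one bill every 320 seconds. A small sketch of that arithmetic, plus a stepped ramp schedule of the kind used in a stress profile (illustrative only; the actual tests used LoadRunner's own scheduler and pacing settings):

```python
def pacing_seconds(total_transactions, users, duration_hours):
    """Seconds each virtual user waits between iterations so the group
    collectively hits the target transaction count in the test window."""
    iterations_per_user = total_transactions / users
    return duration_hours * 3600 / iterations_per_user

def stress_ramp(start_users, step, interval_s, max_users):
    """Yield (elapsed_seconds, active_users) pairs for a stepped ramp-up,
    as used to find the system's breaking point in a stress test."""
    users, elapsed = start_users, 0
    while users <= max_users:
        yield elapsed, users
        users += step
        elapsed += interval_s

# Peak-load pacing: 4,500 bills, 400 users, over 1 hour
print(pacing_seconds(4500, 400, 1))  # 320.0 seconds between bills per user
```

The same pacing figure is what would be entered into the tool's runtime settings, so that adding users scales throughput predictably rather than flooding the system.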
During the initial non-Citrix tests, our team quickly identified an issue with the Aderant application servers, which caused a bottleneck and would not support the anticipated level of load. Before running any more tests, the six servers were upgraded from 8 to 12 cores and from 16 GB to 32 GB of RAM. This increase in resources proved sufficient to support the tests, as well as the production instances once our tests were completed.
We also identified a bottleneck at the SQL Server supporting the application: it had insufficient processing power and was maxed out at 100% CPU during the subsequent tests. Our team recommended increasing the core count from 16 to 24, which removed this bottleneck and allowed us to proceed.
Once these infrastructure limitations were resolved, it was proven that the application could support the intended number of users and transactions, and we could proceed to the Citrix tests. These demonstrated that the Citrix infrastructure could comfortably accommodate a maximum of 200 users, given a slow ramp-up. With a more rapid ramp-up, an increasing number of users were unable to log in; at worst, only half of the users attempting to log in were able to access the application.
It was concluded that the planned Citrix infrastructure would be unable to cope with a load greater than 200 users, and it was recommended that the number of Citrix servers be increased from 16 to 24 to support the anticipated level of usage.
Conclusion
Testing was effective at identifying bottlenecks in the hardware infrastructure which would have prevented the anticipated number of users from accessing the application, both directly and via Citrix. Resources were increased significantly on the application and database servers, which re-testing proved to be an effective solution for adoption in production. On the Citrix side, further capacity constraints were identified, which again required a substantial increase in the number of Citrix servers to allow the anticipated number of users to access Aderant.
The combination of direct application and Citrix performance testing was vital to isolate bottlenecks both in the application hardware itself and in the Citrix farm providing global access to the PMS. Having completed the tests, our client was able to make changes to their environment and go live with confidence that performance had been proven, allowing their users to be productive from day one, with no downtime or system problems.