When a tech story comes in from Albuquerque, New Mexico, it stands out for location alone, given that we do not generally identify the city (or state) as a tech hub. LoadStorm from CustomerCentrix also stands out as an interesting cloud-based web performance testing tool.
It can simulate hundreds of thousands of website visits and visitor actions from many geographic areas. Detailed reports then help web developers remedy web performance issues.
The LoadStorm 2.0 tool is available as self-service or bundled with performance engineering consulting from its makers. These extended services include designing tailored test scripts, executing tests, analyzing results, and/or providing optimization advice. The consulting services are designed to help customers avoid common load-testing mistakes.
The product includes a capability to record a test script without coding, which can then be modified within the LoadStorm application. Scripts allow testers to parameterize virtual user (VUser) actions from generated data or uploaded files. Actions include logins, shopping cart processes, searches, downloads, auction bidding, and registration. Testers also have access to real-time analytics provided in graphs and reports.
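To illustrate the idea of parameterizing VUser actions from uploaded data, here is a conceptual sketch in Python. It is not LoadStorm's actual script format (which is recorded and edited inside the application); the CSV data, field names, and helper functions are all hypothetical, standing in for an uploaded test-data file feeding a login action.

```python
import csv
import io

# Hypothetical uploaded test data: one row of parameters per virtual user.
CSV_DATA = """username,password
alice,pw1
bob,pw2
"""

def load_vuser_data(text):
    """Parse uploaded test data into per-VUser parameter dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def build_login_step(params):
    """Build one parameterized login action for a virtual user."""
    return {
        "action": "login",
        "method": "POST",
        "path": "/login",
        "body": {"user": params["username"], "pass": params["password"]},
    }

rows = load_vuser_data(CSV_DATA)
steps = [build_login_step(r) for r in rows]
print(steps[0]["body"]["user"])  # first VUser logs in as "alice"
```

The point is that each simulated visitor gets distinct data, so a thousand VUsers do not all log in with the same account or search for the same term.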
Users can drill down into the performance metrics for specifics on a variety of elements, including performance details on servers, scripts, resource types, VUsers, or web pages. With this performance intelligence, testers can pinpoint scalability bottlenecks and potential problem areas in any web application.
Calculated performance metrics include response times, average and peak throughput, requests per second, error rate, data transferred, and so on. The testing tool also measures page completion times throughout the test. Simulated traffic behaves like a real Chrome, Firefox, Internet Explorer, or Opera browser, so VUsers accurately stress the servers with all types of behavior, such as AJAX-driven requests, form posts, parallel connections, and JSON payload transfers.
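The metrics above are straightforward aggregates over the raw request log. A minimal sketch, assuming hypothetical request records of (response time in ms, HTTP status, bytes transferred) over a known test window:

```python
# Hypothetical raw request records: (response_time_ms, status_code, bytes)
requests = [
    (120, 200, 5000),
    (340, 200, 7500),
    (95, 500, 300),
    (210, 200, 6200),
]
duration_s = 2.0  # length of the measurement window (assumed)

n = len(requests)
avg_response_ms = sum(r[0] for r in requests) / n
error_rate = sum(1 for r in requests if r[1] >= 400) / n
requests_per_sec = n / duration_s
data_transferred = sum(r[2] for r in requests)
avg_throughput_bps = data_transferred / duration_s

print(requests_per_sec)  # 2.0 requests/second
print(error_rate)        # 0.25 (one failed request out of four)
```

In a real tool these figures are computed continuously and plotted in real time, but the underlying arithmetic is the same.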
Test scripts can be weighted during a test run in order to generate the correct percentage of VUsers signing in, searching, and so on. Tests can also be run for long durations to expose problems such as memory leaks; for example, a test may run for eight hours with scheduled volume ramping patterns.
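Script weighting amounts to sampling each new VUser's behavior from a weighted distribution. A rough sketch, with entirely hypothetical script names and weights:

```python
import random

# Hypothetical traffic mix: 60% browsing, 30% searching, 10% checking out.
scripts = ["browse", "search", "checkout"]
weights = [60, 30, 10]

random.seed(42)  # fixed seed so the run is reproducible
assignments = random.choices(scripts, weights=weights, k=10_000)

search_share = assignments.count("search") / len(assignments)
print(search_share)  # close to 0.30
```

Over a long ramped test, the weights keep the traffic mix realistic even as the total number of VUsers climbs on schedule.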