We have seen an overview of each of the necessary tools separately.
A Bash script orchestrates all of these tools together and generates a report.
This report is then uploaded to our intranet so that everyone can consult and comment on the results.
In our case we use a Confluence-specific command line tool (the Confluence CLI) to upload the results, but it could also be done with a standard command such as wget (to push via HTTP) or ftp.
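As a hedged sketch of the wget alternative mentioned above (the URL, file name, and missing authentication are placeholders, not the article's actual setup):

```shell
# Illustrative only: push the generated report to an intranet endpoint via HTTP.
REPORT=report.html
INTRANET_URL=http://intranet.example.com/reports/upload

printf 'dummy report' > "$REPORT"                 # stand-in for the real report
CMD="wget --post-file=$REPORT $INTRANET_URL"      # HTTP POST of the report body
echo "$CMD"                                       # echoed here instead of executed
```

A real setup would also need credentials (for example wget's `--user`/`--password` options) and the correct upload endpoint.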
The full script execution looks like this:
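As a rough outline of such an orchestration script (the article's real script is not reproduced here, so the step names and function bodies below are placeholders; only the JMeter invocation style comes from the text):

```shell
#!/bin/bash
# Hypothetical skeleton: run the load test, build a report, publish it.

run_tests() {
    # Non-GUI JMeter run producing a CSV result file (command echoed, not run)
    echo "jmeter.bat -n -t BlogExample.jmx -Jresult_file=res.csv"
}

generate_report() {
    # Aggregate res.csv into a human-readable report (placeholder step)
    echo "building report.html from res.csv"
}

upload_report() {
    # Confluence CLI in our case; wget or ftp would also work
    echo "pushing report.html to the intranet"
}

run_tests
generate_report
upload_report
```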
The main Bash script can execute the same scenario multiple times with different variable configurations.
For example, we can vary the number of threads and graph the response time under increasing load:
for thread_number in 10 20 30 40
do
    jmeter.bat -n -t BlogExample.jmx -Jresult_file=res.csv -Jthread_number=$thread_number
    # compute the average sample response time and store it in a tmp file
done
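The "compute the average" step inside the loop could be sketched with awk as below, assuming res.csv is a JMeter CSV whose second column (`elapsed`) holds the response time in milliseconds with a header on the first line (the sample data here is invented for illustration):

```shell
# Tiny stand-in for a real JMeter result file (two samples of 100 and 300 ms):
printf 'timeStamp,elapsed,label\n1,100,home\n2,300,home\n' > res.csv

# Average the elapsed column, skipping the header, and append it to a tmp file
avg=$(awk -F',' 'NR > 1 { sum += $2; n++ } END { if (n) printf "%.0f", sum / n }' res.csv)
echo "$avg" >> averages.tmp
```

With the sample data above, `$avg` would be 200.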
With all the collected average response times we can draw a graph showing how the response time evolves as the load increases:
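The graph itself can be produced with a free plotting tool such as gnuplot; the sketch below only generates a gnuplot script (file names, labels, and the stand-in data are invented, not the article's actual values):

```shell
# Stand-in data: one "thread_number average_ms" pair per line
printf '10 210\n20 215\n30 220\n40 480\n' > averages.tmp

# Emit a gnuplot script that plots response time against thread count
cat > plot_load.gp <<'EOF'
set terminal png size 800,400
set output 'response_time.png'
set xlabel 'Number of threads'
set ylabel 'Average response time (ms)'
plot 'averages.tmp' using 1:2 with linespoints title 'avg response time'
EOF

# gnuplot plot_load.gp   # run this when gnuplot is installed
```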
Here the response time stays constant up to 100 users, and then the server runs into an OutOfMemoryError (OOM). The JVM activity graph helps to understand why:
The green dots represent full garbage collections over time, and the throughput value is the ratio of time spent working versus garbage collecting. In this case, the JVM was actually working only 28% of the time; the remaining 72% was spent in garbage collection.
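One way to capture the data behind such a JVM activity graph (not necessarily the one used here) is to enable garbage collection logging on the JVM under test; the flags below are the classic HotSpot options available up to Java 8, and the jar name is a placeholder:

```shell
# Illustrative only: Java 8-era GC logging flags; adapt to your own launch command.
GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log"
echo "java $GC_OPTS -jar app.jar"   # echoed here instead of launched
```

The resulting gc.log can then be fed to a GC log viewer to visualize full collections and the working/collecting time ratio.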
An automated load testing environment can be built with only a few open source tools.
Scripting every step ensures the tests are always executed under the same conditions and leaves you more time for analyzing the results rather than collecting them.
Clear key graphs also help to quickly detect problems and to compare results in non-regression tests.
We spent some time putting this benchmarking environment in place, but now we can really focus on improving application performance or trying new configurations (JVM heap, Oracle config, a new framework…).