Here's My Theory...
In theory, the best software testing model is the one that follows the literature - a standard one, I mean. There are hundreds of steps that must be done sequentially, additional people needed to work on them, a complete infrastructure to support them, a huge amount of time to invest, a lot of money to spend, etc, etc. So, based on my own practical experience - with a minimal set of people - the software testing life-cycle can be cut down to 3 sub-sequence types (as shown in the picture below): (1) Performance Test, (2) Load Test, and (3) Stress Test.
Why? Here are the reasons:
1. Performance Test
This is the baseline test. The test runs with a static, maximal number of concurrent users (X) that the system is expected to hit. The various results at the end will drive tuning at several critical points: the application level (block-code or algorithm changes), the database level (SQL code or query optimization) and the server level (OS profiling and other specific configurations). The network level is the last boundary of this tuning. Tens of experiments give the best mix and match of the whole combination, where the system can serve at its optimum. The goal of this test is to eliminate whatever causes bottlenecks and thereby increase performance.
2. Load Test
Load testing is often called volume testing. The test starts with the maximal number of concurrent users (X) found in test #1 and increases it to a relevant number (Y) by increments of some number (Z). The goal is to find the highest load the system can accept while still functioning constantly and properly. At the end of this test, hardware and network resources may need to be increased, as predicted. The scaling-up can be done vertically or horizontally, whichever gives the best value so the money is spent efficiently and effectively.
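The stepping scheme above (baseline X, target Y, increment Z) can be sketched in a few lines of Python. The numbers here are hypothetical, just to illustrate how the load levels for each test run would be generated:

```python
def load_steps(x, y, z):
    """Return the list of concurrent-user levels to test: X, X+Z, ... up to Y."""
    return list(range(x, y + 1, z))

# Hypothetical example: baseline of 100 users, target of 500, step of 100.
print(load_steps(100, 500, 100))  # [100, 200, 300, 400, 500]
```

Each value in the list would become the thread count of one JMeter run, and you stop stepping up once the system no longer functions properly.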
3. Stress Test
The goal of this test is to ensure that the system recovers properly after it goes down. This is important, since we also need to know how the system reacts to failure, including whether the DRC infrastructure interoperates as expected.
Again, the 3 software testing cycles above came from my own reasoning, considering the small number of people involved in a project. It is open for debate in the comment box below...
Basically, a software testing tool is meant to measure performance - specifically, to know throughput and response time. Both numbers are the main objectives of the overall testing. Anyway, tons of software testing tools are available, but few of them are easy to operate and fewer still are free (see this source: http://en.wikipedia.org/wiki/Load_testing#Load_testing_tools). jMeter (http://jmeter.apache.org/) seems a relevant one to use. Like the others, it provides a comprehensive way to "burn" the system, and it is easy to operate.
jMeter is a desktop application built with Java, so it's platform-independent and able to run on several OS platforms (including my Mac :) - with the Java Runtime installed, of course. To learn how to make a test, download the latest version of jMeter and extract it somewhere on your PC.
Execute jMeter from the bin folder by clicking the ApacheJMeter.jar file. After it launches - on the left pane - you'll see only 2 menus: Test Plan and WorkBench. The Test Plan describes what the test physically does, while the WorkBench serves the opposite role (it's mostly needed to create scenarios). Don't get confused; since this tool is designed for developers, you'll be familiar with it within a day. Anyway, just as people complain, this tool lacks graphical reports. A basic report viewer, listing the data in a table, can be opened from the Add::Listener sub-menu:
However, the test result seems hard to read that way. So, before we run a test, I highly recommend installing a plugin called Statistical Aggregate Report. Download it (StatAggVisualizer.zip ~ 1.8MB) and extract it. You'll see the same layout as in the picture below:
Move the 3 files (jcommon-1.0.5.jar, jdnc-0.6-aR.jar and jfreechart-1.0.2.jar) to jMeter's lib folder and move every file under ext to the ext folder. After restarting jMeter, you'll see the plugin in the Add::Listener sub-menu:
So, we're ready to run a test. First, take a look at the picture below. Here's a basic skeleton of a jMeter test:
Add the 6 basic components under the Test Plan and WorkBench menus, just like in the image above. Once that's done, we can start recording the activities from the browser to create a test scenario. But first, we need to point the browser's proxy to the jMeter HTTP Proxy Server (running locally). So, open the browser's proxy configuration (I'm using Firefox) and set it to localhost with port 8080.
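Instead of reconfiguring the browser, you can also sanity-check the proxy programmatically. Here's a minimal sketch, assuming jMeter's HTTP Proxy Server is listening on localhost:8080 and the test server is at 10.2.2.212 as in this article (the request line is commented out so it only runs when the proxy is actually up):

```python
import urllib.request

# Route HTTP traffic through the local jMeter proxy (assumed at localhost:8080),
# so every request we make gets recorded into the Recording Controller.
proxy = urllib.request.ProxyHandler({"http": "http://localhost:8080"})
opener = urllib.request.build_opener(proxy)

# Uncomment when the proxy and the test server are running:
# response = opener.open("http://10.2.2.212/")
# print(response.status)
```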
Close the browser configuration and go back to the HTTP Request Defaults menu in jMeter. Set the IP of the web server where we'll open the application (for this test, my application is ready at 10.2.2.212).
At this point, we need an application to test - a single web-based application, though jMeter seems able to test other things too. In my case, I use something I developed before: Yet Another Concept of Custom Manageable FTP - a PHP-based FTP client. Since jMeter is capable of handling an FTP burning test, the scenario is to measure the uploading process performed simultaneously by a number of users.
Now, we do the scenario recording by running through the scenes we'd like to test. It's better to measure from the login session until logout, sequentially. Once you've done that, look back at the jMeter Recording Controller pane. You'll see that the scenario recording was created automatically.
At this point, we need to make a few adjustments to the scenario: at minimum (optionally) modifying the server IP in each HTTP Request window. In my app test, I also add the full path to the object file required for the uploading process.
Next, let's try burning it once. Click the green start button to start the test. By default, this test will only execute a single thread (meaning one user for one loop). Click the Statistical Aggregate Report to show the visual graph:
Take a look at the result: the total throughput was 7.6/min. It means the server can accept 7.6 requests per minute (from a single thread), while the response time shows 7,840 ms (7.8 sec) per process (see the Average column). Now, how about making a test with a number of concurrent users? For this, first you need to remove and re-add the Statistical Aggregate Report. Then, modify the thread number in the Thread Group menu.
Here I change the number of threads (users) to 100 and leave the loop count at 1. It means the application will be hit by 100 users simultaneously. Ready to start: click the green icon, wait until it's done (the green box turns gray) and below is the result:
From the picture above, let's try to count the TPS (Transactions Per Second). This can be calculated by dividing the number of threads by the total amount of time needed to finish the test (in seconds). So, this test comes out at 0.91 TPS from 100 concurrent users at the same time. Note that this is a local testing condition; no network simulator was used. For a better result, please add a bandwidth limiter, sniffer or other network tools to the test environment.
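The calculation above is simple division. As a sketch, assuming a total run time of about 110 seconds (a hypothetical figure consistent with the 0.91 TPS result quoted above):

```python
def tps(threads, total_seconds):
    """Transactions Per Second = number of threads / total test duration."""
    return threads / total_seconds

# Assumed numbers: 100 threads, ~110 s total run time (matches ~0.91 TPS).
print(round(tps(100, 110), 2))  # 0.91
```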
Based on the previous test, some improvement is needed at the application, database and OS levels to remove the existing system bottlenecks, increase the TPS in the next test and reach a higher number of concurrent users. A higher TPS means a more efficient system. This is what I call the Performance Test: getting the mix and match of the overall system configuration.
Concurrent User vs Transaction Per Second
As I said before, the thread number reflects the concurrent users accessing the system at the same time. In a real-world system, calculating concurrent users can be done with built-in statistical software (e.g. MRTG or other statistical tools). For an easy example, we can count it using the free Google Analytics (GA) service. Take a look at the example graph below, where I attached GA to a web page. The data is provided for a specific month (April, for example):
Set the appropriate time range in April to get the number of visits and the average visit time. The concurrent users can then be calculated using the following formula:
concurrent_users = (monthly_visits x time_on_site) / (3600 x 24 x 30)
From the picture, we got 110,055 total visits with an average visit time of 2 min 36 sec (equal to 156 sec). So, here are the concurrent users:
concurrent_users = (110,055 x 156 sec) / (3600 x 24 x 30)
concurrent_users = (17,168,580) / (2,592,000)
concurrent_users = 6.62 users
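The same estimate can be reproduced in code, using the numbers from the GA example above:

```python
def concurrent_users(monthly_visits, avg_time_on_site_sec):
    """Estimate average concurrent users from monthly traffic figures."""
    seconds_in_month = 3600 * 24 * 30  # 30-day month
    return (monthly_visits * avg_time_on_site_sec) / seconds_in_month

# Numbers from the example: 110,055 visits, 2 min 36 sec (156 s) average visit.
print(round(concurrent_users(110_055, 156), 2))  # 6.62
```

That estimate then becomes the baseline thread count (X) for the Performance Test.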
Software testing is an interesting topic - as a part of software development - and it needs to be done frequently to maintain system performance and scalability. However, some projects skip this procedure to cut operational costs, which may bring future expenses of its own (e.g. hardware replacement or something else). The use of jMeter here is just an example: a learning medium for beginners to see the behind-the-scenes of creating the scenario workflow. Please add your comment below, and thanks for staying tuned to this blog.
PS: If you've benefited from this blog, you can support it by making a small contribution.