This blog post examines classical engineering Test and Measurement process improvement strategies that have been successfully applied to manufacturing, in order to define an appropriate model for the performance verification of today’s Rich Internet Applications (RIA).
Although software applications continue to evolve in scale and complexity, the basic models of verification, derived from their manufacturing counterparts, hold as true today as they did over 70 years ago.
Critical to all software verification is the establishment of a Test and Measurement function that is active through all phases of the software production process (be that Agile or Waterfall).
Establishing the most effective Test and Measurement activities is only the initial step; these activities need to be subjected to continuous improvement as enabling technologies, as well as the System Under Test (SUT), continue to evolve.
Applying a Test and Measurement continuous process improvement framework, derived from manufacturing, to software production presents several challenges due to the intangible nature of software. To begin with, it is not obvious ‘what’ is to be measured. In manufacturing, the diameter, length, weight or some other physical characteristic of a component can be measured and verified for compliance during every stage of the production process.
In software there is no physical representation of the product so a scheme of software quality characteristics has been devised in order to Test and Measure the finished product (or components during production) so that adherence to specification can be verified.
Although there have been a number of software quality characteristic models, one of the most popular is the FURPS model (originating at HP). The FURPS model breaks software characteristics out into:-
Functionality:
Feature set, Capabilities, Generality, Security
Usability:
Human factors, Aesthetics, Consistency, Documentation
Reliability:
Frequency/severity of failure, Recoverability, Predictability, Accuracy, Mean time to failure
Performance:
Speed, Efficiency, Resource consumption, Throughput, Response time
Supportability:
Testability, Extensibility, Adaptability, Maintainability, Compatibility, Configurability, Serviceability, Installability, Localizability, Portability
Having a perspective (such as the FURPS model) of ‘what’ is desired in the software product or component is only the first step in establishing an effective Test and Measurement process that can be subjected to continuous improvement.
The next step is to define the measurements (and tolerances, or ranges) for each of these characteristics. By way of example, Functionality can be measured simply by writing a series of functional tests that verify the software specification has been correctly implemented.
In fact, functional testing is by far the most common Test and Measurement activity in software production today. Having decided the desired measurements for each of the software characteristics, the next step is selecting the most useful tools and techniques to perform the Test and Measurement of the desired characteristic (at various stages of production). Finally, the Test and Measurement process itself is subjected to continuous improvement, as is done in the best manufacturing quality management systems.
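By way of a minimal sketch (the search function and its specification are invented for illustration, not taken from the source), a functional Test and Measurement point can be as simple as assertions that check the implementation against the written specification:

```python
# Hypothetical spec: a customer search matches surnames by prefix,
# case-insensitively. The function below stands in for the real
# implementation; the asserts are the functional tests.

def search_customers(customers, term):
    """Return customers whose surname starts with the given term (case-insensitive)."""
    term = term.lower()
    return [c for c in customers if c["surname"].lower().startswith(term)]

customers = [{"surname": "Jones"}, {"surname": "Jonson"}, {"surname": "Smith"}]

# Functional tests: verify the specification was implemented correctly.
assert search_customers(customers, "jon") == [{"surname": "Jones"}, {"surname": "Jonson"}]
assert search_customers(customers, "JONES") == [{"surname": "Jones"}]
assert search_customers(customers, "x") == []
```

Each assertion is a small, repeatable measurement of the Functionality characteristic, which is what makes it suitable for continuous improvement.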
A practical example of a process improvement initiative for Test and Measurement of software performance
Having examined a basic manufacturing analogy of a Test and Measurement process improvement strategy that could be applied to software, what follows is a summary of an implementation of this strategy (and decision process) for verifying the performance of Ajax-based web applications.
The example scenario represents a simplified version of the steps that would be required to analyze, design and implement a suitable improvement for performance verification Test and Measurement activities.
The steps followed, in the example, are for:-
- Define the current problem
- Analyze and Measure the current process
- Analyze the current causes of the issue
- Design a process improvement
- Implement and measure the cost/benefit of the process improvement
- Subject the process to continuous improvement
The scenario presented, by way of example, is for CompanyCRM (a fictitious maker of CRM products) that wishes to address performance issues with its Ajax-based CRM product.
Define the current problem
CompanyCRM has been seeing numerous performance issues (response times of > 8 seconds) just after product launch and has decided to embark upon a process improvement initiative to address this issue.
Analyze and Measure the current process
CompanyCRM needs to establish a benchmark for its current Test and Measurement performance verification process. It is important, as with any process improvement initiative, that the current situation is objectively measured (benchmarked) in order to be able to gauge the effectiveness of any countermeasures (changes) to the process.
The failure of a given Test and Measurement performance verification point can be identified by the presence of subsequent performance issues (defects) that the verification point should have detected. By way of example, consider a Test and Measurement performance verification point after a given JSP (or other server-side component: CGI, PHP, ASP, etc.) has been developed.
The value of such a verification point, early in the delivery cycle, would be to verify component stability, including the absence of memory leaks.
If an application delivered into production leaked memory, the leak can be traced to a given component, and the Test and Measurement process for that component can be ‘improved’ so that future memory leaks are detected at that particular Test and Measurement point in the software delivery cycle.
Example – Analyze and Measure the effectiveness of the current process:
Following our example scenario, CompanyCRM is finding performance issues (slow response times) once the product has been shipped to the production servers. The application is Ajax-based, and these performance issues were not detected in the system load testing carried out just prior to release.
CompanyCRM is using JMeter as its load testing tool, and after further investigation it has been determined that JMeter is not generating the same workloads (HTTP traffic) that are being experienced in production. CompanyCRM’s performance engineers believe (their hypothesis) that if they could generate more realistic web traffic during performance system testing, they would uncover the defects that the current process misses.
Analyze the current causes of the issue
Given the hypothesis that it is the nature of the generated HTTP traffic that must change to improve the effectiveness of CompanyCRM’s performance Test and Measurement capability, CompanyCRM’s performance engineers begin to analyze the shortcomings of the traffic that JMeter (and other HTTP testing tools) generate.
CompanyCRM’s performance engineers examine HTTP requests made against the production server, compare these requests with what JMeter has been reproducing, and see significant differences when the traffic originates from an Ajax-based web browser.
Ajax-enabled applications: challenges for HTTP generation
Ajax places more processing logic in the browser, which in turn has moved the HTTP traffic structure, and its timing, away from simple GET requests for a resource (URL). By way of example, consider the typical Ajax type-ahead (auto-complete) function that is popular in searches.
In the Ajax-enabled CompanyCRM application, the user might start to enter the customer contact’s surname, and as the first letters are typed a list of qualifying customers is returned (auto-complete) for the user to select from. The HTTP traffic for this Ajax search operation would look something like:-
http://mycrm.com/search?lang=en&search=j
http://mycrm.com/search?lang=en&search=jo
http://mycrm.com/search?lang=en&search=jon
http://mycrm.com/search?lang=en&search=jone
http://mycrm.com/search?lang=en&search=jones
The JMeter script issued only the final HTTP request:-
http://mycrm.com/search?lang=en&search=jones
The JMeter script was data driven, so the search term was supplied as a variable, i.e. &search=${lastname}.
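The keystroke-by-keystroke pattern above can be recreated by expanding each data-driven search term into its prefixes before issuing requests. A short sketch (the mycrm.com endpoint is the one from the example; actually sending the HTTP GETs is left out):

```python
def autocomplete_urls(base, surname):
    """Expand one data-driven search term into the incremental
    requests an Ajax type-ahead widget would actually send."""
    return [f"{base}?lang=en&search={surname[:i]}" for i in range(1, len(surname) + 1)]

# One line of the data file (the ${lastname} variable in the JMeter
# script) becomes five requests instead of one:
urls = autocomplete_urls("http://mycrm.com/search", "jones")
for url in urls:
    print(url)  # in a real load script this would be an HTTP GET
```

This is essentially what the engineers below considered scripting by hand before evaluating alternatives.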
Having identified the issue, the CompanyCRM performance engineers examined potential improvements to the performance verification process.
Design a process improvement
The CompanyCRM performance engineers knew that they could write scripts to read and parse out names one letter at a time (they are JMeter experts) in order to recreate the desired requests. However, they decided to experiment with another approach to web traffic generation which they believed would be more cost effective (in terms of their labor) at generating the required traffic. The new approach they decided to research was the headless browser strategy for performance testing.
The Headless browser approach to web traffic generation for load testing
One recent innovation that has enabled load driver traffic to move closer to the ‘real user’ experience is the headless browser. The headless browser concept allows a browser to be executed programmatically without the user interface (GUI) portion.
The advantage of running a browser without the GUI component is that all of the underlying machinery that communicates with the web server is still available, but the process executes with much lower CPU and memory requirements.
Utilizing the headless browser approach enables multiple browsers to be executed simultaneously on a single computer (or VM). Running simultaneous (headless) browsers, combined with the scaling capability of the public (or private) cloud, has proved useful for generating large amounts of web traffic that is close to the real user experience.
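The concurrency pattern can be sketched as follows. This is only an illustration of running many cheap sessions on one machine: the run_session stub stands in for code that would, in a real load test, drive a headless browser (for example via Selenium WebDriver) through a user journey:

```python
from concurrent.futures import ThreadPoolExecutor

def run_session(user_id):
    """Stand-in for one headless-browser session. In a real load test
    this would launch a headless browser and step through a scripted
    user journey; here it just simulates the work."""
    steps = ["login", "search", "open_record", "logout"]
    return (user_id, len(steps))  # e.g. pages visited in the journey

# Because each headless session is cheap on CPU/memory, many can run
# on one machine; the same pattern then scales out across cloud VMs.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_session, range(25)))

print(f"completed {len(results)} simulated user sessions")
```

The scaling story is the same whether the worker pool lives on one box or is replicated across a fleet of cloud instances.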
An added bonus of using the headless browser approach for load testing is that it allows for the repurposing of automated scripts that have traditionally been used for functional verification (e.g. Selenium WebDriver scripts).
One of the reasons the headless browser load driving approach has become so popular in recent years is the rise of Ajax, which brings with it a significant change in the format and timing of HTTP requests as well as dynamically changing HTML content.
Following their own analysis and further research, which included a proof-of-concept experiment, CompanyCRM’s performance engineers implemented the change to the performance verification process by adopting a headless browser approach.
Implement and measure the cost/benefit of the process improvement
Cost
The cost of Test and Measurement performance verification is basically time (labor) and materials, with materials being the verification architecture (load testing tools) and the hardware required to execute the tests.
Although this is a simple breakdown, the labor costs will depend on the testing tool’s ease of use (usability). Usability, for all software, impacts the labor cost drivers of training (time to become proficient with the testing technology) and efficient tool utilization (how many hours it takes to produce the test cases).
Benefit
The benefit of the Test and Measurement performance verification process can be measured by the number of performance defects discovered after the product has moved to production.
Performance defects found in production should be documented and reviewed in order to determine whether that type of defect could have been discovered earlier in the SDLC and, if needed, the verification (load testing) process should be changed.
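One concrete way to track this benefit over time is a defect escape rate: the share of performance defects that slipped past verification into production. A small sketch with invented counts:

```python
def escape_rate(found_in_test, found_in_production):
    """Fraction of all performance defects that escaped to production."""
    total = found_in_test + found_in_production
    return found_in_production / total if total else 0.0

# Invented counts for two releases, before and after the process change.
before = escape_rate(found_in_test=12, found_in_production=8)   # 8/20 = 0.4
after = escape_rate(found_in_test=18, found_in_production=2)    # 2/20 = 0.1
print(f"escape rate before: {before:.0%}, after: {after:.0%}")
```

A falling escape rate across releases is direct evidence that the verification process change is paying for itself.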
In the above example, CompanyCRM’s performance engineers should be able to identify performance defects earlier in the SDLC given the improved testing process.
Subject the process to continuous improvement
That said, the verification process (both cost and benefit) should be continually monitored and scrutinized for process improvement as new tools and techniques come onto the market and the nature of the software being produced (and its environment) evolves.
Conclusion
Identifying the right test automation framework to validate web-based applications is an ongoing search for the approach that aligns the value (purpose) of the verification process with the lowest cost.
Within the SDLC no single load driver (load testing tool) will be appropriate for all the given performance verification points.
By identifying, measuring and documenting the appropriate load driver for each performance verification point, an end-to-end Test and Measurement process can be established that is analogous to its manufacturing counterpart.
For driving load that is close to the real user, for Ajax-based applications, a compelling framework is one that runs multiple ‘headless’ browsers.
The Ajax illustration is just one example of identifying the most appropriate framework for a given Test and Measurement performance verification task. In any event, all verification processes should be subjected to continuous improvement as we all move toward the ultimate goal of zero defects (and satisfied customers).