Thursday, December 26, 2019

Quality is inversely proportional to variability

As this is my first post on this new blog (more coming soon), I will deal with one of my favourite subjects:-

Trying to apply Statistical Process Control methods to the process improvement of software production and maintenance.

There are many perspectives and definitions of quality that speak to the pioneering Statistical Process Control work that W. Edwards Deming and others did in post-war Japan to rebuild its depleted manufacturing capability.

A more recent and succinct definition of quality comes from Douglas C. Montgomery (1996) and is:-

 Quality is inversely proportional to variability

 This quote is taken from his book entitled:-

 Introduction to Statistical Quality Control: Student Resource Manual

Whilst this book is still on my reading list, the overall notion that quality improvement can be seen as a reduction of variability in processes and products got me thinking about how variability is addressed in the various project-related process areas of CMMi (the Software Engineering Institute's software process improvement framework).

 General issues with applying Statistical Quality Control methods to Software production:-

The basic idea of any process improvement initiative is to characterize the current state of production, together with the value and cost of what is produced, with the intention of altering the given process in such a way that the value increases and/or the cost decreases.

The issue with software production (achieved under projects) is that the processes and their outputs are highly variable when compared to their manufactured counterparts. This raises issues of comparing like for like when trying to form hypotheses for potential improvements or verifying that any change has in fact made an improvement.

 How project management variation is addressed in CMMi

 The CMMi for software development, a software process improvement framework, breaks down the various components of Project Management and recommends a configurable (mix and match) approach to selecting an appropriate project lifecycle.

By making the whole project management process configurable for various predefined lifecycles, what can be achieved with CMMi is an isolation of those processes (procedures) that are consistent over time and therefore more readily subjected to traditional process improvement methods and techniques.

 Here is a look at how CMMi splits up the components of project management and how this separation allows for an identification of consistency that will lend itself to Statistical Quality Control.

 The Organizational Process Definition (OPD) process area.

 Within CMMi the goals (remembering CMMi specifies What is required and not How to do it) of the OPD are concerned with creating a Process Asset Library (PAL).

 The PAL documents all the processes that are to be followed and subjected to measurement for improvement. These processes include various project life cycle models e.g.

Waterfall
Spiral
Evolutionary
Iterative

In this way there are multiple lifecycle models defined and the appropriate one is chosen for a given project.

 The PAL would also document smaller tasks such as Usability Testing so that during Project Planning the appropriate Project lifecycle would be selected in addition to mixing and matching the various tasks within the selected project lifecycle.

 This project lifecycle configurability allows for similar tasks to be compared over the course of multiple projects. For example the Usability testing task would remain stable and be applied equally to an iterative or waterfall project lifecycle.

 CMMi has a Quantitative Project Management (QPM) process area that also makes use of this modular structure. By recording metrics for the previous performance of standard (modular) tasks, performance and quality objectives can be established for the given project in order to plan and monitor the project’s progress.
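
By way of a hedged illustration (the task name, durations and thresholds below are invented examples, not part of CMMi itself), the statistical side of QPM can be as simple as deriving a baseline and control limits from the recorded history of a standard, modular task:

    import java.util.List;

    // Hypothetical sketch: derive a baseline and 3-sigma control limits
    // from the recorded durations (in hours) of a standard, modular task.
    public class TaskBaseline {

        public static void main(String[] args) {
            // Example historical durations for a "Usability Testing" task across past projects
            List<Double> hours = List.of(12.0, 14.5, 11.0, 13.2, 15.1, 12.8);

            double mean = hours.stream().mapToDouble(Double::doubleValue).average().orElse(0);
            // Sample variance (divide by n - 1)
            double variance = hours.stream()
                    .mapToDouble(h -> (h - mean) * (h - mean))
                    .sum() / (hours.size() - 1);
            double sigma = Math.sqrt(variance);

            // Observations outside mean +/- 3 sigma suggest the process has
            // changed and would warrant investigation during project monitoring.
            System.out.printf("baseline=%.1f h, UCL=%.1f h, LCL=%.1f h%n",
                    mean, mean + 3 * sigma, Math.max(0, mean - 3 * sigma));
        }
    }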

In essence CMMi allows for a project's structure to be assembled from tasks and processes defined in the PAL. Each individual component in the PAL also has documented performance metrics which can be used for Quantitative Project Management to monitor and react to performance or quality issues.

 This modular approach to breaking down project tasks with associated performance metrics will ultimately reduce variability in the process, which in turn will facilitate a process improvement initiative.

Although project management is given as an example, the question we should all ask, as quality engineers, is what processes (or parts of processes) will be consistent over multiple projects, and how can that consistency be leveraged for process measurement and improvement?

The pragmatic difference between Verification and Validation

There are many definitions of the terms Verification and Validation on the web and in essence they all come down to the objectives and goals of the undertaking, rather than any given methods or techniques applied.

The typical definitions for the V&V terms are:-

Verification: Determining if the software product is being built correctly

Validation: Determining if the correct software product is being built.

Although these terms look similar and are both concerned with providing the right product at the optimal cost, each of these process areas serves a different purpose.

Validation speaks to the overall value of the delivered product when placed in its working environment, whilst verification speaks to the correct procedures and standards being followed to deliver the product. Ironically, a product may be verified to have followed the correct procedures yet be invalid, in that it ultimately did not fulfill its intended purpose when placed in the production environment.

A typical SDLC with Validation and Verification activities.

For further clarification consider the typical SDLC as a reference point for verification and validation activities.

Business requirements are the main focus of validation both in terms of inspection and testing the delivered application. In terms of inspection (of the business requirements) the question “will this satisfy the business need when implemented” is one that is addressed.

 There is also a verification dimension to an inspection (or walk-through) of the business requirement that is concerned with the correct standards or format being used for the business requirements document. In this way the verification of the business requirements can be done by someone who does not know the business needs but rather knows the correct structure for the requirements document itself.

Following on to the technical specification that has been derived from the business requirements we can see that a similar certification exercise can be performed in that the document can be verified to meet standards.

The question of validation of the technical specification is more complicated. If validation of the technical specification is to be done it must be done by someone who knows the intended purpose of the software.

If there is a traceability exercise done, i.e. someone checks that each paragraph in the business requirements has an entry in the technical specification, then this is not strictly a validation activity.

The process of requirements traceability, as noted in the previous sentence, is part of verification. If there is some walkthrough of the technical specification, with potential screen shots and mock ups, with the business proponents then a validation activity could take place.

As we move to the developed software product it is possible for further validation activities to take place but these will require either a prototype or some other view of the software that is visible to the business proponents.

One of the reasons small iterations (agile style) have become popular is that validation can be done incrementally rather than waiting for the final product to be completed.

When the system has been developed the testing of the system against the technical specifications is a verification activity whilst the testing of the software against the business requirements is a validation activity.

Through all stages of the development cycle any review of the software (or specifications) that answers the fundamental question of ‘will this product fulfill its business purpose?’ is considered validation.

Validation should be done as early as possible in the development life cycle, in order to avoid late surprises that the software is not what the end user (or customer) required. Many software projects have followed the prescribed standards and procedures but have failed due to producing an ‘invalid’ product.

The consequences, for the quality assurance and software testing team's organizational structure, of these definitions will depend on the complexity of the business problem being addressed.

If the business problem is trivial then the business and systems analysis together with validation and verification activities can be combined.

Generally it is more useful to separate out the duties of developers from testers, so that so called independent verification and validation (IV&V) can be carried out.

Strictly speaking the term independent verification and validation refers to a completely separate team (outside of the developer management structure) being engaged to conduct the IV&V activities.

That said there is no issue combining the actual V&V activities, so that someone may perform both system and business requirements testing. The acceptance testing should still be done by a business proponent as the main function of acceptance testing is the final approval.


Where the business problem is complicated, and in most large companies the problem is complex due to the many interconnecting systems, then a separation of validation from verification, in terms of personnel, is both effective and efficient.

In terms of effectiveness, having individuals (typically business analysts) focus on the definition then subsequent testing (validation) of the requirements will create the channel by which the Voice of the Customer (VOC) can be heard throughout the software delivery process.

Business Process Management (BPM) and other business facing modeling techniques are effective communication artifacts that specialized business analysts can acquire and build skills in. In this way (specialization around validation activities) a professional team of business proponents can be established and developed.

The methods and techniques used for verification are considered more technical in orientation (than business facing validation techniques).

 Having more technical team members specialize in verification activities will yield efficiency and effectiveness gains. Consider white box testing techniques, which require code coverage to be measured, as a quality control activity that does not require any knowledge of the business requirements.

Such an activity is a prime example of software verification, in that a high percentage of code coverage can be ‘specified’ as a requirement of testing. In this way having a high percentage of code covered gives a ‘yes’ answer to the question “Are we building this product correctly?”. The application of white box testing techniques is better served by having personnel specialize in those particular (more technical) skills.
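
As a minimal sketch of that idea (the class under test and its threshold are hypothetical, and the coverage percentage itself would be reported by a tool such as JaCoCo rather than by the tests):

    import org.junit.Test;
    import static org.junit.Assert.*;

    // A purely structural (white box) test: it exercises both branches of the
    // discount logic, so a coverage tool would report full branch coverage for
    // the method. No knowledge of the business requirement is needed to write it.
    public class DiscountCalculatorTest {

        // Hypothetical unit under test
        static double discount(double orderTotal) {
            return orderTotal > 100.0 ? orderTotal * 0.10 : 0.0;
        }

        @Test
        public void discountAppliedAboveThreshold() {
            assertEquals(15.0, discount(150.0), 0.001);
        }

        @Test
        public void noDiscountAtOrBelowThreshold() {
            assertEquals(0.0, discount(100.0), 0.001);
        }
    }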

Conclusion.

Separating team members into verification or validation activities, for large complex projects, is an effective divide and conquer strategy for breaking down the overall software quality control goals.

By recognizing the essential differences between verification and validation an organization is better positioned to engage in meaningful process improvement as personnel develop the required skills and techniques in these two process areas.

Selenium Webdriver and Jmeter testing tools should always be on your radar

Over the past 3 or 4 years I have noticed two open source software testing tools mature into must-haves for any tester's personal tool box. These two basic tools (Selenium Webdriver and Jmeter) are continually evolving in terms of their widespread use and the functionality offered.

Although I am sure most readers here will have heard of them, I believe they are always worth revisiting as they evolve in line with the software that they are testing.

Selenium (Webdriver)

For automated Web browser testing, this open source tool has become the industry standard. Many QA departments are now favoring this mature testing tool over the commercial offerings. Its popularity is due to a combination of a widespread talent pool (for employers to utilize) as well as many great articles and add-ons that its loyal followers have produced.

Tips:-

Install Firebug (or other DOM inspector) to determine how a given element can be accessed.

Establish a framework that facilitates reuse, such as Page Objects (a minimal sketch follows these tips).

Use the Java version of WebDriver; there are many complementary frameworks written in Java (JDBC, JUnit, TestNG etc.) and there are also numerous examples of implementing Webdriver in Java.
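
By way of a minimal sketch of the Page Object tip above (the URL, element ids and class name are hypothetical):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    // A minimal page object: all locators for the Log On page live in one place,
    // so test cases only call logOn(...) and never touch the DOM directly.
    public class LogOnPage {

        private final WebDriver driver;

        public LogOnPage(WebDriver driver) {
            this.driver = driver;
        }

        public void logOn(String username, String password) {
            driver.findElement(By.id("username")).sendKeys(username);
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("logonButton")).click();
        }

        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            driver.get("https://example.com/logon");   // hypothetical URL
            new LogOnPage(driver).logOn("admin", "secret");
            driver.quit();
        }
    }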

Jmeter

Just as Selenium Webdriver has become the standard for automated Web browser testing, Jmeter has become the open source standard for performance testing.

Although Jmeter is considered a leader in the open source arena it is not as dominant as Selenium Webdriver when compared with the overall market (including commercial tools). Part of the reason why Jmeter is not as popular as commercial load testing tools is its relative lack of ease of use.

That said, if you can invest a little of your time into understanding the basic Jmeter architecture, it will provide you with a useful load testing tool (and not just for the web but Databases, SOAP and email can all be tested with Jmeter).

Tips

Become familiar with a good web debugging proxy or protocol analyzer (such as Fiddler) so you can understand your web traffic and its re-construction when using Jmeter.

Use the JMeter Plugins (originally hosted on Google Code), which are ever growing and now include a REST sampler, a JSON to XML converter and useful server-side metrics collection agents.

If you have looked at these tools in the past (especially over 3 years ago) and you are not using them, I would strongly recommend a second look.

Selenium Webdriver data driven page objects using java hash maps

There are many articles on the web about Webdriver frameworks that encourage a data driven approach to test case development. By separating the data from the executable code two benefits are realized. The first is that the test case can be reused in various scenarios: for example, the 'log on' could be for an admin user or the chief purchasing officer simply by varying the name and password in the data parameters.

The second, not so obvious, benefit is that the data itself can be separately maintained and updated as required. This approach to data maintenance is similar to making everything configurable in the application code itself.

For example if there was an implementation of a drop down option list that contained various shipping rates then by separating the list of rates and feeding them into the test cases, the list could be easily changed when the requirements changed.

One caveat to the above mentioned benefits of separating data from the test case code: do not be tempted to validate against the same data source as the one used by the application.

This approach is tempting, but the validation should be done from an independent source of data, otherwise the application becomes self-validating. That said, it is valid to seed test data from values in the test database; just do not seed the verification data.

Hash maps are an ideal Java structure for data driving your tests, whether you are using Webdriver or some other Java-based test automation framework.

To use hash maps you need the following package import:-

import java.util.HashMap;

The hash map can be thought of as a list of key and value pairs. Although this is a simplistic description, and I would encourage you to Google hash maps to find out more, the value of hash maps for data driving automated tests can be illustrated using this simple analogy.

The data type of both the Key and the Value can be defined when the hash map is created:-

HashMap<String, String> testData = new HashMap<>();

Within an automated testing framework the hash map can be seeded (from a CSV file, DB or other data source) within the test runner or scenario you are driving the tests from.

Seeding the hash map from a data source is a matter of reading through the data source and loading the hash map. In this way the data file (or other source) is easily maintained should the requirements change.

There are many good examples of doing this, on the web. Just Google reading a CSV file into a hash map and you will get some useful articles.
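
Here is one short sketch, assuming a simple two-column CSV of key,value pairs with no embedded commas:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.HashMap;

    public class TestDataLoader {

        // Reads a simple "key,value" CSV file into a hash map.
        public static HashMap<String, String> load(String csvPath) throws IOException {
            HashMap<String, String> data = new HashMap<>();
            try (BufferedReader reader = new BufferedReader(new FileReader(csvPath))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] parts = line.split(",", 2);
                    if (parts.length == 2) {
                        data.put(parts[0].trim(), parts[1].trim());
                    }
                }
            }
            return data;
        }
    }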

Having seeded the hash map a page object can be called with the hash map as a parameter. The page object is a reusable function that is targeted at a given screen (page), for example the Log On page.

Within the page object, which contains all the Webdriver locators and code to set the values of the various elements, the hash map key/value pairs can be used to retrieve the key (data field) and the value that is to be set – e.g. (username, "Mike Smith") and (password, "mikes password").

There are many examples on the web of how to retrieve specific Key values from the hash map, so I won’t repeat them but I will emphasize the additional benefits of using this construct.

If you want to make your page object flexible you could have a series of if statements: for each Key (data field), if that Key exists in the hash map the field is populated, otherwise it is left empty. In this way a specific set of fields (with their unique values) can be set in the page object, which can then be used in multiple scenarios.
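
A hedged sketch of that approach (the element ids are hypothetical): the page object fills in only the fields whose keys are present in the hash map, which also supports the 'username only' scenario described next.

    import java.util.HashMap;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Data-driven page object: each field is populated only if its key exists
    // in the hash map, so callers control which fields are set per scenario.
    public class LogOnPageDataDriven {

        private final WebDriver driver;

        public LogOnPageDataDriven(WebDriver driver) {
            this.driver = driver;
        }

        public void fillAndSubmit(HashMap<String, String> data) {
            if (data.containsKey("username")) {
                driver.findElement(By.id("username")).sendKeys(data.get("username"));
            }
            if (data.containsKey("password")) {
                driver.findElement(By.id("password")).sendKeys(data.get("password"));
            }
            driver.findElement(By.id("logonButton")).click();
            // No assertions here: validation is done outside the page object.
        }
    }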

In the simple Log on example you could just send the username to the Page object and the password could be omitted.

If you did this then the check that the system returns 'password required' should be done outside of the Log On page object, as the page object does not validate (it only repeats the required steps). See the Page Object pattern for more details of this approach.

Although this Blog post has only just touched on the use of hash maps for passing data variables into selenium Webdriver test cases (which could be organized as page objects), I hope I have conveyed the overall flexibility and advantages of using this versatile construct.

Using Mock objects in load testing

Introduction

One of the main problems with performance testing is the overall scope and, in particular, applying scrutiny to an individual software component.

The first issue with scope is that if multiple components are being tested, as in a full systems performance test, it is difficult to isolate the system resources used by a given component (process). By way of example consider a Web services architecture that has Common Gateway Interface (CGI) programs (or ASP and JSP pages etc.) running on the same server.

When load is put on the front end (Web browser) the Web Services are called via the CGI and the resource utilization is an aggregate of both the CGI and the other Web services. This issue does go away if the components are separated out onto different servers, but in most cases some server resources will be shared between components.

The second issue concerning scope, with performance testing a system, is that the complete system has to be built before a complete test can be executed through the front end (i.e. Web Service, CGI or Web Browser).

In the case of Web services both the client (usually Web browser, CGI, ASP, JSP etc.) and the backend database need to be available to perform the end-to-end performance test.

Both of the issues of scope for performance testing can be addressed with mock services (stubs) and a test harness to drive the load from.

Mock Web Services.

Mock Web services are placeholders for any SOA service that has yet to be developed. Mock services are similar to Mock objects for unit testing. There are many papers on Mock Objects, in the software testing directory, that go into further detail.

The advantages of mock services for load testing are:-

The called service does not have to be fully complete.
The calling component (i.e. Web service) can be isolated in terms of measuring performance.
It is also easier to control the behavior of the returned values, using a Mock service, as these (canned) returned values can be defined for the given purpose.

It is important to note that this test is not a system test, and other functional tests will need to be performed to validate the component, but for the purposes of performance testing the mock object will be able to 'stand in' for the real thing.

An open source Mock service tool.

SoapUI has a web service mocking feature that allows for quick and easy building of mock services. This tool set also includes building the various responses that the anticipated calling program expects from the complete delivered web service.
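
If a full SoapUI project is more than is needed, a mock endpoint can also be hand-rolled. Here is a minimal sketch using the JDK's built-in HTTP server (the port, path and canned JSON response are hypothetical):

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    // A minimal mock service: always returns the same canned response so the
    // component under test can be load tested without the real downstream service.
    public class MockCustomerService {

        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8089), 0);
            server.createContext("/customer", exchange -> {
                byte[] body = "{\"id\":42,\"status\":\"OK\"}".getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();
            System.out.println("Mock service listening on http://localhost:8089/customer");
        }
    }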

The harness or driver.

Given we can isolate the server component from the Web Services, we now look to isolate that component from the client. This isolation is needed in any event in order to recreate large volumes of simulated service calls to the component under test.

If the component under test is itself a Web service then a SOAP load testing tool can be used as the driving harness.

If the component is a CGI (or other web server side component) a HTTP load driver can be used to put load on the component under test.

In either case a suitable open source load testing tool can be found.

Measuring the component under test.

When isolated there are a number of useful monitoring tools that can be used to measure the performance of the given component.

Although CPU is a useful measure to determine the overall performance of a middleware component, it is essential to establish that a server side component is not leaking memory or failing to return unused resources (such as threads) back to the pool when appropriate. The first type of load test that should be performed on a middleware component, and this can be done early in the SDLC with Mock services, is stability testing.

With stability testing the memory, thread pool and CPU are monitored over an extended period (about six hours) of consistent load to determine if performance degrades over time.

Wednesday, December 25, 2019

An approach to regression testing data processing systems

Introduction.

This article seeks to describe an approach to software regression testing that can be used in the typical commercial application setting. The strategy presented here uses a canned data approach as well as a simple regression test selection (RTS) technique.

The strategy itself is targeted at commercial data processing environments, such as Sales Order processing, CRM, Payroll, General Ledger etc.

Regression Testing, a quick definition.

There are many definitions of software regression testing on the web, but for our purposes regression testing refers to a systematic re-testing of code that has previously worked in order to detect any new errors that may have been introduced during code changes.

The problems with Regression Testing are:-

Typically there is a narrow time window, just after full system testing has been completed and prior to production release, during which regression testing can be performed on a stable system.
Regression testing itself, for any significantly large system, is an expensive process of preparation and execution.

The need for a Regression testing strategy.

Even with test automation tools there is a need to reduce the scope of the regression testing effort and at the same time optimize the effectiveness of that effort.

The identification of those tests that would give the most ‘pay back’, in terms of finding software defects, requires the use of a regression testing strategy that includes building out a comprehensive set of test cases and selecting an appropriate subset to use.

The Retest-all strategy should be self-explanatory: run the entire library of regression tests you have built. If time and money are not an issue, this is the strategy to select.

The deep dive when changed strategy.

Most commercial application testers will already use this strategy in one form or another and what is presented here is an attempt to formalize what many already practice.

The idea is to select a subset of test cases from the regression test library that will 'exercise' a single entry/single exit block of code, which we will refer to as a component. The technique could be applied to Web Services (as a component) but the idea is to 'touch' every component that makes up the system. The modified component itself should be tested to a deeper extent.

By way of example, for a commercial application, let's say we have a Sales Order Processing system that schedules finished parts and creates an order and booking in Accounts Receivable.

We would have a regression test library for every component, or function, e.g. enter customer details, check customer credit, check parts availability, create sales order etc. In the test library we would also have deeper test cases that went beyond a basic touching of the component.

When a component is changed, for example credit check, then that component is subjected to the extensive test cases whilst the other components are tested with the minimal set of regression test cases.


Some testers refer to this as 'basic end to end testing' for the entire system. The important point of using this strategy is that the full regression suite has to be constructed in a way that identifies each single entry and exit point for all components. By way of example, if there is a 'preferred customer' route through the system, with its own component (code module), then this has to be tested every time during regression.

In essence this regression testing strategy requires an extensive set of test cases to match the various Use Case paths for the system.

The basic two level approach.

The two level approach is the simplest in terms of constructing the regression test suite. With this approach two test suites are constructed for each component (single entry, single exit): one is the simple path through and the other suite tests multiple paths through the component.

Selecting the appropriate test suite(s).

A cross reference from the test cases to the components is constructed, and for each release the changed components are known and the appropriate test suite (simple or complex) can be selected. That is, the process selects the simple test suite for unchanged components and the deeper test suites for the changed components.
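
A minimal sketch of that selection logic (the component and suite names are hypothetical):

    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // For each component, run the deep suite if it changed in this release,
    // otherwise run the simple "touch" suite.
    public class RegressionSelector {

        public static void main(String[] args) {
            Map<String, List<String>> simpleSuites = Map.of(
                    "credit-check", List.of("CreditCheckSmokeTest"),
                    "parts-availability", List.of("PartsAvailabilitySmokeTest"));
            Map<String, List<String>> deepSuites = Map.of(
                    "credit-check", List.of("CreditCheckLimitsTest", "CreditCheckEdgeCasesTest"),
                    "parts-availability", List.of("PartsAvailabilityPathsTest"));

            // In practice this set would come from release notes or a source control diff.
            Set<String> changedComponents = Set.of("credit-check");

            for (String component : simpleSuites.keySet()) {
                List<String> selected = changedComponents.contains(component)
                        ? deepSuites.get(component)
                        : simpleSuites.get(component);
                System.out.println(component + " -> " + selected);
            }
        }
    }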

Data dependencies, the canned data approach

There is an obvious issue with this approach and that is data dependencies. By way of example, let's say I have a component that has changed and to test it I need extensive data from another component (that has not changed); in this case the simple test of the first component may not present the second component with sufficient data to exercise the extensive test paths. There are other more complicated variations of this, but the problem of data dependence between the components is an issue with this strategy.

The answer is building out and keeping an extensive database that is restored just prior to the regression tests being executed. This strategy, of building out a known state for a given database, is widely used for training environments and other testing.

DbUnit is an open source data preparation utility that could be used for the purpose of setting up the requisite data for this regression testing strategy. There are other resources referenced for data quality as well as regression testing in the software testing directory.
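
A hedged sketch of the restore step using DbUnit's flat XML dataset support (the JDBC details and dataset file name are hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.dbunit.database.DatabaseConnection;
    import org.dbunit.database.IDatabaseConnection;
    import org.dbunit.dataset.IDataSet;
    import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
    import org.dbunit.operation.DatabaseOperation;

    // Restore the canned regression data set just before the regression run.
    public class CannedDataRestore {

        public static void main(String[] args) throws Exception {
            Connection jdbc = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/salesorders", "test", "test"); // hypothetical DB
            IDatabaseConnection dbUnit = new DatabaseConnection(jdbc);

            IDataSet dataSet = new FlatXmlDataSetBuilder()
                    .build(new java.io.File("regression-baseline.xml")); // hypothetical dataset file

            // CLEAN_INSERT deletes existing rows from the listed tables and
            // inserts the known baseline state.
            DatabaseOperation.CLEAN_INSERT.execute(dbUnit, dataSet);
        }
    }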

Issues with the canned data approach

The above canned data approach provides a simple strategy for data dependencies between components, but there will be cases when this strategy is not appropriate. For example there may be significant database changes, or changes may be made in several key components where the dependent data has itself been changed.

In these cases the canned data may not be representative (of what will happen in production) as the component that created the data has been changed in a way to compromise that assumption. In these cases impact analysis has to be undertaken and a decision needs to be made in terms of whether or not an extensive regression (deep dive with many components) needs to be executed.

Conclusion

Many of the decisions as to how much regression testing should be executed are dependent on the extent (and nature) of the changes to the system and the amount of time and resources available to test those changes.

The above regression strategy provides a basic framework to plan and build regression suites (and corresponding data). This strategy should facilitate informed decisions, and actions, on the many trade offs involved in the regression testing phase of a project.

As with most regression testing strategies a well thought out automated approach is essential, including the set up and tear down of the appropriate data.

Requirement considerations for an Automated Testing Framework

In this blog post I pose a question that few ask or fully address:

What are the requirements for an automated testing framework?

Years ago the automated 'requirements' question would be confined to an individual testing tool (whether functional or performance), looking at features such as record/playback, GUI/protocol support, HTTPS and authentication support etc.

Today there is a whole context, or environment, in which a test automation framework operates and it is this broader context that extends the 'requirements' consideration beyond the standard feature set of a given testing tool.

The requirements for an automated testing framework now include:-

Interoperability with:-

Continuous Integration systems (such as Hudson and Jenkins)

Test Management Systems, including Agile Story based systems (such as Testopia, Rally and Jira).
Source revision control systems (such as Perforce, GIT and Subversion).

The important point is that we have now moved the requirements discussion to a testing ‘framework’ rather than an individual testing tool.

Indeed one of the requirements of a ‘testing framework’ is to be able to plug and play (orchestrate) a variety of individual testing tools that target a specific technology that is being subjected to verification, for example SOAP, REST testing tools as well as Browser GUI testing tools.

The expression 'plug and play' as a requirement, stated in the previous paragraph, applies not only to testing tools but to any of the testing framework components or systems that the framework interfaces with (e.g. Jenkins CI).

In this way the testing framework itself becomes an open architecture (not necessarily open source) that allows for extensions and the mixing and matching of appropriate components in order to contribute to the efficient and effective verification of the given System Under Test (SUT).
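
As a minimal sketch of what 'plug and play' could look like at the code level (the adapter interface and tool wrappers below are hypothetical, not part of any particular framework):

    import java.util.List;

    // Hypothetical open architecture: each testing tool is wrapped in an adapter
    // so the framework can orchestrate them and report results uniformly.
    public class FrameworkSketch {

        interface TestToolAdapter {
            String name();
            boolean run(String suiteId);   // true = pass
        }

        static class SoapUiAdapter implements TestToolAdapter {
            public String name() { return "SoapUI"; }
            public boolean run(String suiteId) { /* invoke the SoapUI runner here */ return true; }
        }

        static class SeleniumAdapter implements TestToolAdapter {
            public String name() { return "Selenium"; }
            public boolean run(String suiteId) { /* invoke the WebDriver suite here */ return true; }
        }

        public static void main(String[] args) {
            List<TestToolAdapter> tools = List.of(new SoapUiAdapter(), new SeleniumAdapter());
            for (TestToolAdapter tool : tools) {
                boolean passed = tool.run("smoke");
                // A CI or Test Management integration point would publish this result.
                System.out.println(tool.name() + " smoke suite passed: " + passed);
            }
        }
    }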

The above paragraph, including a definition of a testing framework, positions the scope of a set of requirements. By way of clarification, let's look at a requirements template for a testing framework.

The testing framework must be able to utilize the ‘appropriate’ individual testing tool(s).

Here the word appropriate is key, and it potentially refers to SOAPUI, Selenium, Jmeter or any other individual testing tool that has been deemed appropriate (useful) for testing a given aspect of the System Under Test.

In an actual set of requirements the list of testing tools would be stated as would the ability of the framework to be extended (to use other testing tools) in the future.

To know more about Software testing services

The testing framework must be able to integrate with:-

As noted above, for each integration point there is a process requirement, e.g. for Test Management integration: report on the total number of tests executed (pass/fail), including automated tests, for a given Jira story.

The testing framework should be maintainable, usable, supportable……

These requirements are concerned with the overall flexibility of the framework, since the main advantage of a framework is that it can change and adapt to new environments (interfaces) and support new testing tools. When we examine these types of requirements (sometimes referred to as 'non-functional requirements') we need to illustrate them with examples (or scenarios). By way of example:-

Adaptability:

We need to be able to swap out the Test Management system at a minimal cost.

Although the expression minimal cost could be replaced with a dollar amount, the idea here is to evaluate a number of design alternatives then estimate the cost to substitute a given component. In this way the Design choice that gives the lowest ‘adaptability’ cost is selected.

Optimizing the process of load testing for AJAX based web applications

This blog post examines classical engineering Test and Measurement process improvement strategies that have been successfully applied to manufacturing, in order to define an appropriate model for the performance verification of today’s Rich Internet Applications (RIA).

Although software applications continue to evolve in terms of scale and complexity the basic models of verification, derived from their manufacturing counterparts, still hold today as they did over 70 years ago.

Critical to all software verification is the establishment of a Test and Measurement function that is active through all phases of the software production process (be that Agile or Water Fall).

Establishing the most effective Test and Measurement activities is only the initial step; these activities need to be subjected to continuous improvement as enabling technologies (as well as the System Under Test, SUT) continue to evolve.

Applying a Test and Measurement continuous process improvement framework, derived from manufacturing, to software production has several challenges due to the intangible nature of software. To begin with it is not obvious ‘what’ is to be measured. In manufacturing the diameter, length, weight or some other physical characteristic of the component can be measured and verified for compliance during every stage of the production process.

In software there is no physical representation of the product so a scheme of software quality characteristics has been devised in order to Test and Measure the finished product (or components during production) so that adherence to specification can be verified.

Although there have been a number of software quality characteristic models one of the most popular is the FURPS model (HP). The FURPS model breaks out software characteristics into:-

Functionality:

Feature set, Capabilities, Generality, Security

Usability:

Human factors, Aesthetics, Consistency, Documentation

Reliability:

Frequency/severity of failure, Recoverability, Predictability, Accuracy, Mean time to failure

Performance:

Speed, Efficiency, Resource consumption, Throughput, Response time

Supportability:

Testability, Extensibility, Adaptability, Maintainability, Compatibility, Configurability, Serviceability, Install-ability, Localizability, Portability

Having a perspective (such as the FURPS model) of ‘what’ is desired in the software product or component is only the first step in establishing an effective Test and Measurement process that can be subjected to continuous improvement.

The next step is to define the measurements (and tolerances, or ranges) for each of these characteristics. By way of example Functionality can be measured by simply writing a series of functional tests that verify the software specification has been correctly implemented.

In fact functional testing is by far the most common Test and Measurement activity in software production today. Having decided the desired measurements for each of the software characteristics, the next step is selecting the most useful tools and techniques to perform the Test and Measurement of the desired characteristic (at various stages of production). Finally the Test and Measurement process itself is subjected to continuous improvement, as is done in the best manufacturing quality management systems.

A practical example of a process improvement initiative for Test and Measurement of software performance.

Having examined a basic manufacturing analogy of a Test and Measurement process improvement strategy that could be applied to software, what follows is a summary of an implementation of this strategy (and decision process) for the verification of performance of Ajax based web applications.

The example scenario represents a simplified version of the steps that would be required to analyze, design and implement a suitable improvement for performance verification Test and Measurement activities.

The steps followed, in the example, are for:-


  • Define the current problem
  • Analyze and Measure the current process
  • Analyze the current causes of the issue
  • Design a process improvement
  • Implement and measure the cost/benefit of the process improvement
  • Subject the process to continuous improvement


The scenario presented, by way of example, is for CompanyCRM (a fictitious maker of CRM products) which wishes to address performance issues with their Ajax based CRM product.

Define the current problem

CompanyCRM has been seeing numerous performance issues (response times of > 8 seconds) just after product launch and has decided to embark upon a process improvement initiative to address this issue.

Analyze and Measure the current process

CompanyCRM needs to establish a benchmark, in terms of their current Test and Measurement performance verification process. It is important, as with any process improvement initiative, that the current situation is objectively measured (benchmarked) in order to be able to measure the effectiveness of any counter measures (changes) to the process.

The failure of a given Test and Measurement performance verification point can be identified by the presence of subsequent performance issues (defects) that the given performance verification point should have detected. By way of example consider a Test and Measurement performance verification point after a given JSP (or other CGI, PHP, ASP etc.) has been developed.

The value of such a verification point, early in the delivery cycle, would be to verify component stability, including the lack of memory leaks.

If an application delivered into production leaked memory then the memory leak can be traced to a given component, and the Test and Measurement process for that component could be 'improved' so that future memory leaks can be detected at that particular Test and Measurement point in the software delivery cycle.

Example – Analyze and Measure the effectiveness of the current process:

Following our example scenario CompanyCRM is finding performance issues (slow response times) when the product has been shipped to the production servers. The application is Ajax based and these performance issues were not detected in their current system load testing just prior to release.

CompanyCRM is using Jmeter as their load testing tool and after further investigation it has been determined that Jmeter is not generating the same workloads (HTTP traffic) that are being experienced in production. CompanyCRM’s performance engineers believe (hypothesis) that if they could generate more realistic web traffic during performance system testing then they would uncover the defects that the current process misses.

Analyze the current causes of the issue

Given the hypothesis that it is the nature of the generated HTTP traffic that can be changed to improve the effectiveness of CompanyCRM's performance Test and Measurement capability, CompanyCRM's performance engineers begin to analyze the shortcomings of the traffic that Jmeter (and other HTTP testing tools) generate.

CompanyCRM’s performance engineers examine HTTP requests made against the production server and compare these requests with what Jmeter has been reproducing and they see significant differences when generating traffic originating from an Ajax based Web Browser.

Ajax enabled applications, challenges for HTTP generation.

Ajax places more processing logic in the browser, which in turn has moved the http traffic structure, and timing, away from a simple Get of a resource (URL). By way of example consider the typical Ajax type-ahead (auto complete) function that is popular in searches.

In the Ajax enabled CompanyCRM application the user might start to enter the customer contact's surname and, as the first letters of this surname are entered, a list of qualifying customers is returned (auto complete) for the user to select. The http traffic for this Ajax search operation would look something like:-

http://mycrm.com/search?lang=en&search=j

http://mycrm.com/search?lang=en&search=jo

http://mycrm.com/search?lang=en&search=jon

http://mycrm.com/search?lang=en&search=jone

http://mycrm.com/search?lang=en&search=jones

The Jmeter script only had the final http request being called:-

http://mycrm.com/search?lang=en&search=jones

The Jmeter script was data driven, so the search term was given as a variable, i.e. &search=${lastname}.
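
To make the gap concrete, here is a hedged sketch, in plain Java rather than a Jmeter script, of what generating the per-keystroke traffic involves (the URL is the fictitious one used above):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Replays the Ajax auto-complete pattern: one request per keystroke,
    // not just the final search term.
    public class AutoCompleteTraffic {

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            String surname = "jones";

            for (int i = 1; i <= surname.length(); i++) {
                String url = "http://mycrm.com/search?lang=en&search=" + surname.substring(0, i);
                HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
                HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(url + " -> " + response.statusCode());
                Thread.sleep(150); // rough inter-keystroke think time
            }
        }
    }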

Having identified the issue the CompanyCRM performance engineers examined potential improvements for the performance verification process.

Design a process improvement

The CompanyCRM performance engineers knew that they could write scripts to read and parse out names one letter at a time (they are Jmeter experts) in order to recreate the desired requests. However, they decided to experiment with another approach to web traffic generation which they believe would be more cost effective (in terms of their labor) at generating the required traffic. The new approach they decided to research was utilizing the Headless browser strategy to performance testing.

The Headless browser approach to web traffic generation for load testing

One recent innovation that has enabled the load driver traffic to move closer to the ‘Real user’ experience is the Headless browser. The headless browser concept allows for a browser to be automatically executed without the user interface (GUI) portion.

The advantage of running a browser without the GUI component is that all of the underlying API (that communicates with the web server) is still available but the process executes with a much lower CPU and Memory resource requirement.

Utilizing the Headless browser approach enables multiple browsers to be executed simultaneously on a single computer (or VM). The execution of simultaneous (headless) browsers combined with the scaling capability of the public (or private) Cloud has proved useful for generating large amounts of web based traffic that is close to the real user experience.

An added bonus of using the headless browser approach for load testing is that it allows for the re-purposing of automated scripts that have traditionally been used for functional verification (i.e. Selenium Webdriver scripts).

One of the reasons why the headless browser load driving approach has become so popular, in recent years, is the popularity of Ajax which brings with it a significant change in the format and timing of http requests as well as dynamically changing html content.
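
A minimal sketch of the headless approach using Selenium and Chrome (the URL is hypothetical, and a real load test would scale the virtual users across machines or cloud instances rather than a handful of local threads):

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.chrome.ChromeOptions;

    // Each thread drives its own headless browser, so the generated traffic
    // includes all the Ajax requests a real user's browser would make.
    public class HeadlessLoadDriver {

        public static void main(String[] args) throws InterruptedException {
            int virtualUsers = 5;
            Thread[] users = new Thread[virtualUsers];

            for (int i = 0; i < virtualUsers; i++) {
                users[i] = new Thread(() -> {
                    ChromeOptions options = new ChromeOptions();
                    options.addArguments("--headless");
                    WebDriver driver = new ChromeDriver(options);
                    try {
                        driver.get("https://crm.example.com/search"); // hypothetical URL
                        // ...drive the auto-complete scenario with normal WebDriver calls...
                    } finally {
                        driver.quit();
                    }
                });
                users[i].start();
            }
            for (Thread user : users) {
                user.join();
            }
        }
    }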

Following their own analysis and further research, which included a Proof of Concept testing experiment, CompanyCRM performance engineers implemented the change to the current performance verification process by utilizing a Headless Browser approach.

Implement and measure the cost/benefit of the process improvement

Cost

The cost of Test and Measurement performance verification is basically time (labor) and materials, with materials being the verification architecture (load testing tools) and the required hardware to execute the tests.

Although this is a simple breakdown, the labor costs will depend on the testing tool's ease of use (usability). Usability, for all software, impacts the labor cost drivers of training (time to become proficient with the testing technology) and efficient tool utilization (how many hours it takes to produce the test cases).

Benefit

The benefit of the Test and Measurement performance verification process can be measured by the number of performance defects discovered after the product has been moved to production.

In this way the performance defects, found in production, should be documented and reviewed, in order to determine if this type of defect could have been discovered earlier in the SDLC and if needed the verification (load testing) process should be changed.

In the above example CompanyCRM performance engineers should be able to identify performance defects earlier in the SDLC given the improved verification process.

Subject the process to continuous improvement

That said, the verification process (both cost and benefit) should be continually monitored and scrutinized for process improvement as new tools and techniques come onto the market and the nature of the software being produced (and its environment) evolves.

Conclusion

Identifying the right test automation framework to validate web based applications is an ongoing search for the approach that aligns the value (purpose) of the verification process with the lowest cost.

Within the SDLC no single load driver (load testing tool) will be appropriate for all the given performance verification points.

By identifying, measuring and documenting the appropriate load driver for the given performance verification point, an end to end Test and Measurement process can be established that is analogous to its manufacturing counterpart.

For driving load that is close to the Real User, for AJAX based applications, there is a compelling case for a framework that runs multiple 'headless' browsers.

The AJAX illustration is just one example of identifying the most appropriate framework for the given Test and Measurement performance verification task and in any event all verification processes should be subjected to continuous improvement as we all move toward the ultimate goal of zero defects (and satisfied customers).

Common Issues Found In Performance Testing


Testing Environment

Many issues arise from the test environment itself, because it might not be configured to match production: downstream systems might not be configured in the environment, or the servers might not be specified and configured the same as production.

Wrong Use of the Test Data

An important part of execution is using the right data. If you use the wrong data for your script you will see errors during execution and you will not achieve the target load, so understanding the correct data for the test is important.


Wrong Workload Modelling

The workload model needs to be prepared correctly, keeping all the factors in mind, such as the production and test environment set-up, the number of servers used in each environment and the usage of each functionality. Understand these factors before preparing the workload model.

Not Using the Best Scripting Practice

Common examples are hard-coding values in the scripts, using a different script for each functionality where the flow could be modularized into the same script, not using think time, and misusing think time inside a transaction.

Response Time

High response time of the application during execution, which can be caused by many factors.

Capacity

The system is not able to handle the higher load as the number of users increases, so the customer needs to plan the capacity of the application.

Reliability

The application does not run properly across different operating systems and browsers.

CPU Utilization

CPU utilization of the system is high, exceeding the threshold, and system performance degrades as a result.

Memory Utilization

Memory utilization of the system is high, exceeding the threshold, and system performance degrades as a result.

Disk Utilization

Disk utilization of the system is high, exceeding the threshold, and system performance degrades as a result.

Performance Testing Process


The five most important phases are:

Initial Phase

In the initial phase of the project we collect the requirements, such as use cases or the workflow of the system, and discuss with the business analyst team to understand the important flows of the system.

A strategy document for the system will be created, covering:
  • Overview of the system.
  • Objectives of the testing.
  • Scope of the document.
  • Approach.
  • Entry criteria and exit criteria.
  • Script pass/fail criteria.
  • Data setup process.
  • Environment setup.
  • Tools to be used.
  • Assumptions and dependencies.
  • Risks.
  • Deliverables.
A high level test plan will be shared with the customer and all the teams involved in the process, and all the information will be detailed in the test plan for reference.

Planning Phase

This is one of the most important phases in performance testing because this is where we plan how the performance testing will be done. We will get to know the test data requirements and the test data dependencies for each script, which is the most time-consuming part of the testing phase.
Test environment set-up: you need to understand the environment where you are going to perform the testing.

Because most performance test environments will not be identical to the production environment, you need to understand the dependencies on downstream systems and which systems you need to stub out or configure in order to perform your testing. This will help you define the workload model for your system.

You also need to understand the tools you are going to use for performance testing, along with their dependencies and limitations.

Scripting Phase

In the scripting phase, the scripts are created using the tools agreed in the planning phase. Most importantly, all the use cases agreed with the business are scripted for the system.

A script catalog is created which contains the script flow for each of the use cases. Use cases are classified as simple, moderate or complex, and the scripting duration is estimated based on the complexity of the test cases.

During planning you will learn what test data is required for your system, but during scripting you will understand the unique data you need for each script, as well as the data dependencies of each script. This allows you to plan your test data preparation activity.

Execution Phase

This is the most important phase of performance testing because it is where you will find the bottlenecks of the system, and where most of the recommendations and system tuning will be done. Below are some important parts of this phase.
Data Preparation
Data setup for each script needs to be completed before the test, because without it you cannot run the performance test and any data-dependent scripts will fail.
Types of Performance Testing
We perform different types of performance testing on the system to understand the bottlenecks of the application, because each type of execution serves a different purpose and helps uncover different issues.

Reporting and Analysis Phase

This is the last phase of the process, once the test execution is completed. We analyze the results collected from the different tools used during the testing and share the test report with the customer.

We analyze each section of the system, such as the server side, client side, database side and web server side, because each section gives details of how the system is behaving and what the resource utilization is.

Most importantly, this analysis points to the exact section of the system where there is an issue.
The key recommendations on the application are then shared with all the stakeholders and the development team.

Once the development team delivers fixes for the issues and recommendations, we perform the testing again and certify whether the fixes are working or not.

What is Performance Testing?


Performance testing is a type of testing done to ensure that an application will behave as expected under the expected workload. The most important factors are:
  • Response time.
  • CPU Utilization.
  • Memory Utilization.
  • DB Utilization.
  • Capacity.
  • Reliability

Why is Performance Testing needed?

It is done to provide the customer with information about their application regarding response time, stability and capacity. More importantly, it is done to improve the application's behavior before it goes to market. It is important because, without it, the application will suffer issues such as:
  • Running slow if multiple users access the application simultaneously.
  • Inconsistencies across different operating systems and usability.
  • Capacity of the application.
Performance testing will determine whether the customer's application meets its response time, speed, capacity and stability targets, and it will give the customer confidence about the application's behavior in the market.
A customer application sent to market without performance testing will gain a bad reputation and will not meet its sales targets.

Different type of Performance Testing

Load Test

A load test checks the application's behavior under anticipated loads. The objective is to identify performance bottlenecks of the application under the expected load.

Endurance Test

An endurance test is done to make sure the software can handle the expected load over a long period of time, because memory leaks are typically observed only when the application runs for a longer duration. The most important objectives of an endurance test are to find memory leaks and to observe behavior under sustained load.

Stress Test

Stress testing is done on the application to find its breaking point. It determines how much load the application can handle, and how response times and resource usage behave when extreme load is applied to the system. Importantly, it gives the customer confidence about how much load the system can handle with the existing resources.

Spike Test

Spike testing is done to check whether the application can handle a sudden spike of users. It lets the customer know whether the system will crash or will handle the spike with some degradation in application behavior.

Volume Test

Under volume testing a large amount of data is populated in the database and the overall software system's behavior is monitored. The objective is to check the software application's performance under varying database volumes.

Scalability Test

Capacity/scalability testing is done to determine how effectively the software scales with the pattern of user load on the application. It helps the customer plan the capacity of the application by adding resources to the software system.

How to Choose the Right Performance Testing Tool

Apache JMeter, LoadRunner and NeoLoad are undoubtedly the top 3 performance testing tools in the market today, and the most widely used by organizations. The infographic below highlights the differences and key features of these performance testing tools.
(Infographic: Performance Testing Tools Comparison)

Performance Testing for Ecommerce – Is your Ecommerce Website ready this Holiday Season?

Getting ready this holiday season

The holiday season is just around the corner and it is time for retailers to get busy for the next couple of months.
The biggest sales events of the year, Black Friday and Christmas, are just days away now. As an online retailer, this is just about the time to ensure your site is ready for the holiday season.
For a brick and mortar store, getting ready is mostly about stocking up and ramping up staff. For online retailers, it is more about site performance and making sure the site runs well throughout the season.

Why is site performance important?

Though people still like to visit the brick and mortar store, the number of transactions that happen online is on the rise. Last year, the Black Friday online sales hit a record USD 6.22 billion, which is 23.6% more than the previous year.
The amount of traffic you get will double or more in most situations during the holiday season sale, so it is important, or in this case mandatory, to test your website for performance before the holiday season begins.
Did you know that 40% of people will abandon your site if it takes more than 3 seconds to load?
Yes, webpage loading time plays a vital role in your business. Performance testing will ensure your website is ready to handle the traffic and execute multiple transactions simultaneously.

Test your website for performance

Irrespective of the exciting offers on your website, your customers can abandon you over web page loading time. Hence, performance testing is important. There are 2 important types of performance testing that need to be done.
Load testing – An increase in website traffic can be a good thing for your business, as you will make a lot of money.
However, it may turn ugly if your website crashes due to the sudden influx of traffic. This will affect your business very badly, especially in the digital world where people take it up on social media.
A 2-second delay in loading time could cost you a fortune. Hence, you have to make sure your eCommerce website has an optimal loading time.
Load testing will help you optimize the performance of your website. Schedule a load test prior to the sale so that you have ample time to fix the issues found in the load testing results.
Spike testing – A sudden spike in user load could crash your website.
Introducing spike testing within your performance testing plan will ensure your eCommerce website is ready to handle the sudden rise and fall of user load. This is mandatory especially before the holiday season and during special discount days.

End-to-end Testing

Now that you have tested the performance of your website, it is time to do complete end-to-end testing of your eCommerce website.
As a first step, prepare a checklist of the testing that has to be completed before the holiday season.
Here is a list of the testing an eCommerce website must undergo:
Functional Testing – Testing all the functionalities just before the sale begins ensures the website is ready to make money.
A lot of new functionalities could have been added to your website just for the holiday season. For instance, integration with other new applications, features, etc. It is worthwhile to test all the functionalities of the website before the sale begins.
Security testing – Security testing is not an option anymore; it is mandatory to ensure your website is secure from security threats. While other types of testing can be skipped (I strongly recommend not to skip any), serious security threats may impact your business heavily. Hence, security testing is a must.
Performance testing – Similar to security testing, performance testing is mandatory for your eCommerce website, especially before the holiday sale.
Usability Testing – Though there won’t be many changes to the features of your eCommerce website, it is advisable to perform usability testing, just to make sure everything is fine.
This will be particularly helpful if you have added new features to your website either to improve the shopping experience or for holiday sale purpose.

Mobile app testing – People are no longer interested in turning on their laptops or PCs.
The smartphone revolution has given people the power to access everything from their handheld device. For the same reason, many eCommerce businesses have their own Android or iOS app in order to retain their loyal customers. In these situations, testing the mobile app thoroughly will pay off.

Mobile first (Bonus tip)

According to a recent survey by Boston Consulting Group, almost a third of Americans would give up sex rather than their smartphones.
I don’t have to stress the importance of mobile-friendly websites here.
Google has made it clear on many occasions that websites should be mobile-friendly, and it is one of the primary ranking factors for your website in Google’s search results.
If you don’t care about Google, then you should at least be worried about your future customers. According to Statista, the number of smartphone users is expected to reach 2.87 billion by 2020.
You would have probably designed your website for mobile devices during the launch of your website.
If not, it is best to make sure your website is responsive to different screens just before the sale begins.
The focus should be on giving the best experience to your shoppers this holiday season, since webpages optimized for mobile screens boost conversions and sales.

Stock up

The digital revolution has opened the doors for a lot of small businesses. There are many external factors (which are not in your control) that directly affect your business, such as competition and the economy.
You never know, this might be the best or worst holiday season yet for you.
The least you can do is to be prepared and deliver the best experience to your customers. This blog should help you prepare for this holiday season and keep your website crash-free.

So, go ahead, stock up and happy selling!