Thursday, December 26, 2019

The pragmatic difference between Verification and Validation

There are many definitions of the terms Verification and Validation on the web, and in essence they all come down to the objectives and goals of the undertaking rather than to any particular methods or techniques applied.

The typical definitions for the V&V terms are:

Verification: Determining if the software product is being built correctly.

Validation: Determining if the correct software product is being built.

Although these terms look similar and are both concerned with providing the right product at the optimal cost, each of these process areas serves a different purpose.

Validation speaks to the overall value of the delivered product once it is placed in its working environment, whilst verification speaks to whether the correct procedures and standards were followed in delivering the product. Ironically, a product may be verified as having followed the correct procedures yet be invalid, in that it ultimately did not fulfill its intended purpose when placed in the production environment.

A typical SDLC with Validation and Verification activities.

For further clarification, consider the typical SDLC as a reference point for verification and validation activities.

Business requirements are the main focus of validation, both in terms of inspection and in terms of testing the delivered application. When inspecting the business requirements, the question being addressed is “will this satisfy the business need when implemented?”

There is also a verification dimension to an inspection (or walkthrough) of the business requirements, one that is concerned with whether the correct standards or format have been used for the business requirements document. In this way the verification of the business requirements can be done by someone who does not know the business needs but does know the correct structure for the requirements document itself.
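To make this distinction concrete, a purely structural check, like the minimal sketch below, could stand in for that verification step: it says nothing about whether the requirements are the right ones, only whether the document follows the expected template. The file name and the list of mandatory sections are illustrative assumptions, not taken from any particular standard.

```python
# A minimal sketch of a structural (verification) check on a requirements
# document. The file name and the mandatory sections are hypothetical.
REQUIRED_SECTIONS = [
    "Purpose",
    "Scope",
    "Stakeholders",
    "Functional Requirements",
    "Acceptance Criteria",
]

def verify_document_structure(path):
    """Report which mandatory sections are missing from the document."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    return [section for section in REQUIRED_SECTIONS if section not in text]

missing = verify_document_structure("business_requirements.txt")
if missing:
    print("Document fails verification; missing sections:", ", ".join(missing))
else:
    print("Document structure verified against the template.")
```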

Moving on to the technical specification that has been derived from the business requirements, a similar exercise can be performed, in that the document can be verified to meet standards.

The question of validating the technical specification is more complicated. If validation of the technical specification is to be done, it must be done by someone who knows the intended purpose of the software.

If a traceability exercise is done, i.e. someone checks that each paragraph in the business requirements has a corresponding entry in the technical specification, then this is not strictly a validation activity.

Requirements traceability, as noted above, is part of verification. If, however, the technical specification is walked through with the business proponents, perhaps with screenshots and mock-ups, then a validation activity can take place.
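A traceability check of this kind can even be partly automated, precisely because it requires no business judgement. The sketch below assumes a simple convention in which business requirements carry IDs such as BR-12 and the technical specification cites those IDs; the file names and the ID format are hypothetical.

```python
# A minimal sketch of an automated traceability (verification) check.
# The file names and the "BR-<number>" ID convention are hypothetical.
import re

def extract_ids(path, pattern=r"BR-\d+"):
    """Collect every requirement ID referenced in the given document."""
    with open(path, encoding="utf-8") as f:
        return set(re.findall(pattern, f.read()))

business_ids = extract_ids("business_requirements.txt")
spec_ids = extract_ids("technical_specification.txt")

untraced = business_ids - spec_ids
if untraced:
    print("Business requirements with no entry in the technical specification:")
    for req_id in sorted(untraced):
        print(" -", req_id)
else:
    print("Every business requirement is traced to the technical specification.")
```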

As we move to the developed software product, further validation activities can take place, but these require either a prototype or some other view of the software that is visible to the business proponents.

One of the reasons small iterations (agile style) have become popular is that validation can be done incrementally rather than waiting for the final product to be completed.

When the system has been developed, testing it against the technical specifications is a verification activity, whilst testing it against the business requirements is a validation activity.
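As a rough illustration, the two kinds of test might look like the sketch below. The pricing function, the specification detail about VAT, and the business scenario are all hypothetical, chosen only to contrast the question each test answers.

```python
# A hypothetical pricing function with one verification-style test (against
# the technical specification) and one validation-style test (against the
# business requirement). None of the figures come from the article.
import unittest

def invoice_total(items, vat_rate=0.20):
    """Hypothetical function under test: items are (unit_price, quantity) pairs."""
    net = sum(price * qty for price, qty in items)
    return round(net * (1 + vat_rate), 2)

class VerificationTests(unittest.TestCase):
    """Checks the software against the technical specification:
    'totals apply a default 20% VAT rate and are rounded to two decimals'."""

    def test_default_vat_rate_per_spec(self):
        self.assertEqual(invoice_total([(100.00, 1)]), 120.00)

class ValidationTests(unittest.TestCase):
    """Checks the software against the business requirement:
    'a customer buying three items priced at 10.00 pays 36.00 including tax'."""

    def test_customer_scenario(self):
        self.assertEqual(invoice_total([(10.00, 3)]), 36.00)

if __name__ == "__main__":
    unittest.main()
```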

Through all stages of the development cycle any review of the software (or specifications) that answers the fundamental question of ‘will this product fulfill its business purpose?’ is considered validation.

Validation should be done as early as possible in the development life cycle, in order to avoid late surprises that the software is not what the end user (or customer) required. Many software projects have followed the prescribed standards and procedures but have failed due to producing an ‘invalid’ product.

The consequences of these definitions for the organizational structure of the quality assurance and software testing team will depend on the complexity of the business problem being addressed.

If the business problem is trivial, then business and systems analysis, together with validation and verification activities, can be combined.

Generally it is more useful to separate the duties of developers from those of testers, so that so-called independent verification and validation (IV&V) can be carried out.

Strictly speaking the term independent verification and validation refers to a completely separate team (outside of the developer management structure) being engaged to conduct the IV&V activities.

That said, there is no issue with combining the actual V&V activities, so that one person may perform both system and business requirements testing. Acceptance testing should still be done by a business proponent, as the main function of acceptance testing is the final approval.


Where the business problem is complicated, as it is in most large companies with their many interconnecting systems, separating validation from verification in terms of personnel is both effective and efficient.

In terms of effectiveness, having individuals (typically business analysts) focus on the definition and then the subsequent testing (validation) of the requirements creates the channel by which the Voice of the Customer (VOC) can be heard throughout the software delivery process.

Business Process Management (BPM) and other business-facing modeling techniques yield effective communication artifacts, and specialized business analysts can acquire and build skills in them. In this way (specialization around validation activities) a professional team of business proponents can be established and developed.

The methods and techniques used for verification are considered more technical in orientation than the business-facing validation techniques.

Having more technical team members specialize in verification activities will yield efficiency and effectiveness gains. Consider white-box testing techniques, which require code coverage to be measured: a quality control activity that does not require any knowledge of the business requirements.

Such an activity is a prime example of software verification, in that a high percentage of code coverage can be ‘specified’ as a requirement of testing. In this way, achieving a high percentage of code covered gives a ‘yes’ answer to the question “Are we building this product correctly?” The application of white-box testing techniques is better served by having personnel specialize in those particular (more technical) skills.
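As a sketch of how such a ‘specified’ coverage requirement might be enforced, a coverage measurement tool such as coverage.py can record which code the test suite exercises, and the total can be compared against an agreed threshold. The module name, the test entry point, and the 80% figure below are assumptions for illustration.

```python
# A minimal sketch of code coverage as a verification gate, using coverage.py.
# The module name (myapp), the test-suite entry point, and the 80% threshold
# are illustrative assumptions.
import coverage

cov = coverage.Coverage(source=["myapp"])
cov.start()

from myapp.tests import run_all_tests  # hypothetical test-suite entry point
run_all_tests()

cov.stop()
cov.save()

# "Are we building this product correctly?" reduced to a measurable criterion:
# the agreed percentage of code exercised by the tests.
total_percent = cov.report()
assert total_percent >= 80.0, f"Coverage {total_percent:.1f}% is below the specified 80%"
```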

Conclusion.

Separating team members into verification or validation activities, for large complex projects, is an effective divide-and-conquer strategy for breaking down the overall software quality control goals.

By recognizing the essential differences between verification and validation an organization is better positioned to engage in meaningful process improvement as personnel develop the required skills and techniques in these two process areas.
