How Much Testing is Enough?

Dec 1, 2004 12:00 PM, By Don Sturgis


Testing a physical security system at various stages in a security project ensures that the user gets what he wants out of the system. But how much testing is enough? Highly critical systems — such as those for a nuclear power facility or facilities subject to government regulations — have very strict and formal testing requirements. For less critical systems, the scope of testing depends on the testing requirements established by the customer.

Bias regarding testing

Consultants tend to favor formal, well-documented customer-led testing, especially for large or critical systems. This stems in part from their responsibility as consultants and the potential for blame should a serious security failure occur because of a system problem. It also reflects consultants' familiarity with security system installations and their experience with all the things that can and do go wrong. Of course, the role of the consultant is an important factor. Is the customer expecting the consultant to make acceptance decisions on his behalf? Or is the customer calling the shots, with the consultant providing help and guidance?

System integrators tend to favor less formal, integrator-led testing due to the amount of resources and associated costs involved with formal customer-led testing. Integrators are usually confident that the system will perform as expected, and that if a problem does occur, they will be able to handle it quickly and effectively. Integrators tend to view formalized testing as overkill.

Customers without previous testing experience often rate the importance of testing based on their confidence in the integrator. What has the integrator's project performance been to date? Have the integrator's submittals been on time, complete, and free of major errors? Do the submittals reflect what the customer has asked for? Has the integrator explained the project and the system in terms that the customer understands, without seeming condescending or insincere? Has the integrator always delivered what was promised, or have unexpected changes been made without advance discussion? Good integrator performance raises customer confidence; poor performance erodes it.

The customer can be caught in the middle between a consultant who favors maximum testing and an integrator who favors minimal testing. Instead, the consultant and the integrator should both use the customer's true needs as the basis for determining the scope of testing, unless the customer is experienced enough to generate the testing requirements himself.

The scope of testing

The overall purpose of the full spectrum of testing, from the Factory Acceptance Test to the Operational Acceptance Test, is to demonstrate — according to the customer's requirements — that the system:

  • is installed correctly;

  • operates correctly per the customer's expectations; and

  • is reliable under continuous operation.

Each type of test exists to allow the customer to make a go/no-go decision to accept the work to date (such as key equipment selection, system design, installation, or system setup), to authorize a progress payment, and to move on to the next phase of the project. Testing that does not enable the customer to easily make that decision is insufficient. Testing sufficient to enable a sound customer decision is good enough. Testing that goes beyond enabling the decision is wasteful of the time and resources of both the customer and the system integrator.

This means that the scope of testing is highly dependent upon the following factors:

  • the size and complexity of the system;

  • how much new systems integration work is required;

  • whether any new or cutting edge (i.e., relatively unproven) technology is involved;

  • the performance of the system integrator to date; and

  • the knowledge and experience of the customer or the customer's project team.

A small testing effort is required for a small system installed by a highly proficient system integrator for a very knowledgeable customer. If any of the factors listed above increases, the amount of test planning and execution required also increases. Two systems of identical size and complexity may require the same or very different levels of testing, depending upon the performance of the integrator and the knowledge of the customer.

System size and complexity

A large or complex system requires careful test planning in order for testing to be sufficient without being too costly or too time-consuming. Acceptance testing complexity should not increase in step with system complexity: at the user level, complex systems should present only a limited set of options or choices at any one time, and although there may be many configuration options, only those in use by the customer need to be tested. System use scenarios, based on normal security operations and incident response, help to keep the scope of testing both realistic and manageable. Simultaneous multiple-site tests should not attempt 100 percent coverage of system functionality; instead, testing for multi-site systems should incorporate realistic scenarios that exercise network communications bandwidth and speed of system response over long distances.

Systems integration work

For large-scale integrations, integration test planning may be the responsibility of a separate V&V (verification and validation) team. Integration test planning should always be based on the customer's operational requirements, and should be undertaken by a group that is separate from the integration team. Tests designed from an integration-team perspective may not exercise the system according to the customer's planned usage, which may stress the integration in ways the integration team did not anticipate. However, technical documentation of the integration should be made available to the test designers, so they can ensure that testing covers all the points of systems integration.

New or cutting edge technology

If the system integrator has only minimal experience with new or cutting edge technology, it is important to obtain the involvement of the manufacturer's technical personnel. A good test plan can shed light on the customer's intended use of the technology, which in turn may lead the manufacturer's technician to adjust or optimize the setup of the technology to be more in tune with the customer's requirements. Additionally, if the system integrator or customer has “gone overboard” with regard to test design, the manufacturer's technical personnel can identify the most meaningful and relevant tests and help eliminate unnecessary testing.

Integrator performance

Poor integrator performance during early stages of a project reduces customer confidence and trust in both the integrator and in the system itself. The impact on testing can be significant. The customer will need more thorough test documentation, and will require slower test execution on the part of the integrator in order to more closely observe the test execution. Anything the customer does not understand will be questioned, and the integrator will not be given the benefit of the doubt.

If customer questions cannot be answered satisfactorily, testing may even be suspended until the integrator can provide test personnel who can properly explain each test action to the customer. If there are some punch-list items resulting from the test, a distrustful customer may call the test a failure, whereas a trusting customer would give an overall “pass” to the test as long as the integrator commits to an acceptable time frame for handling the punch-list items.

Customer knowledge and experience

Some integrators consider themselves unfortunate when they approach a test with a customer who has little knowledge or experience with security system projects. Experience may have shown them that such customers can be unintentionally troublesome during testing, because they do not understand a lot of what is being done and insist upon having even the smallest of actions explained to them.

Customers have the right to understand the tests being performed, and are not out of line for asking questions, no matter how many they ask. A savvy integrator will recognize such a situation as an opportunity to provide greater value to the customer and will develop a training curriculum that includes the key system functions the customer will use, including those planned for inclusion in testing. Trained customer personnel make willing and able test participants, and take satisfaction in being able to help with test execution and keep it going at a fast pace. Integrators who have not caught on to this fact schedule training for after acceptance testing, and thus make testing more arduous and lengthy for themselves and for their customers.

The reliability of a system is determined by how well all of its “parts” work together, and that includes the people who use the system. A properly designed high-quality system has only potential reliability when put into use; its actual reliability will be determined by how well the system operators can run the system. Thus training before testing can truly make testing a demonstration of system reliability in the fullest sense.

General recommendations

The following recommendations for customers, consultants and integrators (as indicated below) are intended to support “good enough” testing. Apply them as necessary for the project at hand:

  • Do not skip phases of testing, but reduce the scope of each test where appropriate. For example, visit an installed identical system in place of performing a Factory Acceptance Test. [Customers, Consultants, Integrators]

  • Clearly define and document the systems integration work, and determine the smallest set of tests that will be needed to verify its correct operation. [Integrators]

  • Perform early thorough testing of new or cutting edge technology components during installation and setup, and invite the customer to observe. This will allow a simplified set of test steps to be included in the formal test procedures. [Customers, Consultants, Integrators]

  • Establish miniature milestones for each project phase, and deliver them on time, so that customer satisfaction is high when approaching testing. [Integrators]

  • Make quality assurance a high priority and keep submittals free of mistakes and omissions both big and small, to maximize customer confidence and trust. [Integrators]

  • Design customer education and training to run parallel to, but slightly ahead of, each test. Educated customers can help reduce the scope of acceptance testing required and shorten actual test execution time. [Customers, Consultants, Integrators]

About the Author

Don Sturgis, CPP, is a senior security consultant for Ray Bernard Consulting Services (RBCS), a firm that provides high-security consulting services for public and private facilities. This article is excerpted material from the upcoming book The Handbook of Physical Security System Testing by Ray Bernard and Don Sturgis, scheduled for publication by Auerbach Publications in the spring of 2005. For more information about Don Sturgis and RBCS, go to www.go-rbcs.com or call 949-831-6788.
