The Safety Scorecard: Using Multiple Measures to Judge Safety System Effectiveness

May 1, 2001
What is the best way to measure safety -- audits, incident rates or workers' comp costs? The answer may be all of the above.

Measuring the effectiveness of an organization's safety system has long been a particularly difficult problem. We have been uncomfortable with our traditional measures for many years. Measuring by number of incidents is usually relatively worthless: the metric often has little statistical validity and reliability, the numbers have little meaning, and they do not diagnose why the improvement or deterioration has occurred. Stating this is not news. We have known this for perhaps 50 years.

Our discontent with subsets of incident data as a metric forced us to other measures years ago, starting with the development of the audit -- the predetermination of who should be doing what to get results and then checking to see whether it is, in fact, happening. We perceived audits to be effective measures until we began to run correlational studies between audit scores and accident statistics in large companies over time. We often found zero correlations and even negative correlations between the two.
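To make the idea of such a correlational study concrete, the sketch below computes a Pearson correlation between site-level audit scores and incident rates. The data values and the pure-Python implementation are illustrative only; any statistics package would do the same job.

```python
# A minimal sketch of the correlational check described above: compare
# site-level audit scores with recordable incident rates across sites.
# All data values here are hypothetical, invented purely for illustration.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical audit scores (0-100) and incident rates for six sites.
audit_scores = [92, 88, 75, 81, 95, 70]
incident_rates = [4.1, 2.2, 3.0, 5.6, 4.8, 2.5]

r = pearson_r(audit_scores, incident_rates)
print(f"audit score vs. incident rate: r = {r:.2f}")
# An r near zero (or positive) means higher audit scores are not
# accompanying lower incident rates -- the pattern the studies
# described in this article kept finding.
```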

As far back as the 1970s, we were beginning to research the relationship between audits and accident statistics. In a 10-year study by the Association of American Railroads, an extensive survey was made of safety program activities in all major railroads in the United States. The report of the survey stated:

"An extensive survey of safety program activities was completed, which covered more than 85 percent of the employees in U.S. railroads at that time. The Safety Office and staff on each railroad completed a detailed questionnaire. The questions were designed to determine the presence of and absence of activities popularly thought to influence program effectiveness and to trace organizations' responsibility, staffing, program content, effectiveness measures and perceived quality of performance. The information gained was then fed into a computer to be compared with other data measuring safety program effectiveness."

This safety activities survey was basically an audit covering 12 components of a safety system:

1. Safety program content;

2. Equipment and facilities resources;

3. Monetary resources;

4. Reviews, audits and inspections;

5. Procedures development, review and modifications;

6. Corrective actions;

7. Accident reporting and analysis;

8. Safety training;

9. Motivational procedures;

10. Hazard control technology;

11. Safety authority; and

12. Program documentation.

At the same time, the precursor to today's perception survey was developed by the Association of American Railroads study group, validated by test development experts and administered to the same companies.

The results of the audit and of the perception surveys were correlated with the accident statistics for these large organizations.

The hypothesis was that high scores in the 12 areas would correlate generally with lower accident and injury rates. Instead, the researchers found little correlation. A partial statement of the findings follows:

"It was an unexpected result of this study that so little correlation was found to exist between actual safety performance and safety activity scores. The overall score has almost no correlation with train accident rates and cost indicators, and is somewhat counter-indicative with respect to personal injury rates. The only two categories that correlated consistently and properly with accident rates were monetary resources and hazards control. Two categories -- equipment and facilities resources, and reviews, audits and inspections -- had counter-intuitive correlations."

The results seemed to be saying:

1. The effectiveness of safety programs cannot be measured by the more traditional criteria popularly thought to be factors in successful programs.

2. A better measure of safety program effectiveness is the response from the entire organization to questions about the quality of the management systems, which have an effect on human behavior relating to safety.

Similar studies correlating audit results with accident statistics have been made since the 1970s. The results usually show low or no correlation. A recent study of audits at Tulane University discussed some of the reasons. The characteristics of nine occupational health and safety management models, or audits, were examined. Each was different. The OSHA model suggested five basic components for a safety system. The American Industrial Hygiene Association also suggested five, but all different from OSHA's. The British Standards Institute had two basic elements, the Department of Energy five, the American Chemistry Council six, Det Norske Veritas three, the Hospital Association two and Australian Work two.

Others not mentioned in the Tulane study show the same variety. The British Safety Council, at one time, had 30 elements, while the NOSA system from South Africa had five. The original International Loss Control Institute system had 17 to 20.

In short, they are all different, as they reflect the ideas and biases of the persons who put their beliefs on paper. Some seem "patterned" after the original concept of Jack Fletcher from Canada or of Frank Bird in his work originally done for the South African mining industry. Others have copied the original concepts of Roman Diekemper and Donald Spartz.

Seldom do you find mention of correlational studies between the content of these audits and the accident record.

The whole concept of the audit is that there are certain defined things that must be included in a safety system to get a high rating or the biggest number of stars. How does this thinking jibe with the research? Not too well:

  • A NIOSH study in 1978 identified seven crucial areas needed for safety performance. Most are not included in the above programs.
  • A Michigan State study had similar results.
  • Foster Rhinefort's doctoral dissertation at Texas A&M suggested there was no one right set of elements.
  • A 1967 National Safety Council study suggested that many of the "required" elements of packaged programs were quite suspect in terms of effectiveness.
  • A 1992 National Safety Council study replicated the 1967 study with the same conclusions.

Likewise, the study done by the Association of American Railroads conclusively showed that the elements in most packaged programs had no correlation with bottom-line results.

In light of today's management thinking and research, the audit concept has become suspect. When audits became popular, apparently no one bothered to ask many questions. In most systems, a number of elements were defined and, in most, all elements were weighted equally. Thus, the right books in the corporate safety library counted as much as whether supervisors were held accountable for doing anything about safety. Most of us never thought to question this, and most of us also did not question what components were included.

In addition, apparently there was little effort to correlate the audit results to the accident record. Thus, safety people were buying into an unproven concept. When some correlational studies were run, the results were surprising:

  • One Canadian oil company location consciously chose to lower its audit score and found its frequency rate significantly improved.
  • One chain of U.S. department stores found no correlation between audit scores and workers' compensation losses. It found a negative correlation between audit scores and public liability loss payout.

Regardless of the research, OSHA still wants to establish a standard dictating what elements must be in a safety system, as some states have done.

Some organizations have used the audit as a primary metric quite successfully because, over time, they have run correlational studies to ensure that the elements in their safety system get results. One example of this is the Procter & Gamble (P&G) system.

Gene Earnest, former safety director for P&G, explained the system:

"A number of years ago, the P&G corporate safety group developed what is known as the Key Elements of Industrial Hygiene & Safety. In effect, these are the 'what counts' activities for IH&S and are the basis for site surveys. It was believed that if these activities were effectively implemented, injuries and illnesses would be reduced. Conversely, if they were done poorly, injuries and illnesses would increase. Because line management was involved in the development of this list, there was 'buy in.'

"I cannot stress enough the importance of having a clearly identified IH&S program against which goals can be established at all levels of the organization and people held accountable for before-the-fact measures of injury and illness prevention.

"Each key element is rated by the surveyor utilizing a scale of 0 to 10, where 0 means 'nothing has been done' and 10 means the key element is 'fully implemented and effective.'"

The validity of the key elements has been proven over the years through correlational studies. The audit can be an effective metric once an organization is sure that audit elements will lead to real results.
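As a rough illustration of a key-element rating system of the kind Earnest describes, the sketch below rolls 0-to-10 element ratings up into a single site score. The element names, ratings and scaling are hypothetical, not P&G's actual key elements.

```python
# Illustrative sketch of a key-element survey score, loosely modeled on
# the 0-to-10 rating scale described above. Element names and ratings
# are hypothetical; P&G's actual key-element list is not given here.
site_ratings = {
    "management accountability": 8,  # 0 = nothing done, 10 = fully effective
    "incident investigation": 6,
    "safety training": 9,
    "hazard control": 7,
}

def site_score(ratings):
    """Average element rating, scaled to a 0-100 site score."""
    return 100 * sum(ratings.values()) / (10 * len(ratings))

print(f"site survey score: {site_score(site_ratings):.0f}/100")
```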

The Perception Survey as a Metric

The perception survey is used to assess the status of an organization's safety culture. Critical safety issues are rapidly identified, and any differences in management and employee views on the effectiveness of company safety programs are clearly demonstrated.

The goal of the perception survey is to understand how your company is performing in each of 20 safety categories. These categories include accident investigation, quality of supervision, training and management credibility. The survey begins with a short set of demographic questions that will be used to organize the graphs and tables that show the results.

The second part of the survey consists of 74 questions designed to uncover employee perceptions about the 20 safety categories. Each question has been statistically validated over 10 years of use in the field; the survey itself grew out of the previously mentioned Association of American Railroads study.
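The sketch below illustrates one plausible way such survey responses could be rolled up into category scores broken out by demographic group. The categories, responses and percent-positive scoring rule are assumptions for illustration; the survey's actual scoring method is not described here.

```python
# A sketch of rolling survey answers up into category scores, broken out
# by demographic group. The categories, sample answers and scoring rule
# (percent of positive answers) are assumptions for illustration only.
from collections import defaultdict

# Each response: (demographic group, safety category, answered positively?)
responses = [
    ("hourly", "accident investigation", True),
    ("hourly", "accident investigation", False),
    ("manager", "accident investigation", True),
    ("manager", "accident investigation", True),
    ("hourly", "management credibility", False),
    ("manager", "management credibility", True),
]

totals = defaultdict(lambda: [0, 0])  # (group, category) -> [positive, total]
for group, category, positive in responses:
    totals[(group, category)][0] += int(positive)
    totals[(group, category)][1] += 1

for (group, category), (pos, total) in sorted(totals.items()):
    print(f"{category:25s} {group:8s} {100 * pos / total:5.1f}% positive")
# Large gaps between the hourly and manager rows for the same category
# are the perception discrepancies the survey is designed to surface.
```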

As a result of that 10-year study, several conclusions were reached:

1. The effectiveness of safety efforts cannot be measured by traditional audit criteria.

2. The effectiveness of safety efforts can be measured with surveys of employee (hourly to executive) perceptions.

3. A perception survey can effectively identify strengths and weaknesses of elements of a safety system.

4. A perception survey can effectively identify major discrepancies in perception of program elements between hourly rated employees and each level of management.

5. A perception survey can effectively identify improvements in, and deterioration of, safety system elements if administered periodically.

The above conclusions, based on the data, carry considerable importance for safety management thinking. In effect, the data strongly suggests that those 12 elements described as a systems approach are not closely related to safety performance or results.

The 12 elements are not a bad description of the elements of many safety programs in use in industry today. This data does not question the audit approach to safety performance improvement. It does question the validity of any audit approach made up of arbitrarily decided elements such as these 12.

It supports, rather strongly, the belief that a perception survey, as described in this article (sometimes called a "culture survey") and properly constructed, is a better measure of safety performance and a much better predictor of safety results.

The Scorecard Approach

The trend today is toward multiple measures to assess safety system effectiveness. These usually include at least three measures:

1. The accident record,

2. The audit score, and

3. Perception survey results.

For the present, the accident record (statistics) is still used, as management is extremely hesitant to give it up. Eventually, it will be phased out of the scorecard. This phaseout could come soon if OSHA's proposed concept of counting a reported pain as a recordable becomes reality. If it does, OSHA incident rates will be many times what they currently are and will have little meaning, becoming only a pain index.

There could be other things in a scorecard, for instance:

4. Behavior sampling results,

5. Percentage to goal on system improvements, and

6. Dollars (claim costs, total costs of safety, etc.).

P&G's scorecard contains two measures: the OSHA incident rate and Key Element ratings (basically an audit score). Mead is considering using three: incident rate, perception survey results and audit scores. PPG is considering using four.

Other organizations are experimenting with other mixes for their scorecard of metrics to assess safety system effectiveness:

  • Navistar uses eight: incident frequency rate, lost-time case rate, disability costs, percent improvement in safety performance, actual health care costs, absenteeism, short-term disability and long-term disability.
  • Kodak sets goals and measures in seven areas: lost time, plant operations matrix (percent to goal), employee surveys, assessment findings, integration matrix, vendor selection and "best in class" (a benchmark metric).
  • The National Safety Council has suggested "performance indexing," which includes six: number of team audits, process safety observations, employee attitude ratings, required safety training, safe acts index and management audits.

Many other possible metrics, both leading and trailing, are discussed in detail in a new book on metrics from the Metrics Task Force of Organization Resources Counselors. There is considerable new and innovative thinking taking place in many organizations. As with safety system content, it is also true with safety metrics: There is no one right way to do it. Each organization must determine its own "right way."

In addition, after deciding the components to be included in the scorecard, you must decide how each component should be weighted, making it possible to come up with a single metric, if so desired.
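As a minimal sketch of that weighting step, the code below collapses three normalized component scores into one composite number. The components, weights and normalization are illustrative choices each organization would make for itself, not a prescribed formula.

```python
# A sketch of collapsing a scorecard into one number. The components,
# weights and normalization below are illustrative choices an organization
# would have to make for itself, not a formula from this article.
def composite_score(components, weights):
    """Weighted average of component scores already normalized to 0-100."""
    assert set(components) == set(weights)
    total_weight = sum(weights.values())
    return sum(components[k] * weights[k] for k in components) / total_weight

components = {
    "incident record": 62,   # e.g. a rate converted so higher = better
    "audit score": 78,
    "perception survey": 71,
}
weights = {"incident record": 0.2, "audit score": 0.3, "perception survey": 0.5}

print(f"scorecard composite: {composite_score(components, weights):.1f}/100")
```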

It is obvious that we have two serious problems. One, we must figure out what should go into our scorecard; and two, we must convince our middle and upper management of the appropriateness of the scorecard elements. These changes pose serious challenges to safety professionals. They must create considerable dissatisfaction in their organizations with status-quo metrics (create cognitive dissonance) from the CEO down. Then, they must install the selected scorecard to replace the current, single metric. Because this could mean taking away executives' ways of comparing themselves with other organizations, we can expect considerable resistance to this change at that level.

At some point, we will have to do this. Probably the sooner we start down this route, the better.

References

Bailey, C., Using Behavioral Techniques to Improve Safety Program Effectiveness, Association of American Railroads, Washington, 1988.

Bailey, C., and D. Petersen, "Using Perception Surveys to Assess Safety System Effectiveness," Professional Safety, February 1989.

Bird, F., International Safety Rating System, 5th Edition, International Loss Control Institute, Loganville, Ga., 1988.

Diekemper, R., and D. Spartz, "A Quantitative and Qualitative Measurement of Industrial Safety Activities," ASSE Journal, December 1970.

Earnest, R., "What Counts in Safety," in Insights into Management, National Safety Management Society, 1994.

Esler, J., "Kodak's Health, Safety and Environmental Performance Improvement Program," presentation to ORC, 1999.

Farabaugh, P., "OHS Management Systems: A Survey in Occupational Health and Safety," Occupational Health & Safety, March 2000.

Fletcher, J., Total Loss Control, National Profile Ltd., Toronto, 1972.

Herbert, D., "Measuring Safety and Trends in Safety Performance," presentation at National Safety Congress, 1999.

Navistar, "Operationalizing Health and Productivity Management," presentation to ORC, March 1999.

Organization Resources Counselors, Metrics, ORC, Washington, 2001 (expected).

Petersen, D., The Perception Survey Manual, Core Media Training Systems, Portland, Ore., 1993.

Petersen, D., Techniques of Safety Management, 3rd Edition, ASSE, 1996.

Dan Petersen, Ed.D., PE, CSP, is a consultant in safety management and organizational behavior. He is past president of the National Safety Management Society and the author of 14 books and tape series on safety management.
