Her name was Sandy, and after a brief visit, the lights were out, thousands of homes and businesses were damaged or destroyed, and emergency planning programs were put to the test.
Hurricane Sandy caused widespread damage and serious economic disruption in 24 states from Florida to Maine and as far inland as Wisconsin, with the worst damage in New Jersey and New York. Also known as “Superstorm Sandy” and “Frankenstorm” because it later merged with other storm systems, Sandy formed on October 22, peaked as a Category 2 storm, spanned 1,100 miles in diameter and caused billions of dollars’ worth of damage – some estimates say that insured loss payouts will likely reach $10-20 billion and that the total cost of the storm will likely be between $30 billion and $50 billion.
Sandy gave plenty of warning of her arrival, but still caused lingering disruptions from flooding, water leaks and extended power outages.
Sandy also gave us all a lesson in enterprise resilience and business continuity.
Lights Out
Overwhelming flooding. Swamped subway lines. Widespread power outages. Was the East Coast ready for Sandy? Did people believe and heed the weather forecasts? Some of the thorniest problems after Sandy, including a gasoline shortage and power outages, ended up being dealt with largely on the fly.
“I don’t know that anyone believed,” said Gov. Andrew Cuomo in an AP report after the storm. “We had never seen a storm like this. So it is very hard to anticipate something that you have never experienced.”
Asked how well prepared state officials were for Sandy, Cuomo said in the report, “Not well enough.”
Once the storm passed, the biggest issue seemed to be the lack of power, says Bill Raisch, director of InterCEP – the International Center for Enterprise Preparedness – at New York University. InterCEP is an international research and development center for strategic risk management and organizational resilience. The Center focuses on solutions to challenges in private-sector risk management and on the interface between the public and private sectors in addressing shared risks.
“One of the big lessons we learned was about power and our primary and secondary dependencies on power,” says Raisch. After the storm, the DOE estimated that 6,000 homes and businesses in the Mid-Atlantic region still remained without power because of Hurricane Sandy or the Nor’easter that followed; at its peak, the number of outages had reached nearly 9 million.
“In the U.S., we have a reliable source of power, in general. However, here in NY, our dependency is not evident until something breaks,” Raisch says. “More importantly, secondary dependencies became evident: many firms had a work-from-home contingency, but when people don’t have power at home, that contingency fails. The lack of power was one of the lessons learned from Sandy. First, a business has to identify its power dependencies as a firm and then look at how to prioritize and supplement. We should look at nations with less reliable power supplies that deal with outages on an ongoing basis. We can learn from them,” he says.
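Raisch’s observation about secondary dependencies can be checked mechanically. The sketch below is a minimal illustration in Python, with invented arrangement and dependency names: if a contingency shares a dependency with the arrangement it backs up – here, grid power, which both the office and employees’ homes rely on – a regional outage defeats both at once.

```python
# Hypothetical example: the office and its work-from-home contingency,
# each with the dependencies it needs in order to function.
PRIMARY = {"office": {"grid_power", "building_access", "office_network"}}
CONTINGENCY = {"work_from_home": {"grid_power", "home_internet"}}

def shared_dependencies(primary, contingency):
    """Return dependencies common to the primary arrangement and its backup."""
    primary_deps = set().union(*primary.values())
    contingency_deps = set().union(*contingency.values())
    return primary_deps & contingency_deps

overlap = shared_dependencies(PRIMARY, CONTINGENCY)
if overlap:
    # 'grid_power' appears in both: a regional outage defeats the primary
    # arrangement and its contingency together, as Sandy demonstrated.
    print(f"Shared single points of failure: {sorted(overlap)}")
```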
According to a federal study, the U.S. power grid has many key pieces of equipment that are either unguarded or so old that they lack the sensors to keep outages from cascading. The report by the National Research Council (NRC), a private independent agency operating under a congressional charter, suggests that the power losses from Sandy were minor compared with what could happen. “We could easily be without power across a multistate region for many weeks or months, because we don’t have many spare transformers,” says M. Granger Morgan, engineering professor at Carnegie Mellon University in Pittsburgh and chairman of the NRC committee that wrote the report.
The report says the federal government faces difficulty in addressing weaknesses in the nation’s power grid because more than 90 percent of the grid is privately owned and regulated by the states. It also calls on DHS or DOE to study where the U.S. is most vulnerable to extended blackouts and to develop cost-effective strategies for reducing their length and their impact on critical services.
In a storm such as Sandy, self-reliance comes into play. “Municipalities face critical crossroads when a storm such as Sandy comes through because they have to reprioritize their activities. So my two key words have been ‘self-resourcing,’” says Thomas J. Rohr Sr., CPP, Director of Worldwide Corporate Security for Carestream in Rochester, NY.
One of the company’s manufacturing facilities didn’t have power for a week, but Rohr says he was prepared. “If something goes wrong, we want to know that we have a plan. And with our risk assessment and business continuity plan, our company is above par. In the storm of the century, until the next one hits, what is your self-resourcing? We were able to get emergency power generation to the site. Sandy showed us where relationships come in handy. One of the relationships we have is with a local airport, which is extremely community-minded, so when neighbors needed something, if they could do it, they reached out and helped people.
“Could we have gotten power up faster?” he asks. “Yes, but in this situation we did well. We can have all of the power in the world, but if our workers have been evacuated or if they have family in trouble, you don’t get on the soapbox and tell them that they need to get to work. Overall, you cannot spend the money and gear up for these 1 percent types of situations, which Sandy was. You hope that your basic programs and plans get you started and then you work through the rest.”
Relationships also helped Jim Govro, Director of Facility Management at Charter Communications, which has 4.4 million square feet across 1,300 facilities, including facilities on the North Carolina coast. “We have developed plans that leverage our class A contractors across the country,” Govro explains. “When things go wrong, I know that we have contractors in place to give us an assessment of what needs to be done. In an emergency situation, nine times out of 10, unless you are a big player in the area, all of the building materials are usually pre-bought or have hold tickets, so you might not be able to get building materials within a 500-mile radius. So we hire a trucking company to help us get our systems back in line. For example, after Katrina hit, we brought in fuel trucks from St. Louis.”
In most emergency situations, then, Govro believes it is “every man for himself, and it’s about leveraging great relationships. When your business is not concentrated in one area, we consider our contractors to be great assets to us, because the last thing that you need is someone you can’t trust or who can’t come through. If you asked me to send you my disaster plan, I can’t do it. The fact is that we mitigate a lot of it on the run. The operations guy can say, ‘Let’s just go buy some lumber,’ but you can’t take that chance. That’s where I leverage my contractors to do that for me.”
“We were ready, and looking back, I don’t think that we would have done anything differently,” adds John R. Bellucci, Executive Director of the NYS Bridge Authority. “In 2011, we put together a 24/7 way to function in a hurricane situation and to anticipate potential problems. One of the biggest things that we did was make sure that every generator would turn on in the event of a power outage. Really, our biggest problem with Sandy was staffing. We had employees in place on each bridge who would not normally be there, and if their replacements did not show up, they were not allowed to leave. What I would do in the event of another storm is expand the use of Twitter and other social media to get people solid information. There were a lot of rumors prior to the storm – that bridges would close at noon, for instance – and all of the bridge offices were inundated with phone calls. In the event something happens and we need to close a single bridge facility, there are diversions that we need to coordinate with traffic centers, so it’s about getting the news out.”
Enterprise Resilience 2.0
How long can you survive if your doors are closed or your systems are down? A strong enterprise resilience plan must consider many factors, including the safety of personnel, alternative office space and the loss of vendors. And once the plan is established, it should be tested regularly. When things go wrong, who is in charge, and who is on the response team?
“In this economic environment, risk management is still being perceived as an overhead item, and that’s wrong,” notes Ray Thomas, who heads the business assurance division for Booz Allen, a consulting firm. “In reality, risk management should be a core part of achieving [business] objectives. We increasingly see firms contemplating a more collaborative or consortium approach to common risks, and that’s smart thinking. The CSO and head of business continuity are just as critical as marketing and operations in an enterprise.”
Thomas suggests that a holistic view of risk management is needed. “Hurricane Sandy should not have been an unexpected event in the NY area. Hurricanes are a known risk, so it comes down to ensuring that business continuity plans are developed based on perceived risks,” he says. “In an emergency, a larger organization can shift things out of a region or elsewhere. Small businesses don’t have that luxury. So I also stress having key partnerships in place.”
This article was previously published in the print magazine as "Lights Out and Lessons in Enterprise Resilience."
Ten Ways To Improve Your Business Continuity Plan
By Kevin Howells, Manager at BTG Global Risk Partners
Over the years, going back to my auditing days, I have looked at hundreds of disaster recovery plans, business continuity plans and other variously termed business resilience documents. In my present role, I continue to view and review them, and now also test them, via bespoke scenarios written for clients. In my experience, the problems found in these plans can be summarized into ten categories.
1. IT-Specific
In 2007, I was surprised to receive a public sector organization’s business continuity documentation for review and find that it related entirely to IT – and that the main document concerned Year 2000 compliance. As we all know, business continuity professionals constantly have to educate colleagues (normally senior management colleagues) that all processes need protecting, not just IT-specific ones, but it still surprises me when I find plans that are solely IT-focused.
2. Narrow Plans
A business continuity plan needs to take into account the threats to all aspects of an organization’s operation, or at least the key threats. Plans I have reviewed have sometimes been too operational, i.e. only including the threats to specific projects and locations. Alternatively, I have also seen business continuity documentation that only covers central or head office functions, such as IT, human resources and finance. It is also important for a business continuity plan to look beyond the organizational level: for a start, there is the organization’s supply chain to consider, then stakeholders such as customers, staff and partner organizations, and then the community as a whole.
3. Not Process-Driven
Too many plans I review have a specific resolution in place for a small number of specific potential incidents – normally a fire, a power failure, an IT failure and, more recently, a flu pandemic. The problem with this approach is that unless the actual incident exactly matches one of the incidents planned for, the organization is essentially unprepared, and there is little point in having the plans at all. Business continuity plans need to concentrate on processes and generic threats (such as a building being unusable, lack of power, lack of telecommunications or a significant proportion of staff being unavailable).
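To illustrate the difference, here is a minimal, purely hypothetical sketch in Python (the process, threat and response names are invented for the example): responses are keyed to generic threat categories, so an incident the plan never named – a flood, say – still resolves to known actions through the effects it produces.

```python
# A hypothetical process-driven plan: responses are keyed to generic
# threat categories, not to named incidents.
PLAN = {
    "order_fulfilment": {
        "building_unusable": "Relocate pick/pack to the secondary warehouse",
        "power_loss": "Start the generator; throttle to priority orders only",
        "telecom_loss": "Switch order intake to the cellular failover",
        "staff_unavailable": "Invoke the cross-trained staff rota",
    },
}

def responses_for(incident_effects):
    """Map an actual incident's effects onto the generic-threat responses."""
    actions = []
    for process, playbook in PLAN.items():
        for effect in incident_effects:
            if effect in playbook:
                actions.append(f"{process}: {playbook[effect]}")
    return actions

# A flood was never named in the plan, but its effects map onto two
# generic threats, so the plan still yields concrete actions.
print(responses_for(["building_unusable", "power_loss"]))
```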
4. Documentation Not Available
In my internal audit days, we were aided by a checklist for reviewing business continuity plans, and the first question on it was “Where is the plan held?” More often than not, the answer was either “In my desk drawer” or “On the computer.” This was great in one way, as it gave me an easy recommendation for my report, but not so great for the organization itself, particularly if an incident occurred that involved the building they were in, or the IT systems, or both! If those with responsibility for business continuity do not have access to the plans, they cannot apply them.
5. Outdated
Any document needs to be updated in order to remain relevant, and this applies as much to business continuity plans as to any critical document. In a time of crisis, the last thing a business continuity team needs is to find that the plan has not been kept up to date: scrabbling around trying to contact someone whose number they don’t have, discovering that the insurers or bankers have changed, or being referred to a document (or department) that no longer exists in the organization. Business continuity plans should be formally reviewed on at least an annual basis.
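The annual-review discipline is also easy to automate. As a minimal sketch in Python (the section names, dates and one-year interval are illustrative assumptions), a simple review register can be scanned for anything that has gone unreviewed for more than a year:

```python
# Hypothetical review register: in practice this might live in a CSV
# file or a document management system.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)

REVIEW_REGISTER = [
    {"section": "Key Contact Listing", "last_reviewed": date(2011, 6, 30)},
    {"section": "Insurer/Banker Details", "last_reviewed": date(2012, 10, 1)},
]

def overdue_sections(register, today=None):
    """List sections whose last review is older than the review interval."""
    today = today or date.today()
    return [row["section"] for row in register
            if today - row["last_reviewed"] > REVIEW_INTERVAL]

# The Key Contact Listing, untouched since mid-2011, would be flagged.
print(overdue_sections(REVIEW_REGISTER))
```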
6. No Testing Is Planned
Many organizations sit with a BS 25999-compliant business continuity plan in place, unaware that there are problems within it – maybe not major ones, but of a scale that could cause significant deterioration in the efficiency or effectiveness of service should an incident occur and the plan be invoked. Only by actually testing a plan, in conditions as close to actual as possible (preferably a scenario workshop rather than a desk-top exercise), will an organization identify the improvements that are needed and the limitations inherent within the plan. A follow-up exercise may also be needed to truly iron out the problems identified in the first test and make the business continuity plan a “ready” document.
7. Too Complicated
It is easy to get carried away when writing business continuity documentation, worrying that it is not detailed enough for those who need to use it when it is invoked. However, I have seen several plans with so much detail that the core information is lost beneath different categories and levels of incident. By way of example, in a time of crisis a member of staff should not be confused as to when to apply Disaster Code A, B or C; these codes/levels should be clearly defined in the document, or not be there at all. The core detail that should be included within a business continuity plan is what tasks should be undertaken, by whom, when and why.
8. Wrong Tense
A small number of business continuity plans I have reviewed have been written in completely the wrong tense. Instead of saying, “The Key Contact Listing is shown at Appendix A,” they say, “We will put together a Key Contact Listing,” showing that they do not have a plan in place, merely a strategy to put a plan in place. A business continuity plan needs to be ready for use, not an aspiration.
9. Poor Assumptions
Many a business continuity plan falls down because of a bad assumption on the part of the staff who put it together. For example, why should it be assumed that there could not be more than one incident affecting the organization at any one time? When identifying key threats, there is also a very real danger of assuming that “we will get by” or even that “someone else will do that.”
There are also many plans in existence that make no provision for defining either what an incident is (and therefore when a business continuity plan should be invoked) or when an incident is over (and normal operations, and authorizations, restart).
10. No Media Planning
There is a plethora of footage and newsprint showing company representatives being interviewed and saying something controversial, inappropriate or downright damaging to their employer’s reputation. This normally happens when someone who has not received media training takes it upon themselves to be interviewed, often with the best of intentions. A good business continuity plan should clearly assign personnel to liaise with the media, train them, and specifically prohibit anyone else from talking to a media organization. Dealing with a damaging incident is bad enough without also having to deal with criticism from the media, and thereafter from the local community and other stakeholders.