Organizations may overlook implementing a proper data protection program because of the misconception that SaaS providers can recover data when a problem occurs; in reality, organizations are responsible for backing up and restoring their own data.

Because data resiliency and security remain real and present concerns for all companies leveraging cloud services, proactive cyber risk management strategies that use backup analytics to maintain business continuity are crucial.

The need for timely recovery

Organizations need to understand that having a backup does not equate to being resilient against data loss or corruption. Restoring data rapidly to a known-good state is harder than many organizations realize. Data shows that 36% of company backups are incomplete, and 50% of restores fail.

A key to timely recovery is being able to restore precisely what was damaged rather than unnecessarily restoring everything, which wastes valuable time and resources. Performing routine analytics on backups and periodically testing recovery capabilities helps ensure timely recovery, preventing weeks of downtime and a six- to seven-figure business interruption. Regulations also require organizations to test their backup and recovery readiness at least annually to confirm that their personnel, processes and platform can meet the Recovery Time Objective (RTO), limiting disruption and cost to a level the business can tolerate.
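As a rough illustration of what an annual recovery drill can measure, the Python sketch below times a test restore and checks the result against an assumed four-hour RTO. The restore and verify callables are placeholders for an organization's own tooling rather than any specific product's API.

    import time
    from datetime import timedelta

    # Assumed example value; set this to the RTO your business has agreed on.
    RTO = timedelta(hours=4)

    def run_recovery_drill(backup_id, restore, verify):
        """Time a test restore and report whether it met the RTO."""
        started = time.monotonic()
        restore(backup_id)   # restore the backup into an isolated test environment
        verify(backup_id)    # confirm the restored data is complete and usable
        elapsed = timedelta(seconds=time.monotonic() - started)
        met_rto = elapsed <= RTO
        print(f"Drill for {backup_id}: took {elapsed}, RTO met: {met_rto}")
        return met_rto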

Organizations can optimize their data resiliency strategies by periodically practicing data recovery from backups to ensure they can restore business operations quickly and reliably.

Hidden SaaS dependencies

Organizations often overlook a critical hidden dependency for handling IT issues: the ticketing and workflow systems that support IT troubleshooting. When a service disruption occurs, resolving the problem may take longer if IT support staff cannot access their ticket tracking and troubleshooting workflows, many of which are themselves SaaS solutions. An effective way to uncover such hidden dependencies is to run a tabletop exercise that simulates responding to an incident that simultaneously disrupts a critical service and damages the workflows and supporting data used for IT troubleshooting.

Another often overlooked dependency is the set of links between items stored in databases and SaaS applications, which form a parent-child hierarchy that can be multiple levels deep. Depending on how deeply a data loss cuts into this hierarchy, restoring the original parent-child relationships can be difficult. True data resiliency requires fit-for-purpose technical solutions that can fully reconstruct both the data and its parent-child relationships from backups. Discrepancies between the identifiers used to link SaaS data elements add further complexity: automated tools that do not recognize that the same information already exists may restore duplicates from the backup, creating extra cleanup work.
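To make the parent-child challenge concrete, here is a minimal Python sketch under assumed record fields (an old ID, a parent ID and a natural key) and a generic create_record callable standing in for a SaaS API. It restores records parent-first, remaps old identifiers to the new ones issued on restore, and reuses items that already exist so duplicates are not reintroduced.

    def restore_hierarchy(backup_records, existing_by_key, create_record):
        """Restore a parent-child hierarchy, remapping backup IDs to newly issued IDs."""
        id_map = {}                 # ID in the backup -> ID assigned by the live system
        remaining = list(backup_records)
        while remaining:
            next_round, progressed = [], False
            for rec in remaining:
                parent = rec["parent_id"]            # None for top-level items
                if parent is not None and parent not in id_map:
                    next_round.append(rec)           # parent not restored yet; retry later
                    continue
                existing = existing_by_key.get(rec["key"])
                if existing:
                    # The item already exists in the live system: reuse it instead of
                    # restoring a duplicate that would need manual cleanup afterwards.
                    id_map[rec["id"]] = existing["id"]
                else:
                    id_map[rec["id"]] = create_record(rec["data"], id_map.get(parent))
                progressed = True
            if not progressed:
                raise ValueError("orphaned records: parents missing from the backup")
            remaining = next_round
        return id_map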

By recognizing these hidden risks to data resiliency, organizations are better positioned to recover when an IT disruption or data loss happens.

Proactive detection strategies

Business disruption increases in proportion to the time it takes to detect and recover from a problem. For that reason, organizations must shift from a reactive approach to a proactive strategy, which includes detecting problems through automated analytics and anomaly alerting on backups.

Having proactive preservation strategies in place that perform analyses on a separate secure platform avoids the risk of touching live systems and impacting business operations. This approach follows the forensic best practice of analyzing a working copy of data rather than the original. Proactively preserving all the available contents and context of a SaaS database enables forensic data recovery, and keeping that preserved data easily accessible after a ransomware attack, or when a SaaS provider's systems are unreachable, supports business continuity.
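A minimal sketch of that preservation step, assuming the backup arrives as an exported file and the analysis area is a separate directory or mount: copy the export into the isolated location and record a checksum so later analysis can be shown to have run against an unaltered working copy.

    import hashlib
    import shutil
    from pathlib import Path

    def preserve_working_copy(backup_file, analysis_dir):
        """Copy a backup export into an isolated analysis area and fingerprint it."""
        analysis_dir = Path(analysis_dir)
        analysis_dir.mkdir(parents=True, exist_ok=True)
        working_copy = analysis_dir / Path(backup_file).name
        shutil.copy2(backup_file, working_copy)      # analyze the copy, never the original
        digest = hashlib.sha256(working_copy.read_bytes()).hexdigest()
        working_copy.with_suffix(".sha256").write_text(digest + "\n")
        return working_copy, digest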

Data loss and corruption incidents are not always obvious and can cause problems down the road if they are not detected and repaired promptly. Examples of data modifications between backups include content additions, deletions and alterations. Metadata modifications between backups include schema and permission additions, removals and changes. The most effective way to detect data loss and corruption is to perform a comparative analysis between backups over time, looking for anomalous modifications.
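As a simplified illustration, the sketch below compares two backup snapshots, modeled here as dictionaries mapping record IDs to content hashes, and classifies additions, deletions and alterations. A real implementation would apply the same comparison to metadata such as schemas and permissions.

    def diff_backups(previous, current):
        """Classify changes between two backup snapshots of the same dataset."""
        added = [k for k in current if k not in previous]
        deleted = [k for k in previous if k not in current]
        altered = [k for k in current if k in previous and current[k] != previous[k]]
        return {"added": added, "deleted": deleted, "altered": altered}

    # A sudden spike in deletions or alterations between nightly backups is the
    # kind of anomaly worth alerting on.
    changes = diff_backups(
        {"rec-1": "hash-a", "rec-2": "hash-b", "rec-3": "hash-c"},
        {"rec-1": "hash-a", "rec-3": "hash-x", "rec-4": "hash-d"},
    )
    print(changes)  # {'added': ['rec-4'], 'deleted': ['rec-2'], 'altered': ['rec-3']}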

By utilizing the historical data available in backups, organizations can detect deviations from normal patterns of use. Automatically generating alerts when deviations occur enables proactive detection and reduces the risk of such problems going unnoticed for prolonged periods.
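One simple way to turn that history into alerts, sketched below with an assumed mean-plus-three-standard-deviations policy, is to compare the latest backup-to-backup change count against the distribution seen across prior backups.

    from statistics import mean, stdev

    def is_anomalous(history, latest_count, sigmas=3.0):
        """Flag the latest change count if it deviates sharply from past backups."""
        if len(history) < 2:
            return False                          # not enough history to judge
        mu, sigma = mean(history), stdev(history)
        return latest_count > mu + sigmas * max(sigma, 1.0)

    # e.g. deletions observed per nightly backup over recent weeks vs. last night
    deletions_per_backup = [12, 9, 15, 11, 14, 10, 13]
    print(is_anomalous(deletions_per_backup, 480))  # True: deletions spiked well beyond normal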

Mitigating data disruption 

Implementing approaches that let organizations test and understand the effectiveness of their plans and processes increases their ability to bounce back quickly from a major data loss or corruption, which helps keep issues from escalating further. For example, a Recovery Point Objective (RPO) defines how much data an organization can afford to lose to loss or corruption, which means organizations need to consider adopting technologies such as continuous backup to drive the RPO toward zero.
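As a back-of-the-envelope illustration of RPO exposure, the sketch below treats the worst-case data loss window as the time elapsed since the last good backup; continuous, change-based backup shrinks that window toward zero. The timestamps are illustrative.

    from datetime import datetime, timezone

    def data_at_risk_window(last_backup_time, failure_time=None):
        """Return how much recent data would be lost if the system failed now."""
        failure_time = failure_time or datetime.now(timezone.utc)
        return failure_time - last_backup_time

    last_nightly = datetime(2024, 6, 1, 2, 0, tzinfo=timezone.utc)
    outage = datetime(2024, 6, 1, 17, 30, tzinfo=timezone.utc)
    print(data_at_risk_window(last_nightly, outage))  # 15:30:00 of changes at risk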

Data resiliency and timely recovery are about more than having the necessary people, processes, and platforms in place. 

In today’s landscape, organizations must ensure they are prepared, have the right practices in place, and are supported by experienced experts to mitigate issues. That emphasis on preparation, practice and support is what enables organizations to handle any data loss or corruption issue promptly and effectively.