Novell Connection Magazine
March 2008

Backup and Recovery

Trend Talk by Amin Marts


Two techniques are used when analyzing data for recurring patterns: byte level and block level. At the byte level, the file itself is analyzed, and the deltas, or changes, are recorded as file versions. Think of this as similar to the version control that WebDAV brings to collaboration. Byte-level data is typically stored in 100 MB chunks, as opposed to the 8 KB chunks in which block-level data is stored. The type of data, the median file size, and the amount of data changed daily, weekly or monthly are typical factors to take into account when determining the appropriate solution. At a high level, fewer data chunks means less file system fragmentation. Because capacity-optimized storage is a disk-based solution, preserving file system performance is critical to the overall health of the backup solution.
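
To make the block-level idea concrete, the following sketch splits a file into fixed 8 KB blocks, hashes each block, and keeps only blocks it has not seen before. It is a minimal, hypothetical Python illustration; the block size, the in-memory store and the function names are assumptions for this example, not any product's implementation.

  import hashlib
  import os

  BLOCK_SIZE = 8 * 1024  # 8 KB blocks, as described above (illustrative assumption)

  # Hypothetical in-memory store: unique blocks keyed by their SHA-256 digest.
  block_store = {}

  def store_file(data: bytes) -> list:
      """Split data into fixed-size blocks and keep only blocks not seen before.

      Returns the list of block digests (the "recipe") needed to rebuild the file.
      """
      recipe = []
      for offset in range(0, len(data), BLOCK_SIZE):
          block = data[offset:offset + BLOCK_SIZE]
          digest = hashlib.sha256(block).hexdigest()
          if digest not in block_store:   # only new or changed blocks consume space
              block_store[digest] = block
          recipe.append(digest)
      return recipe

  def rebuild_file(recipe) -> bytes:
      """Reassemble a file version from its block recipe."""
      return b"".join(block_store[d] for d in recipe)

  # Two versions of a 64 KB file that differ in one small region: storing the
  # second version adds only the single changed block to the store.
  v1 = os.urandom(64 * 1024)
  v2 = bytearray(v1)
  v2[10_000:10_004] = b"edit"             # modify a few bytes inside one block
  store_file(v1)
  store_file(bytes(v2))
  print("unique blocks stored:", len(block_store))   # 9, not 16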

An analogous, battle-tested solution within the Novell Workgroup stack is iFolder. Designed to give mobile users access to their data anytime and anywhere, it has also seen action augmenting backup and recovery solutions. By design, it transmits only the deltas from the directory of origin (such as a desktop) to the target back-end server. Replicating and transmitting only the deltas from the desktop to the data center preserves bandwidth while reducing data upload times. In this way, iFolder applies the same conceptual idea as capacity-optimized storage to a different use case.
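
iFolder's actual synchronization protocol is not shown here, but the general delta-transmission idea it relies on can be sketched in a few lines: compare the block digests of the current file against those recorded at the last sync, and send only the blocks that differ. The names, block size and in-memory comparison below are hypothetical, chosen only to illustrate the concept.

  import hashlib
  import os

  BLOCK_SIZE = 8 * 1024

  def block_digests(data: bytes) -> list:
      """Digest of each fixed-size block in the file."""
      return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
              for i in range(0, len(data), BLOCK_SIZE)]

  def delta_payload(last_synced: bytes, current: bytes) -> dict:
      """Return only the blocks that changed since the last sync.

      This is the payload a client would send to the back-end server;
      unchanged blocks never leave the desktop.
      """
      old = block_digests(last_synced)
      new = block_digests(current)
      changed = {}
      for index, digest in enumerate(new):
          if index >= len(old) or digest != old[index]:
              changed[index] = current[index * BLOCK_SIZE:(index + 1) * BLOCK_SIZE]
      return changed

  synced = os.urandom(256 * 1024)          # the version the server already has
  edited = bytearray(synced)
  edited[500:508] = b"new data"            # a small local edit
  payload = delta_payload(synced, bytes(edited))
  print("blocks to upload:", len(payload), "of", len(block_digests(bytes(edited))))
  # -> blocks to upload: 1 of 32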

Continuous Data Protection (CDP):

According to the Storage Networking Industry Association, Continuous Data Protection (CDP) is “a methodology that continuously captures or tracks data modification and stores changes independently of the primary data, enabling recovery points from any point in the past. CDP systems may be block-, file- or application-based and can provide fine granularities of restorable objects to infinitely variable recovery points.” (For more information, see snia.org/forums/dmf/programs/data_protect_init/cdp/.)

In essence, the technology provides a framework within which aggressive recovery time objectives (RTOs) can be met with minimal effort. It is best thought of as a robust file system journal that captures data writes. Because of its architecture, CDP technology is targeted at the application: its primary role is to get the application and its associated data back online quickly in case of failure.
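
As a toy illustration of the journal idea (not any vendor's CDP implementation), the sketch below captures every write with a timestamp, independently of the primary data, and rebuilds the protected data as it existed at any chosen moment by replaying the journal up to that point. The class and field names are assumptions made for this example.

  import time
  from dataclasses import dataclass, field

  @dataclass
  class WriteRecord:
      timestamp: int   # when the write happened (nanoseconds, monotonic clock)
      offset: int      # where in the protected file or volume it landed
      data: bytes      # the bytes that were written

  @dataclass
  class CdpJournal:
      """Toy continuous-data-protection journal: every write is captured
      independently of the primary copy, so any past point in time can be
      reconstructed by replaying the journal up to that moment."""
      records: list = field(default_factory=list)

      def capture(self, offset: int, data: bytes) -> None:
          self.records.append(WriteRecord(time.monotonic_ns(), offset, data))

      def restore(self, size: int, point_in_time: int) -> bytearray:
          image = bytearray(size)
          for rec in self.records:
              if rec.timestamp > point_in_time:
                  break                    # stop at the chosen recovery point
              image[rec.offset:rec.offset + len(rec.data)] = rec.data
          return image

  journal = CdpJournal()
  journal.capture(0, b"customer table v1")
  checkpoint = time.monotonic_ns()         # a moment just before the corruption
  journal.capture(0, b"corrupted garbage!")
  recovered = journal.restore(32, checkpoint)
  print(bytes(recovered[:17]))             # b'customer table v1'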

A typical use case for this technology is not complete disaster recovery, but repairing database corruption or recovering accidentally deleted files. The technology typically provides the greatest benefit in environments that meet the following criteria:

  • Data changes frequently
  • Traditional backup is not an option
  • Datasets are large transactional systems

Introducing newer technologies like continuous data protection to an existing disaster recovery strategy provides for additional layers of data survivability. Many organizations understand this, but don't know exactly where or how to implement it.

When it comes to backup and recovery best practices, it's best to standardize on a single solution where possible. Successfully mixing and matching best-of-breed software components from multiple vendors poses significant interoperability challenges; it's also a disaster recovery nightmare waiting to happen. The reason is simple: even more so than the hardware market, the backup and recovery market is not built around interoperability between vendors, an important caveat as it relates to the media server. The media server is the brains of the operation and, as such, must be compatible with the other applications in the backup and recovery system. Its primary responsibility is to manage backup operations, mainly scheduling; however, it also serves as the location of the backup catalog. The catalog is a record of all the data that has been backed up, and it is common for a catalog to span years and terabytes of data.
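
The catalog itself is conceptually simple. The sketch below shows a minimal, hypothetical catalog structure: each entry records what was backed up, when, and on which media, and a lookup finds the newest copy made on or before a requested date. The field names and the in-memory list are illustrative assumptions; a production catalog is a database indexing this kind of information at far larger scale.

  from dataclasses import dataclass
  from datetime import datetime

  @dataclass
  class CatalogEntry:
      """One record in a hypothetical backup catalog kept by the media server."""
      path: str              # file that was protected
      backup_time: datetime  # when the copy was made
      media_label: str       # tape or disk pool holding the copy
      size_bytes: int

  catalog = []               # in production, a database spanning years of backups

  def record_backup(path: str, media_label: str, size_bytes: int) -> None:
      catalog.append(CatalogEntry(path, datetime.now(), media_label, size_bytes))

  def locate(path: str, as_of: datetime):
      """Find the newest copy of a file made on or before the requested date;
      this lookup is what makes a restore possible years after the backup ran."""
      candidates = [e for e in catalog
                    if e.path == path and e.backup_time <= as_of]
      return max(candidates, key=lambda e: e.backup_time, default=None)

  record_backup("/data/patients/rec-001.xml", "TAPE-0042", 4096)
  found = locate("/data/patients/rec-001.xml", datetime.now())
  print(found.media_label if found else "not in catalog")   # TAPE-0042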

Case in point: HIPAA regulations dictate seven-year retention of information. Think about the number of digital records encapsulated within a single patient's file over that time frame, then multiply that by an entire hospital's files. Real-world examples such as this demonstrate why data backup and recovery have become so critical to enterprise operations.



