Novell Connection Magazine

March 2008

Features

Backup and Recovery

Trend Talk by Amin Marts

Novell Open Enterprise Server, now in its second generation, is surrounded by a deeper, richer and more capable ecosystem of supporting products. A major facet of that ecosystem is backup and recovery. For any platform that serves as a foundational component of enterprise computing and collaboration, compatible tools for recovering lost files are not a luxury; they are a necessity.

The world of backup and recovery is chock full of technologies that offer much greater capabilities today than they did a few short years ago. Backing up data incrementally or differentially is no longer the only standard best practice, nor the only way to approach the task. Technologies such as synthetic backup, capacity-optimized storage/data deduplication and continuous data protection are a few of the newer capabilities vendors are offering in this space.

These technologies are especially compelling when paired with Open Enterprise Server. Unlike many technologies aimed directly at the knowledge worker, Open Enterprise Server lays the foundation for a comprehensive backup and data recovery plan.

This article provides an overview of the latest backup and recovery technologies available for Open Enterprise Server 2, with particular focus on the new and often misunderstood offerings in the space.

Synthetic Backup:

One of the newest and most important technologies in the backup and recovery space is synthetic backup; however, in the strictest sense, the term 'newest' is misleading. Synthetic backup has been on the scene for close to three years. It was introduced by a number of niche data management startups and has most recently been added to the portfolios of tier-one vendors such as Symantec and CommVault.

A major differentiator between this technology and traditional backup methodologies is the ability to create a full backup without accessing the original, online data. Enterprise backups are generally responsible for the safekeeping of terabytes of data. Depending on the available bandwidth between the data being copied and the backup device, a full backup can take upwards of an entire weekend. While this is happening, end users and applications are often changing and manipulating the data. Synthetic backups mitigate this challenge by reducing the amount of time needed to complete the full backup.

Borrowing a typical scenario from the real world: incremental backups take place Monday through Thursday, with the full backup starting on Friday. During each incremental backup, data is streamed from the primary data store to a backup medium. Whether the destination is tape or disk, the same rules apply: bandwidth and processing resources are consumed on both the media server and the backup targets. The amount of data can be relatively small per target, but that changes drastically when a full backup takes place.

During a full backup, all of the target's data is streamed to the backup device. The data sent to the backup device is quite different from that of an incremental backup, because it includes everything. Everything—meaning data that has been altered as well as data that hasn't been touched in weeks, months or years. As you might guess, this is a high-touch, resource-intensive process.
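To make the contrast concrete, here is a minimal Python sketch of how a backup job might decide what to stream in each case. The source path, the daily schedule and the modification-time test are illustrative assumptions, not the behavior of any particular backup product.

    import os
    import time

    def select_files(root, last_backup_time=None):
        # Full backup: every file under the source tree is selected.
        # Incremental backup: only files modified since the last run.
        selected = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if last_backup_time is None or os.path.getmtime(path) > last_backup_time:
                    selected.append(path)
        return selected

    # Friday: the full backup streams everything.
    full_set = select_files("/srv/data")

    # Monday through Thursday: each incremental streams only what changed
    # since the previous run (here, roughly the last 24 hours).
    incremental_set = select_files("/srv/data", last_backup_time=time.time() - 86400)

The point of the sketch is the selection rule, not the copying itself: the full run touches every byte on the target, while each incremental run touches only the day's changes.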

Synthetic backup transforms this resource-intensive scenario at the full backup stage. It does so by leveraging backup file metadata. Instead of streaming data from the backup target to create the full backup, the new full is assembled from the previous full backup and the subsequent incremental datasets. The heavy lifting in this case is done by the media server, which orchestrates the entire process. The metadata, which comprises backup dates, times, data locations and the like, is used to create a full backup without requiring data to traverse the network.
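In other words, the media server can synthesize a new full backup from catalog information it already holds. The following sketch illustrates the idea under simplified, hypothetical assumptions: each catalog maps a file path to the location of its most recent copy on backup media, and entries from later incrementals simply override earlier ones.

    def synthesize_full(last_full_catalog, incremental_catalogs):
        # Start from the catalog of the last real full backup, then apply
        # each incremental catalog in order. The newest entry for every
        # path wins, so the result describes a complete, up-to-date full
        # backup without re-reading the production servers.
        synthetic = dict(last_full_catalog)
        for catalog in incremental_catalogs:
            synthetic.update(catalog)
        return synthetic

    # Hypothetical catalog records: path -> (media ID, offset, backup date)
    last_full = {"/srv/data/a.doc": ("tape-01", 100, "2008-02-01"),
                 "/srv/data/b.doc": ("tape-01", 220, "2008-02-01")}
    monday    = {"/srv/data/a.doc": ("disk-03", 10, "2008-02-04")}
    tuesday   = {"/srv/data/c.doc": ("disk-03", 55, "2008-02-05")}

    synthetic_full = synthesize_full(last_full, [monday, tuesday])
    # a.doc now points at Monday's copy, b.doc at the original full backup,
    # and c.doc at Tuesday's incremental.

Real products typically also consolidate the referenced data onto fresh media so the synthetic full can be restored in a single pass, but the catalog merge above is the essence of the technique.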

Organizations adopting this technology have seen vast improvements in backup times. One example is the University of Montreal, which went from shutting down production servers for 12 hours each weekend to conduct backups to performing a standard full backup only once a year. Synthetic backups pave the way for improved resource management and flexible backup strategies. This winning combination also provides substantial cost savings in media and power consumption.

Capacity-optimized Storage (COS):

Generating a great deal of buzz within the storage industry is capacity-optimized storage, commonly referred to as data deduplication and aligned with data reduction practices. Capacity-optimized storage was first introduced to the data center in support of existing backup solutions. More recently it has migrated into a primary storage role. Although this Darwinian evolution is intriguing, the focus of this article will remain on the role of capacity-optimized storage as a complementary backup and recovery technology.

Due in part to the “data tsunami” many organizations are currently experiencing, meeting Recovery Time Objectives (RTOs) is an ongoing challenge. Organizations that are forced to replicate recovery data offsite find it especially challenging. Simply put, the overriding objective of capacity-optimized storage is to reduce the total amount of data housed on a storage medium.

Data deduplication, or data reduction, hinges on identifying patterns in order to find redundant data. Redundant data, or data that remains untouched or in its original state, can consume more than 60 percent of an organization's storage capacity. Deduplication technologies simply distinguish data that has not changed from data that has, and then save only the latter. This eliminates the redundancy of backing up information that is unchanged.
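A simplified way to picture the mechanism: the backup stream is split into chunks, each chunk is fingerprinted with a cryptographic hash, and only chunks that have not been seen before are stored. The Python sketch below assumes fixed-size chunks and SHA-256 fingerprints; production deduplication systems generally use more sophisticated, variable-size chunking, but the principle is the same.

    import hashlib
    import io

    def deduplicate(stream, store, chunk_size=4096):
        # store maps a chunk's hash to the chunk itself; only unseen
        # chunks are added. The returned recipe is the ordered list of
        # hashes needed to reconstruct the original stream later.
        recipe = []
        while True:
            chunk = stream.read(chunk_size)
            if not chunk:
                break
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in store:
                store[digest] = chunk
            recipe.append(digest)
        return recipe

    # Backing up two identical copies of the same data stores its chunks only once.
    store = {}
    data = b"quarterly report" * 1024
    recipe1 = deduplicate(io.BytesIO(data), store)
    recipe2 = deduplicate(io.BytesIO(data), store)
    assert recipe1 == recipe2          # same data produces the same recipe
    assert len(store) < len(recipe1)   # but its chunks are stored only once

Because the second backup adds nothing new to the store, the repeated data costs only a list of hash references rather than a second full copy.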
