The technology landscape is littered with trends pegged to either revolutionize or augment traditional computing. Two such technologies are Solid State Disk (SSD) and holographic data storage. The catalyst driving research in these areas is market demand. In 2006, external hard disk drives (HDD) with a capacity of more than 300GB accounted for only 20 percent of the market; in 2007, they account for almost half. Clearly, this trend is on the upswing and shows no signs of plateauing. Let’s focus on the storage technologies designed to satisfy this demand as well as outline how they also play into “green” initiatives.
> Solid State Disk
Solid State Disks are not new; however, they do touch on a number of current market drivers. Notably, SSD is the medium of choice where space is at a premium and shock resistance is critical. This makes it especially attractive for portable music devices, PDAs and handheld GPS devices.
Because this technology aligns with numerous business and ecological trends, leaders such as Micron, SanDisk and Toshiba are clamoring for position. Market catalysts here are similar to those of virtualization. Initiatives such as lower power consumption, increased computing density and data availability, as well as a reduction in the already-submillisecond latency between a request and data presentation, are driving demand. Often referred to as flash memory, the technology is incredibly versatile, stable and durable as well as energy efficient and performance rich.
Flash memory is deeply rooted in the portable, small form factor world; however, this is changing. One of the first mainstream deployments outside the realm of personal devices was in the hybrid drive, introduced by Samsung. The design parameters were simple: build a storage medium that was power-efficient, reliable and provided a speedy boot sequence. Enter the hybrid drive.
Functionally speaking, “incoming data is directly recorded to the chip. When the chip is about full, the hard drive wakes up, takes the data, records it and goes back into idle.”1 This ability to reduce the “spin-time” of the platters accounts for enhanced Mean Time Between Failure rates, or MTBF, as well as improved battery performance.
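The buffering behavior quoted above can be sketched in a few lines. This is a purely illustrative model (the class, capacity and record names are hypothetical, not Samsung's design): writes land in the flash chip, and the platters "wake" only when the chip fills, flushing in one burst before idling again.

```python
# Toy model of the hybrid-drive write path described above.
CHIP_CAPACITY = 4  # number of writes the flash buffer holds before a flush

class HybridDrive:
    def __init__(self):
        self.chip = []          # flash write buffer
        self.platter = []       # data persisted to the spinning disk
        self.spin_ups = 0       # how often the platters had to wake

    def write(self, record):
        self.chip.append(record)
        if len(self.chip) >= CHIP_CAPACITY:   # chip about full:
            self.spin_ups += 1                # wake the hard drive,
            self.platter.extend(self.chip)    # record the buffered data,
            self.chip.clear()                 # and go back to idle

drive = HybridDrive()
for i in range(10):
    drive.write(i)
print(drive.spin_ups)  # 2 spin-ups instead of 10 individual platter writes
```

Ten incoming writes cost only two platter wake-ups; the rest of the time the drive idles, which is exactly where the MTBF and battery gains come from.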
What technology is SSD based on? Electrically Erasable Programmable Read-Only Memory (EEPROM) is the basis for SSD. The defining characteristic of nonvolatile memory is that there is no data loss when electrical current is removed from the system. There are innate benefits over other storage mediums, namely optical disk, magnetic tape and traditional hard disk, the first of which is the lack of moving parts. Because solid state disk is architected with NAND, or Not AND, logic in mind, there is no need for moving parts such as an actuator arm, read heads or a spinning storage medium.
One of the prevailing flash architectures in use in the SSD is NAND, which refers to the Boolean logic used to read data from this medium. The nearest cousin to NAND is NOR, which is not widely used in high-density storage.
In devices such as PC BIOS chips and lower-end cellular phones, NOR is the dominant standard. The prevailing characteristic that makes it optimal for this use case is its ability to retrieve data via random access. Architecturally speaking, the memory cells which form the basis of storage in NOR flash are connected in parallel. According to Toshiba, a market leader, NOR flash is ideal for code-storage applications, best described as low-density situations that are predominately read-only. Because of its lack of chip density in comparison to NAND, NOR lacks the storage capacity to support storage-focused situations.
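For readers unfamiliar with the gate names, the Boolean operations behind the two families are simple to state. The helper functions below are purely illustrative; real flash cells implement this logic in silicon, not software.

```python
# The logic the flash family names refer to: NAND is "not AND", NOR is "not OR".
def nand(a: int, b: int) -> int:
    """NAND gate: output is 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def nor(a: int, b: int) -> int:
    """NOR gate: output is 1 only when both inputs are 0."""
    return 1 if (not a and not b) else 0

# Print the truth tables side by side.
for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  NAND={nand(a, b)}  NOR={nor(a, b)}")
```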
Another differentiator between the standards is their lifespan, which is measured in supported erase cycles. Again speaking of its architecture, flash memory is arranged in blocks of either 128KiB for NOR or 8KiB for NAND. So what happens when an entire block isn’t written to? Is only that percentage erased? Bluntly, the answer is no. Instead, all of the bits that comprise that block are reset to the erased state in a single operation. Because every erase consumes part of a block’s finite lifespan, controllers employ a technique called wear leveling, which spreads erase cycles evenly across blocks and guards against certain blocks reaching the end of their lifespan prematurely relative to the others.
The method by which data is written is equally as important as how it is erased. NAND organizes data into pages, which are 512 bytes in size. This is accomplished via an internal buffer that is written to the medium following a write command. NOR, on the contrary, flushes data to the medium word by word, unlike the ordered group write of NAND.
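The contrast between the two write styles can be sketched as follows. All names here are hypothetical, and "commit" counts stand in for physical program operations: NAND accumulates a 512-byte page in a buffer and commits it in one shot; NOR programs each unit as it arrives.

```python
# Contrasting NAND page-buffered writes with NOR's individual writes.
PAGE_SIZE = 512  # the NAND page size cited above

class NandWriter:
    def __init__(self):
        self.buffer = bytearray()  # internal page buffer
        self.commits = 0

    def write(self, data: bytes):
        self.buffer += data
        while len(self.buffer) >= PAGE_SIZE:   # page full: commit it whole
            self.buffer = self.buffer[PAGE_SIZE:]
            self.commits += 1

class NorWriter:
    def __init__(self):
        self.commits = 0

    def write(self, data: bytes):
        self.commits += len(data)  # one program operation per unit written

nand_writer, nor_writer = NandWriter(), NorWriter()
payload = bytes(1024)
nand_writer.write(payload)
nor_writer.write(payload)
print(nand_writer.commits, nor_writer.commits)  # 2 vs 1024
```

For the same 1KB payload, the page-oriented writer performs two commits where the word-oriented writer performs over a thousand, which is why NAND dominates bulk, sequential storage workloads.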
With all of this architecture and the associated trade-offs of each, how do you decide which to use? The good news is: you don’t have to. The decision has been made for you, the determining factor being the type of device you are using to access the storage medium. The vendor has already matched the appropriate technology to the device’s technical requirements.
> Holographic Data Storage
Holographic storage is a technology which has been on the horizon for years. Introduced by Polaroid and developed further by InPhase, it promises reduced power consumption, improved access to data and enhanced reliability. One of the compelling benefits the technology provides is a substantial increase in data capacity captured within a traditional form factor. This increase in capacity, as the name outlines, is achieved by encoding the data in three dimensions.
To explain its architecture: at the center of the technology resides a laser beam, which is split. For starters, let’s focus on the write operation; after all, if nothing is recorded on the storage medium, what’s the point of being able to read? The laser is split into two beams: a signal beam and a reference beam. Data is encoded onto the signal beam as it passes through the spatial light modulator (SLM), which creates a black-and-white pattern of 0s and 1s.
Simply put, information carried on the signal beam is translated into pixels. Where the reference beam intersects this encoded signal beam, an interference pattern forms. The ensuing output is a hologram that is then ‘projected’ and stored onto the storage medium via chemical reaction.
To read data back, the single reference beam illuminates the medium. This causes the resulting pattern to be projected onto a detector that reads the entire page of data at once.
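The page-at-a-time idea behind the SLM can be modeled in software. This is a toy model, not optics: the page dimensions and function names are invented for illustration, but it shows the essential mapping of a chunk of data onto a two-dimensional pixel grid and the recovery of the entire page in one step rather than bit by bit.

```python
# Toy model: a data page as the 2-D pixel pattern an SLM would display.
PAGE_SIDE = 16  # a 16x16 page = 256 pixels = 32 bytes per hologram

def encode_page(data: bytes) -> list:
    """Spread the bits of `data` across a 2-D pixel grid (the SLM pattern)."""
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    return [bits[r * PAGE_SIDE:(r + 1) * PAGE_SIDE] for r in range(PAGE_SIDE)]

def decode_page(page: list) -> bytes:
    """'Illuminate' the page and reassemble every byte at once."""
    bits = [b for row in page for b in row]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8))
                 for i in range(len(bits) // 8))

data = b"holographic page".ljust(PAGE_SIDE * PAGE_SIDE // 8, b"\x00")
page = encode_page(data)        # write: data becomes a pixel pattern
assert decode_page(page) == data  # read: the whole page comes back at once
```

The capacity argument follows directly: each stored pattern carries an entire page, and many pages can share the same volume of the medium, rather than one bit occupying one physical spot as on a platter.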
Mentioned earlier were the promises of lower power consumption and enhanced reliability. Unlike traditional hard drives, whose storage medium spins continuously, the medium in holographic storage rotates only slightly to expose a clean surface for writing. Similarly, it rotates slightly to expose data when a read takes place. Because data is written in entire pages rather than fragments, the spin time of the storage medium is reduced, and with it, power consumption.
Much work is still to be done to bring these technologies into the mainstream. Between the two, SSD is ahead and is currently used in some laptops and blade servers. Holographic storage is still on the horizon, primarily because of its high cost per gigabyte and file system issues. We are sure to see a reduction in this cost; until this happens, however, holographic storage will remain a niche player.
Lastly, with power consumption concerns, compliance issues and the need to keep more data available, these and similar technologies will continue to evolve and migrate into the mainstream data center.