Two Paths to Server Performance
I/O scheduler and file system selection can boost SUSE Linux Enterprise server performance
Written by Matthias G. Eckermann and Bill Tobey
The Novell approach to assembling a SUSE Linux server distribution has always been to provide a wide range of the best packages and tools available from the community. Our goal is to give IT organizations the most flexible and versatile resource set for configuring and optimizing high-performance servers for a complete range of data center applications.
This article will explore two often-overlooked areas where SUSE Linux Enterprise Server provides multiple options that administrators can exploit to enhance server performance: the I/O scheduler and the file system.
Meet Your I/O Scheduler
The I/O scheduler is the part of the kernel that handles read/write access to block storage devices—a USB stick, local disk, NAS filer, SAN, network file system and any other storage environment that holds data in blocks. A scheduler queues and sequences the execution of read-write requests in order to manage mechanical latency (the seek time related to head travel around the disk) and optimize data delivery performance. Its bag of tricks includes three techniques for manipulating the request queue:
- Request merging – Requests for data in adjacent blocks can be combined to improve throughput by reducing both seek time and the total number of disk operations required to service the workload.
- Directional (elevator) reordering – Requests can be reordered based on location, to maintain head movement in one direction for as long as possible, using the same control methodology as an elevator to avoid service starvation at the disk peripheries.
- Priority reordering – Requests can be sequenced according to various priority schemes, such as a start-of-execution deadline assigned to each request at time of receipt.
The Four Types of Linux I/O Schedulers
There are four types of Linux I/O schedulers, each of which implements the basic sequencing techniques in different ways and combinations, providing significant variations in I/O performance with different application workloads.
The NOOP scheduler is the simplest of all Linux I/O schedulers. It merges requests to improve throughput but otherwise attempts no other performance optimization. All requests go into a single unprioritized first-in, first-out queue for execution. It’s ideal for storage environments with extensive caching, and those with alternate scheduling mechanisms—a storage area network with multipath access through a switched interconnect, for instance, or virtual machines, where the hypervisor provides the I/O back end. It’s also a good choice for systems with solid-state storage, where there is no mechanical latency to be managed.
To activate the NOOP I/O scheduler for use with all applications and storage devices, edit your boot loader configuration settings to pass the kernel parameter: elevator=noop.
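The boot parameter applies the scheduler system-wide at the next reboot, but you can also switch schedulers per device at runtime through sysfs—useful for testing a scheduler before committing it to the boot loader. A minimal sketch, assuming a disk named sda (substitute your own device name):

```shell
# Show the schedulers compiled into the kernel for this device;
# the currently active one appears in square brackets.
cat /sys/block/sda/queue/scheduler

# Switch this device to NOOP for the current boot only
# (the change is lost on reboot):
echo noop > /sys/block/sda/queue/scheduler
```

Because the sysfs change is per device and non-persistent, it pairs well with benchmarking: try each scheduler against your real workload, then make the winner permanent with the elevator= boot parameter.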
The Deadline scheduler applies a service deadline to each incoming request. This sets a cap on per-request latency and ensures good disk throughput. Service queues are prioritized by deadline expiration, making this a good choice for real-time applications, databases and other disk-intensive applications. To activate the Deadline I/O scheduler for use with all applications and storage devices, edit your boot loader configuration settings to pass the kernel parameter: elevator=deadline.
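Once Deadline is active on a device, its per-request deadlines are tunable through the device's iosched directory in sysfs. A sketch, again assuming an example device sda; the values shown are illustrative, not recommendations:

```shell
# Activate Deadline on this device for the current boot:
echo deadline > /sys/block/sda/queue/scheduler

# Per-request service deadlines, in milliseconds. Reads get a much
# tighter deadline than writes by default, since readers usually block:
cat /sys/block/sda/queue/iosched/read_expire
cat /sys/block/sda/queue/iosched/write_expire

# Example: tighten the read deadline for a latency-sensitive database:
echo 250 > /sys/block/sda/queue/iosched/read_expire
```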
The Anticipatory scheduler does exactly as its name implies. It anticipates that a completed I/O request will be followed by additional requests for adjacent blocks. After completing a read or write, it waits a few milliseconds for subsequent nearby requests before moving on to the next queue item. Service queues are prioritized for proximity, following a strategy that can maximize disk throughput at the risk of a slight increase in latency.
The Anticipatory scheduler delivers its best performance with Web and file servers, and desktops with single IDE/SATA disks. It was the default scheduler in mainline Linux kernels prior to 2.6.18, when CFQ took over that role, and can be activated by editing the boot loader configuration file to pass the kernel parameter: elevator=as.
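The anticipation window itself—the few milliseconds the scheduler waits for a nearby follow-up request—is exposed as a sysfs tunable. A sketch assuming an example device sda (note that the sysfs name is anticipatory, while the boot parameter is the short form as):

```shell
# Activate the Anticipatory scheduler on this device:
echo anticipatory > /sys/block/sda/queue/scheduler

# Show the anticipation window, in milliseconds:
cat /sys/block/sda/queue/iosched/antic_expire

# Setting it to 0 disables anticipation, making behavior
# closer to the Deadline scheduler:
echo 0 > /sys/block/sda/queue/iosched/antic_expire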
The Completely Fair Queuing (CFQ) scheduler provides a good compromise between throughput and latency by treating all competing processes even-handedly. Each process is given a separate request queue and a dedicated time slice of disk access. CFQ keeps worst-case latency low on most reads and writes, making it suitable for a wide range of applications, particularly multi-user systems. Because of our unique desktop-to-data center strategy, CFQ is the default I/O scheduler in SUSE Linux Enterprise Server 11. It can be activated by editing the boot loader configuration file to pass the kernel parameter: elevator=cfq.
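CFQ's fairness behavior can likewise be inspected and adjusted through sysfs. A sketch assuming an example device sda; whether changing these values helps depends entirely on your storage and workload:

```shell
# CFQ is the SLES 11 default, so it is normally already active.
# Requests dispatched from each process queue per scheduling round:
cat /sys/block/sda/queue/iosched/quantum

# How long (ms) CFQ idles waiting for more I/O from the same process
# before moving on to the next queue:
cat /sys/block/sda/queue/iosched/slice_idle

# On arrays with many spindles, idling per process can waste bandwidth;
# disabling it may raise aggregate throughput:
echo 0 > /sys/block/sda/queue/iosched/slice_idle
```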