The Network Operating System
Now that you have read about data transmission, the OSI model, and the network hardware involved in network communication, you can begin to understand just how complex network communication really is. In order for a network to communicate successfully, all the separate functions of the individual components discussed in the preceding sections must be coordinated. This task is performed by the network operating system (NOS). The NOS is the "brain" of the entire network, acting as the command center and enabling the network hardware and software to function as one cohesive system.
Network operating systems are divided into two categories: peer-to-peer and client-server. Networks based on peer-to-peer NOSs, much like the example we used in the OSI model discussion, involve computers that are basically equal, all with the same networking abilities. On the other hand, networks based on client-server NOSs consist of client workstations that access network resources made available through the server. The advantages and disadvantages of each are discussed in the following sections.
Peer-to-peer networks enable networked computers to function as both servers and workstations. In a wired peer-to-peer network the NOS is installed on every networked computer so that any networked computer can provide resources and services to all other networked computers. For example, each networked computer can allow other computers to access its files and use connected printers while it is in use as a workstation. In a wireless peer-to-peer network, each networked device contains a short-range transceiver that interfaces with the transceivers of nearby devices or with APs. Like their wired counterparts, wireless peer-to-peer networks offer file and resource sharing.
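The dual server/workstation role described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (not a real NOS): each node runs a server thread that shares its files and also acts as a client that requests files from other peers. All names and the one-line request protocol are invented for the example.

```python
# Sketch of a peer-to-peer node: every node BOTH serves its own files
# (server role, background thread) and fetches files from other nodes
# (client role). Protocol and names are hypothetical.
import socket
import threading

class Peer:
    def __init__(self, shared_files):
        self.shared_files = shared_files          # name -> bytes this peer shares
        self.sock = socket.socket()
        self.sock.bind(("127.0.0.1", 0))          # let the OS pick a free port
        self.sock.listen()
        self.port = self.sock.getsockname()[1]
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        # Server role: answer file requests from other peers.
        while True:
            conn, _ = self.sock.accept()
            with conn:
                name = conn.recv(1024).decode()
                conn.sendall(self.shared_files.get(name, b""))

    def fetch(self, port, name):
        # Client role: request a file from the peer listening on `port`.
        with socket.create_connection(("127.0.0.1", port)) as conn:
            conn.sendall(name.encode())
            conn.shutdown(socket.SHUT_WR)         # signal end of request
            chunks = []
            while (data := conn.recv(4096)):
                chunks.append(data)
            return b"".join(chunks)

# Two equal peers: each can serve the other.
alice = Peer({"report.txt": b"Q3 figures"})
bob = Peer({"memo.txt": b"Lunch at noon"})
```

Note that neither node is "the" server: `alice` can fetch `memo.txt` from `bob` while `bob` fetches `report.txt` from `alice`, which is precisely the peer-to-peer arrangement.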
Peer-to-peer NOSs provide many of the same resources and services as do client-server NOSs, and in the appropriate environment can deliver acceptable performance. They are also easy to install and are usually inexpensive.
However, peer-to-peer networks provide fewer services than client-server networks, and the services they provide are less robust than those of mature, full-featured client-server networks. Moreover, the performance of peer-to-peer networks decreases significantly both with heavy use and as the network grows. Maintenance is also often more difficult: because management is not centralized, there can be many servers to manage (rather than one centralized server), and many people may have the rights to change the configuration of different server computers. In the case of wireless peer-to-peer networks, however, an AP may be one node in the network, allowing users both to share files directly from their hard drives and to access resources from the servers on the LAN.
In a client-server network the NOS runs on a computer called the network server. Each client-server NOS requires a specific type of server hardware. For example, the most commonly used client-server version of the NetWare NOS runs on Intel-based computers.
A client-server NOS is responsible for coordinating the use of all resources and services available from the server on which it is running.
The client part of a client-server network is any other network device or process that makes requests to use server resources and services. For example, network users at workstations request the use of services and resources through client software, which runs in the workstation and communicates with the NOS in the server by means of a common protocol.
On a NetWare client-server network, you "log on" to the network server from the workstation. To log on, you provide your user name and password—also known as a login—to the server. If your user name and password are valid, the server authenticates you and allows you access to all network services and resources to which you have been granted rights. As long as you have proper network rights, the client-server NOS provides the services or resources requested by the applications running on your workstation.
"Resources" generally refers to physical devices that an application may need to access: hardware such as hard disks, random access memory (RAM), printers, and modems. The network file system is also a server resource. The NOS manages access to all these server resources.
The NOS also provides many "services," which are tasks performed or offered by a server such as coordinating file access and file sharing (including file and record locking), managing server memory, managing data security, scheduling tasks for processing, coordinating printer access, and managing internetwork communications.
Among the most important functions performed by a client-server NOS are ensuring the reliability of data stored on the server and managing server security.
There are many other functions that can and should be performed by a network operating system. Choosing the right operating system is extremely important. NetWare NOSs are robust systems that provide many capabilities not found in less mature systems. NetWare NOSs also provide a level of performance and reliability that exceeds that found in most other NOSs.
Thin Client-Server Networks
A variation on the client-server network is the server-based network or thin client-server network. This kind of network also consists of servers and clients, but the relationship between client and server is different. Thin clients are similar to terminals connected to mainframes: the bulk of the processing is performed by the server and the client presents the interface. Unlike mainframe terminals, however, thin clients are connected to a network, not directly to the server, which means the client does not have to be physically near the server.
The term "thin client" usually refers to a specialized PC that possesses little computing power and is optimized for network connections. Windows-based terminal (WBT) and network computer (NC) are two terms often used interchangeably with thin client. These machines are usually devoid of floppy drives, expansion slots, and hard disks; consequently, the "box" or central processing unit is much smaller than that of a conventional PC.
The "thin" in thin client refers both to the client's reduced processing capabilities and to the amount of traffic generated between client and server. In a typical thin-client environment, only the keystrokes, mouse movements, and screen updates travel across the connection. (The term "thin" is also used generically to describe any computing process or component that uses minimal resources.)
Figure 9: Thin clients range from complete dependence on the server to the autonomous PC, which can both run its own applications and act as a terminal.
Figure 9 shows where clients fall on the "thinness" continuum: mainframe terminals are the thinnest of all, followed by thin clients and conventional PCs. Thin clients are "fatter" than mainframe terminals because they run some software locally—a scaled-back operating system, a browser, and a network client—but they do not store files or run any other applications. PCs, on the other hand, can either be fully autonomous—running all applications and storing all files locally—or they can run browser or terminal-emulation software to function as thin clients.
Unlike mainframe terminals, which show text-only, platform-specific screens, thin clients display the familiar Windows desktop and icons. Furthermore, the Windows display remains consistent even when using non-Windows applications, so you do not have to learn new interfaces in heterogeneous network environments.
Server-based computing usually involves "server farms", which are groups of interconnected servers that function as one. Thin clients link to the farm instead of a particular server. If a single server fails, the other servers in the farm automatically take over the functions of the failed server so that work is not interrupted and data is not lost.
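The failover behavior of a server farm can be sketched as follows. In this hypothetical illustration, the client asks the farm for a connection rather than naming a particular server, so a failed server is simply skipped; the class and server names are invented for the example.

```python
# Sketch of server-farm failover: clients connect to the farm as a whole,
# and the farm routes around failed servers. Names are hypothetical.
class ServerFarm:
    def __init__(self, servers):
        self.servers = list(servers)   # e.g. ["srv1", "srv2", "srv3"]
        self.failed = set()

    def mark_failed(self, name):
        self.failed.add(name)

    def connect(self):
        # The client never names a server; the farm picks a healthy one.
        for name in self.servers:
            if name not in self.failed:
                return name
        raise RuntimeError("entire farm is down")
```

From the thin client's point of view nothing changes when a server fails: its next request is simply answered by another member of the farm.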
The two primary protocols for thin-client computing are remote display protocol (RDP) and independent computing architecture (ICA). RDP was developed by Microsoft for its Terminal Server, and ICA is Citrix technology. Both protocols separate the application logic from the user interface; that is, they pick out the part of the application that interacts with you, such as keyboard and mouse input and screen output. Only the user interface is sent to the client, leaving the rest of the application to run on the server. This method drastically reduces network traffic and client hardware requirements. ICA clients, for example, can have processors as slow as an Intel 286 and connection speeds as low as 14.4 kilobits per second (Kbps).
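The separation these protocols make can be sketched in code. This hypothetical, much-simplified illustration is not RDP or ICA themselves: the application's state and logic live entirely in a server-side object, while the "thin client" only forwards input events and draws the screen updates it gets back, so only events and updates would ever cross the wire.

```python
# Sketch of the application-logic / user-interface split: the server runs
# the application; the client ships input events and displays updates.
# Classes and the event format are hypothetical.
class ServerSideApp:
    """Runs entirely on the server: all state and logic stay here."""
    def __init__(self):
        self.text = ""

    def handle_event(self, event):
        # Apply one input event and return the resulting screen update.
        kind, data = event
        if kind == "key":
            self.text += data
        elif kind == "backspace":
            self.text = self.text[:-1]
        return {"screen": self.text}    # only this crosses the network

class ThinClient:
    """Holds no application state; just sends events and draws updates."""
    def __init__(self, server_app):
        self.server_app = server_app    # stands in for the network link
        self.display = ""

    def send(self, event):
        update = self.server_app.handle_event(event)
        self.display = update["screen"]
```

Each keystroke becomes one small event message and one small screen update, which is why the traffic, and the client hardware needed to handle it, can be so modest.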
Although RDP has Microsoft's backing, ICA has become the de facto standard for server-based computing. ICA presents some distinct advantages over RDP, not the least of which is ICA's platform independence. ICA transmits the user interface over all standard networking protocols—TCP/IP, IPX, SPX, PPP, NetBEUI, and NetBIOS—whereas RDP supports only TCP/IP. ICA also supports all standard clients from Windows to UNIX to Macintosh, but RDP can be used only with Windows 3.11 and later. Furthermore, RDP is a streaming protocol that continuously uses bandwidth while the client is connected, whereas ICA sends packets over the network only when the mouse or keyboard is in use. As a result, many network administrators run Citrix's ICA software on top of Microsoft's Terminal Server to obtain the best functionality.
Server-based computing is best used in environments where only a few applications are needed or when many people will be using the same machine, such as in shift work. For example, if you use only a spreadsheet, a word processor, and e-mail, a thin client may be an ideal solution. Likewise, if the applications rely on databases and directories that are already server-based, such as with airline reservations or patient charts, thin-client computing might be a good choice. Networks with many different platforms can also benefit from server-based computing: you can directly access UNIX, Macintosh, mainframe, or other non-Windows applications via ICA without the mediation of cumbersome translation applications. If, however, you need to use high-end applications such as desktop publishing, graphics, or computer-aided design, the conventional PC with its local computing power provides the only viable option.
Thin-client computing has several other advantages. Because of their simplicity, thin clients are easier for an IT staff to maintain: users cannot tamper with the settings or introduce flawed or virus-infected software into the system. The server-centric model also allows upgrades to be performed at the server level instead of the client level, which is much less time consuming and costly than updating individual PCs. Thin clients typically do not become obsolete as quickly as their fatter counterparts—the servers will, but they are fewer in number and therefore easier to upgrade or replace. Furthermore, thin clients are less likely to be stolen: because they cannot function without a server, they are useless in a home environment.
Disadvantages of thin clients include reduced computing power, which makes them practical only in limited circumstances, and absolute reliance on the network. With conventional PCs users can run applications locally, so when the network goes down, they do not necessarily experience work stoppage. On the other hand, the slightest power outage can cripple a thin-client network for a long time: after power is restored, all the clients request the initial kernel from the server at the same time. Also, it is difficult if not impossible to customize a thin client. If you need to install a scanner or other peripheral device (printers are the exception), a thin client cannot accommodate it. Furthermore, you cannot customize the look and feel of your desktop, which some users may find frustrating.
Nevertheless, thin-client computing has its place, albeit an ironic one: whereas the PC represented progress beyond the terminal/mainframe paradigm, thin clients represent a return to it (though with considerably better technology). Analysts foresee thin-client computing occupying a significant niche among mobile users and application service providers (ASPs). In the near future, when applications are made available over the Internet, thin-client computing will in some cases supplant the autonomous PC.