Tiered Electronic Distribution (TED): Improving Information Dissemination
Novell Cool Solutions: Feature
By Rick Cox
Posted: 8 Mar 2000
Updating servers can be an ongoing challenge. You can distribute SSPs by burning them to a CD and mailing the disc to each location that needs an update; if you update your servers only occasionally, update only a handful of them, or have bandwidth limitations that restrict the size of distribution you can reasonably send, that method may be all you need.
On the other hand, if updates must go out regularly or to a large number of servers, Tiered Electronic Distribution (TED) is the perfect solution. It's part of the new ZENworks for Servers, and it looks to be a tremendous time-saver for many of you. Here's an excellent discussion of how TED works, from Product Manager Rick Cox.
- Tiered Electronic Distribution (TED)
- Creating Items for Distribution
- Distribution Features
- TED Architecture
Networking may be making the world a smaller place, but the networks themselves are getting larger. Servers, the vital nodes of the network, must be kept up to date to facilitate the flow of information across enterprises. If your network has more than a handful of servers, your network administrators are spending inordinate amounts of time (read: money) configuring servers to deliver the most current information into the hands of your employees. They must visit each server one by one, over and over, in numerous distant locations, and in all likelihood they're falling behind.
Novell's ZENworks for Servers (ZfS) is a "set it and forget it" solution to centralize and automate information and application distribution throughout your network. Especially designed for organizations that need to distribute vital data quickly or according to a schedule, ZfS allows you to automate data distribution from a central location so that administrators do not need to visit each server individually. With its high degree of scalability and Novell Directory Services (NDS) integration, ZfS is a cost-effective way to manage servers individually or in groups regardless of their location on the network.
ZfS consists of three components:
- Server Policies
- Tiered Electronic Distribution (TED)
- Server Software Packages (SSPs)
Server Policies enable you to standardize the configuration of all your network servers. You probably acquired your servers a few at a time, configuring them individually as you got them, and it is likely your server configurations also differ from department to department, from site to site. With ZfS you can create, distribute, and enforce consistent server policies (rules that govern server activity) across the network. Furthermore, if someone changes a server configuration, ZfS will reset the configuration automatically, thereby guaranteeing that your servers stay configured exactly as you set them.
Policies are a proactive measure that help prevent inadvertent network disasters. Imagine that your CEO is giving an important presentation to investors and you bring the server down for routine maintenance in the middle of the slide show. Or imagine that it is the end of the month, payroll is generating paychecks, you down the server, and the check run cannot be restarted for several days. ZfS server policies can be set to cancel a down command if a particular user is logged on or if a designated process is running.
TED streamlines software delivery by automating the distribution of information and applications. You can schedule when information is to be delivered to your servers, and special features enable you to deliver that information with minimum wide area network (WAN) traffic.
SSPs simplify server application distribution and installation. You can assemble an application and its installation instructions so that when the package arrives on a server, it will install itself without further intervention. If you need to upgrade a server application (an NLM, for example), you can create an SSP with the upgraded application and distribute and install it on all your servers in a single operation. (And if it fails, ZfS will let you know.)
In this paper we will focus on the second component, TED, a powerful method to distribute data throughout the network from a single location. With TED, the cost of maintaining network servers diminishes dramatically as you improve the circulation of ideas, information, and functionality.
You can use ZfS to package and distribute two types of data: data files and SSPs. Data files can be any file: application documents, support files, policies, drivers, etc. An SSP can be any server application, such as a utility or a user application hosted on a server.
One benefit of sending server files via ZfS is that you can make sure all your servers have the same updates to specific documents. For example, if each branch office of a bank needs the latest interest rates, each can use ZfS to receive a new version of that file every day, every hour?even every minute, if desired.
The advantage to using ZfS to upgrade server applications is that you can automate, schedule, and control exactly how the application installs, from where the application will install to the location of each support file. You can even control which files are extracted depending on which files already exist on the server. In this way, you can customize your application installations according to server characteristics or user needs. For example, you may need to install an application on all your servers, but some run on one network operating system (NOS) version and some on another, such as NetWare 4.11 and NetWare 5.1. You can customize how each application installs based on the requirements of the NOS. You can also customize based on server hardware capacity.
You can distribute SSPs by burning them to a CD; the disc can then be mailed to each location where an update is required. If you need to update your servers only occasionally, if you need to update only a handful of them, or if your bandwidth limitations restrict the size of distribution you can reasonably send, this method of distribution may be ideal for you. On the other hand, if you require that updates be performed on a regular basis or to a large number of servers, or both, TED is the perfect solution.
Figure 1 illustrates the basic components of TED and their relationship to each other.
ZfS designates servers in one of three ways: distributor, proxy, or subscriber. Distributors assemble the data to be distributed, proxies receive the data and pass it along, and subscribers receive and process the data.
Tiered distribution is the means by which a single distributor can send data to a large number of subscribers. A distributor can link directly to a suggested maximum of 40 subscribers, so tiering can be set up to increase the number of subscribers that receive the data. The interposition of the proxy increases the number of subscribers the distributor can service. In Figure 1, the distributor is sending out only three streams but is servicing five subscribers. If you wanted to further increase the number of subscribers in the model shown in Figure 1, you could convert three of the subscribers into proxies, as shown in Figure 2.
Figure 2. Tiering extends the reach of distributors, decreases WAN usage, and distributes the workload between servers. Although this figure shows only three subscribers or proxies per distributor or proxy, up to 40 subscribers can be serviced by one distributor or proxy. Therefore, if you were to hook up 40 proxies to a distributor and connect 40 servers to each proxy, you would be able to reach 1600 servers from one distributor.
See Figure 2 enlarged.
The inclusion of proxies also decreases traffic across WAN links. If the distributor had to send a 340MB database to each of the 13 remote servers, it would need to send more than 4.4GB across the WAN link. Instead, the distributor sends the database across the link only once; the database then fans out from the proxies to each server.
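The figures above can be checked with a little back-of-the-envelope arithmetic. The model below is illustrative only; the 40-link figure is TED's suggested maximum per distributor or proxy, not a hard limit.

```python
# Toy arithmetic for tiered fan-out and WAN savings (illustrative only).
MAX_LINKS = 40  # TED's suggested maximum direct subscribers per node

def leaf_subscribers(tiers: int, links: int = MAX_LINKS) -> int:
    """Subscribers on the bottom tier reachable from one distributor."""
    return links ** tiers

def wan_megabytes(payload_mb: int, remote_servers: int, proxied: bool) -> int:
    """MB crossing a WAN link: once with a proxy on the far side,
    once per server without one."""
    return payload_mb * (1 if proxied else remote_servers)

# 40 proxies, each serving 40 subscribers: 40 ** 2 == 1600 servers
# 340MB to 13 remote servers with no proxy: 340 * 13 == 4420MB (> 4.4GB)
# 340MB with a proxy: the payload crosses the link just once (340MB)
```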
Note: Being designated a distributor or proxy does not leave the server out of the loop when data is being passed around: distributors and proxies can subscribe to the stream they distribute.
Figure 3 illustrates the method by which TED organizes and distributes data.
Figure 3. Distributions contain file groupings or SSPs. The distributions are assigned to one or more channels to which subscribers subscribe.
Two new components, distributions and channels, appear in the illustration above. A distribution is a collection of files or SSPs. A channel can be compared to a television channel: the viewer selects a TV channel and sees whatever program is being broadcast. In this analogy, the distributions are the programs broadcast on each channel (carrying the up-to-date content), and the subscribing servers are the viewers. Unlike a TV viewer, however, a ZfS subscriber can receive content from multiple channels at once.
ZfS integrates with eDirectory, the most mature and scalable directory in the industry. eDirectory allows for centralized configuration and management: network administrators organize their networks in the eDirectory "tree" or hierarchy, either according to geographic location or company structure. ZfS leverages your eDirectory tree structure to facilitate distribution assignments. Creating new channels, assigning distributions, and adding subscriptions require only a few mouse clicks. Thanks to eDirectory's built-in intelligence, the distribution will automatically be sent to the target servers because each server "inherits" items associated with its container.
Figure 4. ZfS objects in eDirectory are managed through ConsoleOne. If you want to assign a policy to all the servers at Networking, Inc., you would associate the policy with the Networking, Inc. container object.
See Figure 4 enlarged.
Several characteristics native to eDirectory provide important benefits to ZfS:
- Centralized Management: You can manage eDirectory and ZfS through ConsoleOne, a Java-based management interface. ConsoleOne provides a unified view of the entire eDirectory tree, so you can manage all ZfS functions from a single workstation. One network administrator at your central office (or even at a branch office) can configure and control ZfS for your entire enterprise.
- Extensibility: eDirectory offers a solid basis for building additional management capabilities, such as the policy management and distribution features of ZfS. In essence, ZfS "plugs in" to your existing management paradigm, using the same infrastructure, security, and management interface. As such, this solution is less complicated and easier to deploy than competitive products, which require proprietary infrastructure, security, and management tools.
- Scalability: eDirectory can scale to more than one billion objects, more than any other directory on the market. Consequently, eDirectory can scale to whatever size is needed for a full TED deployment on your network.
File groups are created in the distribution object. You can choose any file type imaginable to put in a group, and you can put any combination or quantity of files into a group. All the files and folders in a group will have the same base path. You would distribute the bank's interest rate sheet in a file group.
When sending file groups, only the changed portions of the archive are sent. For example, if a 100MB database file has just one record change, only the changed data will be sent, not the whole file.
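The "send only what changed" behavior can be illustrated with a toy block-level comparison. TED's actual differencing format is not described here, so the block size and hashing scheme below are assumptions for illustration only.

```python
# Toy block-level delta: hash fixed-size blocks of the old and new files
# and ship only the blocks whose hashes differ. Not TED's real format.
import hashlib

BLOCK = 4096  # assumed block size for illustration

def block_hashes(data: bytes):
    """SHA-1 digest of each fixed-size block of the data."""
    return [hashlib.sha1(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def delta(old: bytes, new: bytes):
    """Return (block_index, block_bytes) pairs for blocks that changed."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    changed = []
    for i in range(len(new_h)):
        if i >= len(old_h) or old_h[i] != new_h[i]:
            changed.append((i, new[i * BLOCK:(i + 1) * BLOCK]))
    return changed

# Change one record in an otherwise identical file: only one block ships.
old = b"a" * 8192
new = b"a" * 4096 + b"b" * 4096
changes = delta(old, new)
```

With a real 100MB database, the same idea means a one-record change ships a few kilobytes instead of the whole file.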
One of ZfS's most advanced features is its server software packaging. The ability to automatically send an application to a remote server is convenient, but the application is useless until it is installed. With ZfS you can create customized, detailed installation instructions to launch applications without administrator intervention. This feature is designed for network administrators who are experienced in software installation.
Each SSP consists of one or more components, and each component contains one or more files or folders. The SSP is divided into components so that each can be governed by its own set of configuration parameters. A few of the requirements you can assign to components are listed below:
- Operating system: platform and version
- Memory: greater than, less than, or equal to a predetermined amount
- Disk space: destination volume, space available
- Set parameters: SET commands that must be present on the target server
- Registry: entry type (key, name, data)
- File: whether a given file exists
- Products.dat: version (contains, begins with, matches), description (contains, begins with, matches)
You can also create installation requirements for the package as a whole. If the target server does not meet these requirements, the components may or may not be installed according to this hierarchy of conditions:
- If the prerequisites for the package are not met, none of the components are installed.
- If the prerequisites for the package are met, the components are eligible to be installed.
- If the prerequisites for a component are not met, that component is not installed.
- If the prerequisites for a component are met, that component is installed.
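The four rules above can be modeled in a few lines. The class and requirement names below are hypothetical, not the ZfS API; they simply encode the hierarchy of conditions.

```python
# Hypothetical model of the SSP install-decision hierarchy (not the ZfS API).
from dataclasses import dataclass, field
from typing import Callable, Dict, List

ServerFacts = Dict[str, object]            # e.g. {"os": "NetWare 5.1"}
Requirement = Callable[[ServerFacts], bool]

@dataclass
class Component:
    name: str
    requirements: List[Requirement] = field(default_factory=list)

@dataclass
class SoftwarePackage:
    requirements: List[Requirement] = field(default_factory=list)
    components: List[Component] = field(default_factory=list)

    def plan(self, server: ServerFacts) -> List[str]:
        # Rule 1: if package prerequisites fail, nothing installs.
        if not all(req(server) for req in self.requirements):
            return []
        # Rules 2-4: each component installs iff its own prerequisites
        # hold, so a partial installation is possible.
        return [c.name for c in self.components
                if all(req(server) for req in c.requirements)]

pkg = SoftwarePackage(
    requirements=[lambda s: str(s["os"]).startswith("NetWare")],
    components=[
        Component("core"),
        Component("nw51-driver", [lambda s: s["os"] == "NetWare 5.1"]),
    ],
)
```

Running `pkg.plan()` against different server facts yields a full, partial, or empty install list, matching the hierarchy described above.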
Because some components can be installed while others are not, a partial installation of the software package is possible. Partial installation may be desirable when some servers require fewer components than others because of NOS, software application, or hardware differences.
Distributions are archived or compressed collections of files. Archiving saves bandwidth and reduces the probability of files being corrupted in transit. The distribution process, which follows a predefined schedule, works like this:
- At the designated time, the distribution engine consults eDirectory to see which files or SSPs are assigned to the distribution at hand.
- The distribution engine creates an archive of the designated files or SSPs.
- The distributor alerts the subscribers that the new archive is available.
- When a subscriber (or proxy) announces its availability, the distributor pushes the new archive to the recipient.
- The process is repeated at the next designated interval.
The distribution process is fully automated: no intervention is required from an administrator after the process has been configured.
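The steps above can be sketched as a toy, self-contained model; all classes here are illustrative stand-ins, not the ZfS implementation.

```python
# Toy model of one scheduled distribution cycle (stand-ins, not ZfS code).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Subscriber:
    name: str
    online: bool = True
    received: List[int] = field(default_factory=list)

@dataclass
class Distributor:
    files: List[str]           # stands in for the eDirectory assignment
    version: int = 0

    def run_once(self, subscribers: List[Subscriber]) -> None:
        # 1. consult the directory for assigned files (modeled as self.files)
        # 2. build a new archive of those files (modeled as a version bump)
        self.version += 1
        # 3./4. alert each subscriber and push to those that answer
        for s in subscribers:
            if s.online:
                s.received.append(self.version)
        # 5. a scheduler would repeat this at the next designated interval

subs = [Subscriber("branch-a"), Subscriber("branch-b", online=False)]
dist = Distributor(files=["rates.txt"])
dist.run_once(subs)
```

After one cycle, the online subscriber holds version 1 and the offline one holds nothing, which is where the patching and resume features described below come in.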
Distributors can be configured to retain several copies of past distributions. This feature resolves two important problems: large distributions and problems during distribution.
"Patching" is designed to reduce bandwidth use when large distributions change little between distribution intervals. Because the distributor maintains a copy of past archives, TED can "difference" the new archive against the previous one to see what has changed. TED then sends only a "patch" (the data that has changed) in the distribution. Therefore, if you use TED to keep a 60MB database current, TED sends all 60MB only the first time; after that, only the changes are sent in subsequent distributions.
Patches also have a fault tolerance mechanism built in. When a subscriber is down, it cannot receive a distribution. When the subscriber comes back online, it can request the latest distribution from its nearest source (distributor or proxy). Because a proxy always has a copy of the latest distribution, it can send it to the subscriber without requesting a retransmission from the distributor. The subscriber also keeps a copy of the latest distribution (if marked as a patch), so if it has missed several distributions, it can tell the distributor the version number of the last distribution received. The distributor will then send the subscriber the data it needs to bring its distribution version up to date.
ZfS keeps a real-time record of how much of a distribution is received so that if a distribution is interrupted, the distribution can pick up where it left off. For example, if a subscriber goes down after 99MB of a 100MB distribution, only the last 1MB will be sent when the subscriber is back up and running.
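The resume behavior amounts to simple offset bookkeeping; the sketch below shows the idea only, not TED's wire protocol.

```python
# Sketch of checkpointed transfer resume: resend only the bytes the
# subscriber has not yet received. Illustrative, not TED's protocol.
def send_with_resume(payload: bytes, already_received: int, chunk: int = 4096):
    """Yield only the chunks beyond the subscriber's recorded offset."""
    for offset in range(already_received, len(payload), chunk):
        yield payload[offset:offset + chunk]

# A 100KB payload standing in for the 100MB distribution in the example:
# the subscriber went down with 99KB received, so only 1KB is resent.
data = b"x" * (100 * 1024)
resumed = b"".join(send_with_resume(data, already_received=99 * 1024))
```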
Creating SSPs demands great care to ensure that every component is properly configured and every element properly named. Network administrators, being human, will inevitably make mistakes when creating SSPs. To prevent a simple error from disrupting your entire network, ZfS provides a rollback feature. Rollback works both for installations that go wrong and for successful installations you later need to remove.
If an SSP installation (or a file grouping extraction) is interrupted, ZfS will restore the server to its pre-installation state, giving you time to discover the error from the logs. You would not need to send the distribution again because it would already exist in its entirety on the subscriber's hard drive.
If you need to remove a successful installation, you can manually engage rollback and restore a server to its pre-installation state. Rollback will not undo changes made by a script or an executable file, however.
Each ZfS transaction is stored in the same Sybase database used by ZENworks. From these records you can see which distributions were successful and which failed. ZfS provides four TED reports that you can access from the ZENworks database. Some reports can be accessed directly from the distributor or subscription object.
- Distribution Detail: Displays a detailed, time-line-style history of package distributions for the selected subscriber.
- Revision History: Displays a history of a distribution package's versions.
- Revision History Failure: Displays versions of the distribution that failed during creation.
- Subscriber Detail: Displays status information for the subscribers that received the distribution.
You can configure ZfS to send e-mail (SMTP) or SNMP alerts; both the SNMP trap target and the SMTP host are configurable. You can choose among six levels of granularity for notifications.
From a Web browser, you can check the status of SSP distributions and also roll back the last successful installation.
Throttling prevents distribution activities from dominating network traffic. When throttling is enabled, a maximum bytes-per-second value is set for the distributor or proxy sending the distribution, and the sender transmits no more than that number of bytes per second to the subscriber or proxy requesting it. This allows slow links to be used conservatively, without flooding the link with data and making it unusable by others. Throttling occurs at the subscriber or proxy level because those servers are "aware" of which server is their source.
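Throttling can be sketched as a simple average-rate limiter; the pacing strategy below is an assumption for illustration, not TED's actual algorithm.

```python
# Illustrative bytes-per-second throttle: pace chunked sends so that the
# average rate never exceeds the configured cap. Not TED's algorithm.
import time

def send_throttled(payload: bytes, max_bytes_per_sec: int,
                   chunk: int = 8192) -> int:
    """Send payload in chunks, sleeping to stay under the rate cap.
    Returns the total bytes sent."""
    sent = 0
    start = time.monotonic()
    for offset in range(0, len(payload), chunk):
        piece = payload[offset:offset + chunk]
        # a real sender would transmit `piece` over the wire here
        sent += len(piece)
        # sleep just long enough that sent/elapsed stays under the cap
        expected_elapsed = sent / max_bytes_per_sec
        actual_elapsed = time.monotonic() - start
        if expected_elapsed > actual_elapsed:
            time.sleep(expected_elapsed - actual_elapsed)
    return sent
```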
Variables address the fact that not every server has been configured the same way. When you distribute an application to subscribers, you want it to go to the appropriate place, but the name for that appropriate place may differ from server to server. The variable is a name that can be set to resolve to a different value depending on the subscriber. For example: you need to send a particular distribution to 15 subscribers. It is necessary to extract the distribution to a specific volume on each server; however, the volume name is not the same on all the servers: 10 servers are using the DATA volume and five use VOL1. In the distributor object properties you can define a variable named Distribution Volume that resolves to DATA by default. When creating the subscription objects for the five subscribers, you can change the Resolve To values to VOL1 so that when the distribution is extracted, it will go to the correct volume on all 15 servers.
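The resolution logic amounts to a per-subscriber override of a distributor-level default. The data structures below are plain dictionaries, not eDirectory objects, though the names mirror the example above.

```python
# Sketch of per-subscriber variable resolution (names mirror the example;
# the data structures are illustrative, not eDirectory objects).
DISTRIBUTOR_DEFAULTS = {"Distribution Volume": "DATA"}

def resolve(variable: str, subscriber_overrides: dict) -> str:
    """The subscriber's Resolve To value wins; otherwise the
    distributor's default applies."""
    return subscriber_overrides.get(variable,
                                    DISTRIBUTOR_DEFAULTS[variable])
```

The ten DATA-volume servers carry no override and resolve to `DATA`; the five VOL1 servers carry `{"Distribution Volume": "VOL1"}` on their subscription objects and resolve accordingly.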
Because it is the staging area for assembling and sending distributions, the distributor is at the core of TED architecture. To designate a server as a distributor, install ZfS on it and a distributor object will automatically be created in eDirectory. From ConsoleOne you can change parameters such as the NCP server domain name where the distributor process will take place, the directory to be used by the distribution system, console prompt, maximum number of concurrent distributions, and seconds before connection times out. You can also set up log files and notification systems.
Of major importance are the scheduling parameters. You decide how long the distribution process will be active, from all the time to just a few minutes, depending on how much server time you want to dedicate to distribution activities. You can also schedule a window of availability for the channels: every day, twice a week, the last day of the month, etc. During this window, distributions will be made available to subscribers. A feature called "random dispatch" can be enabled to vary the distribution's launch during the window of availability. For example, if a window is scheduled from 7 a.m. to 8 a.m., by default the distribution would launch at exactly 7 a.m. With random dispatch enabled, the distribution would launch at random sometime between 7 a.m. and 8 a.m.
Once the distributor object is configured in eDirectory, you can begin setting up distribution objects, which are nested in the distributor object. After naming a distribution object, you can create file groupings or select SSPs that belong to the distribution.
It should be noted that the distribution object does not contain the actual data being distributed: it only refers to the location of the files. If you change the location of a file that is part of a distribution, you will have to reconfigure the distribution to reflect the new location.
Once you create a channel object with ConsoleOne, you can associate it with distributions, as shown in Figure 6.
Figure 7 illustrates the relationships between distributor, file grouping, SSP, distribution, and channel.
In this illustration, three distributions have been set up: Accounting, General, and Marketing. Three channels with the same names have also been set up. The marketing distribution is intended for servers in the marketing department, the accounting distribution will go to the accounting department, and the general distribution is for all servers, including marketing and accounting servers. (See "Subscribers," below, for further explanation of channels.) As you can see, the relationship between distributions and channels is many to many, resulting in the potential for complex combinations of distributions and channels.
Proxies put the "tier" in Tiered Electronic Distribution. They receive transmissions from distributors (or other proxies) and pass them along to subscribers (or other proxies). This tiering accomplishes three primary goals: to increase the speed of transmission to subscribers, to extend the reach of a distributor, and to minimize traffic across WAN links. Figures 8 and 9 show a few ways you can set up proxies to accomplish these goals.
This illustration shows 13 servers: one is the distributor, three are proxies, and all are subscribers. In this example there are two tiers: the distributor (which can subscribe to itself) forms the first, and the proxies form the second, enabling the distributor to reach all 13 servers while sending out only three streams. By employing TED, you can cut down on the amount of server time your distributor uses while increasing the distribution speed to subscribers. For example, if it takes a distributor 30 minutes to send one distribution, the distribution engine would occupy 6.5 hours of server time to reach 13 servers one by one. With TED, the distributor is occupied for only 90 minutes to reach the same number. And although the model here shows only two tiers, you can create as many tiers as you need.
In Figure 9, proxies are employed to send as few transmissions as possible across WAN links. In this example, the distributor services subscribers in six different locations (including its own) but sends only one transmission across a WAN link. Likewise, each proxy sends the distribution just once across each link. Were it not for the proxies, the distributor would have to send 14 distributions across the links, one for each server.
The distributor automatically assigns a subscriber to a proxy when the subscriber makes a subscription. Proxies are assigned by their location in the directory: if Proxy A is fewer directory hops away from a subscriber than Proxy B, the subscriber is assigned to Proxy A. The calculation is rechecked every time a subscription becomes active; if the assigned proxy is no longer closest, a new proxy is assigned. Proxies are load balanced as well: if two proxies are in the same container, the distributor alternates assignments between the two.
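A hypothetical model of this assignment logic, with hop counts supplied as plain numbers rather than computed from a real directory tree:

```python
# Hypothetical proxy-assignment model: pick the proxy fewest directory
# hops from the subscriber, round-robining among ties (e.g. proxies in
# the same container) for load balancing. Not the ZfS implementation.
from itertools import count

_tie_breaker = count()  # alternates assignments among equally close proxies

def assign_proxy(hops_to_proxies: dict) -> str:
    """hops_to_proxies maps proxy name -> directory hops from the subscriber."""
    best = min(hops_to_proxies.values())
    candidates = sorted(p for p, h in hops_to_proxies.items() if h == best)
    return candidates[next(_tie_breaker) % len(candidates)]
```

A subscriber two hops from each of two same-container proxies alternates between them on successive subscriptions, while a uniquely closest proxy is always chosen.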
Any server that needs to receive ZfS distributions should be configured as a subscriber. The subscriber is an eDirectory object that holds subscriptions. Subscriptions define the channels from which the subscriber receives data.
Figure 10 shows the relationship between distributions, channels, subscribers, and subscriptions.
Figure 10. You subscribe to channels depending on the distributions you require. For example, the marketing servers subscribe to the marketing channel and to the general channel so they will get both the marketing and general updates.
See Figure 10 enlarged.
In this illustration you can see that each server receives its needed distribution by subscribing to a channel through which the distribution passes. As with the relationship between distributions and channels, the relationship between subscribers and channels is many to many.
Subscribers can be configured with a window of availability to receive software. This prevents them from consuming valuable network resources during peak demand time.
Another useful property of subscribers is that you can determine when the distribution will be applied; that is, when SSPs will be installed or files extracted. In some cases, executing a distribution during the work day may disrupt the network?an application installation requires that the server restart, for instance. You could therefore set a time after working hours for the distribution to execute.
ZENworks for Servers is a powerful tool that enables you to perform critical upgrades to all your enterprise servers without leaving your desk. In particular, Tiered Electronic Distribution provides the means to create a highly scalable system for distributing vital information. By leveraging eDirectory, ZfS becomes easily manageable; by using proxies, archives, and patching to reduce bandwidth usage, it sets a new industry standard for efficiency. Once installed, ZfS will become an indispensable network component that will yield significant dividends both in saved time and improved networking.
Rick Cox is the Product Manager of ZENworks for Servers and ManageWise. He has been with Novell since 1987, with a brief leave while he was with Compaq. He began his career at WordPerfect. Over his fourteen years in the industry he has worked in customer support, IS&T, R&D (as a lab manager), developer relations, Fibre Channel development, and product management.
Novell Cool Solutions (corporate web communities) are produced by WebWise Solutions. www.webwiseone.com