Using Access Control and Caching to Fight Limited Bandwidth
Novell Cool Solutions: Feature
By Drew Major, Scott Jones
Posted: 3 May 2002
A problem that almost every business faces today is rapidly increasing utilization of limited bandwidth. Unfortunately, frequent widening of pipes is impractical due to the astronomical cost of leading-edge circuit types and their supporting hardware.
Network engineers are often tasked with finding cost-effective strategies to leverage existing technology to improve performance. Practice has shown that access control and caching are the two such strategies with the lowest cost of ownership and the highest return on investment.
Access Control involves regulating Internet activity by user identity. Studies show that as much as 80% of Internet activity can be personal or recreational in nature in a non-regulated environment.
Caching is the storage of frequently accessed information as close as possible to users. This prevents web browsers from going out across the network and through its Internet link every time a page is called. Within workgroups that share common responsibilities, a 60-70 percent cache hit rate can usually be expected, which means a 60-70 percent reduction in web traffic.
Here are some important access control and caching concepts and specific recommendations for the placement and configuration of these services.
During the past decade, the development of PC hardware has generally outpaced software requirements. As a result, even the least expensive PC available today satisfies the needs of most users. Unfortunately the same cannot be said for network infrastructures. While many high bandwidth technologies have emerged, their cost and complexity have resulted in slow adoption. Meanwhile, network utilization has skyrocketed in all environments, from the home to the enterprise. The result, especially on the largest of networks, the public Internet, is perpetual traffic jams that hinder productivity.
Utilization of IP-based networks will increase even further, perhaps exponentially, over the next decade. As new services such as video conferencing and Voice over IP (VoIP) move onto data networks, bandwidth demands are forcing network engineers to seek more creative solutions than just "keep making the pipe bigger!"
There are two primary solutions that most organizations can implement at a fraction of the cost of bigger circuits. The first is granular control of network activity. Elimination of unnecessary traffic can be easily implemented and is a huge step toward keeping a network productive. The other is caching. The biggest consumer of bandwidth on most networks is HTTP traffic, the majority of which is redundant. Caching of HTTP can therefore significantly reduce circuit loads. Caching also increases the apparent speed of the network because content is frequently read from a nearby server.
Novell BorderManager is the directory-based product that can provide this extended functionality. These design recommendations are based on the real world experiences of Novell Consulting and, in many cases, Novell's own IS&T department.
Proxy/Cache and Filtering Technology
It is important to distinguish between "access control," which is a function of the proxy services, and "packet filtering," which is a router-level function of NetWare. The following sections discuss the concepts behind each technology.
Proxy/Cache - What It Is and How It Works
Bandwidth - While a misnomer, the term "bandwidth" is conventionally used to describe data throughput rate on network media. Bandwidth is equal to the amount of data that passes down a pipe in a second, typically measured in kilobits per second, or "Kbps."
Proxy/Cache - A proxy is someone or something that you have authorized to act on your behalf. A cache is a storage place for something of value, such as information, equipment, or provisions. In the information industry, cache refers to a storage place for frequently accessed data. A proxy/cache is a device that acts on your behalf to retrieve, store, and relay frequently accessed information.
Object Store - The cached data units on a proxy/cache device, including both RAM and disk cache.
Hierarchical Cache - A group of caching devices arranged with parent-child relationships (CERN) or a combination of parent-child and peer relationships (ICP). ICP, the Internet Cache Protocol, is defined in RFCs 2186 and 2187.
Access Control - While all firewall technologies can be described as "access control," the term is here used to describe application-level control that is implemented by the proxy device. It does not include workstation-based products such as NetNanny and SurfWatch.
During the 1960's, computer designers observed that much of the code their systems were executing was highly redundant. Using this insight to their advantage, they began storing the redundant portions of programs in a small, very high-speed block of memory called a cache. The cache was tied closely to the CPU so that its content was immediately available for processing, rather than going through the slower processes of being read from system memory or from disk. As executed code looped through cached portions of the program, the resulting throughput was orders of magnitude greater than in systems operating without cache.
During the mid-eighties, Novell engineers used this same model to benefit from the naturally occurring repetitive use of shared data and services on NetWare file servers. Since that time, designers have developed increasingly sophisticated caching technologies that store redundant code in high-speed cache close to the CPU. Intel-architecture systems use several levels of cache in a hierarchy to keep redundant code and data cached so their fast CPUs are not stalled while the next instructions are fetched.
Today, all computer users benefit from this design in which frequently used data from disk subsystems is cached in main memory, frequently used data in main memory is cached in the Level 2 (L2) cache, and most-recently-used data is stored in the CPU's primary (L1) cache. This design takes advantage of the repetitive nature of application code and also saves users from having to store all of their applications and data in the expensive high-speed RAM that is used to construct high-speed caches.
The Need for Caching on the Internet and Intranet
In light of the huge benefits that caching provides in other areas of computing, it is only natural that caching should play an integral role in optimizing the performance of intranet and Internet services. The Internet services we enjoy as we browse the World Wide Web and our private Intranets were originally designed without caching in mind. Of course, the designers could not have anticipated the grand scale their original technologies would be taken to on the public Internet.
Consider the following characteristics of Internet/intranet activity that make this the perfect environment for reaping the benefits of caching:
- The activity patterns of Internet users resemble the patterns seen by videocassette rental businesses, in which 10 percent of the tapes account for 90 percent of the business. In other words, the overwhelming majority of traffic on the Internet comes from a relatively few but extremely popular sites. If the content of these sites was cached in various locations around the world, access times could be significantly decreased for users around the globe.
- The access patterns of individual organizations and workgroups within organizations are also repetitive in nature. Workgroups tend to share similar responsibilities and interests, and therefore need access to similar sets of information. Corporate networks are thus another ideal environment for caching shared content.
Internet Service Providers (ISPs), corporate webmasters, IS staff, and end-users can all benefit from Internet caching in one form or another.
Once the proxy/cache is operational and populated with content that is repeatedly used by the user community, the load on Internet and WAN connections is reduced. Existing circuits are freed up for the fetching of new content and the servicing of dynamic, non-cacheable requests.
Avoiding Stale Data
The proxy/cache also honors Time-To-Live (TTL) tags, which dictate how long an object can remain in cache before it must be refreshed from the origin server. When an object's TTL expires, the proxy/cache sends the origin server an "If-Modified-Since" request. If the object has been modified, the proxy/cache replaces the old object in its cache with a new one from the origin server. Requests for real-time data and dynamic (or interactive) requests are not cached, so that users can properly interact with web-based applications.
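The TTL refresh behavior described above can be sketched in a few lines of Python. This is a simplified model for illustration only; the class and function names are hypothetical and are not BorderManager APIs:

```python
import time

class CachedObject:
    """A cached response with a Time-To-Live (TTL), in seconds."""
    def __init__(self, body, ttl, last_modified):
        self.body = body
        self.ttl = ttl
        self.last_modified = last_modified
        self.fetched_at = time.time()

    def is_fresh(self):
        """True while the object's TTL has not yet expired."""
        return (time.time() - self.fetched_at) < self.ttl

def serve(cache_obj, origin_last_modified, fetch_from_origin):
    """Return the cached object, revalidating with the origin only when
    the TTL has expired (the 'If-Modified-Since' check described above)."""
    if cache_obj.is_fresh():
        return cache_obj                    # cache hit: no origin traffic
    if origin_last_modified <= cache_obj.last_modified:
        cache_obj.fetched_at = time.time()  # origin says "not modified": reuse
        return cache_obj
    return fetch_from_origin()              # object changed: refill the cache
```

Note that a fresh object generates no origin traffic at all, and a stale-but-unchanged object costs only the lightweight revalidation round trip, not a full transfer.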
Access control supplements caching to further improve the performance of Internet connections and WAN links. While access control mechanisms do add an overhead of their own, this load is minor compared to the benefit. Access control can be implemented in a variety of ways, depending on the proxy/cache product selected. Activity can be managed by service, node and/or network address, URL, time of day, content, user, groups, and a variety of other criteria. If user-level control is desired, it should utilize an existing directory service to avoid doubling the skill set required by network administrators.
Administration of access control is in many ways analogous to administering file system rights in server operating systems. In fact, almost all Internet services are actually file services. An HTTP request, for example, is simply a request for an HTML file and the files it refers to (images, sounds, etc.). Since NetWare has long been the industry leader in extremely fast and manageable file services, Novell is uniquely positioned to provide superior access control and proxy/cache products.
Eliminating Non Business Related Traffic
Independent research estimates that in unregulated workplaces, as much as 80% of Internet traffic is personal or recreational in nature. The most common activities include multimedia services like Napster and RealAudio, instant messaging such as ICQ/AOL Instant Messenger, news and stock tickers, sports sites, humor sites, and adult sites. It is difficult to predict how effective access control will be in a given environment unless activity monitoring and logging has been performed under normal working conditions, without the knowledge of users. Obviously, the more restrictive the access rules, the more likely you are to see a performance gain.
Caching Web Content Locally
Organizations and workgroups that share proxy services tend to share common responsibilities and common Internet needs. These usage patterns translate into a 60-70 percent cache hit rate because workgroups tend to use the same information located on the same web sites 60-70 percent of the time. The remaining 30-40 percent of traffic consists of first-time requests and non-cacheable content such as dynamic CGI requests.
However, dynamic requests also take advantage of caches when the response includes static HTML and graphics. When these static components of the response are cached, only a small part of the response is actually non-cacheable and has to be fetched from the origin server.
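The hit rates above translate directly into link savings. A quick sketch of the arithmetic, using purely illustrative traffic figures (the 1.5 Mbps offered load is an assumption, not a figure from the text):

```python
def link_load_after_cache(offered_mbps, hit_rate):
    """Traffic that still crosses the Internet link once a proxy/cache
    serves `hit_rate` of the requests locally."""
    return offered_mbps * (1.0 - hit_rate)

# Illustrative numbers only: 1.5 Mbps of offered HTTP traffic and the
# 60-70 percent hit rates cited above.
for rate in (0.60, 0.70):
    print(rate, link_load_after_cache(1.5, rate))
```

At a 60 percent hit rate, only 0.6 Mbps of a 1.5 Mbps offered load still crosses the link, which is why a cache can defer a circuit upgrade.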
The following pages were analyzed using Novell CacheView, a tool that determines what percentage of a web page could have been cached by BorderManager or ICS. Each was found to contain a significant amount of redundant, cacheable data, illustrating that even interactive web activity benefits from caching:
- Microsoft DirectAccess knowledgebase search page
- Search results from the above page ("Win98SE general protection fault")
- Yahoo! main page
- Yahoo! search results ("NetWare")
- ABCNews.com main page
- ABCNews.com quick poll results
- Dell online purchase (8 pages)
Benefits of Hierarchical Cache
A properly designed hierarchical cache system moves content as close to users as possible. It also intelligently communicates between cache devices so they can pull content from each other when possible. A three-level hierarchy is considered optimal. Relationships must be defined so that excessive peer communication does not itself load down narrow network pathways.
- Primary Cache (ICP hierarchy or clustered devices)
- Moves relevant content into the private network
- Eliminates redundant requests from ISP links
- Increases the life span of exchange point infrastructures
Primary proxy/caches in a hierarchical configuration typically show an additive hit rate that ranges from 6% to 18% with a typical average of 12%. This is due to the fact that the secondary and tertiary caches have already processed the more popular objects.
- Secondary Cache
- Moves relevant content into regional proximity with the users
- Eliminates redundant requests from expensive aggregation WAN links
- Increases the life span of intermediate WAN infrastructures
Secondary proxy/caches in a hierarchical configuration typically show an additive hit rate that ranges from 12% to 37% with a typical average of 25%. This is due to the fact that the tertiary caches have already processed the more popular objects.
- Tertiary Cache
- Moves relevant content close to the users for faster response times
- Eliminates redundant requests from local users on the local WAN
- Increases the life spans of remote site WAN infrastructures
Tertiary proxy/caches typically experience hit rates between 25% and 75%, with a typical expected average of 50%-55%. This large variance in hit rates is attributable to the homogeneity of users and the size of the user population. Hit rates of 75% have been achieved in education enterprises with a large user population in which the web access is focused primarily on core curricula. The opposite extreme of 25% hit rates has been observed at sites whose user population is both small and whose interests are diverse.
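Since each tier's hit rate above is stated as an additive share of total traffic, the combined offload of the whole hierarchy can be estimated by summing them. A small sketch, using the typical averages quoted above (treating the rates as strictly additive is an interpretation made for illustration):

```python
def internet_bound_fraction(additive_hit_rates):
    """Combine per-tier 'additive' hit rates (each expressed as the share
    of total traffic that tier serves from cache) and return the fraction
    of requests that still cross the Internet link."""
    return 1.0 - sum(additive_hit_rates)

# Typical averages from the text: tertiary 50%, secondary 25%, primary 12%.
frac = internet_bound_fraction([0.50, 0.25, 0.12])
print(round(frac * 100))  # about 13 percent of requests reach the Internet
```

Under these typical averages, only about 13 percent of requests ever reach the Internet link, which is the economic argument for a three-level hierarchy.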
Packet Filtering - What It Is and How It Works
IP packet filtering is the most fundamental mechanism for "firewalling." It occurs at the network and transport layers of the OSI model (layers three and four). As such, filtering cannot be based on user identity. Filters are defined by source and destination interface on the firewall, source and destination node address or network address, source and destination IP port, and transport layer protocol (usually TCP or UDP). Depending on the firewall, the variables may be generalized to "all," or a range of values.
In BorderManager, packet filtering is handled by dedicated-function NLMs (FILTSRV.NLM, IPFILT.NLM, and IPFILT31.NLM) that extend the IP routing facility. When IP packet filtering is enabled, the routing facility passes all IP packets to the filtering facility for examination. Packets are then either discarded or sent on to their destination, based on the values of the above variables and the filter exceptions that have been defined.
For example, assume the following filter exception exists on a router, but all other packets are blocked:
- Source interface: Private
- Destination interface: Public
- Source address: All
- Destination address: 22.214.171.124
- Source IP port: All
- Destination IP port: 80
- Transport layer protocol: TCP
This filter exception would allow all nodes coming into the router through the "Private" interface to reach the destination address above (www.novell.com in this example) via HTTP (port 80 on TCP). If the firewall does not support stateful filtering, an additional exception must be entered to allow the responses to return to the workstations. BorderManager, however, supports stateful filters, which should always be used because data entry is simplified and return paths are opened dynamically only as needed.
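The matching logic of a "deny all and allow by exception" filter can be modeled in a few lines. This is a simplified sketch of the concept, not the FILTSRV implementation; the class and field names are illustrative:

```python
from dataclasses import dataclass, asdict

ALL = None  # wildcard, like "All" in the exception above

@dataclass
class FilterException:
    src_iface: str = ALL
    dst_iface: str = ALL
    src_addr: str = ALL
    dst_addr: str = ALL
    src_port: int = ALL
    dst_port: int = ALL
    protocol: str = ALL

    def matches(self, pkt):
        """A packet matches when every non-wildcard field is equal."""
        return all(value is ALL or pkt[field] == value
                   for field, value in asdict(self).items())

def permitted(pkt, exceptions):
    """Default deny: forward only packets that match some exception."""
    return any(exc.matches(pkt) for exc in exceptions)

# The HTTP exception from the example above:
http_out = FilterException(src_iface="Private", dst_iface="Public",
                           dst_addr="22.214.171.124", dst_port=80,
                           protocol="TCP")
```

A workstation's outbound HTTP request matches `http_out` and is forwarded; the same packet aimed at port 443, or carried over UDP, matches nothing and is dropped.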
In BorderManager, the combination of the port and transport protocol variables is called a "packet type". In the above example, the combination of port 80 and the protocol TCP defines the HTTP packet type. A default list of packet types is provided, which may be added to. When defining a new packet type, you must know the ports and protocols used by the application that you will be filtering. Port and protocol information for a specific service or application is listed in its RFC (if an open standard), or the product documentation or support site (if proprietary).
Packet types are combined with interface and address information to form a unique filter exception, as in the above example. The order of the exceptions is irrelevant, since filters and exceptions on a BorderManager server are cumulative. Technically, when using the default setting of "deny all and allow by exception," the filters and exceptions are processed in order from the least restrictive to the most restrictive. But since there is only one list of "denies" and only one list of "allows," the order in which they are entered has no net effect.
Since packet filtering occurs at the network and transport layers, it takes precedence over all other activity control mechanisms, including proxy access rules.
Summary of Proxy/Cache and Packet Filtering
Two powerful mechanisms for controlling Internet activity are proxy/cache and packet filtering. BorderManager's proxy/cache is an application layer service that brings content closer to users for increased apparent performance of Internet services. It can also execute access rules based on eDirectory user ID. In contrast, packet filtering occurs at the network and transport layers, and can only be based on source/destination interface and IP address, not identity. Also, packet filtering takes precedence over application layer services.
While each BorderManager service has CPU overhead, it is minimal under normal circumstances. In cases where there are a large number of access rules and/or filter exceptions (i.e., over one hundred of each), and/or very heavy traffic (i.e., you commonly see hundreds or thousands of receive buffers in use), separating these functions out to two separate servers may be required.
General Configuration Recommendations
When selecting servers for BorderManager, the main question is whether proxy/cache alone will be used, or whether NAT, VPN, and/or the IP Gateway will be used as well. The latter three services are processor-intensive rather than RAM-intensive. Cache, of course, is primarily RAM-intensive. BM proxy servers should have the maximum amount of RAM that the mainboard will support (and certainly not less than 1GB for 5000+ users).
Any single contemporary CPU is adequate. Once NetWare has a multithreaded IP stack and file system, multiple CPUs will offer an advantage. For now, they are an extravagance.
Any BorderManager server should have a minimum of 128MB RAM dedicated to BorderManager. That is, 128MB above the requirements of the OS and other services. 256MB is a recommended minimum. In a hierarchical cache, more RAM should be employed in higher-level cache servers, with the primary cache devices containing the maximum supported by the mainboard.
When selecting server machines, focus on the drive subsystem. The bottleneck in Novell caching solutions is the storage subsystem. When selecting machines as proxy/cache servers, emphasis should be on technologies such as Ultra-3 Wide SCSI, hardware RAID (striping), I2O, and high RPM drives. These high performance storage systems produce a great deal of heat and require sufficient cooling to ensure optimum throughput. Therefore, case design, placement, and adequate data center ventilation are important.
The following is recommended:
- Two drives in a RAID 1 array (purely for fault tolerance) containing the SYS and LOG volumes. 4GB each suits most implementations.
- Two or more drives in a RAID 0 array (for performance enhancement), or four or more drives in a RAID 0/1 array (to combine performance enhancement with fault tolerance), containing the cache volume(s). Do not use RAID 5 for the cache volume(s); a drive failure will bring disk I/O to a crawl.
When configuring the server, the following interrupts should be avoided: 2, 9, and 15. Also, assign interrupts to give priority to the private NIC first, then the public NIC, with host adapters next. The standard IRQ prioritization is: 0, 1, 2/9, 10, 11, 12, 13, 14, 15, 3, 4, 5, 6, 7. However, some mainboards differ so your documentation should be checked.
Disabling all unneeded components (parallel port, etc.) is a good way to free interrupts for use and to reduce the number of variables that may affect software.
NetWare Optimization for Proxy/Cache
Separate volumes should be set up for cache and logging. To improve cache efficiency, use a block size of 8K (for cache volumes less than 9GB) or 16K (for volumes 9GB or more), with compression and block suballocation disabled.
Whenever possible, use static routing on BM servers. It places almost no load on the server, and does not add to the router discovery traffic that may already exist on the network. Since BM servers are usually upstream gateways, one static default route and the minimal network routes can be very simple to enter.
Cache Volume Configuration
NTS does not recommend using NSS volumes at this time. Ideally, use multiple NetWare File System volumes of 4-8GB each. This will reduce the time it takes PROXY.NLM to find and read files for cold fills. Note: The cache volumes must be on different physical drives to accomplish this performance improvement. Multiple cache volumes on the same physical drive accomplish practically nothing; if anything, the overhead of two volumes on the same drive probably makes things slightly slower than one big cache volume.
NTS recommends not having significantly more total cache volume space than the amount of data the proxy fills in a given week. Observe the current activity stats on the proxy server and the amount of data filled each week. If, for example, the total filled for the week is 21GB, then you will probably want to stick with 24-28GB of total cache volume space.
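The sizing rule above amounts to one week's fill plus a little headroom. A trivial sketch; the 25 percent headroom figure is an assumption chosen only because it lands a 21GB weekly fill inside the 24-28GB range given:

```python
def total_cache_volume_gb(weekly_fill_gb, headroom=0.25):
    """Total cache volume space sized just above one week's fill,
    per the guidance above. The headroom fraction is an assumption."""
    return weekly_fill_gb * (1.0 + headroom)

print(total_cache_volume_gb(21))  # 26.25, within the suggested 24-28GB
```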
The following NCF file should be used to optimize the BM proxy/cache servers. It should be called in the AUTOEXEC.NCF just after the INITSYS load line. It contains additional optimization tips and should be read carefully.
;; BMEETune.ncf - (c)2000 Novell Consulting
;; *** Add the following to STARTUP.NCF ***
; set maximum packet receive buffers = 25000
; set minimum packet receive buffers = 10000
; set maximum physical receive packet size = 2048
; set reserved buffers below 16 meg = 300
; set filter local loopback packets = off
; set shutdown public interface on log failure = off
; set enable disk read after write verify = off
;; Load nlslsp before initsys.ncf to speed services
;; load up.
;; Increase conlog maximum size to 1000k
;; *** Conditional - Add this load statement after
;; initsys.ncf if there is no UNIX Service Handler
;; object ***
; load netdb /n
;; Kernel settings
set maximum interrupt events = 50
set maximum service processes = 1000
set minimum service processes = 500
set new service process wait time = 0.3 sec
set worker thread execute in a row count = 15
set pseudo preemption count = 200
;; File system settings
set enable hardware write back = on
set immediate purge of deleted files = on
set enable file compression = off
set maximum file locks = 100000
;; Memory/cache settings
set garbage collection interval = 5 min
set directory cache allocation wait time = 0.1 sec
set directory cache buffer nonreferenced delay = 60 min
set maximum number of internal directory handles = 500
set maximum directory cache buffers = 10000
set minimum directory cache buffers = 5000
set maximum concurrent directory cache writes = 125
set maximum concurrent disk cache writes = 750
set dirty disk cache delay time = 0.1 sec
;; Communication settings
set tcp defend syn attacks = on
set new packet receive buffer wait time = 0.1 sec
set maximum pending tcp connection requests = 1024
set tcp ip maximum small ecbs = 65534
;; Conditional settings
set reply to get nearest server = off
set nat dynamic mode to pass thru = on
;set timesync debug = 7
;; Load services as follows:
;; aclcheck /s /g /b0
;; Settings in NWAdmin:
;; maximum hot unreferenced time = 60 (match "directory
;; cache buffer nonreferenced delay" setting)
;; cache hash table size = 256k
;; maximum number of hot nodes = 50000
;; number of cache directories = 128
;; dns transport protocol = udp
The purpose of these set parameters is discussed in TIDs 2949807 and 10012765. The descriptions in those TIDs will help you determine what might need adjustment if observation suggests further tuning.
Although BorderManager is a server-centric product, it uses eDirectory for configuration details and authentication. Almost all application layer configuration is done on the server objects. Therefore, placement of the BM server objects in the tree is important, as well as deciding which replicas the servers will hold.
A BorderManager server should be placed near its users in the eDirectory tree. In a branch office, for example, the BM server should be at the OU for that geographical location. The key is to minimize tree walking and the number of replicas the server needs. If practical, it should hold all of its users and rules in a local or nearby replica. As with any eDirectory design, administrative roles must also be accommodated. Rule placement is discussed in the next section.
Access Rule Development and Placement
There are two approaches to rule creation, and to firewalling in general: "allow all and deny by exception," or "deny all and allow by exception." The former allows the greatest flexibility for users and minimizes administration time. The latter is impractical for most organizations because too much time would be spent processing requests for access to required web sites.
To simplify implementation and ongoing administration of the proxy/cache system, access rules should be defined by "functional role". This is the same principle used in ZENworks to define policies. A functional role is a profile of a group of users with similar needs. The fewer functional roles that are defined, the simpler the management.
List the functional roles based on work center mission, application requirements, etc. Then determine what access rules need to apply to which functional roles. If the functional roles follow existing eDirectory tree containerization, rules may be assigned to containers. If not, new eDirectory groups can be created for access rules to be applied to.
To further simplify administration in an environment with many different functional roles, match functional roles for proxy/cache access rules with functional roles for ZEN policies whenever possible. That is, while defining functional roles, determine if desktop and application needs can be matched with Internet access needs. This can make it much easier for the same staff to administer both products.
Access rules apply to all users who attempt to go through the BM server. A default rule of DENY ALL is placed on the server object, and gets pushed down (stays at the end of the list) as new rules are created.
Unlike ZENworks policies, BorderManager rules are not user- or container-centric; they are server-centric. Each time a BM server comes online, it searches the tree for rules, from the server object up to the root. It searches only upward, never across sibling O or OU containers. Each rule found is placed in the server's ACL, ordered as follows:
- rules created at the server object
- rules created at the parent container of the server object
- rules created at other up line containers
- rules created at the O
Each user request is checked against the ACL to determine if a rule exists that applies to that request. If an applicable rule is found, the search stops. Therefore, server-based rules take precedence. For example:
O=C&W (rule=DENY ALL)
|______OU=CW_NY (rule=DENY ALL)
|______OU=HQ (rule=ALLOW ALL, contains BMFS_Server)
If users exist in all three containers, you might think that only users in the HQ container would have access through BMFS_Server. But since the server looks at rules from the server object up, it would find an applicable rule for all requests at OU=HQ (rule=ALLOW ALL). So users in all three containers have full access.
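The bottom-up search and first-match semantics can be sketched as follows. This is a simplified model of the behavior described above, not the ACLCheck implementation; contexts are represented as dotted name lists for illustration:

```python
def build_acl(rules_at, server_path):
    """Collect rules walking from the server object up to the root,
    in the order BorderManager places them in the server's ACL."""
    acl = []
    path = list(server_path)   # e.g. ["BMFS_Server", "OU=HQ", "O=C&W"]
    while path:
        acl.extend(rules_at.get(".".join(path), []))
        path = path[1:]        # step up to the parent container
    return acl

def check_request(acl, request):
    """The first applicable rule wins; no match means implicit deny."""
    for rule in acl:
        if rule["applies_to"](request):
            return rule["action"]
    return "DENY"
```

Applied to the C&W example, the ALLOW ALL rule at OU=HQ lands in the ACL ahead of the DENY ALL rule at O=C&W, so every request is allowed, exactly the surprise described above.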
Rules implemented at a container can eliminate duplication because they apply to all subordinate BM servers. They can also facilitate administrative roles. Therefore, for functional roles that exist throughout your internetwork, the rules may be created once, high in the tree. Since ACLCheck can be configured to look for changes just once every twenty-four hours, the tree-walking traffic this generates is too small to outweigh the benefits of this simplified administration model.
Access rules that utilize CyberPatrol can only be placed on server objects. However, when the same rules do need to be placed at multiple locations in the tree, they can be copy-pasted.
ACLCheck and Group Membership
ACLCheck walks the tree to do group membership checks. By default ACLCheck reads groups used in access rules once every hour. If a change is found, it will then re-read the rule. The /B switch allows you to change how often ACLCheck will test for changes in group membership. You may only want ACLCheck to do this every few hours to reduce the number of times per day that the rules are re-read. When using this switch, suffix it with the number of hours ACLCheck will wait before testing for group membership changes. E.g., "/B0" will disable ACLCheck's regular testing for group membership changes, and "/B2" will cause it to test group membership once every 2 hours.
Strongly recommended: The /G switch enables smart group change detection. It requires DS.NLM 7.44 or later on all servers that hold replicas of the users' partitions. By default, when checking for changes to group membership, ACLCheck completely reads all the group memberships from every group that is referenced in each access rule. Then, if a group's membership has changed, it must re-read the rule, which causes ACLCheck to again walk the tree, find the group, read every member, then go to the next group. With the new DS and the /G switch, ACLCheck just checks the timestamp on the group object to know if it has changed, eliminating a huge amount of DS traffic.
To avoid "DNS lookup failure" console messages, load ACLCHECK with the "/s" switch before the BRDSRV line in the autoexec. The error is informative only, and is generated when the proxy cannot do a reverse lookup on an IP address. Many domains are not set up to allow reverse lookups, so this can be a frequent "error."
BorderManager supports transparent proxy, which requires no browser configuration. A transparent proxy works by sitting in the default routing path of a network and intercepting HTTP traffic. This is in contrast to the HTTP application proxy, which requires browsers to be configured with the proxy server's IP address or host name and processes standard proxy API calls.
As long as they are using a current browser (typically 2.x or newer) that supports authentication, any client workstation can utilize the proxy/cache system, regardless of operating system or NOS clients that may or may not be installed. However, NetWare Client 3.0 or better is required to support SSO. To enable SSO, thereby preventing users from having to log in to their browsers on startup, CLTRUST.EXE needs to be running on the workstation. This can be added to login scripts or deployed via a ZEN force run.
eDirectory Objects and Access Rules
There are several factors to consider when deciding where to place BM server objects, functional role group objects, and access rules. These include:
- The need to minimize tree walking and synchronization traffic.
- The requirement that each BM server hold a replica of the partition containing its own object.
- The requirement that each BM server have fault-tolerant access to [root].
- The logical order in which rules will be executed.
- Whether administration of the BM servers will be centralized or decentralized.
- Whether administration of access rules will be centralized or decentralized.
- Whether administration of group membership will be centralized or decentralized.
- Whether some rules will apply to many or all BorderManager servers.
- Whether rules will be defined using CyberPatrol categories.
Placement of Server Objects
Remember that each BM server should have a replica of the partition containing itself. It should also have quick (either local or physically nearby) access to user and group objects.
ACLCheck requires access to the [root] partition when reading rules. If it cannot reach [root], the result is an effective "deny all". To guard against a core site losing contact with the rest of the network, keep a local replica of [root]. Where this is not possible (do not create excessive copies of the [root] partition), administrators must understand that if ACLCheck tries to read rules and compile an ACL while the server is cut off from the tree, all access will be denied. If this happens, a temporary workaround is to deselect "Enforce Access Rules" on the BorderManager Setup page; this must be re-checked when the WAN link is restored.
Placement of Group Objects
Administration of group membership will be decentralized wherever possible (where regional or local administrators are sufficiently trained). This, combined with the desire to reduce tree walking by ACLCheck and ZENworks, compels the creation of functional role group objects as close to the users as possible. In many cases, this will be in the partition for the regional core site where the BorderManager server is physically located. In other cases, the group objects may be further down in the tree to allow more localized administration.
Here are four common functional roles:
- Guest - Full access with generic ID.
- Level 1 - Full access.
- Level 2 - Inappropriate sites blocked, but multimedia allowed.
- Level 3 - Inappropriate sites and all multimedia blocked.
The Guest role is for visitors who require Internet access with minimal trouble. They will be accommodated by creating a generic user object in each container with a BorderManager server. The user name and password will be distributed to a minimum number of staff, with the password changing periodically. The Guest user objects will be treated as Level 1 users for rule purposes. Guests may use this account to log in to the proxy service via SSL, which does not require the NetWare Client or any other client-side configuration. The context of the local guest user object must be listed in the authentication configuration of each server so that the user does not need to type the full distinguished name of the user object.
In this example, we would recommend that group objects be created for Level 1 and Level 2 users called BM_LEVEL1 and BM_LEVEL2, respectively. Groups are not necessary for Level 3 access, as this will be the default access. The Level 1 and Level 2 groups will be used to define exceptions to the default.
If you use CyberPatrol as a key piece of your access control strategy, there will be only a handful of access rules that can be placed on containers. The majority of rules (and the most complex ones) must be placed on server objects.
The default access rule posture should be allow all and deny by exception. As such, a rule is needed to "undo" the default "deny all" rule created during product installation. An additional "deny all" is recommended at the bottom of the access rule lists, with logging enabled (the default deny rule does not have logging enabled). This can prove useful during troubleshooting. So there will be two unchanging rules that apply to all servers. These may be placed at O=CW so they do not need to be duplicated throughout the tree. All other rules will be placed on server objects.
Recommended initial access rules are shown here. Following is a description of each, from top to bottom (the order in which they are executed). Remember that when a rule is found that applies to a user request, rule execution stops.
Recommended initial access rules
- Allows Level 1 users access to any destination. For Level 1 users, rule execution will always stop at this point.
- Allows Level 2 and 3 users access to a specified list of URLs (to negate a CyberNOT listing that you actually want Level 2 and 3 users to reach).
- Allows Level 2 users access to a specified list of URLs (to negate a CyberNOT listing that you want Level 2 users, but not Level 3 users, to reach).
- Denies Level 2 and 3 users access to a specified list of URLs (to block sites not on the CyberNOT list).
- Allows Level 2 users access to sites on the Sports & Leisure list in CyberPatrol.
- Denies access by Level 2 and 3 users to sites in selected categories on the CyberNOT list.
- Allows all users access to any destination. Rule execution should never go past this point.
- Duplicates the default "deny all," but with logging enabled.
- The default rule, which cannot be deleted.
The first six rules are on the server object. These can be easily copy-pasted to other servers as they come online. The last three are on the O=CW container.
You should introduce authenticated proxy/cache and access control in a controlled manner. Packet filtering should initially be configured to allow all traffic from PRIVATE to PUBLIC and will be fine-tuned later. This provides a quick rollback path because only one change is made to existing equipment: the default route on the core router is changed to the IP address of the local BM server's private interface.
Once each server is installed and configured as specified in the next section, and Client Trust is deployed via login script, a test group of users should be redirected to the BM server as their default route (as in the pilot) to verify the installation and evaluate server performance and stability. Once these things have been verified to the satisfaction of staff, all users can be redirected through the BM server by the core router.
Each server will be installed into the SERVICES container in the appropriate geographic OU in the tree, regardless of whether the site is currently in a separate operational eDirectory tree. For sites not yet in the tree, there are two options:
1) have workstations authenticate to two trees until they are merged in a later project, or
2) create a temporary, generic user for the site to use via SSL login until their tree is merged in.
The latter will delay the full deployment of access rules, as all users will be subject to the same restrictions.
A Guest user object should be created in the SERVICES container to accommodate visitors. For each core site, evaluate the level at which group membership will be administered. Place as few functional role group objects as practical, as high in the tree as practical; in other words, despite the decentralized administration model, favor simplicity.
CLNTRUST.EXE will be copied to the public directories of every login server. Except where SSL login will be temporarily used pending a tree merge, login scripts will be modified to run Client Trust on login.
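As a sketch, the login-script change might look like the following. The server and volume path are placeholders for your environment; in NetWare login-script syntax, a leading @ launches an external program without making the script wait for it to exit (a leading # would wait instead).

```
REM Launch Client Trust so BorderManager can single-sign-on
REM this workstation. Path below is a placeholder; adjust the
REM server/volume to match your environment.
@\\YOURSERVER\SYS\PUBLIC\CLNTRUST.EXE
```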
Fault Tolerance Options
A variety of fault tolerance mechanisms are available, and can be used in combination to fit your needs.
Bypass Failed Device
This can be accomplished by changing the default route for the private network back to the original router address, as discussed earlier. It may also be accomplished by having a pre-configured router in "hot standby" so that patch cords can be manually moved over from the BM server.
Either solution requires human intervention. The latter, at least, does not require infrastructure personnel to reprogram the core router. Any staff member can be shown how to move the patch cables. Either solution will mean the loss of proxy services, and thereby access control. Note that the device that replaces the BorderManager server must allow through all ports that the transparent HTTP proxy was listening on (in this case, 80, 81, and 82); users will be going directly to the public Internet, as they did before BorderManager was deployed.
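If the standby device is a Cisco-style router, the requirement above could be expressed as an access list similar to the following sketch. Only the three port numbers come from the text; the ACL number, direction, and addressing are hypothetical and would need to match your network.

```
! Permit outbound TCP traffic on the three ports the transparent
! HTTP proxy previously intercepted (80, 81, 82).
! ACL number and any/any addressing are placeholders.
access-list 110 permit tcp any any eq 80
access-list 110 permit tcp any any eq 81
access-list 110 permit tcp any any eq 82
```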
Backup Proxy Server
Similar to the previous model, this solution requires a backup proxy server in parallel. It may normally be in use for other services; reduced proxy performance is better than no Internet access. If the primary fails, clients could be redirected by a default route change on the core router. A better solution, however, is to have the primary proxy server's IP address configured as the private address of the backup server in NWAdmin. When the backup proxy is invoked, execute an NCF file that first adds the primary server's address as a secondary IP address, then loads the proxy service. This solution also requires human intervention, but maintains access control. Clients using SSL authentication will need to log in to the backup server when it comes up.
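The NCF file described above might, as a sketch, look like this. The file name and IP address are placeholders (the address stands in for the primary proxy's private interface), and the proxy load line should mirror whatever switches the primary server uses.

```
# BKUPPRXY.NCF (hypothetical name) - invoke the backup proxy.
# First claim the failed primary's address as a secondary IP...
ADD SECONDARY IPADDRESS 10.1.1.10
# ...then start the proxy service so clients pointed at the
# primary's address are served by this server.
LOAD PROXY
```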
Novell Cluster Services
Novell Cluster Services works well with BorderManager proxy/cache servers. The result is essentially an automation of the previously described solution. The second server, however, would need to be dedicated to this function, and should not be used for file, print, or other services.
Load Balancing Via Layer 4 Switch
This is the most elegant of the fault tolerance solutions, but is also the most expensive. Two parallel proxy servers are employed, as before. However, Layer 4 switches are capable of following client dialogues with the proxy servers, and can intelligently direct object requests to the server most likely to have the object in cache. Throughput is also used in switching decisions to achieve true load balancing, as well as effective use of the caches.
If one proxy/cache server goes down, this solution also offers a true automated "fail over" in the sense that all traffic then gets directed through the remaining server.
The utilization of Novell's extremely fast, award-winning caching solutions, with directory-enabled access control, will significantly relieve the IP traffic burden on your Internet links. This solution builds on your investment in Novell eDirectory and will extend the usable lifetime of existing data circuits.
Novell Cool Solutions (corporate web communities) are produced by WebWise Solutions. www.webwiseone.com