This section contains examples of how you can design various applications of the Novell BorderManager Proxy Services.
It describes the three primary ways to use proxy caching:
Web client acceleration (standard proxy cache)
Web server acceleration (reverse proxy cache acceleration or HTTP acceleration)
Network acceleration (ICP hierarchical caching)
This section also provides several examples of how you can use caching. In these examples, Company A implements several proxy cache solutions to enhance its enterprise network: client acceleration, server acceleration, and network acceleration. For each type of caching, examples are given for both intranet and Internet use.
In Web client acceleration, the proxy server is located between clients and the Internet, as shown in Figure 5-1. The proxy server intercepts requests from clients for Web pages and supplies the requested pages to the client, if cached, at LAN speed. This eliminates the delay that occurs when the origin Web site is accessed and minimizes the traffic between the corporate network and the Internet.
The proxy server makes requests to Web servers for the intranet clients, using appropriate protocols such as HTTP, FTP, and Gopher. The proxy server caches URLs, HTML pages, and FTP files to accelerate subsequent requests to the same objects.
Figure 5-1 Client Accelerator Configuration
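The following minimal sketch (in Python) illustrates the caching behavior just described: the first request for an object goes to the origin Web server, and subsequent requests for the same URL are answered from the local cache. The in-memory dictionary, the absence of expiry handling, and the example URL are illustrative simplifications, not the Proxy Services implementation.

# Minimal sketch of forward-proxy caching logic: repeated requests for the
# same URL are served from a local cache instead of the origin Web server.
import urllib.request

cache = {}  # URL -> response body

def fetch_through_cache(url):
    if url in cache:
        return cache[url]                        # cache hit: served locally
    with urllib.request.urlopen(url) as response:
        body = response.read()                   # cache miss: go to the origin
    cache[url] = body                            # store for subsequent requests
    return body

# The first call contacts the origin; the second is answered from the cache.
page = fetch_through_cache("http://www.example.com/")
page_again = fetch_through_cache("http://www.example.com/")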
When planning the implementation of proxy servers and caching on your network, you must identify which sites would benefit from caching. Look for the following when identifying client acceleration sites:
Sites with multiple clients
In almost all cases, operating these clients through a proxy server dramatically improves performance. This is especially true if groups of employees are accessing the same Internet or intranet Web sites, thereby increasing the probability of cache hits. Caching also uses resources more efficiently, including intranet Web servers.
Sites that require control over client access to the Internet
You can control access to the Internet by establishing easy-to-understand access control rules, as illustrated in the sketch following this list.
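The following sketch (in Python) shows the kind of first-match rule evaluation such a policy implies. The rule entries, the matching logic, and the site names are hypothetical and are not Novell BorderManager rule syntax; they only illustrate how easy-to-understand rules can express an access policy.

# Illustrative first-match evaluation of client access control rules.
from urllib.parse import urlparse

RULES = [
    ("deny",  "games.example.com"),   # hypothetical blocked site
    ("allow", "*"),                   # default rule: allow everything else
]

def is_allowed(url):
    host = urlparse(url).hostname or ""
    for action, pattern in RULES:     # the first matching rule wins
        if pattern == "*" or host == pattern or host.endswith("." + pattern):
            return action == "allow"
    return False                      # no rule matched: deny by default

print(is_allowed("http://games.example.com/arcade"))   # False
print(is_allowed("http://www.example.com/docs"))       # True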
For example, suppose that Company A wants to give its employees access to the wealth of information available on the Internet. However, the company also wants to restrict access only to those Internet Web sites that contribute to the workplace. This results in two requirements:
Restrict Internet access
The company must apply a consistent and manageable Internet access policy to all employees, for example, to deny access to certain Web sites. Because many employees travel extensively, access control must be implemented globally, regardless of the employee location.
Accelerate Web access
The company must reduce the time employees spend waiting for Web pages to load while still giving them full access to the information they need.
To meet both requirements, Company A implements proxy servers as client accelerators in all of its facilities, using access control list rules established in NDS or eDirectory by the network administrator. All employee Web browsers are configured to operate through proxy servers. The proxy cache servers greatly accelerate Web page loading and permit control over Internet access. This configuration is shown in Figure 5-2.
Figure 5-2 Internet Client Acceleration Configuration Example
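The client side of this configuration can be pictured with the short Python sketch below: the browser (here, a small script standing in for one) sends its requests to the proxy rather than directly to the origin site, and the proxy applies the access rules and serves cached pages at LAN speed. The proxy host name and port are hypothetical.

# A client configured to operate through the proxy server.
import urllib.request

proxy = urllib.request.ProxyHandler(
    {"http": "http://proxy.companya.example.com:8080"})   # hypothetical proxy
opener = urllib.request.build_opener(proxy)

# The request is sent to the proxy, which enforces the access control rules
# and answers from its cache when it can.
page = opener.open("http://www.example.com/").read()
print(len(page), "bytes received through the proxy")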
Various groups within Company A have published extensively on internal Web sites. Some of the published information is company public, or accessible by all employees. Other information is privileged, or accessible only by employees who have a need to know. For example, some advanced development information is available only to certain engineering or management groups. The published information is spread across a large number of internal Web sites.
Company A has two requirements for intranet Web site access by employees:
Restrict access to privileged information
A set of rules must be implemented to specify which intranet Web information can be accessed. Because many employees travel extensively, access control must be implemented globally, independent of employee location. Otherwise, access management becomes complex.
Reduce network load
The Company A intranet consists of a number of sites that are interconnected by WAN links. Because of the high bandwidth requirements of Web access, particularly for sites with extensive graphics, the transfer of Web pages over WAN links must be minimized.
Because of the flexibility of Proxy Services, the same proxy servers used to restrict Internet Web access in the first example, Web Client Acceleration (Standard Proxy Cache), can also be used to restrict access to intranet Web sites. In addition to storing the Internet access restrictions in eDirectory, the administrator stores intranet Web access rules. This approach gives the administrator centralized and global control of both Internet and intranet access from a single point, greatly simplifying access management.
With Web server, or HTTP, acceleration, the proxy server acts as a front end to one or more Web servers and caches all information that belongs to the Web server, as shown in Figure 5-3. When a client requests information from a Web server, the request is diverted to the proxy server. The proxy server supplies the cached pages to the client at high speed. This method accelerates access and takes the request load off the publishing Web servers, allowing them to handle publishing and dynamic content more efficiently.
Proxy Services can provide acceleration for all popular Web servers in any combination.
Figure 5-3 Web Server Accelerator Configuration
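A minimal sketch of this front-end arrangement, written in Python, is shown below: the accelerator listens on the published address, answers repeat requests from its cache, and forwards misses to the origin Web server. The origin host name, the listening port, and the unbounded in-memory cache are illustrative assumptions, not the Proxy Services implementation.

# Minimal sketch of an HTTP accelerator (reverse proxy cache).
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

ORIGIN = "http://origin.internal.example.com"   # hypothetical back-end Web server
cache = {}                                      # path -> response body

class Accelerator(BaseHTTPRequestHandler):
    def do_GET(self):
        body = cache.get(self.path)
        if body is None:                        # miss: fetch from the origin server
            with urllib.request.urlopen(ORIGIN + self.path) as response:
                body = response.read()
            cache[self.path] = body             # keep it for the next visitor
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                  # return the object to the client

if __name__ == "__main__":
    HTTPServer(("", 8080), Accelerator).serve_forever()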
When planning the implementation of proxy servers and caching on your network, you must identify which sites would benefit from caching. Look for the following when identifying server acceleration sites:
Internet or intranet Web servers that have a high level of usage
You can improve performance and capacity significantly by using proxy servers. (Refer to Internet Server Acceleration Example.)
Intranet Web servers that contain both company public and company privileged information
Representing these Web servers on the network with a proxy server simplifies access management, tightens security, and increases performance. (Refer to Intranet Server Acceleration Example.)
Sites with a variety of Internet or intranet Web server platforms
Representing these Web servers on the network with a proxy server consolidates and centralizes access management, tightens security, and increases performance. (Refer to Intranet Server Acceleration Example.)
The public Web site of Company A, http://www.ACo.com, receives millions of hits daily from a worldwide audience. The site was previously serviced by multiple Web servers. Recently, the company set up several proxy servers to serve as front ends to the Web servers, as shown in Figure 5-4. This approach provides three important benefits:
Increased capacity
Company A can expand the content of its Web site and accommodate more site visitors without upgrading hardware. In fact, each proxy server (an economical 200-MHz Pentium* Pro machine with 128 MB of RAM and a 16-GB disk) can handle approximately 250 million hits every 24 hours. Estimating conservatively that only 50 percent of all hits are served from cache, each proxy server can absorb 125 million hits every 24 hours, as restated in the short calculation after Figure 5-4. This greatly reduces the load on the Web servers and might reduce the number of Web servers required as well. In this example, one proxy server is sufficient to handle the load for the foreseeable future. However, Company A installed multiple proxy servers for a fault-tolerant solution.
Increased performance
Because of the significant performance boost provided by caching, Web site visitors download pages faster, making their experience more satisfying.
Enhanced security
The proxy servers isolate the Company A Web servers from the Internet, protecting them against unauthorized access.
Figure 5-4 Internet Server Accelerator Configuration Example
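As a quick check of the capacity figures quoted above, the arithmetic can be restated in a few lines of Python; the numbers are taken directly from the example, and the per-second rate is derived from them.

# Back-of-the-envelope restatement of the capacity estimate.
capacity_per_day = 250_000_000        # hits one proxy server can serve per 24 hours
cache_hit_ratio  = 0.50               # conservative estimate used in the example

offloaded_per_day = int(capacity_per_day * cache_hit_ratio)
print(offloaded_per_day)                          # 125000000 hits per 24 hours
print(round(offloaded_per_day / (24 * 60 * 60)))  # roughly 1447 hits per second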
Many of the groups in Company A publish information on internal Web servers on the company’s intranet. These servers are scattered around the world and are accessed by employees who are also located around the world. Unlike the information on the public Internet Web site of Company A, much of the information published internally is sensitive and access to it must be restricted. The situation is further complicated because the information resides on a variety of Web server platforms, including NetWare, UNIX Apache, Netscape, and NCSA*, which makes access management complex and difficult.
Company A solved the problem by using proxy servers at each site as front ends to its intranet Web servers. For example, at its headquarters, the company installed 10 proxy servers as front ends to the 50 intranet Web servers at that site, as shown in Figure 5-5. Access control was transferred from the Web servers to the proxy servers. This approach results in the following benefits:
Effective and consistent security
The proxy servers isolate the Web servers from the network, increasing their resistance to unauthorized access. By moving security to the proxy servers, Company A can provide the same strong security across all Web servers, regardless of the Web server platform. This makes the security policy easy to implement.
Centralized and simplified access control
Access control is implemented through access control rules stored in eDirectory. As a result, an administrator can manage security for all servers from a single point, regardless of the server platform. This greatly simplifies access management. In addition, because access control is implemented through NDS or eDirectory, it is independent of employee location, and the same access control rules are applied no matter where a user logs in. The access control list used for server access control in this example is synchronized with the access control list used for client access control in Web Client Acceleration (Standard Proxy Cache), ensuring uniform access control in both client and server acceleration within the intranet.
Increased performance
The proxy servers increase the speed of Web page access. Employees receive the information they need faster, becoming more productive.
Figure 5-5 Intranet Server Accelerator Configuration Example
With network acceleration, or ICP hierarchical caching, multiple proxy servers are configured in a hierarchical, or mesh, topology, as shown in Figure 5-6. The proxy servers are connected in a parent, child, or peer relationship. When a miss occurs, the proxy contacts the other servers in the mesh to find the requested cached information. The nearest proxy cache that has the requested information forwards it to the requesting proxy server, which in turn forwards it to the requesting client.
ICP hierarchical caching reduces the WAN traffic load and conserves valuable bandwidth. In addition, because the requested information is sent from the nearest proxy server, network delays are minimized. This reduces user wait times and increases user productivity.
Figure 5-6 Network Accelerator Configuration
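The following Python sketch outlines the exchange a proxy server performs on a cache miss: it sends an ICP query over UDP to its configured neighbors and retrieves the object from the nearest neighbor that answers with a hit. The packet layout follows ICP version 2 as published in RFC 2186; the neighbor addresses, the request number, and the timeout are illustrative assumptions rather than the Proxy Services implementation.

# Sketch of an ICP (Internet Cache Protocol) query to neighbor caches.
import socket
import struct

ICP_OP_QUERY, ICP_OP_HIT, ICP_OP_MISS = 1, 2, 3
ICP_PORT = 3130                                   # conventional ICP UDP port

def build_icp_query(request_number, url):
    # 20-byte header: opcode, version, message length, request number,
    # options, option data, sender host address; then the query payload.
    payload = struct.pack("!I", 0) + url.encode() + b"\x00"
    length = 20 + len(payload)
    header = struct.pack("!BBHIIII", ICP_OP_QUERY, 2, length,
                         request_number, 0, 0, 0)
    return header + payload

def find_nearest_hit(url, neighbors):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    for host in neighbors:                        # ask parents and peers in the mesh
        sock.sendto(build_icp_query(1, url), (host, ICP_PORT))
    try:
        while True:
            data, peer = sock.recvfrom(4096)
            if data[0] == ICP_OP_HIT:             # nearest cache holding the object
                return peer                       # fetch the object from this neighbor
    except socket.timeout:
        return None                               # every neighbor missed: go to the origin

nearest = find_nearest_hit("http://www.example.com/report.html",
                           ["paris.proxy.example.com", "london.proxy.example.com"])
print(nearest)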
When planning the implementation of proxy servers and caching on your network, you must identify which sites would benefit from caching. Look for the following when identifying network acceleration sites:
Sites with slow links to the Internet
Use hierarchical caching to deliver information to clients at either LAN speeds or over high-speed intranet WAN links.
Sites with multiple LANs
Use multiple proxy servers to partition LAN traffic. For example, you can install a proxy server in each building. This approach reduces backbone traffic, that is, the traffic between the LANs. It also uses your resources more efficiently, accommodates more usage over the same backbone, and speeds up backbones that have become sluggish because of increased traffic.
Sites with congestion and delay problems at LAN connection points within WANs
A hierarchical mesh of proxy servers can increase the available bandwidth of your WAN by reducing WAN traffic. For example, a company has three sites: a field sales office, a regional office, and corporate headquarters. If a client at the field sales office needs a Web page from corporate headquarters and the page is not cached anywhere, the request travels over two WAN links: from the field office to the regional office, and from the regional office to headquarters. However, if the page is cached at the regional office, the request uses only one WAN link, thereby reducing traffic on the other link.
Company A is a large organization with worldwide facilities. As a result, employees and Web servers are widely scattered. Employees must have easy and fast access to internal Web information, regardless of their location or the location of the target Web server. In addition, because of the high cost of network equipment and the even higher cost of managing it, the company must obtain the highest utilization possible from its network resources.
Company A implemented a hierarchical mesh of proxy servers, as shown in Figure 5-7. Hierarchical caching reduces the load on Web servers and reduces WAN traffic by allowing clients to access cached intranet Web information from the closest proxy server.
For example, a Los Angeles-based employee might be in Paris and need to access information from a Web site in Los Angeles. Although no one at the Paris office has recently accessed that information, an employee in the London office has, and the information is cached on the proxy server in London. Instead of routing the client’s request all the way to Los Angeles, the proxy server in Paris can access the information from the proxy server in London. This reduces network delay and eliminates slower, more expensive transatlantic traffic on the network.
Figure 5-7 Intranet Network Accelerator Configuration Example
Just as a hierarchical mesh of proxy servers can be used to accelerate intranet performance, it can be used on a much larger scale to accelerate Internet performance. The National Laboratory for Applied Network Research (NLANR) is working on such a project.
According to a recent NLANR report, the Internet's sustained explosive growth calls for an architected solution to the problem of scalable wide-area information dissemination. While increasing network bandwidth helps, the rapidly growing population of users will continue to outstrip network and server capacity as they attempt to access widely popular pools of data throughout the network. The need for more efficient bandwidth and server utilization transcends any single protocol such as FTP, HTTP, or whatever next becomes popular.
The basic Internet client-server model (in which clients connect directly to servers) is wasteful of resources, especially for highly popular information. There are many examples in which server systems have not been able to cope with the demands placed upon them for popular information.
This section contains examples of FTP proxy, FTP reverse proxy, Mail (SMTP) proxy, and DNS proxy applications, as well as an example of using SOCKS.
Figure 5-8 shows an example of FTP acceleration using a Novell BorderManager proxy server on the firewall. The browser client can access the FTP server through the proxy server.
Figure 5-8 FTP Acceleration
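From the browser's point of view, an FTP URL fetched through the proxy is an ordinary HTTP request whose request line carries the ftp:// URL; the proxy performs the FTP transfer on the client's behalf. The short Python sketch below illustrates this; the proxy address and the FTP URL are hypothetical.

# Retrieving an FTP URL through the HTTP proxy, as a browser would.
import socket

PROXY = ("proxy.example.com", 8080)               # hypothetical proxy address
URL = "ftp://ftp.example.com/pub/readme.txt"      # hypothetical FTP object

with socket.create_connection(PROXY) as sock:
    request = "GET %s HTTP/1.0\r\nHost: ftp.example.com\r\n\r\n" % URL
    sock.sendall(request.encode())
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk

# The proxy answers with an HTTP response wrapping the FTP file contents.
print(response.split(b"\r\n\r\n", 1)[0].decode(errors="replace"))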
Figure 5-9 shows an example of FTP reverse acceleration. In this example, the client accesses the two FTP servers on the intranet through the Novell BorderManager proxy server on the firewall.
Figure 5-9 FTP Reverse Acceleration
The following two figures show two examples of using the Novell BorderManager Mail proxy to connect to an external mail server. Figure 5-10 shows an example of a small company without an internal mail server. The Novell BorderManager proxy server acts as a mail server, handling all SMTP and POP3 requests from the intranet and exchanging the corresponding mail with the external mail server on the Internet. Figure 5-11 shows a larger company with its own internal mail server. The internal mail server uses the Novell BorderManager proxy server to exchange mail with outside or public mail servers.
Figure 5-10 Mail Proxy without an Internal Mail Server
Figure 5-11 Mail Proxy with an Internal Mail Server
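In the first scenario, intranet mail clients simply point their outgoing (SMTP) and incoming (POP3) server settings at the proxy. The Python sketch below shows that client side; the proxy host name, account, and addresses are hypothetical.

# Client view of the Mail proxy: the proxy is used as both SMTP and POP3 server.
import smtplib
import poplib

MAIL_PROXY = "mailproxy.example.com"              # hypothetical proxy address

# Outbound mail: the proxy accepts the SMTP submission and relays it to the
# external mail server on the Internet.
with smtplib.SMTP(MAIL_PROXY, 25) as smtp:
    smtp.sendmail("user@example.com", ["partner@external.example.org"],
                  "Subject: status\r\n\r\nWeekly report attached.\r\n")

# Inbound mail: the proxy answers POP3 requests with mail it exchanged with
# the external server.
pop = poplib.POP3(MAIL_PROXY, 110)
pop.user("user")
pop.pass_("secret")
message_count, mailbox_size = pop.stat()
print(message_count, "messages waiting")
pop.quit()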
Figure 5-12 shows an example of using a DNS proxy. The Novell BorderManager proxy server configured for a DNS proxy handles traffic between the internal DNS name server and the DNS name server on the Internet.
Figure 5-12 DNS Proxy
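The essence of a DNS proxy is a forwarder: it accepts queries from internal clients and relays them to a name server on the other side, returning the answers. The following Python sketch shows that relay loop in its simplest form; the upstream server address is hypothetical, a real DNS proxy would also cache answers, and binding to port 53 typically requires administrative privileges.

# Minimal sketch of a DNS forwarder (the core of a DNS proxy).
import socket

UPSTREAM = ("198.51.100.53", 53)          # hypothetical Internet DNS server

listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("0.0.0.0", 53))            # where internal clients send their queries

while True:
    query, client = listener.recvfrom(512)
    upstream = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    upstream.settimeout(3.0)
    upstream.sendto(query, UPSTREAM)      # relay the query unchanged
    try:
        answer, _ = upstream.recvfrom(512)
        listener.sendto(answer, client)   # hand the answer back to the client
    except socket.timeout:
        pass                              # drop the query if the upstream is silent
    finally:
        upstream.close()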
Figure 5-13 shows an example of using a Novell BorderManager server behind an existing SOCKS firewall.
Figure 5-13 Novell BorderManager Server behind a SOCKS Firewall
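When it sits behind a SOCKS firewall, the Novell BorderManager server reaches Internet hosts as a SOCKS client: it asks the SOCKS server to open the outbound connection on its behalf. The Python sketch below walks through a SOCKS4 CONNECT handshake to illustrate the idea; the SOCKS server address, the destination address, and the user ID are hypothetical.

# Sketch of a SOCKS4 CONNECT handshake through a SOCKS firewall.
import socket
import struct

SOCKS_FIREWALL = ("socks.example.com", 1080)      # hypothetical SOCKS server
DEST_IP, DEST_PORT = "93.184.216.34", 80          # illustrative destination host

sock = socket.create_connection(SOCKS_FIREWALL)

# SOCKS4 request: version 4, command 1 (CONNECT), destination port and IP,
# followed by a null-terminated user ID.
request = struct.pack("!BBH4s", 4, 1, DEST_PORT,
                      socket.inet_aton(DEST_IP)) + b"bmgr\x00"
sock.sendall(request)

reply = sock.recv(8)                              # 8-byte SOCKS4 reply
if reply[1] == 90:                                # 90 means request granted
    sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(sock.recv(200).decode(errors="replace"))
else:
    print("SOCKS request rejected, code", reply[1])
sock.close()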