2.1 Hardware Requirements

2.1.1 Sentinel Log Manager Server

Novell Sentinel Log Manager is supported on 64-bit Intel Xeon and AMD Opteron processors, but is not supported on Itanium processors.

The following table lists the recommended hardware requirements for a production system that holds 90 days of online data. The recommendations assume an average event size of 300 bytes.

Table 2-1 Sentinel Log Manager Hardware Requirements

Sentinel Log Manager (500 EPS)

  • Compression: Up to 10:1
  • Maximum Event Sources: Up to 1000
  • Maximum Event Rate: 500
  • CPU: One Intel Xeon E5450 3-GHz (4 core) CPU, or two Intel Xeon L5240 3-GHz (2 core) CPUs (4 cores total)
  • Random Access Memory (RAM): 4 GB
  • Local Storage (30 days): 2 x 500 GB, 7.2k RPM drives (hardware RAID with 256 MB cache, RAID 1)
  • Networked Storage (90 days): 600 GB

Sentinel Log Manager (2500 EPS)

  • Compression: Up to 10:1
  • Maximum Event Sources: Up to 1000
  • Maximum Event Rate: 2500
  • CPU: One Intel Xeon E5450 3-GHz (4 core) CPU, or two Intel Xeon L5240 3-GHz (2 core) CPUs (4 cores total)
  • Random Access Memory (RAM): 4 GB
  • Local Storage (30 days): 4 x 1 TB, 7.2k RPM drives (hardware RAID with 256 MB cache, RAID 10)
  • Networked Storage (90 days): 2 TB

Sentinel Log Manager (7500 EPS)

  • Compression: Up to 10:1
  • Maximum Event Sources: Up to 2000
  • Maximum Event Rate: 7500
  • CPU: Two Intel Xeon X5470 3.33-GHz (4 core) CPUs (8 cores total)
  • Random Access Memory (RAM): 8 GB
  • Local Storage (30 days): 16 x 600 GB, 15k RPM drives (hardware RAID with 512 MB cache, RAID 10), or an equivalent storage area network (SAN)
  • Networked Storage (90 days): 5.8 TB

Follow these guidelines for optimal system performance:

  • The local storage should have enough space to hold at least 5 days' worth of data, including both event data and raw data. For details on calculating data storage requirements, see Section 2.1.3, Data Storage Requirement Estimation.

  • Networked storage contains all 90 days' worth of data, including a fully compressed copy of the event data that is in local storage. A copy of the event data is kept on local storage for search and reporting performance. You can decrease the local storage size if storage space is a concern; however, because of decompression overhead, searching and reporting on data that would otherwise be in local storage is an estimated 70% slower.

  • You must set the networked storage location to an external multi-drive SAN or a network-attached storage (NAS) device.

  • One machine can include more than one event source. For example, a Windows server can include two Sentinel event sources if you want to collect data from both the Windows operating system and the SQL Server database hosted on that machine.

  • The recommended steady state volume is 80% of the maximum licensed EPS. Novell recommends that you add additional Sentinel Log Manager instances if this limit is reached.

  • Maximum event source limits are not hard limits, but recommendations based on performance testing done by Novell, and they assume a low average event rate per event source (less than 3 EPS). Higher EPS rates result in a lower sustainable maximum number of event sources. You can use the equation (maximum event sources) x (average EPS per event source) = maximum event rate to arrive at approximate limits for your specific average EPS rate or number of event sources, as long as the number of event sources does not exceed the limit indicated above.
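The equation above can be rearranged into a quick calculation. The figures below (a 2500 EPS instance averaging 3 EPS per source) are illustrative assumptions, not Novell-tested limits:

```python
# Hypothetical illustration of the event source sizing equation above.
# The event rate and per-source average are example values.

def max_event_sources(max_event_rate_eps, avg_eps_per_source):
    """Approximate sustainable event source count for a licensed rate."""
    return max_event_rate_eps / avg_eps_per_source

# A 2500 EPS instance whose sources average 3 EPS each supports roughly
# 833 sources -- comfortably under the 1000-source recommendation.
print(int(max_event_sources(2500, 3)))  # 833
```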

2.1.2 Collector Manager System

  • One Intel Xeon X5570 2.93-GHz (4 CPU cores)

  • 4 GB RAM

  • 10 GB free disk space

2.1.3 Data Storage Requirement Estimation

Sentinel Log Manager is used to retain raw data for a long period of time to comply with legal and other requirements. Sentinel Log Manager employs compression to help you make efficient use of local and networked storage space. However, storage requirements might become significant over a long period of time.

To overcome cost constraint issues with large storage systems, you can use cost-effective data storage systems to store the data for a long term. Tape-based storage systems are the most common and cost-effective solution. However, tape does not allow random access to the stored data, which is necessary to perform quick searches. Because of this, a hybrid approach to long-term data storage is desirable, where the data you need to search is available on a random-access storage system and data you need to retain, but not search, is kept on a cost-effective alternative, such as tape. For instructions on employing this hybrid approach, see Using Sequential-Access Storage for Long Term Data Storage in the Sentinel Log Manager 1.2.2 Administration Guide.

To determine the amount of random-access storage space required for Sentinel Log Manager, first estimate how many days of data you need to regularly search or report on. You should have enough hard drive space for Sentinel Log Manager to use for archiving data, either locally on the Sentinel Log Manager machine, or remotely on storage accessed over the Server Message Block (SMB)/CIFS protocol, the Network File System (NFS), or a SAN.

You should also have the following additional hard drive space beyond your minimum requirements:

  • To account for data rates that are higher than expected.

  • To copy data from tape back into Sentinel Log Manager in order to perform searching and reporting on historical data.

Use the following formulas to estimate the amount of space required to store data:

  • Local event storage (partially compressed): {average byte size per event} x {number of days} x {events per second} x 0.00007 = Total GB storage required

    Event sizes typically range from 300-1000 bytes.

  • Networked event storage (fully compressed): {average byte size per event} x {number of days} x {events per second} x 0.00002 = Total GB storage required

  • Raw Data Storage (fully compressed on both local and networked storage): {average byte size per raw data record} x {number of days} x {events per second} x 0.000012 = Total GB storage required

    A typical average raw data size for syslog messages is 200 bytes.

  • Total local storage size (with networked storage enabled): {Local event storage size for desired number of days} + {Raw data storage size for one day} = Total GB storage required

    If networked storage is enabled, event data is moved to networked storage when local storage fills up. Raw data, however, is located on the local storage only temporarily before it is moved to networked storage. It typically takes less than a day to move the raw data from local storage to networked storage.

  • Total local storage size (with networked storage disabled): {Local event storage size for retention time} + {Raw data storage size for retention time} = Total GB storage required

  • Total networked storage size: {Networked event storage size for retention time} + {Raw data storage size for retention time} = Total GB storage required

NOTE:

  • The coefficients in each formula represent ((seconds per day) x (GB per byte) x compression ratio).

  • These numbers are only estimates and depend on the size of the event data as well as on the size of compressed data.

  • Partially compressed means that the data is compressed but the index of the data is not. Fully compressed means that both the event data and the index data are compressed. Event data compression rates are typically 10:1. Index compression rates are typically 5:1. The index is used to optimize searching through the data.

You can also use the above formulas to determine how much storage space is required for a long-term data storage system such as tape.
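The formulas in this section can be combined into a short calculation. The sketch below uses the coefficients given above; the 500 EPS workload, 464-byte events, 30-day local retention, and 90-day total retention are illustrative assumptions:

```python
# Sketch of the storage estimation formulas in this section.
# The workload figures are example assumptions, not recommendations.

EVENT_SIZE = 464    # average bytes per event
RAW_SIZE = 200      # average bytes per raw syslog record
EPS = 500           # events per second
LOCAL_DAYS = 30     # days of event data kept on local storage
TOTAL_DAYS = 90     # total retention on networked storage

# Local event storage (partially compressed), per the formula above.
local_event_gb = EVENT_SIZE * LOCAL_DAYS * EPS * 0.00007
# Networked event storage (fully compressed) for the full retention.
net_event_gb = EVENT_SIZE * TOTAL_DAYS * EPS * 0.00002
# Raw data storage (fully compressed) for the full retention.
raw_gb = RAW_SIZE * TOTAL_DAYS * EPS * 0.000012
# Raw data sits on local storage for roughly one day before moving.
raw_one_day_gb = RAW_SIZE * 1 * EPS * 0.000012

total_local_gb = local_event_gb + raw_one_day_gb
total_net_gb = net_event_gb + raw_gb

print(f"local: {total_local_gb:.0f} GB, networked: {total_net_gb:.0f} GB")
# local: 488 GB, networked: 526 GB
```

Note that these estimates land in the same range as the 500 EPS column of Table 2-1, with the difference explained by the assumed event size.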

2.1.4 Disk I/O Utilization Estimation

Use the following formulas to estimate the amount of disk utilization on the server at various EPS rates.

  • Data written to Disk (Kilobytes per second): (average event size in bytes + average raw data size in bytes) x (events per second) x .004 compression coefficient = data written per second to disk

    For example, at 500 EPS, for an average event size of 464 bytes and an average raw data size of 300 bytes in the log file, data written to disk is determined as follows:

    (464 bytes + 300 bytes) x 500 EPS x .004 = 1528 KB

  • Number of I/O requests to the disk (transfers per second): (average event size in bytes + average raw data size in bytes) x (events per second) x .00007 compression coefficient = I/O requests per second to disk

    For example, at 500 EPS, for an average event size of 464 bytes and an average raw data size of 300 bytes in the log file, number of I/O requests per second to the disk is determined as follows:

    (464 bytes + 300 bytes) x 500 EPS x .00007 = 26 tps

  • Number of blocks written per second to the disk: (average event size in bytes + average raw data size in bytes) x (events per second) x .008 compression coefficient = Blocks written per second to disk

    For example, at 500 EPS, for an average event size of 464 bytes and an average raw data size of 300 bytes in the log file, number of blocks written per second to the disk is determined as follows:

    (464 bytes + 300 bytes) x 500 EPS x .008 = 3056

  • Data read per second from disk when performing a search: (average event size in bytes + average raw data size in bytes) x (number of events matching the query, in millions) x .013 compression coefficient = kilobytes read per second from disk

    For example, for 3 million events matching the search query, with an average event size of 464 bytes and an average raw data size of 300 bytes in the log file, the data read per second from disk is determined as follows:

    (464 bytes + 300 bytes) x 3 x .013 ≈ 30 KB
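The write-path examples above can be reproduced with a short calculation, using this section's own 500 EPS workload (464-byte events, 300-byte raw records):

```python
# Sketch of the disk I/O estimation formulas above, using the
# section's example workload of 500 EPS.

event_size, raw_size, eps = 464, 300, 500
combined = event_size + raw_size  # 764 bytes per event

kb_written = combined * eps * 0.004   # data written to disk, KB/s
io_tps = combined * eps * 0.00007     # I/O requests, transfers/s
blocks = combined * eps * 0.008       # blocks written per second

print(f"{kb_written:.0f} KB/s, {io_tps:.1f} tps, {blocks:.0f} blocks/s")
# 1528 KB/s, 26.7 tps, 3056 blocks/s
```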

2.1.5 Network Bandwidth Utilization Estimation

Use the following formulas to estimate the network bandwidth utilization on the server at various EPS rates:

{average event size in bytes + average raw data size in bytes} x {events per second} x .0003 compression coefficient = network bandwidth in Kbps (kilobits per second)

For example, at 500 EPS for an average event size of 464 bytes and an average raw data size of 300 bytes in the log file, the network bandwidth utilization is determined as follows:

(464 bytes + 300 bytes) x 500 EPS x .0003 = 115 Kbps
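The same calculation can be sketched as a small helper, again using the section's 500 EPS example figures:

```python
# Sketch of the network bandwidth formula above. The workload
# figures are the section's example values.

def bandwidth_kbps(event_bytes, raw_bytes, eps):
    """Estimated network bandwidth in kilobits per second."""
    return (event_bytes + raw_bytes) * eps * 0.0003

print(round(bandwidth_kbps(464, 300, 500)))  # 115
```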

2.1.6 Virtual Environment

Sentinel Log Manager is extensively tested and fully supported on a VMware ESX server. To achieve performance comparable to the physical-machine test results on ESX or in any other virtual environment, the virtual environment should provide the same memory, CPUs, disk space, and I/O as the physical machine recommendations.

For information on physical machine recommendations, see Section 2.1, Hardware Requirements.