Novell Home

Synchronizing with eDirectory

Novell Cool Solutions: Feature

Posted: 8 Dec 2004
 

Synchronization plays a major role in eDirectory. This article provides a look at five different kinds of synchronization you typically encounter in eDirectory, including what they are, how they behave, and what to watch for:

  • External reference
  • Obituary
  • Schema
  • Agent configuration
  • Object

For the full BrainShare presentation (TUT356), click here.

External Reference Synchronization

An External Reference is a name that needs to be tracked by the local DIB. It may contain a partial cache of attributes from the real object and/or the results of local operations.

An External Reference can be created by any of the following conditions or events:

  • The object authenticates to the local agent
  • The object is referenced by another eDirectory object
  • File rights or another OS dependency exists
  • eDirectory itself has a dependency on the object

External Reference Maintenance

External References are maintained by the Reference Check Process (Backlinker and DRL processor). On real replicas, this process maintains the Uses, Used By, and Back Link attributes.

What is maintained depends on the object and the version of eDirectory. The base class, name, and certain attributes are maintained. Examples include Public Key and GUID for User objects, Replica for Partition Root objects, and Status and NDS Version for NCP Server objects.



Figure 1 - Entry information



Figure 2 - Back link list

Obituary synchronization

An obituary ensures referential integrity during delete, move, and rename operations. The most common types are:

  • Primary: Dead, Restored, Moved, Inhibit Move, New RDN
  • Secondary: Back Link, Used By

The Obituary Process uses the Purge Vector. It is not manually scheduled; it runs automatically at the end of an inbound synchronization cycle for the just-synchronized partition. Obituaries move through a series of states (Notified, OK To Purge, Purgeable, etc.) before they are purged from the system. This ensures that all interested parties are notified and can make the proper adjustments and modifications.
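The state progression above can be sketched as a simple state machine. The state names Notified, OK To Purge, and Purgeable come from the article; the initial "Issued" state and all class and method names are illustrative assumptions, not eDirectory internals:

```python
# Hypothetical sketch of an obituary advancing through its states.
OBITUARY_STATES = ["Issued", "Notified", "OK To Purge", "Purgeable"]

class Obituary:
    def __init__(self, obit_type, target_dn):
        self.obit_type = obit_type      # e.g. "Dead", "Back Link"
        self.target_dn = target_dn      # name of the affected object
        self.state_index = 0

    @property
    def state(self):
        return OBITUARY_STATES[self.state_index]

    def advance(self, all_replicas_acknowledged):
        """Advance one state, but only once every agent in the
        replica ring has acknowledged the current state."""
        if all_replicas_acknowledged and self.state_index < len(OBITUARY_STATES) - 1:
            self.state_index += 1
        return self.state

obit = Obituary("Dead", "CN=Alice.O=Acme")
obit.advance(all_replicas_acknowledged=True)   # -> "Notified"
obit.advance(all_replicas_acknowledged=False)  # still "Notified"
```

The gating flag mirrors the article's point: an obituary only moves forward once every interested party has seen the current state.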

Back link obituaries are moved by the Master replica. Used By obituaries are moved by the replica that created them; if that replica no longer exists, the Master takes ownership. As obituaries move through their various states, those changes are synchronized (using the Obituary Index) to agents holding a replica of the affected objects.

For troubleshooting obituaries, you can use the NDS iMonitor > Agent Process Status or the NDS iMonitor > Obituary Report.



Figure 3 - Obituary status

Schema synchronization

All agents have a copy of the schema. This copy will be a cached copy (not modifiable) unless the agent also holds a writeable copy of the tree root partition.

The schema propagates through the schema synchronization process, sideways and down, never upward, based on replica depth. An agent's replica depth is determined by the replica it holds with the fewest containers. The NDS iMonitor > Server Information Report shows the replica depth for all agents.
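As a rough illustration of replica depth, the toy calculation below counts containers above each partition root an agent holds and takes the minimum. The dotted-DN string format and the `container_count` helper are assumptions made for the sketch, not the actual eDirectory representation:

```python
def container_count(partition_root_dn):
    """Number of containers between the tree root and this partition
    root; the tree root partition itself counts as zero."""
    if partition_root_dn == "[Root]":
        return 0
    return partition_root_dn.count(".") + 1

def replica_depth(partition_root_dns):
    """An agent's depth comes from the shallowest replica it holds."""
    return min(container_count(dn) for dn in partition_root_dns)

assert replica_depth(["[Root]", "O=Acme"]) == 0   # holds the tree root
assert replica_depth(["OU=Eng.O=Acme"]) == 2      # two containers deep
```

An agent holding the tree root partition always has depth 0, which is why the schema can only flow sideways and down from it.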

When the first replica is added to an agent, the agent resets the schema and adds itself to a poll list to receive a new copy of the schema. This operation must complete before the replica add can proceed.

To troubleshoot schema synchronization, you can use the NDS iMonitor > Agent Process Status, NDS iMonitor > Schema Browse, or Schema Root Browse.



Figure 4 - Schema root synchronization



Figure 5 - Server Schema Synch List

Agent Configuration Synchronization

Not every agent in an eDirectory environment will hold a copy of its own NCP Server object. This object contains information needed to control the agent's behavior. To have this information available and reduce network bandwidth, a local partial cache of the NCP Server object is maintained. Agent Configuration Synchronization also updates the NCP Server object with changes that occur on the local agent (e.g., Network Address, NCP Server Name).

The Agent Configuration is stored in the Pseudo Server. Every agent has its own copy of this object for storing its own agent configuration, regardless of whether it holds any replicas.



Figure 6 - Pseudo-server configuration

The process that performs Agent Configuration Synchronization is called Limber (also known as Connectivity Check). Limber may trigger other processes such as LDAP to refresh configuration. Limber is scheduled to run every 4 hours by default (configurable in iMonitor). It will also run when requested or when a change needs to be synchronized out.
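Limber's triggering rules described above can be sketched as a simple predicate; the function name, parameters, and the simplified interval check are illustrative assumptions, not the actual implementation:

```python
LIMBER_INTERVAL = 4 * 60 * 60  # seconds; 4-hour default, configurable in iMonitor

def limber_due(last_run, now, requested=False, pending_change=False):
    """Limber runs when its interval elapses, when explicitly
    requested, or when a local change must be synchronized out."""
    return requested or pending_change or (now - last_run) >= LIMBER_INTERVAL

assert limber_due(last_run=0, now=LIMBER_INTERVAL)        # interval elapsed
assert limber_due(last_run=0, now=10, requested=True)     # on demand
assert not limber_due(last_run=0, now=10)                 # nothing to do
```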

Limber maintains the following kinds of data:

  • Network Address
  • Index Definitions
  • NCP Server's RDN and location in the tree
  • Tree Name
  • Permanent Configuration Parameters

For troubleshooting Agent Configuration Synchronization, you can use the NDS iMonitor > Agent Process Status or the NDS iMonitor > Pseudo Server Browse (eDirectory 8.6 or greater).

Object Synchronization Concepts

General directory terms and methods:

  • Convergence - The act of making an object consistent among all instances of that object
  • Hierarchy - A graded series of objects in which each element may contain other objects
  • Replication Styles - Single Instance, Master-Slave, Multi-Master, Hybrids
  • Replication Methods - None, Copy and Replace, Change Log, State Based, Hybrids

Novell eDirectory terms and methods:

  • Replica Types - Master, Read/Write, Read Only, Filtered
  • Replica Ring - The set of eDirectory agents that hold a replica of a given partition and, therefore, participate in the replication of objects contained within that partition
  • Deltas - Time-based differences between copies of a given object
  • Change Cache - The collection of objects with deltas for a given replica
  • Transitive Synchronization - A method for providing convergence of data without requiring an eDirectory agent with changes (i.e. state-based deltas) to directly contact and replicate those changes with every other agent in the replica ring
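The transitive synchronization idea above can be sketched as a vector comparison: an agent skips any destination whose Transitive Vector already covers everything the source has seen. Modeling a Transitive Vector as a server-to-timestamp dictionary with integer timestamps is an assumption made for illustration:

```python
def needs_update(source_tv, dest_tv):
    """True if the source holds changes the destination's Transitive
    Vector does not yet cover; if False, the source can skip this
    destination and rely on transitive convergence."""
    return any(dest_tv.get(server, 0) < ts for server, ts in source_tv.items())

src = {"SRV1": 100, "SRV2": 50}
dst = {"SRV1": 100, "SRV2": 60}
assert not needs_update(src, dst)        # destination already caught up
assert needs_update({"SRV1": 120}, dst)  # destination is behind for SRV1
```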

Partition root object attributes

  • Replica - A synchronizing multi-valued attribute where each attribute value represents an eDirectory agent that is participating in replication of this partition and how it can be contacted
  • Transitive Vector - A synchronizing multi-valued attribute where each attribute value represents the state in time that the specified eDirectory agent has received changes up to
  • Local Received Up To - A non-synchronizing single valued attribute whose value represents the current state in time that the local eDirectory agent has received changes up to
  • Purge Vector - A non-synchronizing single valued attribute whose value represents the oldest state in time that has been seen by each eDirectory agent in the replica ring. It uses the Transitive Vector to find the oldest state seen by all the agents in the ring for each replica in the ring.
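The Purge Vector description above amounts to an element-wise minimum across the ring's Transitive Vectors: anything older than this vector has been seen by every agent and is safe to purge. The dictionary representation and integer timestamps are assumptions for the sketch:

```python
def purge_vector(transitive_vectors):
    """For each agent, take the oldest (minimum) timestamp seen
    across every replica's Transitive Vector in the ring."""
    servers = set().union(*(tv.keys() for tv in transitive_vectors))
    return {s: min(tv.get(s, 0) for tv in transitive_vectors) for s in servers}

ring = [
    {"SRV1": 100, "SRV2": 90},   # replica A's Transitive Vector
    {"SRV1": 80,  "SRV2": 95},   # replica B's Transitive Vector
]
assert purge_vector(ring) == {"SRV1": 80, "SRV2": 90}
```

This is also why the Obituary Process depends on the Purge Vector: an obituary can only advance once its state is older than what every agent has seen.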

Synchronization Improvements in eDirectory 8.6 and Later

  • Incremental replication of changes - All changes for the entire state difference between replicas of a given partition are still required, but a progress marker (a "synchronization point") is kept so that work is not lost and redone if an error (usually a communication error) occurs during a synchronization cycle.
  • Multi-Threaded Outbound - The sending eDirectory agent can update more than one agent, for more than one partition, at a time.
  • Reduced "chattiness" - Communication of the Transitive Vector between replicas is no longer delayed until each replica's outbound synchronization cycle. The destination replica's Transitive Vectors are exchanged with the source replica at the end of a replication cycle.
  • Per Replica Attribute Time Stamps - These no longer cause extra needless synchronization attempts.
  • More efficient object analysis - There is improved handling of large multi-valued attributes, inbound and outbound.
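The incremental-replication improvement above can be sketched as a resumable loop: on a communication error, the loop returns a progress marker so the next cycle resumes there instead of restarting. The function names, the list-of-changes model, and the integer synchronization point are illustrative assumptions:

```python
def sync_partition(changes, send, sync_point=0):
    """Send changes starting from the stored synchronization point;
    on a communication error, return the marker to resume from."""
    for i in range(sync_point, len(changes)):
        try:
            send(changes[i])
        except ConnectionError:
            return i          # progress marker: resume here next cycle
        sync_point = i + 1
    return sync_point

sent = []

def send_flaky(change):       # fails partway through the first cycle
    if change == "c":
        raise ConnectionError
    sent.append(change)

def send_ok(change):
    sent.append(change)

marker = sync_partition(["a", "b", "c", "d"], send_flaky)
assert marker == 2 and sent == ["a", "b"]          # progress preserved
marker = sync_partition(["a", "b", "c", "d"], send_ok, sync_point=marker)
assert marker == 4 and sent == ["a", "b", "c", "d"]  # resumed, not redone
```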


Novell Cool Solutions (corporate web communities) are produced by WebWise Solutions. www.webwiseone.com

© 2014 Novell