Authors: Shreyas Rao, Suresh Kumar GP
With the introduction of Novell Domain Services for Windows (DSfW), users can now log in and authenticate to both eDirectory and Active Directory from a Windows workstation without requiring multiple logins or the Novell Client, and can access NSS volumes without the client.
This AppNote provides a step-by-step procedure to configure a clustered NSS volume on a DSfW server and to access the volume from Windows without using the Novell Client.
Introduction to some of the terms and technologies used in this AppNote:
NCS: Novell Cluster Services is a server clustering system that ensures high availability and manageability of critical network resources including data, applications, and services. It is a multinode clustering product for Linux that is enabled for Novell eDirectory and supports failover, fail back, and migration of individually managed cluster resources.
DSfW: Novell Domain Services for Windows (DSfW), a component of Open Enterprise Server (OES) 2 SP1, creates seamless cross-authentication capabilities between Windows/Active Directory and Novell OES 2 Linux/eDirectory servers. This suite of technologies allows Novell customers with Windows networking environments to set up one or more "virtual" Active Directory domains in an eDirectory tree. Users can then log in and authenticate to both eDirectory and Active Directory from a Windows workstation without requiring multiple logins or having the Novell Client for Windows installed. With Domain Services for Windows, eDirectory users can use familiar Windows desktop operations to access file services regardless of the platform or operating system where the service resides. Users can access Novell Storage Services (NSS) volumes on Linux servers by using Samba shares, or NTFS files on Windows servers that use CIFS shares. eDirectory users can also access shares in trusted Active Directory forests.
More information on these is available at: http://www.novell.com/documentation/oes2/
Let us consider an example DSfW setup in which the Master eDirectory replica is an eDirectory server running on an OES 2 SP1 server.
This tree has an OES 2 SP1 server attached as an FRD server for the domain 'frd.com'.
There is also an ADC server attached to this domain.
We will refer to these servers as the Master server, the FRD server, and the ADC server throughout this document.
A Windows client is added to the FRD domain.
Forest Root Domain (FRD): The domain that provides the base (foundation) of the directory forest. It is usually the first domain that you create in your directory forest and is known as the default forest root domain.
Additional Domain Controller (ADC): An additional server used to improve the availability and reliability of network services. An additional domain controller provides fault tolerance, balances the load of the existing domain controllers, and provides additional infrastructure support to the sites. An ADC has the same parameters as its peer domain controllers in the domain.
The Tree view is as shown below.
In the screenshot above:
The server 'systst-ts-124' is the Master server.
The server 'systst-ts-123' is the FRD server.
The server 'systst-ts-126' is the ADC server.
The installation of the FRD server and the ADC server is described in detail in the Novell documentation.
Once the DSfW setup described above is configured, the following operations are performed to set up a clustered NSS volume in a DSfW environment:
- Set up NCS on the FRD server and the ADC server.
- Create a shared pool and a volume on one of the nodes.
- Create a user and a home directory for the user.
- Assign trustee rights to the home directory.
- Create a Samba share for the cluster-shared volume.
- Configure the Samba and NCS scripts to cluster-enable the NSS volume.
- Test the configuration.
- Set up NCS on the FRD server and the ADC server
- If you plan to set up an iSCSI-based NCS cluster, you need to set up iSCSI before setting up NCS. This involves configuring the FRD server and the ADC server as iSCSI initiators, initializing a disk, and then sharing that disk for NCS.
The configuration of iSCSI is explained in the Novell documentation.
You can skip the iSCSI steps if you are using a shared SAN box between the FRD server and the ADC server; otherwise, a sketch of the initiator-side commands is shown below.
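For reference, here is a minimal sketch of the initiator-side setup using the open-iscsi command line (the YaST iSCSI Initiator module achieves the same result); the target address 192.168.1.10 is a placeholder for your iSCSI target server:

# Run on both the FRD server and the ADC server.
# Discover the targets exported by the iSCSI target server (placeholder IP).
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
# Log in to the discovered target so the shared disk appears as a local device.
iscsiadm -m node --login
# The new device (for example /dev/sdb) can then be initialized in nssmu.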
- After the iSCSI configuration on the FRD server and the ADC server, we need to initialize the shared disk and mark it shareable for clustering.
- On the FRD server, open a terminal window, type nssmu, and press Enter.
- From the Devices option, select the shared disk corresponding to the iSCSI disk created above and press F3 to initialize it. Initialization prepares the disk for use. Press Y to confirm the initialization, then press F6 to mark the disk shareable for clustering.
- To exit the NSS utility, press Esc until you return to the terminal prompt.
- In the terminal window, type "yast2 ncs" to start the NCS configuration wizard.
- Enter the password of the tree admin user when prompted by the wizard and click OK.
- After authentication, you are presented with the main NCS configuration screen.
- On the main NCS configuration screen, select "New cluster" and enter the FDN of the new cluster you wish to build in the "Cluster FDN" field.
- In the screenshot above, cluster_dsfw is the name of the cluster, and o=novell is the container where the cluster object will be created in eDirectory.
- Enter an IP address in the "Cluster IP address" field. This is the Master IP address resource; it is bound to the master node and remains with the master node regardless of which server currently holds that role.
- From the next drop-down list, select the shared media corresponding to your iSCSI or SAN shared disk and click Next.
- In the next window, confirm that the NCS configuration wizard has correctly identified your current node and click Finish.
- The wizard will end after the configuration is finished.
- We now have a single-node cluster. To add another node to the cluster, log in to the second OES 2 SP1 machine.
- Add another node to the existing cluster
- On the ADC server, launch "yast2 ncs" from a terminal window.
- Wait for the wizard to launch, enter the eDirectory admin password in the required field, and click OK.
- You are presented with the same NCS configuration window that appeared when configuring NCS on the first node (the FRD server).
- To add this machine to the cluster created earlier, select "Existing cluster" on the main "Novell Cluster Services Configuration" window and enter the FDN of the cluster this node should join (the cluster cluster_dsfw we created above). All other fields are disabled.
- Click Next and then Finish to add this machine as a node in the cluster.
- To verify that our NCS configuration is correct, open a terminal window on both NCS nodes (the FRD server and the ADC server) one by one and execute the command "cluster view". If you see information similar to the sample output below on both machines, your configuration is correct.
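The exact output depends on your environment; with the example servers used here, it should look roughly like the following (the epoch and master values are illustrative):

Cluster cluster_dsfw
This node systst-ts-123 [epoch 2 master node systst-ts-123]
Cluster nodes [systst-ts-123, systst-ts-126]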
- Create a shared pool and a volume on one of the nodes.
- Log in to the FRD server as root and, in a terminal window, run nssmu to launch the NSS Management Utility.
- Select Pools from the Main menu and press Enter key.
- Press Insert key to create a new pool.
- Enter the Pool name, say "CLUSTPOOL" and press Enter key.
- Select the iSCSI or SAN shared device to create a new cluster pool.
- Specify the size of the pool and press Enter.
- Assign an IP address to the newly created pool. Here, the IP given is 184.108.40.206.
- Select Apply and press Enter to complete the creation of the pool on the shared device.
- Press the Esc key to return to the main menu of the NSS Management Utility.
- Select Volumes from the main menu and press Enter.
- Press Insert key.
- Enter a name for the volume, say "CLUSTVOL1".
We now have a clustered NSS volume in our DSfW setup.
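As a quick sanity check, you can confirm from either node that the new pool resource is online and the volume is mounted. Note that the resource name shown by these commands (typically CLUSTPOOL_SERVER for a pool named CLUSTPOOL) is generated by NCS, so verify it in your own setup:

# List cluster resources and their state; the new pool resource should be Running.
cluster resources
# Show the overall cluster and resource status.
cluster status
# The volume is mounted under /media/nss on the node hosting the resource.
ls /media/nss/CLUSTVOL1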
User creation, rights assignment, and sharing the cluster volume using Samba
- Create a user in the FRD domain
- In iManager, create a user, say user1.
- The user is created in the frd.com domain.
- While creating the user, make sure a home directory is created for the user.
- Assign trustee rights to the home directory:
- Users have all rights to their own home directory by default. If you want to grant access to the NSS volume as well, you can do so in iManager:
- Go to 'Roles and Tasks'
- Select 'Files and Folders'
- Select 'Properties'
- Select the eDirectory object for the CLUSTVOL1
- Click on 'Rights' Tab
- Add the user 'user1' and grant the necessary rights.
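If you prefer the command line, OES Linux also provides the rights utility for NSS trustee assignments. The following is a sketch equivalent to the iManager steps above; the user context user1.novell follows the example tree, so adjust it to your own:

# Run on the node currently hosting the cluster resource.
# Grant user1 Read, Write, Create, Erase, Modify, and File Scan rights on the volume.
rights -f /media/nss/CLUSTVOL1 -r rwcemf trustee user1.novell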
- Create a Samba share for the cluster-shared volume
- In iManager, go to 'Roles and Tasks'
- Select 'File Protocols'
- Select 'Samba'
- Open the eDirectory object for the cluster resource
- Check that Samba is running
- Click on the 'Shares' tab
- Select 'New Share'
- Enter a name for the Samba share of the clustered volume
- Enter the path: /media/nss/CLUSTVOL1
- Click OK
- Configure the Samba scripts
- Create a new directory 'samba' on the cluster volume CLUSTVOL1.
- Create three subdirectories in the samba folder: log, etc, and locks.
- Copy the existing smb.conf file from the /etc/samba/ directory of one of the machines to the etc folder on the cluster volume:
cp /etc/samba/smb.conf /media/nss/CLUSTVOL1/samba/etc/
- Edit the smb.conf on the cluster volume: delete all existing local share definitions, and add or modify only the entries mentioned below under the '[global]' section.
- Modify the existing netbios name to a different name; here we changed it to 'DSFWCLUSTER'.
- Add the following entries:
bind interfaces only = yes
interfaces = 220.127.116.11
pid directory = /media/nss/CLUSTVOL1/samba/locks
Note: the IP address entered here is the cluster shared pool resource IP.
- Add the line 'browseable = yes' under the cluster share section, so that it reads:
path = /media/nss/CLUSTVOL1
read only = No
browseable = yes
inherit acls = yes
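Putting these edits together, the cluster copy of smb.conf should end up looking roughly like this; the share name CLUSTERSHARE is only an example, so use the name you gave the share in iManager:

[global]
netbios name = DSFWCLUSTER
bind interfaces only = yes
# This must be the cluster shared pool resource IP from the pool creation step.
interfaces = 220.127.116.11
pid directory = /media/nss/CLUSTVOL1/samba/locks

[CLUSTERSHARE]
path = /media/nss/CLUSTVOL1
read only = No
browseable = yes
inherit acls = yes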
- Configure the NCS Scripts
Add the following lines to the NSS resource load script:
SAMBA_ROOT=/media/nss/CLUSTVOL1/samba
exit_on_error /usr/sbin/nmbd -l $SAMBA_ROOT/log -s $SAMBA_ROOT/etc/smb.conf
exit_on_error /usr/sbin/smbd -l $SAMBA_ROOT/log -s $SAMBA_ROOT/etc/smb.conf
Add the following lines to the NSS resource unload script:
SAMBA_ROOT=/media/nss/CLUSTVOL1/samba
ignore_error killproc -p $SAMBA_ROOT/locks/nmbd-smb.conf.pid /usr/sbin/nmbd
ignore_error killproc -p $SAMBA_ROOT/locks/smbd-smb.conf.pid /usr/sbin/smbd
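For orientation, the complete load script for the pool resource would then look roughly like the following. The pool activation, mount, secondary IP, and bind lines are generated by NCS when the pool is cluster-enabled (the NCP server name shown is illustrative); only the three SAMBA lines are the additions described above:

#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
# Generated by NCS for the cluster-enabled pool:
exit_on_error nss /poolact=CLUSTPOOL
exit_on_error ncpcon mount CLUSTVOL1=254
exit_on_error add_secondary_ipaddress 184.108.40.206
exit_on_error ncpcon bind --ncpservername=CLUSTER_DSFW_CLUSTPOOL_SERVER --ipaddress=184.108.40.206
# Added lines that start Samba with the cluster copy of smb.conf:
SAMBA_ROOT=/media/nss/CLUSTVOL1/samba
exit_on_error /usr/sbin/nmbd -l $SAMBA_ROOT/log -s $SAMBA_ROOT/etc/smb.conf
exit_on_error /usr/sbin/smbd -l $SAMBA_ROOT/log -s $SAMBA_ROOT/etc/smb.conf
exit 0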
- Offline and online the resource for the scripts to take effect.
Note: During migration, the smbd and nmbd processes are restarted. If you have local Samba shares and do not want the local smbd and nmbd processes to be restarted or killed during cluster resource migration, edit the local /etc/samba/smb.conf files on the two nodes and add these entries to the [global] section:
bind interfaces only = yes
interfaces = 18.104.22.168 <- local server IP address or the cluster node IP address.
Separate instances of the smbd and nmbd processes will then run for the local and the cluster configurations.
- Test that the clustered NSS volume in the DSfW environment works after migration:
- On the Windows client, log in as user1.
- In Explorer, click on 'My Network Places'. *
- In 'My Network Places', you should see the domain 'FRD'; click on it.
- Inside the FRD domain, the NetBIOS name DSFWCLUSTER will be available.
- Browse to any folder on the share; you should be able to open it.
- Keep the folder open.
- Migrate the clustered volume from the FRD server to the ADC server in iManager.
- Once the cluster migration is complete, a refresh on the client should suffice to continue accessing the files in the folder.
Thus the user on the client can access the files seamlessly even after the volume has migrated.
*There are other ways to map to the cluster volumes as well, such as mapping a network drive using the NSS pool resource IP or the NetBIOS name; for example:
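From a command prompt on the Windows client, either of the following maps the share (CLUSTERSHARE again being the example share name used earlier):

REM Map a drive by NetBIOS name.
net use Z: \\DSFWCLUSTER\CLUSTERSHARE
REM Map a drive by the pool resource IP address.
net use Y: \\184.108.40.206\CLUSTERSHARE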
Disclaimer: As with everything else at Cool Solutions, this content is definitely not supported by Novell (so don't even think of calling Support if you try something and it blows up).
It was contributed by a community member and is published "as is." It seems to have worked for at least one person, and might work for you. But please be sure to test, test, test before you do anything drastic with it.