Cool Solutions

Migrating NCS Clustered Data to a Smaller Volume


January 10, 2014 3:22 pm
In the majority of cases, user storage only continues to grow. However, there are a few instances where an NSS volume is too large and the data on it needs to be moved to a smaller location. There is no direct method for shrinking an NSS volume or pool, so I will share the method I used to move data from one clustered pool/volume to another, without having to create a new cluster resource.

In my case, the problem arose during a migration of my shared SAN storage. The vendor's LUNs on the new SAN were smaller than on the old one, even though the physical drives were the same size. When configuring the NCS cluster the first time, I had simply made my NSS pools the size of the entire LUN, rather than reserving some space for overhead. Lesson learned. An easier solution would have been to simply add more drives to each LUN, but that was seen as a waste of space, since the space actually in use by the pool/volume on each LUN was nowhere near the full size of the LUN.

For safety's sake, before working with the pools and volumes, I moved all resources off one of the nodes in my cluster and offlined the resource I was going to copy from. It may be possible to do this with cluster services stopped (rcnovell-ncs stop), but my way doesn't require that.
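That preparation can be sketched with the standard NCS command line. The resource and node names below are placeholders for your environment; verify the exact syntax with `cluster help` on your OES version:

```shell
# Move all other resources off the node we will work on
# (resource and node names are placeholders).
cluster migrate other_resource node2

# Take the resource we are copying FROM offline, so its pool and
# volume can be managed locally on this node.
cluster offline source_resource

# Confirm the cluster state before proceeding.
cluster status
```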

In the following steps, NLVM, NSSMU, and iManager are all acceptable tools. Personally, I used NSSMU for this exercise.

On the physical node:

  1. Create a new pool. In this example I will use source_pool_new.
    1. Give it no IP
    2. Do not advertise using NCP, AFP or CIFS
    3. Do not share it
    4. Do not make it active on creation
  2. Activate the source_pool_new pool.
  3. Create a new volume within source_pool_new. I called mine source_volume_new.
  4. Optional – create a volume quota, to ensure that if the volume fills up, the pool does not.
  5. Mount the source_volume_new volume.
  6. Activate the source_pool holding the volume to be copied.
  7. Mount the source_volume holding the data to be copied.
  8. Copy the data from source_volume to source_volume_new. How this is done depends on the type of data on the volume. If it is a GroupWise post office, DBCopy might need to be used; if it is user files, JRB Netcopy might be useful. In any case, get the data to the new location in as similar a format to the old location as possible.
  9. Rename source_volume to source_volume_old.
  10. Rename source_volume_new to source_volume.
  11. Rename source_pool to source_pool_old.
  12. Rename source_pool_new to source_pool.
  13. Online the cluster resource (cluster online resource_name). If it comes online, or goes comatose, that's fine. Immediately offline it again (cluster offline resource_name).
  14. Using iManager, modify the load, unload, and monitor scripts for the cluster resource you have just offlined. Remove all traces of _old in each script.
  15. Double check your work and online the resource (cluster online resource_name). It should pick up the name changes and be using the new location.
  16. Delete source_pool_old, which will delete source_volume_old as well.
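For reference, the steps above can also be sketched from the command line with NLVM and the cluster CLI. The nlvm syntax here is an assumption based on the OES nlvm utility and varies by release, so check `nlvm help` before running; the pool, volume, and resource names are the placeholders used in the steps, and the device and copy tool are examples only:

```shell
# Steps 1-5: create and bring up the new pool and volume
# (device is a placeholder; assumed nlvm syntax - verify with 'nlvm help').
nlvm create pool name=source_pool_new size=max device=sdc
nlvm create volume name=source_volume_new pool=source_pool_new
nlvm mount source_volume_new

# Steps 6-7: activate the old pool and mount the old volume locally.
nlvm activate source_pool
nlvm mount source_volume

# Step 8: copy the data. The right tool depends on the data type;
# rsync is shown as a generic example - a GroupWise post office
# should be copied with DBCopy instead.
rsync -a /media/nss/SOURCE_VOLUME/ /media/nss/SOURCE_VOLUME_NEW/

# Steps 9-12: swap the names (assumed nlvm rename syntax).
nlvm rename volume source_volume source_volume_old
nlvm rename volume source_volume_new source_volume
nlvm rename pool source_pool source_pool_old
nlvm rename pool source_pool_new source_pool

# Steps 13-15: bounce the resource, fix the scripts, then online it.
cluster online source_resource
cluster offline source_resource
# ...edit the load/unload/monitor scripts in iManager, removing _old...
cluster online source_resource
```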

Congratulations, you’ve successfully migrated data to a smaller NSS clustered volume.


Categories: Business Continuity, Cluster Services, File Services and Management, High Availability & Disaster Recovery, IT Operations Management, Open Enterprise Server, Technical


Disclaimer: This content is not supported by Micro Focus. It was contributed by a community member and is published "as is." It seems to have worked for at least one person, and might work for you. But please be sure to test it thoroughly before using it in a production environment.