CFS allows the same file system to be mounted simultaneously on multiple nodes in the cluster. CFS is designed with a master/slave architecture: though any node can initiate an operation to create, delete, or resize data, the master node carries out the actual operation. CFS caches metadata in memory, typically in the memory buffer cache or the vnode cache, and a distributed locking mechanism called GLM is used for metadata and cache coherency among the multiple nodes.

Assumptions:
1. Based on VCS 5.x, but this should also work on 4.x.
2. A new 4-node cluster with no resources defined.
3. Diskgroups and volumes will be created and shared across all nodes.

Prerequisites:
1. Make sure you have an established cluster that is running properly.
2. Make sure these packages are installed on all nodes:
   VRTScavf - Veritas CFS and CVM agents by Symantec
3. Make sure you have a license installed for Veritas CFS on all nodes.
4. Make sure the vxfencing driver is active on all nodes (even if it is in disabled mode).

Here are some ways to check the status of your cluster. In these examples, CVM/CFS are not configured yet, so the status commands report:

Error: V-35-41: Cluster not configured for data sharing application
Out of cluster: No mapping information available

During configuration, Veritas will pick up all the information that is set in your cluster configuration. The configuration dialog will prompt along these lines:

The cluster configuration information as read from cluster ...
You will now be prompted to enter the information pertaining ...
Specify whether you would like to use GAB messaging or TCP/UDP ...
If you choose gab messaging then you will not have [to enter]
IP addresses for all the nodes in the cluster.
Following is the summary of the information: ...

Waiting for the new configuration to be added.
Cluster File System Configuration is in progress.
cfscluster: CFS Cluster Configured Successfully

Now let's check the status of the cluster:

Group System Probed AutoDisabled State

Notice that there is now a new service group, cvm. CVM is required to be online before we can bring up any clustered filesystem on the nodes. At this point, no mount point is registered with the cluster configuration:

No mount point registered with cluster configuration

Creating a Shared Disk Group and Volumes/Filesystems

This procedure creates a shared disk group for use in a cluster environment. Disks must be placed in disk groups before they can be used by the Volume Manager. When you place a disk under Volume Manager control, the disk is initialized. Initialization destroys any existing data on the disk. Before you begin, make sure the disks that you will add to the shared disk group are directly attached to all the cluster nodes.

First, make sure you are on the master node.

Initialize the disks; you may optionally specify the disk format. Make sure they are attached to all the cluster nodes.

ServerA # vxdisksetup -if EMC0_1 format=cdsdisk
ServerA # vxdisksetup -if EMC0_2 format=cdsdisk

Create a shared disk group with the disks you just initialized:

ServerA # vxdg -s init mysharedg mysharedg01=EMC0_1 mysharedg02=EMC0_2

Now let's add that new disk group to our cluster configuration, giving all nodes in the cluster the option for Shared Write (sw):

Disk Group is being added to cluster configuration...

Verify that the cluster configuration has been updated:

ServerA # grep mysharedg /etc/VRTSvcs/conf/config/main.cf

We can now create volumes and filesystems within the shared diskgroup:

ServerA # vxassist -g mysharedg make mysharevol1 100g
ServerA # vxassist -g mysharedg make mysharevol2 100g
ServerA # mkfs -F vxfs /dev/vx/rdsk/mysharedg/mysharevol1
ServerA # mkfs -F vxfs /dev/vx/rdsk/mysharedg/mysharevol2

Then add these volumes/filesystems to the cluster configuration so they can be mounted on any or all nodes. Mount points will be created automatically.

ServerA # cfsmntadm add mysharedg mysharevol1 /mountpoint1
mountpoint1 added to the cluster-configuration

ServerA # cfsmntadm add mysharedg mysharevol2 /mountpoint2
mountpoint2 added to the cluster-configuration

Display the CFS mount configurations in the cluster:

MOUNT POINT    TYPE     SHARED VOLUME   DISK GROUP   STATUS        MOUNT OPTIONS
/mountpoint1   Regular  mysharevol1     mysharedg    NOT MOUNTED   crw
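The provisioning steps (initialize the disks, create the shared disk group, build volumes and filesystems, register the cluster mounts) can be condensed into a dry-run sketch. The helper below only prints the command sequence instead of executing it, so it can be reviewed on any machine; `emit_cmds` and the loop structure are illustrative conveniences, not part of the Veritas tooling, and the disk, diskgroup, and volume names match the examples above.

```shell
#!/bin/sh
# Dry-run sketch of the provisioning sequence from the walkthrough.
# emit_cmds is a hypothetical helper: it prints each command rather than
# running it, so nothing here touches real disks. Disk, diskgroup, and
# volume names (EMC0_1, mysharedg, mysharevol1, ...) follow the examples.

emit_cmds() {
    dg=mysharedg
    size=100g

    # Initialize each disk for VxVM in CDS format (destroys existing data).
    for d in EMC0_1 EMC0_2; do
        printf 'vxdisksetup -if %s format=cdsdisk\n' "$d"
    done

    # Create the shared disk group (-s) from the initialized disks.
    printf 'vxdg -s init %s %s01=EMC0_1 %s02=EMC0_2\n' "$dg" "$dg" "$dg"

    # For each volume: create it, put a VxFS filesystem on it, and
    # register the mount with the cluster configuration.
    i=1
    for vol in mysharevol1 mysharevol2; do
        printf 'vxassist -g %s make %s %s\n' "$dg" "$vol" "$size"
        printf 'mkfs -F vxfs /dev/vx/rdsk/%s/%s\n' "$dg" "$vol"
        printf 'cfsmntadm add %s %s /mountpoint%d\n' "$dg" "$vol" "$i"
        i=$((i + 1))
    done
}

emit_cmds
```

Printing instead of executing keeps the destructive vxdisksetup step explicit: the output can be inspected first, then pasted (or piped) into a root shell on the master node once verified.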