Friday, 22 August 2014

Veritas Encapsulation !!!

Veritas encapsulation is the process of bringing the root disk under Veritas (VxVM) control. It can be performed at any time, as it preserves the existing data on the disk.

Just as with plain UFS slices, we cannot grow or shrink rootvol and swapvol. The root disk is encapsulated in the sliced format, and the disk must have at least two spare slices available for the public and private regions; a quick pre-check is sketched below.
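
Before starting, it is worth confirming the prerequisites on the candidate disk. A quick sketch, assuming the root disk c1t0d0 as in this example:

# prtvtoc /dev/rdsk/c1t0d0s2        ----- print the VTOC and look for two unused slice numbers
# swap -l                           ----- the current swap slice will become swapvol
# df -k /                           ----- the current root slice will become rootvol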

Now let's get practical,

root@mysrv1 #
root@mysrv1 # bash
root@mysrv1 #
root@mysrv1 # df -kh
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s0       30G    23G   6.3G    79%    /              
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                    29G   1.7M    29G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap2.so.1
                        30G    23G   6.3G    79%    /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
                        30G    23G   6.3G    79%    /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                    29G    32K    29G     1%    /tmp
swap                    29G    40K    29G     1%    /var/run
swap                    29G     0K    29G     0%    /dev/vx/dmp
swap                    29G     0K    29G     0%    /dev/vx/rdmp
root@mysrv1 #

From the above output, we can see that the root disk is currently using the normal c#t#d# device format.
Let's check /etc/vfstab as well.....

root@mysrv1 # cat /etc/vfstab
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd      -       /dev/fd fd      -       no      -
/proc   -       /proc   proc    -       no      -
/dev/dsk/c1t0d0s1       -       -       swap    -       no      -
/dev/dsk/c1t0d0s0       /dev/rdsk/c1t0d0s0      /       ufs     1       no      -
/devices        -       /devices        devfs   -       no      -
sharefs -       /etc/dfs/sharetab       sharefs -       no      -
ctfs    -       /system/contract        ctfs    -       no      -
objfs   -       /system/object  objfs   -       no      -
swap    -       /tmp    tmpfs   -       yes     -
#/dev/dsk/c1t0d0s1      -       -       swap    -       no      -
root@mysrv1 #

Now we can proceed with the encapsulation steps.....

root@mysrv1 # vxdiskadm

Volume Manager Support Operations
Menu: VolumeManager/Disk

 1      Add or initialize one or more disks
 2      Encapsulate one or more disks
 3      Remove a disk
 4      Remove a disk for replacement
 5      Replace a failed or removed disk
 6      Mirror volumes on a disk
 7      Move volumes from a disk
 8      Enable access to (import) a disk group
 9      Remove access to (deport) a disk group
 10     Enable (online) a disk device
 11     Disable (offline) a disk device
 12     Mark a disk as a spare for a disk group
 13     Turn off the spare flag on a disk
 14     Unrelocate subdisks back to a disk
 15     Exclude a disk from hot-relocation use
 16     Make a disk available for hot-relocation use
 17     Prevent multipathing/Suppress devices from VxVM's view
 18     Allow multipathing/Unsuppress devices from VxVM's view
 19     List currently suppressed/non-multipathed devices
 20     Change the disk naming scheme
 21     Get the newly connected/zoned disks in VxVM view
 22     Change/Display the default disk layouts
 23     Mark a disk as allocator-reserved for a disk group
 24     Turn off the allocator-reserved flag on a disk
 list   List disk information

 ?      Display help about menu
 ??     Display help about the menuing system
 q      Exit from menus

Select an operation to perform: 2

Encapsulate one or more disks
Menu: VolumeManager/Disk/Encapsulate
  Use this operation to convert one or more disks to use the Volume Manager.
  This adds the disks to a disk group and replaces existing partitions
  with volumes.  Disk encapsulation requires a reboot for the changes
  to take effect.

  More than one disk or pattern may be entered at the prompt.  Here are
  some disk selection examples:

  all:          all disks
  c3 c4t2:      all disks on both controller 3 and controller 4, target 2
  c3t4d2:       a single disk (in the c#t#d# naming scheme)
  xyz_0 :       a single disk (in the enclosure based naming scheme)
  xyz_ :        all disks on the enclosure whose name is xyz

Select disk devices to encapsulate:
[<pattern-list>,all,list,q,?] list

DEVICE       DISK         GROUP        STATUS
c1t0d0       -            -            online invalid
c1t1d0       -            -            error

Select disk devices to encapsulate:
[<pattern-list>,all,list,q,?] c1t0d0
  Here is the disk selected.  Output format: [Device_Name]

  c1t0d0

Continue operation? [y,n,q,?] (default: y)
  You can choose to add this disk to an existing disk group or to
  a new disk group.  To create a new disk group, select a disk group
  name that does not yet exist.

Which disk group [<group>,list,q,?] rootdg

Create a new group named rootdg? [y,n,q,?] (default: y)

Use a default disk name for the disk? [y,n,q,?] (default: y)
  A new disk group will be created named rootdg and the selected
  disks will be encapsulated and added to this disk group with
  default disk names.

  c1t0d0

Continue with operation? [y,n,q,?] (default: y)
  The following disk has been selected for encapsulation.
  Output format: [Device_Name]

  c1t0d0

Continue with encapsulation? [y,n,q,?] (default: y)
  A new disk group rootdg will be created and the disk device c1t0d0 will
  be encapsulated and added to the disk group with the disk name rootdg01.

Enter desired private region length
[<privlen>,q,?] (default: 65536)
  The c1t0d0 disk has been configured for encapsulation.
  The first stage of encapsulation has completed successfully.  You
  should now reboot your system at the earliest possible opportunity.
  The encapsulation will require two or three reboots which will happen
  automatically after the next reboot.  To reboot execute the command:

shutdown -g0 -y -i6

  This will update the /etc/vfstab file so that volume devices are
  used to mount the file systems on this disk device.  You will need
  to update any other references such as backup scripts, databases,
  or manually created swap devices.

Encapsulate other disks? [y,n,q,?] (default: n)

Volume Manager Support Operations
Menu: VolumeManager/Disk

 1      Add or initialize one or more disks
 2      Encapsulate one or more disks
 3      Remove a disk
 4      Remove a disk for replacement
 5      Replace a failed or removed disk
 6      Mirror volumes on a disk
 7      Move volumes from a disk
 8      Enable access to (import) a disk group
 9      Remove access to (deport) a disk group
 10     Enable (online) a disk device
 11     Disable (offline) a disk device
 12     Mark a disk as a spare for a disk group
 13     Turn off the spare flag on a disk
 14     Unrelocate subdisks back to a disk
 15     Exclude a disk from hot-relocation use
 16     Make a disk available for hot-relocation use
 17     Prevent multipathing/Suppress devices from VxVM's view
 18     Allow multipathing/Unsuppress devices from VxVM's view
 19     List currently suppressed/non-multipathed devices
 20     Change the disk naming scheme
 21     Get the newly connected/zoned disks in VxVM view
 22     Change/Display the default disk layouts
 23     Mark a disk as allocator-reserved for a disk group
 24     Turn off the allocator-reserved flag on a disk
 list   List disk information

 ?      Display help about menu
 ??     Display help about the menuing system
 q      Exit from menus

Select an operation to perform: q

Goodbye.
root@mysrv1 #
root@mysrv1 #

With this, we have performed Veritas encapsulation on our root disk.... A reboot is needed for the changes to take effect.

root@mysrv1 #
root@mysrv1 # reboot -- -v
login as: root
Using keyboard-interactive authentication.
Password:
Access denied
Using keyboard-interactive authentication.
Password:
Last login: Fri Aug 22 11:49:35 2014 from 10.20.10.50
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
You have new mail.
Sourcing //.profile-EIS.....
root@mysrv1 #
root@mysrv1 #

We rebooted successfully after encapsulating the root disk; now observe the changes.....

root@mysrv1 #
root@mysrv1 # df -kh
Filesystem             size   used  avail capacity  Mounted on
/dev/vx/dsk/bootdg/rootvol
                        30G    23G   6.3G    79%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                    29G   1.6M    29G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap2.so.1
                        30G    23G   6.3G    79%    /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
                        30G    23G   6.3G    79%    /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                    29G    40K    29G     1%    /tmp
swap                    29G    40K    29G     1%    /var/run
swap                    29G     0K    29G     0%    /dev/vx/dmp
swap                    29G     0K    29G     0%    /dev/vx/rdmp
root@mysrv1 #
root@mysrv1 #
root@mysrv1 # cat /etc/vfstab
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd      -       /dev/fd fd      -       no      -
/proc   -       /proc   proc    -       no      -
/dev/vx/dsk/bootdg/swapvol      -       -       swap    -       no      -
/dev/vx/dsk/bootdg/rootvol      /dev/vx/rdsk/bootdg/rootvol     /       ufs     1       no      -
/devices        -       /devices        devfs   -       no      -
sharefs -       /etc/dfs/sharetab       sharefs -       no      -
ctfs    -       /system/contract        ctfs    -       no      -
objfs   -       /system/object  objfs   -       no      -
swap    -       /tmp    tmpfs   -       yes     -
#/dev/dsk/c1t0d0s1      -       -       swap    -       no      -
#NOTE: volume rootvol (/) encapsulated partition c1t0d0s0
#NOTE: volume swapvol (swap) encapsulated partition c1t0d0s1
root@mysrv1 #
root@mysrv1 # swap -l
swapfile             dev  swaplo blocks   free
/dev/vx/dsk/bootdg/swapvol 323,26000     16 33560432 33560432
root@mysrv1 #
root@mysrv1 #
root@mysrv1 # vxdg list
NAME         STATE           ID
rootdg       enabled              1408688804.6.mysrv1
root@mysrv1 #
root@mysrv1 #
root@mysrv1 # vxprint -ht
Disk group: rootdg

dg rootdg       default      default  26000    1408688804.6.mysrv1
dm rootdg01     c1t0d0s2     auto     81407    286617216 -

v  rootvol      -            ENABLED  ACTIVE   62928384 ROUND     -        root
pl rootvol-01   rootvol      ENABLED  ACTIVE   62928384 CONCAT    -        RW
sd rootdg01-B0  rootvol-01   rootdg01 286617215 1       0         c1t0d0   ENA
sd rootdg01-02  rootvol-01   rootdg01 0        62928383 1         c1t0d0   ENA

v  swapvol      -            ENABLED  ACTIVE   33560448 ROUND     -        swap
pl swapvol-01   swapvol      ENABLED  ACTIVE   33560448 CONCAT    -        RW
sd rootdg01-01  swapvol-01   rootdg01 62928383 33560448 0         c1t0d0   ENA
root@mysrv1 #

#############################################

Similarly, if we want to mirror rootvol, we simply need to add a second disk to our disk group "rootdg" and mirror the volumes onto it.

# vxdisksetup -i c1t1d0 format=sliced
# vxdg -g rootdg adddisk rootmirror=c1t1d0

# vxdiskadm

...... output truncated ....

-----
-----
 4      Remove a disk for replacement
 5      Replace a failed or removed disk
 6      Mirror volumes on a disk
 7      Move volumes from a disk
-----
-----
Select an operation to perform: 6

Mirror volumes on a disk
Menu: VolumeManager/Disk/Mirror
  This operation can be used to mirror volumes on a disk.  These
  volumes can be mirrored onto another disk or onto any
  available disk space.  Volumes will not be mirrored if they are
  already mirrored.  Also, volumes that are comprised of more than
  one subdisk will not be mirrored.

  Mirroring volumes from the boot disk will produce a disk that
  can be used as an alternate boot disk.

  At the prompt below, supply the name of the disk containing the
  volumes to be mirrored.

Enter disk name [,list,q,?] list

Enter disk name [,list,q,?] rootdg01

Enter destination disk [,list,q,?]  (default: any) rootmirror

Continue with operation? [y,n,q,?]  (default: y) y
VxVM vxmirror INFO V-5-2-22   Mirror volume swapvol ...
VxVM vxmirror INFO V-5-2-22   Mirror volume rootvol ...

  VxVM  INFO V-5-2-674 Mirroring of disk rootdg01 is complete.

Mirror volumes on another disk? [y,n,q,?]  (default: n)n

root@mysrv1 #
root@mysrv1 #
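
As an alternative to walking through the vxdiskadm menus, the same mirroring can be started non-interactively with the vxmirror helper. A sketch, using the disk media names rootdg01 and rootmirror from above (the script lives under /etc/vx/bin on most installs):

# /etc/vx/bin/vxmirror -g rootdg rootdg01 rootmirror     ----- mirror all volumes from rootdg01 onto rootmirror
# vxprint -ht -g rootdg                                  ----- verify rootvol and swapvol now have two plexes each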

#############################################

Now, whenever we want to unencapsulate the root disk, it is very simple and can be achieved with a single command.

root@mysrv1 #
root@mysrv1 # vxunroot
  VxVM vxunroot NOTICE V-5-2-1564
This operation will convert the following file systems from
  volumes to regular partitions:

        rootvol swapvol
  VxVM vxunroot INFO V-5-2-2011
Replacing volumes in root disk to partitions will require a system
  reboot. If you choose to continue with this operation, system
  configuration will be updated to discontinue use of the volume
  manager for your root and swap devices.

Do you wish to do this now [y,n,q,?] (default: y)
  VxVM vxunroot INFO V-5-2-287 Restoring kernel configuration...
  VxVM vxunroot INFO V-5-2-78
A shutdown is now required to install the new kernel.
  You can choose to shutdown now, or you can shutdown later, at your
  convenience.

Do you wish to shutdown now [y,n,q,?] (default: n)
  VxVM vxunroot INFO V-5-2-258
Please shutdown before you perform any additional volume manager
  or disk reconfiguration.  To shutdown your system cd to / and type

        shutdown -g0 -y -i6

root@mysrv1 #
root@mysrv1 #
root@mysrv1 # reboot -- -v                        --------- Now just a reboot for this to take effect....

login as: root
Using keyboard-interactive authentication.
Password:
Last login: Fri Aug 22 12:09:00 2014 from 10.20.10.50
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
You have new mail.
Sourcing //.profile-EIS.....
root@mysrv1 #

We have unencapsulated our root disk and brought it back to the c#t#d# format.

root@mysrv1 #
root@mysrv1 # df -kh
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s0       30G    23G   6.2G    79%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                    29G   1.7M    29G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap2.so.1
                        30G    23G   6.2G    79%    /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
                        30G    23G   6.2G    79%    /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                    29G    32K    29G     1%    /tmp
swap                    29G    40K    29G     1%    /var/run
swap                    29G     0K    29G     0%    /dev/vx/dmp
swap                    29G     0K    29G     0%    /dev/vx/rdmp
root@mysrv1 #
root@mysrv1 # vxdg list               -------- DG will be destroyed automatically
NAME         STATE           ID
root@mysrv1 #
root@mysrv1 # swap -l
swapfile             dev  swaplo blocks   free
/dev/dsk/c1t0d0s1   118,9      16 33560432 33560432
root@mysrv1 #

There is no need to edit the vfstab entries manually...
The changes are reflected in the /etc/vfstab file automatically....

root@mysrv1 # cat /etc/vfstab
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd      -       /dev/fd fd      -       no      -
/proc   -       /proc   proc    -       no      -
/dev/dsk/c1t0d0s1       -       -       swap    -       no      -
/dev/dsk/c1t0d0s0       /dev/rdsk/c1t0d0s0      /       ufs     1       no      -
/devices        -       /devices        devfs   -       no      -
sharefs -       /etc/dfs/sharetab       sharefs -       no      -
ctfs    -       /system/contract        ctfs    -       no      -
objfs   -       /system/object  objfs   -       no      -
swap    -       /tmp    tmpfs   -       yes     -
#/dev/dsk/c1t0d0s1      -       -       swap    -       no      -
root@mysrv1 #

################################################################################

Veritas Cluster Commands !!!

In this post, we are going to look at the Veritas Cluster (HA) commands.
Soon after installing VCS, it is time to configure it. A cluster can be configured either with GUI clicks or with the ha commands.

Though the GUI is fun to play with, as admins it is better to get a grip on the commands. So let me start this.....

root@solaris:~# hares -list | grep solaris
root@solaris:~#                                               
root@solaris2:~# hagrp -list
cvm                      solaris2
cvm                      solaris
vxfen                   solaris2
vxfen                   solaris
root@solaris:~#

So far there are no user-defined resources or service groups, so let's start creating them....
cvm and vxfen come by default with the cluster software (cvm -- Cluster Volume Manager, vxfen -- I/O fencing).

CMD : for a group        ---- hagrp -add <group name>
CMD : for a resource   ---- hares -add <resource name> <resource type> <service group>
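
One note before the additions below: the cluster configuration must be opened read-write before any hagrp/hares change, and a new group needs a SystemList before its resources can run anywhere. A minimal sketch for our two nodes (solaris and solaris2):

haconf -makerw                                         ---- open the configuration read-write
hagrp -modify TESTDB SystemList solaris 0 solaris2 1   ---- nodes the group can run on, with priorities
hagrp -modify TESTDB AutoStartList solaris             ---- node where the group starts automatically
haconf -dump -makero                                   ---- save main.cf and make it read-only again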

root@solaris2:~#
root@solaris2:~# hagrp -add TESTDB           ----- Adding Service Group
root@solaris2:~#
root@solaris2:~# hagrp -add TESTCI
root@solaris2:~#
root@solaris2:~# hagrp -list
TESTDB               solaris2
TESTDB               solaris
TESTCI                 solaris2
TESTCI                 solaris
cvm                      solaris2
cvm                      solaris
vxfen                   solaris2
vxfen                   solaris
root@solaris2:~#
root@solaris:~# hares -add dg_test DiskGroup TESTDB       ----- Adding resource
root@solaris:~#
root@solaris:~# hares -modify dg_test DiskGroup testdg
root@solaris:~#
root@solaris:~# hares -list | grep solaris
dg_test   solaris
root@solaris:~#

After creating the DiskGroup resource, we need a Volume resource to go with it.

root@solaris:~#
root@solaris:~# hares -add vol_testvol  Volume TESTDB
root@solaris:~#
root@solaris:~# hares -modify vol_testvol Volume testvol
root@solaris:~#
root@solaris:~# hares -modify vol_testvol DiskGroup testdg
root@solaris:~#
root@solaris:~# hares -list | grep solaris
dg_test   solaris
vol_testvol  solaris
root@solaris:~#

Then a mount point,

root@solaris:~#
root@solaris:~# hares -add mnt_test Mount TESTDB
root@solaris:~#
root@solaris:~# hares -modify mnt_test MountPoint /test
root@solaris:~#
root@solaris:~# hares -modify mnt_test BlockDevice /dev/vx/dsk/testdg/testvol
root@solaris:~#
root@solaris:~# hares -modify mnt_test FSType vxfs
root@solaris:~#
root@solaris:~# hares -modify mnt_test FsckOpt -y
root@solaris:~#
root@solaris:~# hares -list | grep solaris
dg_test   solaris
vol_testvol  solaris
mnt_test   solaris
root@solaris2:~#

We have to create resources on only one node; they are automatically visible on both nodes.

root@solaris:~# hares -list
dg_test   solaris
dg_test   solaris2
vol_testvol  solaris
vol_testvol  solaris2
mnt_test   solaris
mnt_test   solaris2
root@solaris:~#
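
One more point: resources created with "hares -add" start with their Enabled attribute set to 0, so before the group can be brought online each resource must be enabled,

hares -modify dg_test Enabled 1
hares -modify vol_testvol Enabled 1
hares -modify mnt_test Enabled 1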

Now let us create a resource for ORACLE, so that we can switch Oracle between the nodes and start it with just a few clicks...
The Oracle resource takes attributes such as Sid (system ID), Owner, Home (ORACLE_HOME) and Pfile (profile file).

root@solaris:~# hares -add test_ora Oracle TESTDB
root@solaris:~#
root@solaris:~# hares -modify test_ora Sid SID
root@solaris:~#
root@solaris:~# hares -modify test_ora Owner oraSID
root@solaris:~#
root@solaris:~# hares -modify test_ora Home /oracle/SID/112_64
root@solaris:~#
root@solaris:~# hares -modify test_ora Pfile /oracle/SID/112_64/dbs/initSID.ora
root@solaris:~#

The listener acts as the mediator between SAP and the DB, hence we need a resource for it too...

root@solaris:~# hares -add  LSNR_test Netlsnr TESTDB
root@solaris:~#
root@solaris:~# hares -modify LSNR_test Owner oraSID
root@solaris:~#
root@solaris:~# hares -modify LSNR_test Home /oracle/SID/112_64
root@solaris:~#

As we use a virtual IP for the DB and for the SAP server, we need an IP resource for each so that we can switch the DB or CI to any node. An IP resource in turn needs a NIC resource..... The attributes involved are Address, NetMask and Device (NIC).

root@solaris:~# hares -add NIC_test NIC TESTDB
root@solaris:~#
root@solaris:~# hares -modify NIC_test Device ipmp0
root@solaris:~#
root@solaris:~# hares -add VIP_test IP TESTDB
root@solaris:~#
root@solaris:~# hares -modify VIP_test Device ipmp0
root@solaris:~#
root@solaris:~# hares -modify VIP_test Address 10.20.10.21
root@solaris:~#
root@solaris:~# hares -modify VIP_test NetMask 255.255.255.0

Our ORACLE resources are ready; now let us go for the SAP resource....
For SAP we need attributes like the instance type and name, the SAP admin user, the SAP SID and the start profile .......

root@solaris:~# hares -add SID_SAP SAPNW04 TESTCI
root@solaris:~#
root@solaris:~# hares -modify SID_SAP InstType ENQUEUE 
root@solaris:~#
root@solaris:~# hares -modify SID_SAP EnvFile /home/SIDadm/.cshrc
root@solaris:~#
root@solaris:~# hares -modify SID_SAP InstName 00
root@solaris:~#
root@solaris:~# hares -modify SID_SAP ProcMon ms en pr
root@solaris:~#
root@solaris:~# hares -modify SID_SAP SAPAdmin SIDadm
root@solaris:~#
root@solaris:~# hares -modify SID_SAP SAPSID SID
root@solaris:~#
root@solaris:~# hares -modify SID_SAP StartProfile /usr/sap/SID/SYS/profile/pfl.flname

Similarly, we have to create resources like NFSRestart, Share and Proxy if we are sharing any filesystems from a node....

root@solaris:~# hares -add  NFSrestart_test  NFSRestart TESTCI
root@solaris:~#
root@solaris:~# hares -modify NFSrestart_test LocksPathName /sapmnt/SID/nfslocks
root@solaris:~#

A Proxy resource makes the cluster wait until its target resource is online.
Example : it is helpful when we use an NFS share, because a filesystem cannot be shared unless and until it is mounted on the node. That scenario is handled by the Proxy.

root@solaris:~# hares -add  proxy_test  Proxy TESTCI
root@solaris:~#
root@solaris:~# hares -modify proxy_test TargetResName mnt_test1
root@solaris:~#
root@solaris:~# hares -add  share_test  Share TESTCI
root@solaris:~#
root@solaris:~# hares -modify share_test PathName /test1
root@solaris:~#
root@solaris:~# hares -modify share_test Options -o anon=0
root@solaris:~#

In a cluster, dependencies are the main thing, and they are achieved by proper linking. We need to link any two resources carefully, as the link defines the parent and child relationship between them (the parent depends on the child).

CMD : hares -link <parent res> <child res>

hares -link vol_testvol dg_test         ---- the volume depends on the disk group
hares -link mnt_test vol_testvol        ---- the mount depends on the volume
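
We can verify the resulting dependency tree at any time with,

hares -dep mnt_test

(hares -dep with no argument lists every resource dependency in the cluster.)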

Similarly, to remove such a link :

CMD : hares -unlink <parent res> <child res>

If we want to specify a dependency between service groups when they are brought online, we can use group links.
For example, we created service groups for both SAP and DB, but the DB should be started before the CI. In such cases we can use a link to create a dependency stating that unless and until the DB service group is online, the SAP service group should not come online.

CMD :  hagrp -link <parent group> <child group> <category> <location>

hagrp -link TESTCI TESTDB online global         ---- TESTCI (parent) depends on TESTDB (child)

Similarly, some more useful and regularly used commands are,

To bring services online and offline

hagrp -online service_group -sys system_name
hagrp -offline service_group -sys system_name

To freeze/unfreeze service groups (a frozen group ignores failover and online/offline actions)

hagrp -freeze group_name [-persistent]
hagrp -unfreeze group_name [-persistent]

To switch a service group to another node of the cluster

hagrp -switch group_name -to system_name
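
For example, to move our DB group over to the second node,

hagrp -switch TESTDB -to solaris2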

To bring resources online and offline

hares -online resource_name -sys system_name
hares -offline resource_name -sys system_name

Since clustering is an ocean, there are many other commands too, and the GUI is often the easier option. Still, this post should give a basic idea of the commands that run internally whenever we click and perform an action in the GUI.
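
A few everyday status commands worth keeping at hand,

hastatus -sum            ---- summary of cluster, group and resource states
hagrp -state TESTDB      ---- state of one service group on each node
hares -state mnt_test    ---- state of one resource on each node
hasys -list              ---- nodes in the cluster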

################################################################################

Saturday, 16 August 2014

VXSFCFSHA 6.1 Installation !!!

In this post, let us discuss the Veritas cluster installation procedure, including I/O fencing. We are going to see the installation of VXSFCFSHA (Veritas Storage Foundation Cluster File System with High Availability).

Version : VXSFCFSHA 6.1
Cluster : 2 node

Unzip the cluster software and start the installation :
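
A sketch of staging the media first (the archive name dvd1-sol_sparc.tar.gz is only illustrative; substitute the actual name of your download):

# cd /tmp
# gunzip -c dvd1-sol_sparc.tar.gz | tar -xf -
# cd /tmp/dvd1-sol_sparc/sol11_sparc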

root@solaris:/tmp/dvd1-sol_sparc/sol11_sparc#
root@solaris:/tmp/dvd1-sol_sparc/sol11_sparc# ./installer

                     Symantec Storage Foundation and High Availability Solutions 6.1 Install Program

Copyright (c) 2013 Symantec Corporation. All rights reserved.  Symantec, the Symantec Logo are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.

The Licensed Software and Documentation are deemed to be "commercial computer software" and "commercial computer software documentation" as defined in FAR Sections 12.212 and DFARS Section 227.7202.

Logs are being written to /var/tmp/installer-201405031548hnG while installer is in progress.

                     Symantec Storage Foundation and High Availability Solutions 6.1 Install Program

Symantec Product                       Version Installed on solaris    Licensed
=======================================================================
Symantec Licensing Utilities (VRTSvlic) are not installed due to which products and licenses are not discovered. Use the menu below to continue.

Task Menu:

    P) Perform a Pre-Installation Check       I) Install a Product
    C) Configure an Installed Product         G) Upgrade a Product
    O) Perform a Post-Installation Check      U) Uninstall a Product
    L) License a Product                      S) Start a Product
    D) View Product Descriptions              X) Stop a Product
    R) View Product Requirements              ?) Help

Enter a Task: [P,I,C,G,O,U,L,S,D,X,R,?] i

                     Symantec Storage Foundation and High Availability Solutions 6.1 Install Program

     1)  Symantec Dynamic Multi-Pathing (DMP)
     2)  Symantec Cluster Server (VCS)
     3)  Symantec Storage Foundation (SF)
     4)  Symantec Storage Foundation and High Availability (SFHA)
     5)  Symantec Storage Foundation Cluster File System HA (SFCFSHA)
     6)  Symantec Storage Foundation for Oracle RAC (SF Oracle RAC)
     b)  Back to previous menu

Select a product to install: [1-6,b,q] 5

This Symantec product may contain open source and other third party materials that are subject to a separate license. See the applicable Third-Party Notice at http://www.symantec.com/about/profile/policies/eulas

Do you agree with the terms of the End User License Agreement as specified in the
storage_foundation_cluster_file_system_ha/EULA/en/EULA_SFHA_Ux_6.1.pdf file present on media? [y,n,q,?]

                          Symantec Storage Foundation Cluster File System HA 6.1 Install Program

     1)  Install minimal required packages - 661 MB required
     2)  Install recommended packages - 800 MB required
     3)  Install all packages - 829 MB required
     4)  Display packages to be installed for each option

Select the packages to be installed on all systems? [1-4,q,?] (2) 3

Enter the Solaris 11 Sparc system names separated by spaces: [q,?] solaris solaris2

                          Symantec Storage Foundation Cluster File System HA 6.1 Install Program
                                                     solaris solaris2

Logs are being written to /var/tmp/installer-201405031548hnG while installer is in progress

    Verifying systems: 100%

    Estimated time remaining: (mm:ss) 0:00                                                                      8 of 8

    Checking system communication ............................................................................... Done
    Checking release compatibility .............................................................................. Done
    Checking installed product .................................................................................. Done
    Checking prerequisite patches and packages .................................................................. Done
    Checking platform version ................................................................................... Done
    Checking file system free space ............................................................................. Done
    Checking product licensing .................................................................................. Done
    Performing product prechecks ................................................................................ Done

System verification checks completed successfully

                          Symantec Storage Foundation Cluster File System HA 6.1 Install Program
                                                     solaris solaris2

The following Symantec Storage Foundation Cluster File System HA packages will be installed on all systems:

Package           Version              Package Description
VRTSllt           6.1.0.0              Low Latency Transport
VRTSgab           6.1.0.0              Group Membership and Atomic Broadcast
VRTSvxfen         6.1.0.0              I/O Fencing
VRTSamf           6.1.0.0              Asynchronous Monitoring Framework
VRTSvcs           6.1.0.0              Cluster Server
VRTScps           6.1.0.0              Cluster Server - Coordination Point Server
VRTSvcsag         6.1.0.0              Cluster Server Bundled Agents
VRTSvcsea         6.1.0.0              Cluster Server Enterprise Agents
VRTSglm           6.1.0.0              Group Lock Manager
VRTScavf          6.1.0.0              Cluster Server Agents for Cluster File System
VRTSgms           6.1.0.0              Group Messaging Services
VRTSvbs           6.1.0.0              Virtual Business Service
VRTSvcswiz        6.1.0.0              Cluster Server Wizards

The following Symantec Storage Foundation Cluster File System HA packages will be installed on solaris:

Package           Version              Package Description
VRTSperl          5.16.1.6             Perl Redistribution
VRTSvlic          3.2.61.10            Licensing
VRTSsfcpi61       6.1.0.0              Storage Foundation Installer
VRTSspt           6.1.0.0              Software Support Tools
VRTSvxvm          6.1.0.0              Volume Manager Binaries
VRTSaslapm        6.1.0.0              Volume Manager - ASL/APM
VRTSsfmh          6.0.0.0              Storage Foundation Managed Host
VRTSvxfs          6.1.0.0              File System
VRTSfsadv         6.1.0.0              File System Advanced Solutions
VRTSfssdk         6.1.0.0              File System Software Developer Kit
VRTSdbed          6.1.0.0              Storage Foundation Databases

Press [Enter] to continue:
VRTSodm           6.1.0.0              Oracle Disk Manager

Press [Enter] to continue:

                          Symantec Storage Foundation Cluster File System HA 6.1 Install Program
                                                     solaris solaris2

Logs are being written to /var/tmp/installer-201405031548hnG while installer is in progress

    Installing SFCFSHA: 100%

    Estimated time remaining: (mm:ss) 0:00                                                                      3 of 3

    Performing SFCFSHA preinstall tasks ......................................................................... Done
    Installing SFCFSHA packages ................................................................................. Done
    Performing SFCFSHA postinstall tasks ........................................................................ Done

Symantec Storage Foundation Cluster File System HA Install completed successfully

                          Symantec Storage Foundation Cluster File System HA 6.1 Install Program
                                                     solaris solaris2

To comply with the terms of Symantec's End User License Agreement, you have 60 days to either:

* Enter a valid license key matching the functionality in use on the systems
* Enable keyless licensing and manage the systems with a Management Server. For more details visit http://go.symantec.com/sfhakeyless. The product is fully functional during these 60 days.

     1)  Enter a valid license key
     2)  Enable keyless licensing and complete system licensing later

How would you like to license the systems? [1-2,q] (2) 1

Checking system licensing

SFCFSHA is not licensed on solaris

SFCFSHA is not licensed on solaris2

SFCFSHA is unlicensed on all systems

Enter a SFCFSHA license key: [b,q,?] AJDE-WVQC-NP2Z-FDTB-GTJZ-8APH-R680-404C-P

Storage Foundation for Cluster File System successfully registered on solaris
File System successfully registered on solaris2

Storage Foundation for Cluster File System successfully registered on solaris2
Do you wish to enter additional licenses? [y,n,q,b] (n) n

Would you like to configure SFCFSHA on solaris solaris2? [y,n,q] (n) y

I/O Fencing

It needs to be determined at this time if you plan to configure I/O Fencing in enabled or disabled mode, as well as help in determining the number of network interconnects (NICS) required on your systems. If you configure I/O Fencing in enabled mode, only a single NIC is required, though at least two are recommended.

A split brain can occur if servers within the cluster become unable to communicate for any number of reasons. If I/O Fencing is not enabled, you run the risk of data corruption should a split brain occur. Therefore, to avoid data corruption due to split brain in CFS environments, I/O Fencing has to be enabled.

If you do not enable I/O Fencing, you do so at your own risk

See the Administrator's Guide for more information on I/O Fencing

Do you want to configure I/O Fencing in enabled mode? [y,n,q,?] (y) y

                          Symantec Storage Foundation Cluster File System HA 6.1 Install Program
                                                     solaris solaris2

To configure VCS, answer the set of questions on the next screen.

When [b] is presented after a question, 'b' may be entered to go back to the first question of the configuration set.

When [?] is presented after a question, '?' may be entered for help or additional information about the question.

Following each set of questions, the information you have entered will be presented for confirmation.  To repeat the set of questions and correct any previous errors, enter 'n' at the confirmation prompt.

No configuration changes are made to the systems until all configuration questions are completed and confirmed.

Press [Enter] to continue:

                          Symantec Storage Foundation Cluster File System HA 6.1 Install Program
                                                     solaris solaris2

To configure VCS for SFCFSHA the following information is required:

  A unique cluster name
  Two or more NICs per system used for heartbeat links
  A unique cluster ID number between 0-65535

  One or more heartbeat links are configured as private links
  You can configure one heartbeat link as a low-priority link

All systems are being configured to create one cluster.

Enter the unique cluster name: [q,?] MYSOL

                          Symantec Storage Foundation Cluster File System HA 6.1 Install Program
                                                     solaris solaris2

     1)  Configure the heartbeat links using LLT over Ethernet
     2)  Configure the heartbeat links using LLT over UDP
     3)  Automatically detect configuration for LLT over Ethernet
     b)  Back to previous menu

How would you like to configure heartbeat links? [1-3,b,q,?] (3) 1

Discovering NICs on solaris ............................................. Discovered net0 net1 net2 net3 net4

Enter the NIC for the first private heartbeat link on solaris: [b,q,?] (net0) net3

Would you like to configure a second private heartbeat link? [y,n,q,b,?] (n) y

Enter the NIC for the second private heartbeat link on solaris: [b,q,?] (net0) net4

Would you like to configure a third private heartbeat link? [y,n,q,b,?] (n)

Do you want to configure an additional low-priority heartbeat link? [y,n,q,b,?] (n) y

Enter the NIC for the low-priority heartbeat link on solaris: [b,q,?] (net2) net0

Are you using the same NICs for private heartbeat links on all systems? [y,n,q,b,?] (y)
    Checking media speed for net3 on solaris .............................. Not Applicable (Virtual Device)
    Checking media speed for net4 on solaris .............................. Not Applicable (Virtual Device)
    Checking media speed for net3 on solaris2 ............................ Not Applicable (Virtual Device)
    Checking media speed for net4 on solaris2 ............................ Not Applicable (Virtual Device)

Enter a unique cluster ID number between 0-65535: [b,q,?] (50864) 64325

The cluster cannot be configured if the cluster ID 64325 is in use by another cluster. Installer can perform a check to determine if the cluster ID is duplicate. The check will take less than a minute to complete.

Would you like to check if the cluster ID is in use by another cluster? [y,n,q] (y) y

    Checking cluster ID ......................................................................................... Done

Duplicated cluster ID detection passed. The cluster ID 64325 can be used for the cluster.

Press [Enter] to continue:

                          Symantec Storage Foundation Cluster File System HA 6.1 Install Program
                                                     solaris solaris2

Cluster information verification:

        Cluster Name:      MYSOL
        Cluster ID Number: 64325

        Private Heartbeat NICs for solaris:
                link1=net3
                link2=net4
        Low-Priority Heartbeat NIC for solaris:
                link-lowpri1=net0

        Private Heartbeat NICs for solaris2:
                link1=net3
                link2=net4
        Low-Priority Heartbeat NIC for solaris2:
                link-lowpri1=net0

Is this information correct? [y,n,q,?] (y) y

                          Symantec Storage Foundation Cluster File System HA 6.1 Install Program
                                                     solaris solaris2

The following data is required to configure the Virtual IP of the Cluster:

        A public NIC used by each system in the cluster
        A Virtual IP address and netmask

Do you want to configure the Virtual IP? [y,n,q,?] (n) n

                          Symantec Storage Foundation Cluster File System HA 6.1 Install Program
                                                     solaris solaris2

Symantec Cluster Server can be configured in secure mode

Running VCS in Secure Mode guarantees that all inter-system communication is encrypted, and users are verified with security credentials.

When running VCS in Secure Mode, NIS and system usernames and passwords are used to verify identity. VCS usernames and passwords are no longer utilized when a cluster is running in Secure Mode.

Would you like to configure the VCS cluster in secure mode? [y,n,q,?] (n) n

                          Symantec Storage Foundation Cluster File System HA 6.1 Install Program
                                                     solaris solaris2

The following information is required to add VCS users:

        A user name
        A password for the user
        User privileges (Administrator, Operator, or Guest)

Do you wish to accept the default cluster credentials of 'admin/password'? [y,n,q] (y) y

Do you want to add another user to the cluster? [y,n,q] (n) n

                       Symantec Storage Foundation Cluster File System HA 6.1 Install Program
                                                     solaris solaris2

VCS User verification:

        User: admin         Privilege: Administrators
        Passwords are not displayed

Is this information correct? [y,n,q] (y) y

                          Symantec Storage Foundation Cluster File System HA 6.1 Install Program
                                                     solaris solaris2

The following information is required to configure SMTP notification:

        The domain-based hostname of the SMTP server
        The email address of each SMTP recipient
        A minimum severity level of messages to send to each recipient

Do you want to configure SMTP notification? [y,n,q,?] (n) n

                          Symantec Storage Foundation Cluster File System HA 6.1 Install Program
                                                     solaris solaris2

The following information is required to configure SNMP notification:

        System names of SNMP consoles to receive VCS trap messages 
        SNMP trap daemon port numbers for each console
        A minimum severity level of messages to send to each console

Do you want to configure SNMP notification? [y,n,q,?] (n) n

All SFCFSHA processes that are currently running must be stopped

Do you want to stop SFCFSHA processes now? [y,n,q,?] (y) y

                          Symantec Storage Foundation Cluster File System HA 6.1 Install Program
                                                     solaris solaris2

Logs are being written to /var/tmp/installer-201405031548hnG while installer is in progress

    Stopping SFCFSHA: 100%

    Estimated time remaining: (mm:ss) 0:00                                                                    10 of 10

    Performing SFCFSHA prestop tasks ............................................................................ Done
    Stopping vxgms .............................................................................................. Done
    Stopping vxglm .............................................................................................. Done
    Stopping vxcpserv ........................................................................................... Done
    Stopping had ................................................................................................ Done
    Stopping CmdServer .......................................................................................... Done
    Stopping amf ................................................................................................ Done
    Stopping vxfen .............................................................................................. Done
    Stopping gab ................................................................................................ Done
    Stopping llt ................................................................................................ Done

Symantec Storage Foundation Cluster File System HA Shutdown completed successfully

                          Symantec Storage Foundation Cluster File System HA 6.1 Install Program
                                                     solaris solaris2

Logs are being written to /var/tmp/installer-201405031548hnG while installer is in progress

    Starting SFCFSHA: 100%

    Estimated time remaining: (mm:ss) 0:00                                                                    23 of 23

    Starting vxio ............................................................................................... Done
    Starting vxspec ............................................................................................. Done
    Starting vxconfigd .......................................................................................... Done
    Starting vxesd .............................................................................................. Done
    Starting vxrelocd ........................................................................................... Done
    Starting vxcached ........................................................................................... Done
    Starting vxconfigbackupd .................................................................................... Done
    Starting vxattachd .......................................................................................... Done
    Starting vxportal ........................................................................................... Done
    Starting fdd ................................................................................................ Done
    Starting llt ................................................................................................ Done
    Starting gab ................................................................................................ Done
    Starting vxfen .............................................................................................. Done
    Starting amf ................................................................................................ Done
    Starting vxglm .............................................................................................. Done
    Starting had ................................................................................................ Done
    Starting CmdServer .......................................................................................... Done
    Starting vxdbd .............................................................................................. Done
    Starting vxgms .............................................................................................. Done
    Starting odm ................................................................................................ Done
    Performing SFCFSHA poststart tasks .......................................................................... Done

Symantec Storage Foundation Cluster File System HA Startup completed successfully

                                Symantec Storage Foundation Cluster File System HA 6.1 Install Program
                                                                            solaris solaris2

Fencing configuration
     1)  Configure Coordination Point client based fencing
     2)  Configure disk based fencing

Select the fencing mechanism to be configured in this Application Cluster: [1-2,q,?] 2

This I/O fencing configuration option requires a restart of VCS. Installer will stop VCS at a later stage in this run. Note that the service groups will be online only on the systems that are in the 'AutoStartList' after restarting VCS. Do you want to continue? [y,n,q,b,?] y

Do you have SCSI3 PR enabled disks? [y,n,q,b,?] (y)

Since you have selected to configure disk based fencing, you need to provide the existing disk group to be used as coordinator or create a new disk group for it.

Select one of the options below for fencing disk group:
     1)  Create a new disk group
     2)  Using an existing disk group
     b)  Back to previous menu

Enter the choice for a disk group: [1-2,b,q] 1

List of available disks to create a new disk group
A new disk group cannot be created as the number of available free VxVM CDS disks is 0 which is less than three. If there are disks available which are not under VxVM control, use the command vxdisksetup or use the installer to initialize them as VxVM disks.

Do you want to initialize more disks as VxVM disks? [y,n,q,b] (y)

List of disks which can be initialized as VxVM disks:
     1)  c3d0s2
     2)  c3d1s2
     3)  emc0_1b47
     4)  emc0_1b48
     5)  emc0_1b49
     b)  Back to previous menu

Enter the disk options, separated by spaces: [1-5,b,q] 3 4 5
    Initializing disk emc0_1b47 on solaris ................................................................. Done
    Initializing disk emc0_1b48 on solaris ................................................................. Done
    Initializing disk emc0_1b49 on solaris ................................................................. Done

     1)  emc0_1b47
     2)  emc0_1b48
     3)  emc0_1b49
     b)  Back to previous menu

Select odd number of disks and at least three disks to form a disk group. Enter the disk options, separated by spaces: [1-3,b,q] 1 2 3

Enter the new disk group name: [b] TEST_QUORUM
Created disk group TEST_QUORUM

Before you continue with configuration, Symantec recommends that you run the vxfentsthdw utility (I/O fencing test hardware utility), in a separate console, to test whether the shared storage supports I/O fencing.  You can access the utility at '/opt/VRTSvcs/vxfen/bin/vxfentsthdw'.

As per the 'vxfentsthdw' run you performed, do you want to continue with this disk group? [y,n,q] (y)

Using disk group TEST_QUORUM

Enter disk policy for the disk(s) (raw/dmp): [b,q,?] raw

                       Symantec Storage Foundation Cluster File System HA 6.1 Install Program
                                                                            solaris solaris2

I/O fencing configuration verification

        Disk Group: TEST_QUORUM
        Fencing disk policy: raw

Is this information correct? [y,n,q] (y)

Installer will stop VCS before applying fencing configuration. To make sure VCS shuts down successfully, unfreeze any frozen service group and unmount the mounted filesystems in the cluster.

Are you ready to stop VCS and apply fencing configuration on all nodes at this time? [y,n,q] (y)

    Stopping VCS on solaris2 ............................................................................... Done
    Stopping Fencing on solaris2 .......................................................................... Done
    Stopping VCS on solaris ................................................................................. Done
    Stopping Fencing on solaris ............................................................................ Done
    Starting Fencing on solaris ............................................................................. Done
    Starting Fencing on solaris2 ........................................................................... Done
    Updating main.cf with fencing ........................................................................ Done
    Starting VCS on solaris .................................................................................. Done
    Starting VCS on solaris2 ................................................................................ Done

The Coordination Point Agent monitors the registrations on the coordination points.
Do you want to configure Coordination Point Agent on the client cluster? [y,n,q] (y)
Enter a non-existing name for the service group for Coordination Point Agent: [b] (vxfen)

Additionally the Coordination Point Agent can also monitor changes to the Coordinator Disk Group constitution such as a disk being accidently deleted from the Coordinator Disk Group. The frequency of this detailed monitoring can be tuned with the LevelTwoMonitorFreq attribute.

For example, if you set this attribute to 5, the agent will monitor the Coordinator Disk Group constitution every five monitor cycles. If LevelTwoMonitorFreq attribute is not set, the agent will not monitor any changes to the Coordinator Disk Group.

Do you want to set LevelTwoMonitorFreq? [y,n,q] (y)
Enter the value of the LevelTwoMonitorFreq attribute(0 to 65535): [b,q,?] (5) 50

 Adding Coordination Point Agent via solaris ........................................................... Done

 I/O Fencing configuration ......................................................................................... Done

I/O Fencing configuration completed successfully
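
Once fencing is configured, its state can be cross-checked from either node with the standard utilities,

# vxfenadm -d             ----- fencing mode and current cluster membership
# gabconfig -a            ----- GAB port membership; port b indicates fencing is up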

The updates to VRTSaslapm package are released via the Symantec SORT web page: https://sort.symantec.com/asl. To make sure you have the latest version of VRTSaslapm
(for up to date ASLs and APMs), download and install the latest package from the SORT web page.

Checking online updates for Symantec Storage Foundation Cluster File System HA 6.1

    A connection attempt to https://sort.symantec.com to check for product updates failed.
    Visit https://sort.symantec.com to check for available product updates and information.

installer log files, summary file, and response file are saved at:

        /opt/VRTS/install/logs/installer-201405051543ein

Would you like to view the summary file? [y,n,q] (n)
root@solaris:/tmp/dvd1-sol_sparc/sol11_sparc#
root@solaris:/tmp/dvd1-sol_sparc/sol11_sparc#
root@solaris:/tmp/dvd1-sol_sparc/sol11_sparc#
root@solaris:/tmp/dvd1-sol_sparc/sol11_sparc#
root@solaris:/tmp/dvd1-sol_sparc/sol11_sparc# pkg info VRTSvcs
          Name: VRTSvcs
       Summary: Veritas Cluster Server by Symantec
   Description: The package contains Veritas Cluster Server by Symantec
      Category: Applications/System Utilities
         State: Installed
     Publisher: Symantec
       Version: 6.1.0.0
 Build Release: 5.11
        Branch: None
Packaging Date: October 21, 2013 08:38:13 PM
          Size: 236.57 MB
          FMRI: pkg://Symantec/VRTSvcs@6.1.0.0,5.11:20131021T203813Z
root@solaris:/tmp/dvd1-sol_sparc/sol11_sparc#

With this, we have completed the installation of the cluster software, and now it is time to rock n roll with the cluster configuration in the Veritas Cluster Console (VCC) GUI.
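
Before moving on to the GUI, a quick health check of the freshly built cluster from either node,

# lltstat -nvv            ----- LLT heartbeat link status for all nodes
# gabconfig -a            ----- GAB membership; port a = GAB, port h = had (VCS)
# hastatus -sum           ----- overall cluster, group and resource summary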

###############################################################################