Saturday, 16 August 2014

Veritas Cluster Concept !!!

Veritas Cluster Server (VCS) provides high availability for an application with minimal downtime. High availability clusters (HAC) improve application availability by failing applications over, or switching them over, within a group of systems.

A cluster can be built with a minimum of 2 nodes and a maximum of 32 nodes. With VCS we create service groups, and within each service group we create resources.

For each resource we define its attributes. We also define on which node a particular service group should be online. Sometimes, depending on the requirement, the same service group needs to be online on both nodes at the same time.

For this kind of sharing we use CFS (Cluster File System). The latest Veritas Cluster version at the time of writing is 6.1 (VxSFCFSHA).

Three types of clusters:

1) Failover
2) Parallel
3) Hybrid

Failover is the type in which, when a node goes down, the affected service group is switched onto another node. In this case we say that when the primary node is down, the application is switched to the failover node.

Parallel is when the service groups are online on all nodes at the same time, so the application stays online with effectively zero downtime.

Hybrid is a combination of both failover and parallel. Some service groups are shared and online on both nodes, while others are switched across nodes whenever a failover occurs.

Three types of Resources:

1) ON-Only
2) ON-OFF
3) Persistent

On-Only

VCS can start these resources but does not stop them.
For example, VCS requires NFS daemons to be running to export a file system. 
VCS starts the daemons if required, but does not stop them if the associated service group is taken offline.

On-Off

We can start and stop On-Off resources as required. For example, VCS imports a disk group when required and deports it when it is no longer needed.

Persistent

These resources cannot be brought online or taken offline. 
For example, a network interface card cannot be started or stopped, but it is required to configure an IP address. Failure of a Persistent resource triggers a service group failover.

Attribute and Resource Type:

A resource type states the purpose of a resource by its name.
For example, Mount, Volume, DiskGroup, Oracle, SAPNW04, NFSRestart, IP, and NIC are resource types.

An attribute is a value that tells a resource of a given type how to behave.
For example, a resource of type Mount has attributes such as the mount point, file system type, fsck options, and block device path.
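
As a rough sketch of how this looks in practice (the service group name mysap_sg and resource name mnt_mysap are only example names; the attribute names are the standard ones of the Mount agent), a Mount resource could be added and configured from the command line:

# haconf -makerw                                                  -> open the configuration for writing
# hares -add mnt_mysap Mount mysap_sg                             -> add a Mount resource (example names)
# hares -modify mnt_mysap MountPoint "/mysap"
# hares -modify mnt_mysap BlockDevice "/dev/vx/dsk/datadg/vol1"
# hares -modify mnt_mysap FSType vxfs
# hares -modify mnt_mysap FsckOpt "%-y"
# hares -modify mnt_mysap Enabled 1
# hares -display mnt_mysap                                        -> verify the attribute values
# haconf -dump -makero                                            -> save and close the configuration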

Low Latency Transmit Protocol (LLT) :

The main purpose of LLT is to transmit heartbeats. It sends heartbeats between
the nodes in a cluster at fixed intervals (every 0.5 sec on high-priority links and every 1 sec on low-priority links).
The /etc/llthosts file maps each node ID to the hostname of the corresponding cluster node.

Start/Stop LLT

# lltconfig -c       -> start LLT
# lltconfig -U       -> stop LLT (GAB needs to be stopped first)
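
For reference, a minimal sketch of the LLT node mapping and a status check (node1 and node2 are example hostnames; on a real cluster /etc/llthosts holds the node ID to hostname mapping of the actual nodes):

# cat /etc/llthosts
0 node1
1 node2

# lltstat -nvv | more       -> verbose status of every LLT link on every node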

Global Atomic Broadcast (GAB) :

Stands for Group membership services and atomic broadcast.

Group membership services : GAB tracks the heartbeats sent over LLT. If any node fails to send its heartbeat over LLT, GAB passes this information to the I/O fencing module, which takes further action to avoid a split-brain condition.

Atomic broadcast : atomic broadcast ensures that every node in the cluster has the same information about every resource and service group in the cluster.

# cat /etc/gabtab
/sbin/gabconfig -c -n 2       ==== command to start GAB.  " -n 2 " is the minimum number of nodes that must be communicating before VCS starts.

Start/Stop GAB

# gabconfig -c        -> start GAB
# gabconfig -U       -> stop GAB
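
To verify GAB membership, gabconfig -a is the usual check; the output below is only an illustrative sketch (generation numbers and membership bits will differ on a real cluster):

# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen a36e0001 membership 01       -> GAB membership: nodes 0 and 1 are communicating
Port h gen a36e0004 membership 01       -> HAD (VCS) membership: VCS is running on both nodes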

High Availability Daemon (HAD) :

HAD, the high availability daemon, is the main daemon; it manages the agents and service groups and maintains the resource configuration and state information.
The hashadow daemon monitors HAD and restarts it if it fails.

Start/Stop HAD

# hastart            -> start HAD
# hastop             -> stop HAD; comes with many options: "-all" stops HAD on all nodes, "-local" on the current node only.
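
A few routine checks and actions once HAD is up, as a sketch (the service group mysap_sg and node node2 are example names):

# hastatus -sum                           -> summary of system, service group and resource states
# hagrp -state mysap_sg                   -> state of one service group on every node
# hagrp -switch mysap_sg -to node2        -> manually switch a service group to another node
# hastop -all -force                      -> stop HAD on all nodes but leave the applications running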

Jeopardy and Split Brain Condition :

When a node in the cluster is left with only its last LLT link intact, the cluster places that node in a jeopardy membership, while nodes that still have more than one working LLT link remain in regular membership. Regular membership is thus still maintained among all nodes.

Coming to splitbrain condition,

Split brain occurs when all the LLT links fail simultaneously. A node then cannot tell whether the peer system has actually failed or only the interconnects have failed.

Each node assumes it is the only active node and tries to bring online the service groups of the other node, which it believes is down.

The same thing happens on the other node, which can lead to simultaneous access to the storage and cause data corruption.

I/O Fencing :

To avoid data corruption in a split-brain condition, we use I/O fencing. The I/O fencing driver uses SCSI-3 PGR (Persistent Group Reservations) to prevent the corruption.

When a possible split-brain scenario arises, each sub-cluster races for control of the designated coordinator (quorum) disks. The sub-cluster that wins the race ejects the registrations of the losing nodes, so only one side can continue writing to the shared storage.
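
To check whether fencing is configured and active, these standard checks can be used (the file contents shown here are only illustrative):

# cat /etc/vxfendg            -> name of the coordinator disk group
vxfendg
# cat /etc/vxfenmode          -> fencing mode (e.g. scsi3) and disk policy
# vxfenadm -d                 -> displays the I/O fencing mode and current cluster membership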

###############################################################################

Friday, 15 August 2014

Veritas Netbackup Commands !!!

It is good to gain some knowledge of Veritas NetBackup at the command level.
Though the majority of the work in NetBackup is carried out through the GUI, it helps to have some command-line knowledge as well.

Few basic and commonly used commands of Netbackup are :


1.  available_media                                 ------> To view availability of media
2.  robtest                                         ------> To instruct the robot manually
3.  bpmedialist -p <poolname>                       ------> To view media assigned to a pool
4.  bpexpdate -m <media> -d 0                       ------> To scratch (expire) a media
5.  bpmedia -unfreeze/-freeze <media>               ------> To unfreeze/freeze a media
6.  bpdbjobs -report                                ------> To list all NetBackup jobs
7.  vmpool -listall                                 ------> To list all pools
8.  vmquery -m <media>                              ------> To list tape volume details
9.  vmchange -exp 12/31/06 23:59:58 -m <media ID>   ------> To change a tape's expiry date
10. vmchange -p <pool number> -m <media ID>         ------> To change a tape's media pool

All the commands listed above run from the Master Server only. The command to start a particular backup from a particular media server should be run on that Media Server.

bpbackup -p "policyname" -s "UserBackup" -L "progress log location" -S "master server"

Periodically we need to test our robotic arm. For this we use the robtest command, through which we can manually move tapes from slots to drives and from drives back to slots.

root@MSTSRVR # robtest
Configured robots with local control supporting test utilities:
  TLD(0)     robotic path = /dev/sg/c0tw500104f0009b5ea3l0

Robot Selection

---------------
  1)  TLD 0
  2)  none/quit
Enter choice: 1                      ------- To access tape library

Robot selected: TLD(0)   robotic path = /dev/sg/c0tw500104f0009b5ea3l0


Invoking robotic test utility:

/usr/openv/volmgr/bin/tldtest -rn 0 -r /dev/sg/c0tw500104f0009b5ea3l0

Opening /dev/sg/c0tw500104f0009b5ea3l0

MODE_SENSE complete
Enter tld commands (? returns help information)
s d
drive 1 (addr 500) access = 0 Contains Cartridge = yes
Source address = 1093 (slot 94)
Barcode = RS1012
drive 2 (addr 501) access = 0 Contains Cartridge = yes
Source address = 1032 (slot 33)
Barcode = L21768
drive 3 (addr 502) access = 0 Contains Cartridge = yes
Source address = 1200 (slot 201)
Barcode = FJ0301
drive 4 (addr 503) access = 0 Contains Cartridge = yes
Source address = 1034 (slot 35)
Barcode = L21752
drive 5 (addr 504) access = 1 Contains Cartridge = no
drive 6 (addr 505) access = 1 Contains Cartridge = no
drive 7 (addr 506) access = 1 Contains Cartridge = no
drive 8 (addr 507) access = 1 Contains Cartridge = no
drive 9 (addr 508) access = 1 Contains Cartridge = no
drive 10 (addr 509) access = 1 Contains Cartridge = no
READ_ELEMENT_STATUS complete
q

Robot Selection

---------------
  1)  TLD 0
  2)  none/quit
Enter choice:
root@MSTSRVR #

To check the availability of media; this displays media from all pools.

root@MSTSRVR # available_media | more
media   media   robot   robot   robot   side/   ret    size     status/
 ID     type    type      #     slot    face    level  KBytes    multiplexed
----------------------------------------------------------------------------
MYSAPSRV_M02_FRI pool

CI0688  HCART3   TLD      0      111      -       0   63628256     ACTIVE


MYSAPSRV_M02_MON pool


CI0684  HCART3   TLD      0       53      -       0   63636416     ACTIVE


MYSAPSRV_M02_SAT pool


CI0689  HCART3   TLD      0      110      -       0   63602944     ACTIVE


MYSAPSRV_M02_SUN pool


CI0690  HCART3   TLD      0      109      -       0   63646112     ACTIVE


MYSAPSRV_M02_THU pool


CI0687  HCART3   TLD      0      112      -       0   63611488     ACTIVE


MYSAPSRV_M02_TUE pool


CI0685  HCART3   TLD      0       52      -       0   63601472     ACTIVE


MYSAPSRV_M02_WED pool


--More--          OUTPUT TRUNCATED

root@MSTSRVR #


To view media assigned to a particular pool.

root@MSTSRVR # bpmedialist -p Oraclesrvr_M01_DAILY
Server Host = MED1SRVR

 id     rl  images   allocated        last updated      density  kbytes restores

           vimages   expiration       last read         <------- STATUS ------->
           On Hold
--------------------------------------------------------------------------------
077100   0      2   11/26/2014 08:10  11/26/2014 08:10  hcart3  1237892608     0
                2   12/03/2014 08:10        N/A         FULL
           0

FJ0302   0      2   11/23/2014 14:04  11/23/2014 14:04  hcart3  1234659328     0

                2   11/30/2014 14:04        N/A         FULL
           0

FJ0303   0      3   11/24/2014 18:39  11/25/2014 11:26  hcart3  1208266624     0

                3   12/02/2014 11:26        N/A         FULL
           0

L21752   0      0   11/29/2014 08:38  11/29/2014 08:38  hcart3           0     0

                0   12/06/2014 08:38        N/A       
           0

L21753   0      2   11/24/2014 18:39  11/24/2014 18:39  hcart3  1349768704     0

                2   12/01/2014 18:39        N/A         FULL
           0

$$$$$$$$     &&&     OUTPUT TRUNCATED     &&&   $$$$$$$$$$$$


RS1012   0      0   11/29/2014 08:38  11/29/2014 08:38  hcart3           0     0

                0   12/06/2014 08:38        N/A       
           0
root@MSTSRVR #

Another command that displays media details in a different form; here we can get the info for a particular pool. Below is the media info for the pool used by my server's Friday backup schedule:

root@MSTSRVR # vmquery -pn MYSAPSRV_M02_FRI
================================================================================
media ID:              CI0688
media type:            1/2" cartridge tape 3 (24)
barcode:               CI0688
media description:     Added by Media Manager
volume pool:           MYSAPSRV_M02_FRI (91)
robot type:            TLD - Tape Library DLT (8)
robot number:          0
robot slot:            111
robot control host:    MSTSRVR
volume group:          000_00000_TLD
vault name:            ---
vault sent date:       ---
vault return date:     ---
vault slot:            ---
vault session id:      ---
vault container id:    -
created:               Wed Oct 08 19:58:15 2008
assigned:              Fri Nov 28 02:56:24 2014
last mounted:          Fri Nov 28 02:57:21 2014
first mount:           Sat Mar 28 00:37:54 2009
expiration date:       ---
number of mounts:      298
max mounts allowed:    ---
status:                0x0
================================================================================
root@MSTSRVR #

Media info for a given media ID:


root@MSTSRVR # bpmedialist -ev CI0688
Server Host = MED2SRVR

 id     rl  images   allocated        last updated      density  kbytes restores

           vimages   expiration       last read         <------- STATUS ------->
           On Hold
--------------------------------------------------------------------------------
CI0688   0      1   11/28/2014 02:56  11/28/2014 02:56  hcart3    63628256     0
                1   12/05/2014 02:56        N/A       
           0

root@MSTSRVR #


To view reports of which backups are in progress and which are completed; it even shows which ones failed and which are queued.

root@MSTSRVR # bpdbjobs -report | more
JobID         Type  State Status              Policy               Schedule     Client Dest Media Svr
Active PID FATPipe
74692       Backup Queued              MYSRV1_Daily    MYSRV1_MED01_DAILY MED1SRVR               
                 
74691       Backup Active              MYSRV1_Daily    MYSRV1_MED01_DAILY MED1SRVR     MED1SRVR
     11998      No
74690       Backup Active              MYSRV1_Daily    MYSRV1_MED01_DAILY MED1SRVR     MED1SRVR
     11974      No
74689       Backup Active              MYSRV1_Daily    MYSRV1_MED01_DAILY MED1SRVR     MED1SRVR
     11973      No
74688       Backup Active              MYSRV1_Daily    MYSRV1_MED01_DAILY MED1SRVR     MED1SRVR
     11957      No
74687       Backup Active              MYSRV1_Daily                     - MED1SRVR     MED1SRVR
                No
74686 Image Delete   Done      1                                                                     
      6939       
74685       Backup   Done      0  MYSAPSRV_BCV_MED02 MYSAPSRV_BCV_MED02_SAT MED2SRVR     MED2SRVR
     25690      No
74684       Backup   Done      0  MYSAPSRV_BCV_MED02                      - MED2SRVR     MED2SRVR
                No
74683 Image Delete   Done      1                                                                     
      8993       
74682 Image Delete   Done      1                                                                     
     13972       
74681       Backup   Done      0       MYSRV1_Daily    MYSRV1_MED01_DAILY MED1SRVR     MED1SRVR
     23514      No
74680       Backup   Done      0       MYSRV1_Daily    MYSRV1_MED01_DAILY MED1SRVR     MED1SRVR
     15476      No
74679       Backup   Done      0       MYSRV1_Daily    MYSRV1_MED01_DAILY MED1SRVR     MED1SRVR
--More--

root@MSTSRVR #

root@MSTSRVR # bpdbjobs | grep -i active           ------- we can use grep to show only the active backup jobs.
JobID         Type  State Status              Policy               Schedule     Client Dest Media Svr Active PID FATPipe
74691       Backup Active              MYSRV1_Daily    MYSRV1_MED01_DAILY MED1SRVR     MED1SRVR      11998      No
74690       Backup Active              MYSRV1_Daily    MYSRV1_MED01_DAILY MED1SRVR     MED1SRVR      11974      No
74689       Backup Active              MYSRV1_Daily    MYSRV1_MED01_DAILY MED1SRVR     MED1SRVR      11973      No
74688       Backup Active              MYSRV1_Daily    MYSRV1_MED01_DAILY MED1SRVR     MED1SRVR      11957      No
74687       Backup Active              MYSRV1_Daily                     - MED1SRVR     MED1SRVR                 No
root@MSTSRVR #
root@MSTSRVR #
root@MSTSRVR # bpdbjobs | grep -i done | more
74686 Image Delete   Done      1                                                                     
      6939       
74685       Backup   Done      0  MYSAPSRV_BCV_MED02 MYSAPSRV_BCV_MED02_SAT MED2SRVR     MED2SRVR
     25690      No
74684       Backup   Done      0  MYSAPSRV_BCV_MED02                      - MED2SRVR     MED2SRVR
                No
74683 Image Delete   Done      1                                                                     
      8993       
74682 Image Delete   Done      1                                                                     
     13972       
74681       Backup   Done      0        MYSRV1_Daily    MYSRV1_MED01_DAILY MED1SRVR     MED1SRVR
     23514      No
74680       Backup   Done      0        MYSRV1_Daily    MYSRV1_MED01_DAILY MED1SRVR     MED1SRVR
     15476      No
74674       Backup   Done      0   MYORA_MED02_DAILY    MYORA_MED02_DAILY  MED2SRVR     MED2SRVR
     11837      No
74673       Backup   Done      0   MYORA_MED02_DAILY    MYORA_MED02_DAILY  MED2SRVR     MED2SRVR
     10218      No

root@MSTSRVR #


To start a backup for a particular policy; this is done from the media server only.
In the scenario below, my server's full backup is scheduled on the Media-1 server, so I run the command from the Media-1 server.

SYNTAX : bpbackup -p "policyname" -s "UserBackup" -L "progress log location" -S "master server"

root@MED1SRVR # bpbackup -p Oraclesrvr_Daily -s Oraclesrvr_MED1_Daily -S MSTSRVR
root@MED1SRVR #

Here my policy name is " Oraclesrvr_Daily "
My schedule name is " Oraclesrvr_MED1_Daily " .... Let's check whether it has started or not:

root@MSTSRVR # bpdbjobs | grep -i active      
JobID         Type  State Status              Policy                 Schedule     Client Dest Media Svr Active PID FATPipe
74691       Backup Active           Oraclesrvr_Daily    Oraclesrvr_MED1_Daily   MED1SRVR       MED1SRVR      11998      No
root@MSTSRVR #
root@MSTSRVR #

If we want to move the entire media database (pool names, schedules, volumes) from the existing media server to a new media server, we can achieve this with one command:

SYNTAX : bpmedia -movedb -allvolumes -newserver <media server> -oldserver <media server>

Similarly, if we want to move an individual allocated media:


SYNTAX : bpmedia -movedb -m <media_id> -newserver <media server>

These are a few NetBackup commands which I wanted to learn so that we can use them in our daily work. Sometimes our NetBackup console does not open properly, so I felt it was worth learning these.


#################################################################################

Wednesday, 6 August 2014

Veritas Snapshots !!!

A Veritas snapshot is used to create a snap of a particular volume. A snap represents the data that exists in a volume at a given point in time. Using a snapshot we can roll back the current state of a DG, and we can also create a copy of the file system as it was at that particular point in time.

# bash
bash-3.2#
bash-3.2#
bash-3.2# df -kh
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/s10s_u11wos_24a
                        15G   5.3G   5.8G    48%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   7.3G   464K   7.3G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
swap                   7.3G     0K   7.3G     0%    /dev/vx/dmp
swap                   7.3G     0K   7.3G     0%    /dev/vx/rdmp
/platform/SUNW,SPARC-Enterprise-T5120/lib/libc_psr/libc_psr_hwcap2.so.1
                        11G   5.3G   5.8G    48%    /platform/sun4v/lib/libc_psr.so.1
/platform/SUNW,SPARC-Enterprise-T5120/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
                        11G   5.3G   5.8G    48%    /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   7.3G    32K   7.3G     1%    /tmp
swap                   7.3G    40K   7.3G     1%    /var/run
rpool/export            15G    32K   5.8G     1%    /export
rpool/export/home       15G    31K   5.8G     1%    /export/home
rpool                   15G   106K   5.8G     1%    /rpool
/dev/odm                 0K     0K     0K     0%    /dev/odm
/dev/vx/dsk/datadg/vol1
                        20G   2.4G    16G    13%    /mysap

bash-3.2#
bash-3.2# cd /mysap
bash-3.2#
bash-3.2# ls -lrth
total 4194320
drwxr-xr-x   7 root     root          96 Sep  4  2013 ASCS03
drwxr-xr-x   2 root     root          96 Jul 19 05:52 lost+found
-rw-r--r--   1 root     root        2.0G Jul 21 10:31 pacct
bash-3.2#
bash-3.2# vxprint -ht
Disk group: datadg

dg datadg       default      default  10000    1405729182.10.test1

dm disk1        emc_clariion0_192 auto 65535   142524320 -
dm disk2        emc_clariion0_194 auto 65535   142524320 -

v  vol1         -            ENABLED  ACTIVE   41943040 SELECT    -        fsgen
pl vol1-01      vol1         ENABLED  ACTIVE   41943040 CONCAT    -        RW
sd disk2-01     vol1-01      disk2    0        41943040 0         emc_clariion0_194 ENA
bash-3.2#

For a snapshot, we first need a snap plex of the volume (created with snapstart); only then can we take the snapshot itself.
Now let us start the snap.....


bash-3.2# vxassist -g datadg snapstart vol1
bash-3.2#
bash-3.2#
bash-3.2# vxprint -ht
Disk group: datadg

dg datadg       default      default  10000    1405729182.10.test1

dm disk1        emc_clariion0_192 auto 65535   142524320 -
dm disk2        emc_clariion0_194 auto 65535   142524320 -

v  vol1         -            ENABLED  ACTIVE   41943040 SELECT    -        fsgen
pl vol1-01      vol1         ENABLED  ACTIVE   41943040 CONCAT    -        RW
sd disk2-01     vol1-01      disk2    0        41943040 0         emc_clariion0_194 ENA
pl vol1-02      vol1         ENABLED  SNAPDONE 41943040 CONCAT    -        WO
sd disk1-01     vol1-02      disk1    0        41943040 0         emc_clariion0_192 ENA
bash-3.2#

In the above output we can observe that the snap plex is in SNAPDONE state, so we are now ready to create a snapshot from the snap we have already taken.

bash-3.2#
bash-3.2#
bash-3.2# vxassist -g datadg snapshot vol1 snap-vol1
bash-3.2#
bash-3.2#
bash-3.2# vxprint -ht
Disk group: datadg

dg datadg       default      default  10000    1405729182.10.test1

dm disk1        emc_clariion0_192 auto 65535   142524320 -
dm disk2        emc_clariion0_194 auto 65535   142524320 -

v  snap-vol1    -            ENABLED  ACTIVE   41943040 ROUND     -        fsgen
pl vol1-02      snap-vol1    ENABLED  ACTIVE   41943040 CONCAT    -        RW
sd disk1-01     vol1-02      disk1    0        41943040 0         emc_clariion0_192 ENA


v  vol1         -            ENABLED  ACTIVE   41943040 SELECT    -        fsgen
pl vol1-01      vol1         ENABLED  ACTIVE   41943040 CONCAT    -        RW
sd disk2-01     vol1-01      disk2    0        41943040 0         emc_clariion0_194 ENA
bash-3.2#

With this, the snapshot of the volume is complete; the snapshot now acts as an individual volume. We can even mount this volume as a file system.


bash-3.2#
bash-3.2# mount -F vxfs /dev/vx/dsk/datadg/snap-vol1 /mnt
bash-3.2#
bash-3.2#
bash-3.2# df -kh
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/s10s_u11wos_24a
                        15G   5.3G   5.8G    48%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   7.3G   464K   7.3G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
swap                   7.3G     0K   7.3G     0%    /dev/vx/dmp
swap                   7.3G     0K   7.3G     0%    /dev/vx/rdmp
/platform/SUNW,SPARC-Enterprise-T5120/lib/libc_psr/libc_psr_hwcap2.so.1
                        11G   5.3G   5.8G    48%    /platform/sun4v/lib/libc_psr.so.1
/platform/SUNW,SPARC-Enterprise-T5120/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
                        11G   5.3G   5.8G    48%    /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   7.3G    32K   7.3G     1%    /tmp
swap                   7.3G    40K   7.3G     1%    /var/run
rpool/export            15G    32K   5.8G     1%    /export
rpool/export/home       15G    31K   5.8G     1%    /export/home
rpool                   15G   106K   5.8G     1%    /rpool
/dev/odm                 0K     0K     0K     0%    /dev/odm
/dev/vx/dsk/datadg/vol1
                        20G   2.4G    16G    13%    /mysap
/dev/vx/dsk/datadg/snap-vol1
                        20G   2.4G    16G    13%    /mnt

bash-3.2#

Some more useful commands related to vxsnaps:

To take a snapshot of all volumes in a DG....


bash-3.2# vxassist -g datadg -o allvols snapshot

To clear a snap association.... (this breaks the link between the snapshot and its original volume; the snapshot volume itself remains as an independent volume)


bash-3.2# vxassist -g datadg snapclear snap-vol1
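
If we want to reattach the snapshot to its original volume instead, vxassist snapback can be used. A quick sketch (unmount the snapshot file system first):

bash-3.2# umount /mnt
bash-3.2# vxassist -g datadg snapback snap-vol1

or, to roll the original volume back to the snapshot's contents (resync vol1 FROM snap-vol1):

bash-3.2# vxassist -g datadg -o resyncfromreplica snapback snap-vol1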

Friday, 18 July 2014

DATA MIGRATION IN VERITAS !!!

It is time to move on. All the old hardware needs to be replaced with new hardware, including the storage boxes.
.......... Need of Data Migration ........
In such a case we need to move our data from the old setup to the new one. For this purpose we have two well-known methods in Veritas:

1) Using the vxevac command.
2) Creating a mirror of the existing volume.

vxevac command :

vxevac is used to evacuate data from one disk to another disk.

Example : vxevac -g dgname olddisk newdisk

Now let us go through the steps in brief:

root@mydev1 # vxdisk -oalldgs list
DEVICE       TYPE            DISK         GROUP        STATUS
disk_0       auto:ZFS        -            -            ZFS
emcpower0    auto:cdsdisk    -            (vxfendg)    online
emcpower1    auto:cdsdisk    -            (vxfendg)    online
emcpower2    auto:cdsdisk    -            (vxfendg)    online
emcpower3    auto:cdsdisk    datadg01     datadg       online
emcpower4    auto:cdsdisk    datadg02     mydg         online
root@mydev1 # 
root@mydev1 # 

In the above output, my existing datadg contains a subdisk on the old LUN. To move this data to our new storage we need to bring the new LUN under this DG's control, so we add the new LUNs to the existing DGs.

root@mydev1 #
root@mydev1 # vxdg -g datadg adddisk datadg03=emcpower5
root@mydev1 #
root@mydev1 # vxdg -g mydg adddisk datadg04=emcpower6
root@mydev1 # 

We will use mydg for the 2nd method.

root@mydev1 # 
root@mydev1 # vxdisk -oalldgs list
DEVICE       TYPE            DISK         GROUP        STATUS
disk_0       auto:ZFS        -            -            ZFS
emcpower0    auto:cdsdisk    -            (vxfendg)    online
emcpower1    auto:cdsdisk    -            (vxfendg)    online
emcpower2    auto:cdsdisk    -            (vxfendg)    online
emcpower3    auto:cdsdisk    datadg01     datadg       online
emcpower4    auto:cdsdisk    datadg02     mydg         online
emcpower5    auto:cdsdisk    datadg03     datadg       online
emcpower6    auto:cdsdisk    datadg04     mydg         online
root@mydev1 #  
root@mydev1 # 
root@mydev1 # vxprint -htg datadg
Disk group: datadg

dg datadg       default      default  1000     1336573086.38.mydev1

dm datadg01     emcpower3    auto     65536    2027264  -
dm datadg03     emcpower5    auto     65536    2027264  -

v  vol1         -            ENABLED  ACTIVE   204800   SELECT    -          fsgen
pl vol1-01      vol1         ENABLED  ACTIVE   204800   CONCAT    -          RW
sd datadg01-01  vol1-01      datadg01 102400   204800   0         emcpower3  ENA

v  vol2         -            ENABLED  ACTIVE   204800   SELECT    -          fsgen
pl vol2-01      vol2         ENABLED  ACTIVE   204800   CONCAT    -          RW
sd datadg01-02  vol2-01      datadg01 307200   204800   0         emcpower3  ENA
root@mydev1 #  

So, to evacuate: vxevac -g datadg datadg01 datadg03

root@mydev1 # 
root@mydev1 # vxevac -g datadg datadg01 datadg03
root@mydev1 # 

Now check the status of DG :

root@mydev1 # vxprint -htg datadg
Disk group: datadg

dg datadg       default      default  1000     1336573086.38.mydev1

dm datadg01     emcpower3    auto     65536    2027264  -
dm datadg03     emcpower5    auto     65536    2027264  -

v  vol1         -            ENABLED  ACTIVE   204800   SELECT    -          fsgen
pl vol1-01      vol1         ENABLED  ACTIVE   204800   CONCAT    -          RW
sd datadg03-01  vol1-01      datadg03 102400   204800   0         emcpower5  ENA

v  vol2         -            ENABLED  ACTIVE   204800   SELECT    -          fsgen
pl vol2-01      vol2         ENABLED  ACTIVE   204800   CONCAT    -          RW
sd datadg03-02  vol2-01      datadg03 307200   204800   0         emcpower5  ENA
root@mydev1 # 
root@mydev1 # 

Now we can remove the old LUN from our datadg.
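
A minimal sketch of that cleanup, assuming no subdisk is left on the old disk (emcpower3 is the device behind datadg01 in the listing above):

root@mydev1 # vxdg -g datadg rmdisk datadg01              -> remove the old disk from the disk group
root@mydev1 # /etc/vx/bin/vxdiskunsetup emcpower3         -> remove the disk from VxVM control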

Mirroring a volume :

In this method we use vxassist to create a mirror of the volume and then remove the old plex.
For this method let us use our 2nd DG, mydg.

root@mydev1 # vxdisk -oalldgs list
DEVICE       TYPE            DISK         GROUP        STATUS
disk_0       auto:ZFS        -            -            ZFS
emcpower0    auto:cdsdisk    -            (vxfendg)    online
emcpower1    auto:cdsdisk    -            (vxfendg)    online
emcpower2    auto:cdsdisk    -            (vxfendg)    online
emcpower3    auto:cdsdisk    datadg01     datadg       online
emcpower4    auto:cdsdisk    datadg02     mydg         online
emcpower5    auto:cdsdisk    datadg03     datadg       online
emcpower6    auto:cdsdisk    datadg04     mydg         online
root@mydev1 # 

We already added the new LUN emcpower6 (datadg04) earlier, so we can straight away create the mirror...

root@mydev1 # vxprint -htg mydg
Disk group: mydg

dg datadg       default      default  1000     1336573086.38.Server101

dm datadg02     emcpower4    auto     65536    2027264  -
dm datadg04     emcpower6    auto     65536    2027264  -

v  locks         -            ENABLED  ACTIVE   102400   SELECT    -          fsgen
pl locks-01      locks        ENABLED  ACTIVE   102400   CONCAT    -          RW
sd datadg02-01   locks-01     datadg02 0        102400   0         emcpower4  ENA 
root@mydev1 # 

Now we create a mirror of the existing volume...

root@mydev1 # 
root@mydev1 #  vxassist -b -g mydg mirror locks alloc=datadg04
root@mydev1 # 
root@mydev1 #  vxprint -htg mydg
Disk group: mydg
dg datadg       default      default  1000     1336573086.38.mydev1
dm datadg02     emcpower4       auto     65536    2027264  -
dm datadg04     emcpower6       auto     65536    2027264  -
v  locks        -            ENABLED  ACTIVE   102400   SELECT    -          fsgen
pl locks-02     locks        ENABLED  ACTIVE   102400   CONCAT    -          RW
sd datadg04-01  locks-02     datadg04 0        102400   0         emcpower6  ENA
pl locks-01     locks        ENABLED  ACTIVE   102400   CONCAT    -          RW
sd datadg02-01  locks-01     datadg02 0        102400   0         emcpower4  ENA
root@mydev1 # 
root@mydev1 #

We can use vxtask to check the status of the sync.

root@mydev1 #
root@mydev1 # vxtask list
TASKID  PTID TYPE/STATE    PCT   PROGRESS
   164     -     ATCOPY/R 35.00% 0/819200/286720 PLXATT engvol engvol-02 mydg smartmove auto-throttled
root@mydev1 #
root@mydev1 # vxtask list
TASKID  PTID TYPE/STATE    PCT   PROGRESS
   164     -     ATCOPY/R 48.00% 0/819200/393216 PLXATT engvol engvol-02 mydg smartmove auto-throttled
root@mydev1 #
root@mydev1 #

As soon as the sync reaches 100%, we can remove the old plex so that the data now resides only on the new LUN.

root@mydev1 # vxplex -g mydg -o rm dis locks-01
root@mydev1 #

Thus we can perform data migration from an old LUN to a new LUN through Veritas.

################################################################################

VXCONFIGBACKUP & VXCONFIGRESTORE !!!

It is always better to take a vxconfigbackup of critical systems at regular intervals.
When the configuration data of a DG (such as the private region or disk headers) gets corrupted, we can restore it from the vxconfigbackup files using vxconfigrestore.

Let me start with vxconfigbackup :

vxconfigbackup takes a backup of the entire configuration information for one or more disk groups.

Full Path : /etc/vx/bin/vxconfigbackup
Default location of backup : /etc/vx/cbr/bk/

EXAMPLE : 
vxconfigbackup  --- Takes a config backup of all DGs and saves it in the default location.
vxconfigbackup dgname ---  Takes a config backup of a particular DG and saves it in the default location.
vxconfigbackup -l directory --- Takes a config backup of all DGs and saves it in the given location.
vxconfigbackup -l directory dgname/dgid --- Takes a config backup of a particular DG and saves it in the given location.

vxconfigbackup backs up configuration information only; it has nothing to do with backing up the data itself.
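
Since the whole point is to take these backups at regular intervals, a cron entry is a simple way to automate it. This is only a sketch; the target directory /var/adm/vxcbr is a made-up example:

# crontab -l | grep vxconfigbackup
0 2 * * 0 /etc/vx/bin/vxconfigbackup -l /var/adm/vxcbr > /var/adm/vxcbr/vxcbr.log 2>&1     -> weekly config backup of all DGs (Sunday 02:00)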

root@mydev #
root@mydev # df -kh
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/MYDEV         34G    17G    17G    51%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   118G   1.9M   118G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap2.so.1
                        34G    17G    17G    51%    /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
                        34G    17G    17G    51%    /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   118G   144K   118G     1%    /tmp
swap                   118G    48K   118G     1%    /var/run
swap                   118G     0K   118G     0%    /dev/vx/dmp
swap                   118G     0K   118G     0%    /dev/vx/rdmp
/dev/vx/dsk/MYDEVDATA/oracle
                       1.1T   291M   1.1T     1%    /oracle
/dev/vx/dsk/MYDEVARCH/oraarch
                        68G    82M    64G     1%    /oracle/MYDEV/oraarch
/dev/vx/dsk/MYDEVCI/usrsap
                        30G    73M    28G     1%    /usr/sap
/dev/vx/dsk/MYDEVCI/sapmnt
                        25G    71M    23G     1%    /sapmnt
root@mydev #
root@mydev #
root@mydev # vxdg list
NAME         STATE           ID
MYDEVDATA      enabled,cds          1405420307.56.mydev
MYDEVARCH      enabled,cds          1405420362.58.mydev
MYDEVCI        enabled,cds          1405420374.60.mydev
root@mydev #
root@mydev #
root@mydev # vxconfigbackup -l /tmp/cidg MYDEVCI
Start backing up diskgroup MYDEVCI to /tmp/cidg/MYDEVCI.1405420374.60.mydev ...

VxVM  NOTICE V-5-2-3100 Backup complete for diskgroup: MYDEVCI
root@mydev #

So, as mentioned earlier:
If we want all DGs in a given location      : vxconfigbackup -l /tmp/cidg
If we want all DGs in the default location  : vxconfigbackup
If we want MYDEVCI in the default location  : vxconfigbackup MYDEVCI

root@mydev #
root@mydev # cd /tmp
root@mydev #
root@mydev # ls -lrth
total 80
drwxr-xr-x   2 noaccess noaccess     178 Jul 15 17:46 hsperfdata_noaccess
drwxr-xr-x   2 root     root         117 Jul 15 17:46 hsperfdata_root
drwx------   2 root     root         117 Jul 15 17:46 vx.025246.163531.034000.3760
drwxr-xr-x   3 root     root         199 Jul 15 18:19 cidg
drwx------   2 root     root         117 Jul 15 18:19 vx.043741.033410.164400.7489
root@mydev #
root@mydev #
root@mydev # cd cidg      ---- The directory I specified as the destination for the backup.
root@mydev #
root@mydev # ls -lrth
total 16
drwxr-xr-x   2 root     root         454 Jul 15 18:19 MYDEVCI.1405420374.60.mydev
root@mydev #
root@mydev # cd MYDEVCI.1405420374.60.mydev --- The directory that contains the entire config backup of the given DG.
root@mydev #
root@mydev # ls -lrth
total 48192
-rw-r--r--   1 root     root        1.3K Jul 15 18:19 1405420374.60.mydev.diskinfo     |
-rw-r--r--   1 root     root        7.5K Jul 15 18:19 1405420374.60.mydev.cfgrec        | files related to
-rw-r--r--   1 root     root         24M Jul 15 18:19 1405420374.60.mydev.binconfig  |  config info.
-rw-r--r--   1 root     root        4.1K Jul 15 18:19 1405420374.60.mydev.dginfo        |
root@mydev #
root@mydev #

Thus the config info backup was taken successfully....

If there is a situation where the config files get corrupted and the DG cannot be imported, then we can restore them from the vxconfigbackup.

vxconfigrestore comes with different options which help us choose what corrupted items need to be restored and what should be omitted.

root@mydev # vxconfigrestore
VxVM vxconfigrestore ERROR V-5-2-3450 Usage: vxconfigrestore [ -c | -d | -n | -p ] [ -l directory ] {dgname | dgid}
root@mydev #

-c  (Commit) To commit changes to the disks permanently.

-d  (Decommit) Abandons the restore operation while it is still at the precommit stage.

-n  (Precommit with no installation of VxVM disk headers) Loads the disk group configuration only. It specifies that corrupted private region headers should not be reinstalled.

-l  (directory) Specifies a directory other than the default (/etc/vx/cbr/bk) from which the backup configuration files are to be read.

-p  (Precommit: load) Loads the DG configuration into a precommit state and reinstalls the corrupted disk headers.

The actual DG configuration is not permanently restored until we choose to commit the changes.

root@mydev # vxdg import MYDEVCI
VxVM vxdg ERROR V-5-1-10978 Disk group MYDEVCI: import failed: 
Disk group has no valid configuration copies
root@mydev #

Let's try importing forcefully...

root@mydev # vxdg -Cf import MYDEVCI
VxVM vxdg ERROR V-5-1-10978 Disk group MYDEVCI: import failed: 
Disk group has no valid configuration copies
root@mydev #

Now let us restore the config from the backup which we took and saved in /tmp/cidg.....

root@mydev #
root@mydev # vxconfigrestore -p -l /tmp/cidg MYDEVCI
Diskgroup MYDEVCI configuration restoration started ......

Installing volume manager disk header for emcpower46 ...

MYDEVCI diskgroup configuration is restored (in precommit state).
Diskgroup can be accessed in read only and can be examined using
vxprint in this state.

Run:
  vxconfigrestore -c MYDEVCI ==> to commit the restoration.
  vxconfigrestore -d MYDEVCI ==> to abort the restoration.

root@mydev #
root@mydev #
root@mydev # vxconfigrestore -c MYDEVCI             ---- To commit the restoration.
Committing configuration restoration for diskgroup MYDEVCI ....

MYDEVCI diskgroup configuration restoration is committed.
root@mydev #
root@mydev #
root@mydev #
root@mydev # vxdg list
NAME         STATE           ID
MYDEVDATA      enabled,cds          1405420307.56.mydev
MYDEVARCH      enabled,cds          1405420362.58.mydev
MYDEVCI        enabled,cds          1405420374.60.mydev
root@mydev #
root@mydev #
root@mydev # vxdg deport MYDEVCI       ---- Now try to deport and re-import the DG once.
root@mydev #
root@mydev # vxdg list
NAME         STATE           ID
MYDEVDATA      enabled,cds          1405420307.56.mydev
MYDEVARCH      enabled,cds          1405420362.58.mydev
root@mydev #
root@mydev #
root@mydev # vxdg import MYDEVCI
root@mydev #
root@mydev # vxdg list
NAME         STATE           ID
MYDEVDATA      enabled,cds          1405420307.56.mydev
MYDEVARCH      enabled,cds          1405420362.58.mydev
MYDEVCI        enabled,cds          1405420374.60.mydev
root@mydev #

Hence the DG was imported successfully after restoring the config from the backup file.

################################################################################