
Sunday, 13 April 2014

ZFS (Zettabyte File System) and zpool creation !!!

ZFS is a combined file system and logical volume manager. It protects against data corruption: every block is checksummed, so silent corruption can be detected and, where redundancy exists, repaired automatically, which is why ZFS is called self-healing. ZFS unifies the concepts of filesystem and volume management and also provides snapshots and clones.

It supports very high storage capacities: the maximum file size is 16 exbibytes (2^64 bytes) and the maximum volume size is 256 zebibytes (2^78 bytes).

In ZFS we create zpools, and on a zpool we create filesystems. Compared to UFS, ZFS administration is much easier and simpler.
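To see that layering concretely: a pool is created first, and filesystems are then carved out of it with zfs create (a minimal sketch, not from the session below; tank, tank/data and the disk c1t0d0 are hypothetical names):

root@mysrv1 # zpool create tank c1t0d0
root@mysrv1 # zfs create tank/data          ------ filesystem draws its space from the pool and mounts itself at /tank/data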

Creating a filesystem is easy: there is no need to set up partitions and mount points by hand.
We simply create a zpool by adding disks to the pool. Since this is the first post on ZFS, let's start with simple tasks: just the creation of pools.

root@mysrv1 #
root@mysrv1 # echo | format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c0d0 <SUN-DiskSlice-10GB cyl 2062 alt 2 hd 24 sec 424>
          /virtual-devices@100/channel-devices@200/disk@0
Specify disk (enter its number): Specify disk (enter its number):
root@mysrv1 #
root@mysrv1 # mkfile 100m /dev/dsk/c5d1 /dev/dsk/c5d2 /dev/dsk/c5d3 /dev/dsk/c5d4 /dev/dsk/c5d5
root@mysrv1 #
root@mysrv1 # ls -lrth /dev/dsk/c5*
-rw------T   1 root     root        100M Apr 14 03:54 /dev/dsk/c5d1
-rw------T   1 root     root        100M Apr 14 03:54 /dev/dsk/c5d2
-rw------T   1 root     root        100M Apr 14 03:54 /dev/dsk/c5d3
-rw------T   1 root     root        100M Apr 14 03:54 /dev/dsk/c5d4
-rw------T   1 root     root        100M Apr 14 03:54 /dev/dsk/c5d5
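
(These are plain 100 MB files rather than real disks; ZFS accepts files as devices, which is handy for practice like this, though real pools should be built on whole disks.)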

root@mysrv1 #
root@mysrv1 # zpool list          ------ to view all zpools ("rpool" is the root pool that comes by default with a ZFS root installation)
NAME    SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
rpool  9.94G  6.12G  3.81G  61%  ONLINE  -

root@mysrv1 #

Now let us create a new zpool:

root@mysrv1 # zpool create neo c5d1 c5d2
root@mysrv1 #
root@mysrv1 # zpool list
NAME    SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
neo     190M    71K   190M   0%  ONLINE  -
rpool  9.94G  6.12G  3.81G  61%  ONLINE  -

root@mysrv1 #
root@mysrv1 # df -kh /neo          ------ a default mount point named after the pool is created automatically
Filesystem             size   used  avail capacity  Mounted on
neo                    158M    31K   158M     1%    /neo

root@mysrv1 #
root@mysrv1 #
root@mysrv1 # zpool status neo            ------ to view the status of the neo pool
  pool: neo
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        neo         ONLINE       0     0     0
          c5d1      ONLINE       0     0     0
          c5d2      ONLINE       0     0     0

errors: No known data errors
root@mysrv1 #
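
Note that neo, as created above, is a simple stripe across c5d1 and c5d2 with no redundancy: losing either disk loses the pool. If redundancy is needed, the pool could instead be built as a mirror or raidz (an illustrative command, not from this session):

root@mysrv1 # zpool create neo mirror c5d1 c5d2          ------ two-way mirror: half the capacity, survives one disk failure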

Adding a new disk to your zpool:

root@mysrv1 # zpool add neo c5d3
root@mysrv1 #
root@mysrv1 # zpool status neo
  pool: neo
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        neo         ONLINE       0     0     0
          c5d1      ONLINE       0     0     0
          c5d2      ONLINE       0     0     0
          c5d3      ONLINE       0     0     0

errors: No known data errors
root@mysrv1 #
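
A word of caution: zpool add permanently striped c5d3 into the pool as a new top-level device, and on Solaris 10 such a device cannot be removed again later. If the intent is to add redundancy to an existing disk rather than capacity, zpool attach is the right command (an illustrative command, not from this session; c5d5 is one of the files created earlier):

root@mysrv1 # zpool attach neo c5d1 c5d5          ------ makes c5d1/c5d5 a mirrored pair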

root@mysrv1 #

If you want to create a pool with your own mount point, you can specify it with the -m option.

root@mysrv1 #
root@mysrv1 # zpool create -m /pl1 pool c5d4
root@mysrv1 #
root@mysrv1 # df -kh
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/s10s_u11wos_24a
                       9.8G   6.1G   3.7G    63%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   5.0G   504K   5.0G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/platform/SUNW,Sun-Fire-T200/lib/libc_psr/libc_psr_hwcap1.so.1
                       9.8G   6.1G   3.7G    63%    /platform/sun4v/lib/libc_psr.so.1
/platform/SUNW,Sun-Fire-T200/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
                       9.8G   6.1G   3.7G    63%    /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   5.0G    32K   5.0G     1%    /tmp
swap                   5.0G    48K   5.0G     1%    /var/run
rpool/export           9.8G    32K   3.7G     1%    /export
rpool/export/home      9.8G    31K   3.7G     1%    /export/home
rpool                  9.8G   106K   3.7G     1%    /rpool

neo                    253M    31K   253M     1%    /neo
pool                    63M    31K    63M     1%    /pl1

root@mysrv1 #
root@mysrv1 #


In the above df -kh output, we can see that our pool name is pool and its mount point is /pl1.
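
The mount point is stored as a regular ZFS property, so it can also be checked or changed after creation (illustrative commands, not from this session; /pl2 is a hypothetical new path):

root@mysrv1 # zfs get mountpoint pool
root@mysrv1 # zfs set mountpoint=/pl2 pool          ------ ZFS remounts the filesystem at the new path automatically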

root@mysrv1 #
root@mysrv1 # zpool list
NAME    SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
neo     285M  95.5K   285M   0%  ONLINE  -
pool     95M   116K  94.9M   0%  ONLINE  -
rpool  9.94G  6.12G  3.81G  61%  ONLINE  -
root@mysrv1 #
root@mysrv1 # zpool status
  pool: neo
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        neo         ONLINE       0     0     0
          c5d1      ONLINE       0     0     0
          c5d2      ONLINE       0     0     0
          c5d3      ONLINE       0     0     0

errors: No known data errors
  pool: pool
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          c5d4      ONLINE       0     0     0

errors: No known data errors
  pool: rpool
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c0d0s0    ONLINE       0     0     0

errors: No known data errors
root@mysrv1 #
root@mysrv1 #


To destroy a pool:

root@mysrv1 #
root@mysrv1 # zpool list
NAME    SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
neo     285M  95.5K   285M   0%  ONLINE  -
pool     95M    94K  94.9M   0%  ONLINE  -
rpool  9.94G  6.12G  3.81G  61%  ONLINE  -
root@mysrv1 #
root@mysrv1 # zpool destroy pool
root@mysrv1 #                                                 ------ "pool" is destroyed...
root@mysrv1 #
root@mysrv1 # zpool list
NAME    SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
neo     285M  95.5K   285M   0%  ONLINE  -
rpool  9.94G  6.12G  3.81G  61%  ONLINE  -
root@mysrv1 #
root@mysrv1 #
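
If a pool is destroyed by accident, it may still be recoverable as long as its disks have not been reused: zpool import -D lists destroyed pools, and naming one re-imports it (illustrative commands, not from this session):

root@mysrv1 # zpool import -D          ------ list destroyed pools that are still importable
root@mysrv1 # zpool import -D pool     ------ recover the destroyed pool named "pool"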


###################################################################################
