In this post, I will show how to configure and install a branded zone. Here, my Solaris 11 host acts as the global zone, and Solaris 10 is installed inside it as a branded zone.
A branded zone is needed when the global zone runs one OS version and you require a zone running a different OS version within it.
A branded zone is installed from its own OS image, unlike a whole-root zone, where all packages are copied from the global zone, or a sparse-root zone, where the global zone's packages are shared.
root@Solaris11:~# zoneadm list -vc
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
root@Solaris11:~#
As you can see, my Solaris 11 host is brand new, with no non-global zones.
root@Solaris11:~#
Create the zonepath directory for our zone:
root@Solaris11:~# mkdir -p /export/home/zone1
root@Solaris11:~#
root@Solaris11:~# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
net1 Ethernet unknown 0 unknown e1000g1
net2 Ethernet unknown 0 unknown e1000g2
net0 Ethernet up 1000 full e1000g0
root@Solaris11:~#
root@Solaris11:~# zonecfg -z zone1
Use 'create' to begin configuring a new zone.
'create -b' starts from a blank configuration; the brand is set explicitly below.
zonecfg:zone1> create -b
zonecfg:zone1>
zonecfg:zone1>
zonecfg:zone1> set brand=solaris10
zonecfg:zone1>
zonecfg:zone1>
zonecfg:zone1> set zonepath=/export/home/zone1
zonecfg:zone1>
zonecfg:zone1> set autoboot=true
zonecfg:zone1> add net
zonecfg:zone1:net> set address=10.0.0.175/24
zonecfg:zone1:net> set physical=net0
zonecfg:zone1:net> info
net 0:
address: 10.0.0.175/24
allowed-address not specified
configure-allowed-address: true
physical: net0
defrouter not specified
zonecfg:zone1:net>
zonecfg:zone1:net>
zonecfg:zone1:net> end
zonecfg:zone1>
zonecfg:zone1> info
zonename: zone1
zonepath: /export/home/zone1
brand: solaris10
autoboot: true
autoshutdown: shutdown
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: exclusive
hostid:
fs-allowed:
net 0:
address: 10.0.0.175/24
allowed-address not specified
configure-allowed-address: true
physical: net0
defrouter not specified
zonecfg:zone1>
zonecfg:zone1>
zonecfg:zone1> set ip-type=shared
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
root@Solaris11:~#
root@Solaris11:~#
root@Solaris11:~# zoneadm list -vc
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
- zone1 configured /export/home/zone1 solaris10 shared
root@Solaris11:~#
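For repeatable builds, the interactive zonecfg session above can also be captured in a command file and applied non-interactively with `zonecfg -z zone1 -f`. A sketch (the file name `zone1.cfg` is my own choice, not from the session):

```shell
create -b
set brand=solaris10
set zonepath=/export/home/zone1
set autoboot=true
set ip-type=shared
add net
set address=10.0.0.175/24
set physical=net0
end
verify
commit
```

Apply it with `zonecfg -z zone1 -f zone1.cfg`; the resulting configuration should match what the interactive session produced.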
Let me log in to my Solaris 10 host to create a flash archive...
root@Solaris11:~#
root@Solaris11:~# ssh 10.0.0.55
The authenticity of host '10.0.0.55 (10.0.0.55)' can't be established.
RSA key fingerprint is 83:07:bd:a2:f2:02:46:df:67:3d:01:af:ed:d8:9a:cf.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.55' (RSA) to the list of known hosts.
Password:
Last login: Tue Oct 4 22:02:59 2016 from 10.0.0.43
Oracle Corporation SunOS 5.10 Generic Patch January 2005
#
# bash
bash-3.2# uname -a
SunOS Solaris10 5.10 Generic_147148-26 i86pc i386 i86pc
bash-3.2# df -kh /
Filesystem size used avail capacity Mounted on
/dev/dsk/c0d0s0 8.1G 3.9G 4.1G 49% /
bash-3.2#
bash-3.2#
Create a compressed flash archive of the running Solaris 10 system (-n names the archive, -c compresses it):
bash-3.2# flarcreate -n arch1 -c archive1.flar
Full Flash
Checking integrity...
Integrity OK.
Running precreation scripts...
Precreation scripts done.
Determining the size of the archive...
7896142 blocks
The archive will be approximately 2.13GB.
Creating the archive...
7896144 blocks
Archive creation complete.
Running postcreation scripts...
Postcreation scripts done.
Running pre-exit scripts...
Pre-exit scripts done.
bash-3.2#
bash-3.2# pwd
/
bash-3.2#
bash-3.2# ls
TT_DB boot etc kernel mnt platform system var
archive1.flar dev export lib net proc tmp vol
bin devices home lost+found opt sbin usr
bash-3.2#
bash-3.2#
bash-3.2# ls -lrth archive1.flar
-rw-r--r-- 1 root root 2.1G Oct 4 22:31 archive1.flar
bash-3.2#
bash-3.2# df -kh /
Filesystem size used avail capacity Mounted on
/dev/dsk/c0d0s0 8.1G 6.0G 2.0G 76% /
bash-3.2#
bash-3.2# exit
# Connection to 10.0.0.55 closed.
root@Solaris11:~#
root@Solaris11:~#
root@Solaris11:~# scp 10.0.0.55:/archive1.flar /export/home/
Password:
archive1.flar 100% |***********************************************************************| 2188 MB 06:35
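A 2 GB copy over ssh is worth verifying before installing from it. Solaris ships digest(1), so the sums on both ends can be compared; the sketch below is my own suggestion (paths as in this walkthrough), not captured from the session, and the portable demonstration uses cmp(1) on two throwaway local copies:

```shell
#!/bin/sh
# On the Solaris 10 host:  digest -a md5 /archive1.flar
# On the Solaris 11 host:  digest -a md5 /export/home/archive1.flar
# The two sums must match. The same idea, demonstrated portably:
printf 'flash archive bits' > /tmp/src.flar   # stand-in for the source archive
cp /tmp/src.flar /tmp/dst.flar                # stand-in for the scp copy
if cmp -s /tmp/src.flar /tmp/dst.flar; then
  echo "copy intact"
else
  echo "checksum mismatch"
fi
```

If the sums differ, redo the scp before running `zoneadm install`; a truncated archive fails partway through installation.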
root@Solaris11:~#
root@Solaris11:~#
root@Solaris11:~#
root@Solaris11:~# zoneadm list -vc
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
- zone1 configured /export/home/zone1 solaris10 shared
root@Solaris11:~#
Let us install the OS in our new zone...
root@Solaris11:~#
root@Solaris11:~# zoneadm -z zone1 install -a /export/home/archive1.flar -p
The following ZFS file system(s) have been created:
rpool/export/home/zone1
Progress being logged to /var/log/zones/zoneadm.20161004T232607Z.zone1.install
Installing: This may take several minutes...
Postprocessing: This may take a while...
Postprocess: Updating the image to run within a zone
Postprocess: Migrating data
from: rpool/export/home/zone1/rpool/ROOT/zbe-0
to: rpool/export/home/zone1/rpool/export
Postprocess: A backup copy of /export is stored at /export.backup.20161004T233804Z.
It can be deleted after verifying it was migrated correctly.
Result: Installation completed successfully.
Log saved in non-global zone as /export/home/zone1/root/var/log/zones/zoneadm.20161004T232607Z.zone1.install
root@Solaris11:~#
root@Solaris11:~#
root@Solaris11:~# zoneadm list -vc
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
- zone1 installed /export/home/zone1 solaris10 shared
root@Solaris11:~#
The next step is to bring our zone to the "ready" state...
root@Solaris11:~#
root@Solaris11:~# zoneadm -z zone1 ready
root@Solaris11:~#
root@Solaris11:~# zoneadm list -vc
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
2 zone1 ready /export/home/zone1 solaris10 shared
root@Solaris11:~#
Now boot the zone...
root@Solaris11:~# zoneadm -z zone1 boot
root@Solaris11:~# zoneadm list -vc
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
2 zone1 running /export/home/zone1 solaris10 shared
root@Solaris11:~#
That's it for the configuration and installation, during which we saw the zone move through its different states: configured, installed, ready, and running...
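Instead of rerunning `zoneadm list -vc` by hand between state changes, a small polling helper can wait for a target state. `wait_for_state` is my own hypothetical function; it parses the machine-readable `zoneadm -z <zone> list -p` output, whose colon-separated third field is the zone state:

```shell
#!/bin/sh
# Hypothetical helper: block until a zone reaches the wanted state, or time out.
# Field 3 of `zoneadm -z <zone> list -p` is the zone state.
wait_for_state() {
  zone="$1"; want="$2"; tries=0
  while [ "$tries" -lt 30 ]; do
    state=$(zoneadm -z "$zone" list -p | awk -F: '{print $3}')
    [ "$state" = "$want" ] && return 0
    tries=$((tries + 1))
    sleep 2
  done
  echo "timed out waiting for $zone to reach $want" >&2
  return 1
}
```

Usage would look like `zoneadm -z zone1 boot && wait_for_state zone1 running`.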
root@Solaris11:~# zlogin -C zone1
[Connected to zone 'zone1' console]
Solaris10 console login: root
Password:
Last login: Tue Oct 4 22:10:05 from 10.0.0.75
Oct 4 19:51:17 Solaris10 login: ROOT LOGIN /dev/console
Oracle Corporation SunOS 5.10 Generic Patch January 2005
#
#
# bash
bash-3.2#
bash-3.2# ifconfig -a
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
net0:1: flags=100001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,PHYSRUNNING> mtu 1500 index 2
inet 10.0.0.175 netmask ff000000 broadcast 10.255.255.255
lo0:1: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128
bash-3.2#
bash-3.2#
The -C flag to zlogin gave us a console login, which the 'who' output confirms:
bash-3.2# who
root console Oct 4 19:51
bash-3.2#
bash-3.2#
bash-3.2# df -kh
Filesystem Size Used Available Capacity Mounted on
rpool/ROOT/zbe-0 6.0G 3.8G 2.1G 65% /
rpool/ROOT/zbe-0/var 6.0G 79M 2.1G 4% /var
/.SUNWnative/lib 4.9G 2.8G 2.1G 57% /.SUNWnative/lib
/.SUNWnative/platform
4.9G 2.8G 2.1G 57% /.SUNWnative/platform
/.SUNWnative/sbin 4.9G 2.8G 2.1G 57% /.SUNWnative/sbin
/.SUNWnative/usr 4.9G 2.8G 2.1G 57% /.SUNWnative/usr
/dev 0K 0K 0K 0% /dev
proc 0K 0K 0K 0% /proc
ctfs 0K 0K 0K 0% /system/contract
mnttab 0K 0K 0K 0% /etc/mnttab
objfs 0K 0K 0K 0% /system/object
swap 1.2G 344K 1.2G 1% /etc/svc/volatile
/usr/lib/libc/libc_hwcap1.so.1
5.9G 3.8G 2.1G 65% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 1.2G 32K 1.2G 1% /tmp
/etc/svc/volatile/ 1.2G 344K 1.2G 1% /var/run
rpool/export 6.0G 32K 2.1G 1% /export
rpool/export/home 6.0G 32K 2.1G 1% /export/home
rpool 6.0G 31K 2.1G 1% /rpool
bash-3.2#
bash-3.2# ~.
[Connection to zone 'zone1' console closed]
root@Solaris11:~#
root@Solaris11:~#
root@Solaris11:~# zoneadm list -vc
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
2 zone1 running /export/home/zone1 solaris10 shared
root@Solaris11:~#
root@Solaris11:~#
root@Solaris11:~# ping 10.0.0.175
10.0.0.175 is alive
root@Solaris11:~#
Creating a branded zone takes just these easy, simple steps; all you need is a flash archive of the required OS version...