Wednesday, February 20, 2013

Linux : Deleting Multiple Multipath Devices


How do you delete multiple DM-MPIO (Device Mapper Multipath I/O) devices from a Linux system?


From the output of the command below, find the WWIDs of the multipath devices you want to remove and list them, one per line, in a file.

  multipath -l |grep dm-
  vi /tmp/wwid_of_multipath_devices_to_delete.txt
  cat /tmp/wwid_of_multipath_devices_to_delete.txt
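For orientation, here is a hypothetical example of the multipath output and the resulting WWID list (map names, WWIDs and the vendor/product string are made up; yours will differ):

  multipath -l | grep dm-
  mpathb (3600a0b80000f1111000011112222aaaa) dm-2 VENDOR,PRODUCT
  mpathc (3600a0b80000f1111000011112222bbbb) dm-3 VENDOR,PRODUCT

  cat /tmp/wwid_of_multipath_devices_to_delete.txt
  3600a0b80000f1111000011112222aaaa
  3600a0b80000f1111000011112222bbbb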


Verify the details of the devices you are going to remove

  while read i ; do multipath -ll $i ; done </tmp/wwid_of_multipath_devices_to_delete.txt

Collect the SCSI IDs (host:channel:target:lun) of every path behind the multipath devices you are going to delete
  
   while read i ; do
     multipath -l $i | grep -Eo '[0-9]+:[0-9]+:[0-9]+:[0-9]+'
   done </tmp/wwid_of_multipath_devices_to_delete.txt | sort -u > /tmp/disks_scsi_id.txt
   cat /tmp/disks_scsi_id.txt
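The file should now contain one SCSI address (host:channel:target:lun) per path, which is the form the sysfs delete step further below expects. A hypothetical example:

   cat /tmp/disks_scsi_id.txt
   3:0:0:12
   4:0:0:12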


Ensure the multipath devices are not in use anywhere on the system (filesystems unmounted, removed from LVM, etc.), then flush the multipath maps

   multipath -ll > /tmp/multipath-output-pre.txt
   multipath -ll | grep -E "$(grep -v '^$' /tmp/wwid_of_multipath_devices_to_delete.txt | paste -sd'|' -)" | awk '{print $1}' | tee /tmp/multipath_disk_alias_name.txt
   cat /tmp/multipath_disk_alias_name.txt
   while read i ; do
        multipath -f $i
        echo "$i flushed"
   done </tmp/multipath_disk_alias_name.txt
   multipath -ll > /tmp/multipath-output-post.txt
   diff /tmp/multipath-output-pre.txt /tmp/multipath-output-post.txt


The devices should no longer appear in the multipath output

    while read i ; do
         multipath -ll $i
    done </tmp/wwid_of_multipath_devices_to_delete.txt

Delete the LUN devices from the Linux kernel (SCSI layer)

   while read j ; do
     echo 1 > /sys/bus/scsi/drivers/sd/$j/delete
     echo "$j deleted"
   done </tmp/disks_scsi_id.txt
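The same delete attribute is also exposed per SCSI device and per block device, so an equivalent sketch (the H:C:T:L address and the sdX name here are hypothetical) would be:

   echo 1 > /sys/bus/scsi/devices/3:0:0:12/delete
   echo 1 > /sys/block/sdX/device/delete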


Remove any entries for these devices (WWIDs, aliases, blacklist exceptions) from /etc/multipath.conf

  cp -a /etc/multipath.conf /etc/multipath.conf.ORG
  vi /etc/multipath.conf
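If the deleted devices had static aliases configured, the stanzas to remove look roughly like this hypothetical example (WWID and alias are made up):

  multipaths {
      multipath {
          wwid   3600a0b80000f1111000011112222aaaa
          alias  data_lun01
      }
  }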


Restart multipathd and verify that the disks have been deleted and are no longer recognized by multipathd. The commands below should not list any devices

  service multipathd restart
  while read i ; do multipath -ll |grep $i ; done </tmp/wwid_of_multipath_devices_to_delete.txt
  while read i; do ls -l /dev/mapper/|grep $i; done </tmp/wwid_of_multipath_devices_to_delete.txt


Finally, unmask/unmap the LUNs from this host on the storage array and SAN switch/frame


If you later decide to add the LUNs back to the same system, mask the LUNs to the host again on the storage side, rescan the SCSI bus, and add them back to multipath.conf

  for i in $(ls /sys/class/scsi_host/) ; do echo - - - > /sys/class/scsi_host/$i/scan ; done
  service multipathd restart
  multipath -ll | grep dm-
  vi /etc/multipath.conf
  service multipathd reload



References
http://www.kernel.org/doc/ols/2005/ols2005v1-pages-155-176.pdf
http://en.wikipedia.org/wiki/Linux_DM_Multipath


Saturday, February 16, 2013

Linux : Extending filesystem on expanded SAN LUN

An existing disk has been extended by roughly 60 GB so that a filesystem on it can be grown. To reduce operational risk, I would recommend adding a new disk to extend the VG and filesystem instead of increasing the size of an existing disk. Anyway, let us see how to do this.


We will take three different cases where a disk is expanded from roughly 13 GB to 75 GB.

Case-1: Filesystem created on an LV, and the PV is the whole disk /dev/sdb

1.1 What are the existing PV, VG, LV and filesystem sizes?

# fdisk -l /dev/sdb

Disk /dev/sdb: 12.9 GB, 12890275840 bytes
5 heads, 37 sectors/track, 136088 cylinders
Units = cylinders of 185 * 512 = 94720 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4b38f4fe

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      136077    12587008   8e  Linux LVM
# df -hP /testfilesystem
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup02-Vol0201  4.0G  932M  2.9G  25% /testfilesystem
# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  VolGroup02   1   4   0 wz--n- 12.00g    0
# lvs
  LV      VG         Attr   LSize Origin Snap%  Move Log Copy%  Convert
  Vol0201  VolGroup02 -wi-ao 4.00g
# fdisk -l /dev/sdb|grep GB
Disk /dev/sdb: 12.9 GB, 12890275840 bytes
# pvs
  PV         VG         Fmt  Attr PSize  PFree
  /dev/sdb1  VolGroup02 lvm2 a-   12.00g    0


1.2 Expand the LUN. Then rescan the disk or HBA so that the kernel can read the new disk size.

1.2.1 If there is only a single path to the disk
 # echo 1 > /sys/block/sdb/device/rescan

1.2.2 If there are multiple paths to the disk
 # for i in $(ls /sys/class/scsi_host/) ; do echo - - - > /sys/class/scsi_host/$i/scan ; done
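Note that if the expanded disk sits under DM-multipath, you also need to tell multipathd to grow the multipath map after rescanning the paths; a sketch, assuming the map is named mpathb:

 # multipathd -k"resize map mpathb"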

1.3 Verify new size

# fdisk -l /dev/sdb|grep GB
Disk /dev/sdb: 75.2 GB, 75161927680 bytes
# grep -w capacity /var/log/messages
Apr 25 22:21:27 mysystem kernel: sdb: detected capacity change from 12890275840 to 75161927680


1.4 Resize PV and extend LV
  pvs /dev/sdb        ( still shows the old ~12 GiB size )
  pvresize /dev/sdb   ( grow the PV to the new disk size, ~70 GiB )
  pvscan
  pvs /dev/sdb        ( now shows the new size )
  vgscan
  vgs VolGroup02
  lvextend -l +100%FREE /dev/VolGroup02/Vol0201

1.5 Extend the filesystem

# df -hP /testfilesystem
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup02-Vol0201  4.0G  932M  2.9G  25% /testfilesystem
# resize2fs -p /dev/mapper/VolGroup02-Vol0201
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/VolGroup02-Vol0201 is mounted on /testfilesystem; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 4
Performing an on-line resize of /dev/mapper/VolGroup02-Vol0201 to 16250880 (4k) blocks.
The filesystem on /dev/mapper/VolGroup02-Vol0201 is now 16250880 blocks long.

The filesystem has been extended from 4 GB to 62 GB, using all of the space on the expanded disk.
# df -hP /testfilesystem
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup02-Vol0201   62G  937M   58G   2% /testfilesystem



Case-2: Filesystem created on LV and PV is disk partition /dev/sdb1

Assuming a single partition (sdb1) was created on the disk, we have two options here.

Option-2A:
Increase the size of sdb1 to cover the new disk size. In this case, we need to extend the partition before resizing the PV. First execute 1.1 and 1.2.

2.1 This step preserves your existing data, but be careful! Delete the existing partition and create a new primary partition covering the whole disk, keeping the same start sector, WITHOUT EXITING fdisk. (Key sequence: m, d, 1, n, p, 1, <accept default last sector>, w; see the commented sketch below.)

  fdisk /dev/sdb 
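A commented walkthrough of that interactive session (prompts vary slightly between fdisk versions; the critical point is that the recreated partition must start at the same sector as the old one):

  fdisk /dev/sdb
    # p          - print the table and note the current Start sector of /dev/sdb1
    # d, 1       - delete partition 1 (data on disk is not touched)
    # n, p, 1    - recreate primary partition 1
    #              first sector: the SAME start sector noted above
    #              last sector:  accept the default (end of disk)
    # t, 8e      - (optional) set the type back to Linux LVM
    # w          - write the new table and exit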

>> Now complete 1.4 and 1.5, replacing sdb with sdb1.

Option-2B:
Create a new partition, sdb2, in the extended space and add it as a new PV.

# fdisk -l /dev/sdb

Disk /dev/sdb: 75.2 GB, 75161927680 bytes
5 heads, 37 sectors/track, 793516 cylinders
Units = cylinders of 185 * 512 = 94720 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4b38f4fe

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      136077    12587008   8e  Linux LVM
/dev/sdb2          136077      793516    60813158   8e  Linux LVM

>> What's this? The corresponding device file has not been created!
# ll /dev/sdb2
ls: cannot access /dev/sdb2: No such file or directory

>> Re-reading the new partition table with partprobe failed with the following message.
# partprobe /dev/sdb
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sdb (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.


>> partprobe has a known problem on RHEL 6 (see Note-1 below). Use partx to re-read the partition table, and ignore the error it reports for the already-present partition 1.

# partx -v -a /dev/sdb
device /dev/sdb: start 0 size 146800640
gpt: 0 slices
dos: 4 slices
# 1:       128- 25174143 ( 25174016 sectors,  12889 MB)
# 2:  25174144-146800459 (121626316 sectors,  62272 MB)
# 3:         0-       -1 (        0 sectors,      0 MB)
# 4:         0-       -1 (        0 sectors,      0 MB)
BLKPG: Device or resource busy
error adding partition 1
added partition 2

>> Now the device file has been created.
# ll /dev/sdb2
brw-rw---- 1 root disk 8, 18 Apr 25 22:24 /dev/sdb2

>> Now do the usual steps: pvcreate, vgextend, lvextend, resize2fs
# pvcreate /dev/sdb2
  Physical volume "/dev/sdb2" successfully created
# vgextend VolGroup02 /dev/sdb2
  Volume group "VolGroup02" successfully extended
# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  VolGroup02   2   4   0 wz--n- 69.99g 57.99g
# lvextend -l +100%FREE  /dev/mapper/VolGroup02-Vol0201
  Extending logical volume Vol0201 to 61.99 GiB
  Logical volume Vol0201 successfully resized
# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  VolGroup02   2   4   0 wz--n- 69.99g    0
# lvs
  LV      VG         Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  Vol0201  VolGroup02 -wi-ao 61.99g


>>Now complete 1.5

Case-3: Filesystem created directly on the partition /dev/sdb1 (no LVM)

Execute 1.1 and 1.2, then 2.1 (Option-2A), and finally 1.5, running resize2fs against /dev/sdb1 itself, as sketched below.
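A minimal sketch for this case, assuming an ext3/ext4 filesystem sits directly on the partition:

  fdisk /dev/sdb            ( grow sdb1 in place as in 2.1, keeping the same start sector )
  partprobe /dev/sdb        ( or reboot if the kernel cannot re-read the table while sdb1 is mounted )
  resize2fs -p /dev/sdb1    ( grow the filesystem on the partition itself )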


References 


  • How to see extended size in LVM after extending vmdk disk / datastore on ESX 
    • https://access.redhat.com/knowledge/solutions/118783
  • Increasing/reducing filesystem on a disk
    • http://www.howtoforge.com/linux_resizing_ext3_partitions
  • Increasing the size of a virtual disk in VMware
    • http://kb.vmware.com/kb/1004047
  • Extending a logical volume in a virtual machine running Red Hat or Cent OS
    • http://kb.vmware.com/kb/1006371
  • How to use a new partition in RHEL6 without reboot? (Note-1)
    • https://access.redhat.com/solutions/57542
  • Increasing the size of a disk partition
    • http://kb.vmware.com/kb/1004071


Thursday, February 14, 2013

Veritas Cluster File System, Samba service and virtual IP configuration


On a two-node Veritas cluster:
  • We need to create a shared cluster filesystem, mounted on all nodes
  • The Samba service should be up on one node, and only if the shared filesystem is already mounted
  • The Samba service should start only if the IP used for Samba is already up on the system
  • Assumption: the basic cluster is installed and up

Veritas CLI to configure VCS main.cf


Add a group ClusterFileSystemgroup as the CFS filesystem group.

haconf -makerw
hagrp -add ClusterFileSystemgroup
hagrp -modify ClusterFileSystemgroup SystemList Node1 0 Node2 1
hagrp -modify ClusterFileSystemgroup AutoFailOver 0
hagrp -modify ClusterFileSystemgroup Parallel 1
hagrp -modify ClusterFileSystemgroup AutoStartList Node1 Node2


Add a cfsmount1 resource to mount the CFS filesystem on all nodes, and a cvmvoldg1 resource to import the disk group and start the volume.

hares -add cfsmount1 CFSMount ClusterFileSystemgroup
hares -modify cfsmount1 Critical 0
hares -modify cfsmount1 MountPoint "/mount/dir"
hares -modify cfsmount1 BlockDevice "/dev/vx/dsk/VxDGname/VxVolName"
hares -modify cfsmount1 NodeList  Node1 Node2
hares -modify cfsmount1 CloneSkip no
hares -modify cfsmount1 Enabled 1
hares -add cvmvoldg1 CVMVolDg ClusterFileSystemgroup
hares -modify cvmvoldg1 Critical 0
hares -modify cvmvoldg1 CVMDiskGroup VxDGname
hares -modify cvmvoldg1 CVMVolume  VxVolName
hares -local cvmvoldg1 CVMActivation
hares -modify cvmvoldg1 CVMActivation sw -sys Node1
hares -modify cvmvoldg1 CVMActivation sw -sys Node2
hares -modify cvmvoldg1 Enabled 1


Add cluster volume manager (cvm) group

hagrp -add cvm
hagrp -modify cvm SystemList  Node1 0 Node2 1
hagrp -modify cvm AutoFailOver 0
hagrp -modify cvm Parallel 1
hagrp -modify cvm AutoStartList  Node1 Node2
hares -add vxfsckd CFSfsckd cvm
hares -local vxfsckd ActivationMode
hares -modify vxfsckd ActivationMode  VxDGname sw -sys Node1
hares -modify vxfsckd ActivationMode  VxDGname sw -sys Node2
hares -modify vxfsckd Enabled 1
hares -add cvm_clus CVMCluster cvm
hares -modify cvm_clus CVMClustName lon6bvmdrwww_cluster
hares -modify cvm_clus CVMNodeId  Node1 0 Node2 1
hares -modify cvm_clus CVMTransport gab
hares -modify cvm_clus CVMTimeout 200
hares -modify cvm_clus CVMNodeAddr -delete -keys
hares -modify cvm_clus Enabled 1
hares -add cvm_vxconfigd CVMVxconfigd cvm
hares -modify cvm_vxconfigd Critical 0
hares -modify cvm_vxconfigd CVMVxconfigdArgs  syslog
hares -modify cvm_vxconfigd Enabled 1
hagrp -link ClusterFileSystemgroup cvm online local firm
hares -link cfsmount1 cvmvoldg1
hares -link cvm_clus cvm_vxconfigd
hares -link vxfsckd cvm_clus


Create a failover group SambaServerGroup. When the cluster comes up, this group will be brought online on Node2 (first in the AutoStartList).

hagrp -add SambaServerGroup
hagrp -modify SambaServerGroup SystemList -add Node2 1 Node1 2
hagrp -modify SambaServerGroup AutoStartList Node2 Node1
hagrp -modify SambaServerGroup Parallel 0


Add a SambaFlotingIP resource to the SambaServerGroup group. It will use NIC bond0 (also used for the system's public link) and assign the IP address and netmask.

hares -add SambaFlotingIP IP SambaServerGroup
hares -modify SambaFlotingIP Device bond0
hares -modify SambaFlotingIP Address 10.10.10.50
hares -modify SambaFlotingIP NetMask 255.255.255.192
hares -modify SambaFlotingIP Enabled 1


Also add a SambaApplication resource to the SambaServerGroup group, using the /etc/init.d/smb script to control the service.

hares -add SambaApplication Application SambaServerGroup
hares -modify SambaApplication User "root"
hares -modify SambaApplication StartProgram "/etc/init.d/smb start"
hares -modify SambaApplication StopProgram "/etc/init.d/smb stop"
hares -modify SambaApplication PidFiles "/var/run/smbd.pid"
hares -modify SambaApplication Enabled 1


The SambaApplication resource should start only after the SambaFlotingIP resource is up.

hares -link SambaApplication SambaFlotingIP

And the SambaServerGroup group should depend on ClusterFileSystemgroup (the CFS filesystem group).

hagrp -link SambaServerGroup ClusterFileSystemgroup online local firm
haconf -dump -makero
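
To sanity-check the resulting configuration, a few standard VCS status commands can be run on either node (group and resource names as defined above):

hastatus -sum
hagrp -state ClusterFileSystemgroup
hagrp -state SambaServerGroup
hagrp -dep SambaServerGroup
hares -state SambaApplication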