Tuesday, February 18, 2014

VxVM vxdg Error - cannot be added to a CDS disk group ERROR V-5-1-6478


Issue

Not able to add a disk to a shared disk group.
VxVM vxdg ERROR V-5-1-6478 Device disk_1 cannot be added to a CDS disk group


Solution:

You are not executing the command on the master node!

- Find which node is the master node
vxdctl -c mode

- Initialize disk
vxdisksetup -i disk_1

- Add the disk to the disk group

vxdg -g diskgroup_name adddisk   a_good_disk_name=disk_1
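
If you want to guard against running the commands on the wrong node, the three steps can be wrapped in a small script. A minimal sketch only: the disk group name datadg and the disk media name datadg01 are illustrative, and vxdisksetup usually lives in /etc/vx/bin:

#!/bin/sh
# Refuse to run anywhere but the CVM master (vxdctl -c mode prints MASTER there)
vxdctl -c mode | grep -q MASTER || { echo "run this on the master node" >&2; exit 1; }
/etc/vx/bin/vxdisksetup -i disk_1              # initialize the disk
vxdg -g datadg adddisk datadg01=disk_1         # add it to the shared disk group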


Isn't it simple?

Thursday, February 13, 2014

Red Hat Satellite satellite-sync fails with ORA-00001: unique constraint (RHNSAT.RHN_PACKAGE_FILE_PID_CID_UQ) violated

Problem

Red Hat Satellite 5.4.1 satellite-sync fails while syncing one software channel with the error below

10:00:00    Importing *relevant* package metadata: rhel-some-channel-name (99)
SYNC ERROR: unhandled exception occurred:

(Check logs/email for potentially more detail)


(54, 'ORA-00001: unique constraint (RHNSAT.RHN_PACKAGE_FILE_PID_CID_UQ) violated\n', '\n     Package Upload Failed due to uniqueness constraint violation.\n     Make sure the package does not have any duplicate dependencies or\n     does not already exists on the server\n     ')



Solution

1- Enable debug in rhn.conf

# echo 'debug = 7' >> /etc/rhn/rhn.conf


2- Run satellite-sync for the failed software channel

# satellite-sync -c rhel-some-channel-name


3- Find the package ID causing the issue in the log (see the grep sketch below) - say the offending package_id is 12345

# less /var/log/rhn/rhn_server_satellite.log
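
With debug enabled, the offending package shows up near the ORA-00001 entry. A quick way to pull the surrounding context (-B5 just prints the five preceding lines):

# grep -B5 'ORA-00001' /var/log/rhn/rhn_server_satellite.log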


4- Take a DB backup, then delete the rows for that package_id

sqlplus $(spacewalk-cfg-get default_db)
SQL> delete from rhnPackageFile where package_id=12345;
SQL> commit;
SQL> quit
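
To double-check, you can count the rows for that package_id before running the delete:

SQL> select count(*) from rhnPackageFile where package_id=12345;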


5- Repeat steps 2-4 until satellite-sync for rhel-some-channel-name completes.


6- Run a full satellite sync

satellite-sync


It's a rare issue. Did it help you?

Error V-3-20005 and V-3-24996 while mounting VVR replicated filesystem as read only on secondary


Are you getting the error below when trying to mount a Veritas VVR replicated volume on the secondary in read-only mode?



[root@secondary ~]# mount -t vxfs -o ro /dev/vx/dsk/product_dg/product_vol /mnt
UX:vxfs mount.vxfs: ERROR: V-3-20005: read of super-block on /dev/vx/dsk/product_dg/product_vol failed: Input/output error
UX:vxfs mount.vxfs: ERROR: V-3-24996: Unable to get disk layout version


Solution

The error is misleading!

Check the RVG status. Most probably it is in the DISABLED state. The volume is an object of the replicated RVG: if the RVG is DISABLED, the volume is not usable even when the volume itself is started (ENABLED).

[root@secondary ~]# vxprint -rt | grep ^rv
rv product_rvg      1            DISABLED CLEAN    secondary 3        srl_vol

Actions

1- Start RVG

vxrvg -g product_dg start product_rvg


2- Verify the RVG is enabled

[root@secondary ~]# vxprint -rt | grep ^rv
rv product_rvg      1            ENABLED CLEAN    secondary 3        srl_vol


3- Mount the filesystem read-only. NOTE: it is not recommended to mount a replicated volume on the secondary, even read-only.

[root@secondary ~]# mount -t vxfs -o ro /dev/vx/dsk/product_dg/product_vol /mnt
[root@secondary ~]#
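
Putting the three steps together: a minimal sketch that parses the state field of the rv line shown above (the awk field position assumes exactly that output format) and only starts the RVG when needed:

#!/bin/sh
# Start the RVG only if it is not already ENABLED, then mount read-only
state=$(vxprint -g product_dg -rt product_rvg | awk '/^rv / {print $4}')
[ "$state" = "ENABLED" ] || vxrvg -g product_dg start product_rvg
mount -t vxfs -o ro /dev/vx/dsk/product_dg/product_vol /mnt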


Did this post help you?

Friday, February 7, 2014

Rename Veritas Shared Disk Group

How do you rename a shared disk group (imported by the VCS service group cfs_group_name) configured under SFCFSHA?

Assumptions:

- cfs_group_name is the VCS service group importing the DG on multiple nodes at the same time.
- The shared filesystem is mounted on all cluster nodes in read-write mode.


  • Bring cfs_group_name offline on all nodes. Repeat the command below for each node

         hagrp -offline cfs_group_name -sys node1


  • Confirm cfs_group_name is offline on all nodes

          hastatus -sum


  • Find out which node is the master node

         vxdctl -c mode

  • From the master node, deport the DG. It will be deported on all other nodes

        vxdg deport old_dg_name

  • Import the DG on the master node with the new name

         vxdg -n app_data_dg -s import old_dg_name  ## -s == shared

  • Update the Veritas config file

           haconf -makerw
           hares -modify cvmvoldg1 CVMDiskGroup app_data_dg
           hares -modify cfsmount1 BlockDevice /dev/vx/dsk/app_data_dg/appvol
           haconf -dump -makero  ## this updates the config on all nodes


  • Bring cfs_group_name online on all nodes (repeat below on each node)

          hagrp -online cfs_group_name -sys node1


  • Verify the shared filesystem has been mounted successfully and there are no errors in the log

           df -hP ; tailf /var/VRTSvcs/log/engine_A.log
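
For reference, here is the whole flow in one place. A sketch only: node1 and node2 stand in for your cluster nodes, and the resource names cvmvoldg1, cfsmount1 and volume appvol are taken from the steps above:

#!/bin/sh
for n in node1 node2; do hagrp -offline cfs_group_name -sys "$n"; done
hastatus -sum                                  # confirm the group is offline everywhere
# --- run the rest on the CVM master (vxdctl -c mode shows MASTER) ---
vxdg deport old_dg_name
vxdg -n app_data_dg -s import old_dg_name      # -s == shared
haconf -makerw
hares -modify cvmvoldg1 CVMDiskGroup app_data_dg
hares -modify cfsmount1 BlockDevice /dev/vx/dsk/app_data_dg/appvol
haconf -dump -makero
for n in node1 node2; do hagrp -online cfs_group_name -sys "$n"; done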



  • On a related note: how do you rename a volume of a shared disk group?
   Offline the volume's mount resource, rename the volume, change the cluster config using hares -modify, and bring the resource back online (see the sketch below).


     /usr/sbin/vxedit -g diskgroup rename oldname newname
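
A sketch of that flow, reusing names from this post; the mount resource cfsmount1 and the volume names appvol/appvol_new are illustrative:

hares -offline cfsmount1 -sys node1            # offline the mount resource (repeat per node)
/usr/sbin/vxedit -g app_data_dg rename appvol appvol_new
haconf -makerw
hares -modify cfsmount1 BlockDevice /dev/vx/dsk/app_data_dg/appvol_new
haconf -dump -makero
hares -online cfsmount1 -sys node1             # bring it back online (repeat per node)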