Sunday, August 3, 2014

Four ways to update HP iLO firmware

HP iLO firmware is distributed as .scexe (self-extracting) files. To get the iLO firmware, search for your system model in the support section of hp.com.

ONE : From OS (Linux) CLI use scexe file

- Download the iLO firmware, say CP023069.scexe
- Unpack and run it
$ cd /tmp
$ chmod 755 CP023069.scexe
$ ./CP023069.scexe
- Follow the on-screen instructions
- Check the new iLO version
$ curl http://<iLO_IP>/xmldata?item=All
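
If you only need the firmware revision out of that XML, a quick grep is enough. This is a sketch; the <iLO_IP> placeholder and the FWRI element name are assumptions based on typical iLO xmldata output:

$ curl -s http://<iLO_IP>/xmldata?item=All | grep -io '<fwri>[^<]*'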

TWO : From iLO CLI load bin file


On a web server (which should be on the same VLAN/network as the iLO), copy the firmware bin file into the DocumentRoot.

$ ./CP023069.scexe --unpack=firmware
$ cd firmware
$ cp ilo2_NNN.bin $DocumentRoot/firmware/ (DocumentRoot is usually /var/www/html/ - create the firmware subdirectory if needed)
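
Before moving to the iLO side, it is worth confirming that the bin file is actually downloadable over HTTP. A minimal check; the <webserver_IP> placeholder is an assumption:

$ curl -I http://<webserver_IP>/firmware/ilo2_NNN.bin (expect "HTTP/1.1 200 OK")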

- Log in to the iLO over ssh and load the firmware
$ ssh -l Administrator <iLO_IP>
- To see the current iLO version
iLO> show

- To check that your web server is pingable from the iLO (so the iLO can fetch the bin file)
iLO> cd /map1
iLO> oemhp_ping <webserver_IP>
iLO> load -source http://<webserver_IP>/firmware/ilo2_NNN.bin

- Your session will seem to hang for 2-3 minutes - do not worry! If all goes well, the iLO firmware will be upgraded and the iLO will reboot. Check the iLO version:
$ curl http://<iLO_IP>/xmldata?item=All

THREE : Using iLO WebUI to upload bin file


- Download and save the iLO firmware bin file extracted above by keying the following into your web browser:
http://<webserver_IP>/firmware/ilo2_NNN.bin
- Access the iLO Web UI (http://<iLO_IP>), go to the Administration tab and upload the bin file to upgrade the iLO firmware


FOUR : HPSIM to push firmware

- Use HPSIM if you are already paying the SIM license fee!


Reference : http://pipe2text.com/?page_id=1908





Scripts to upload multiple files on dropbox.redhat.com and ftp.veritas.com

You may need to upload multiple files, multiple times, to Red Hat (sosreports, spacewalk-debug) and Veritas (VRTSexplorer) sites to get vendor support. The scripts below are handy for this purpose.

Script to upload file(s) to dropbox.redhat.com

#!/bin/bash
#set -n
if [ $# -eq 0 ]; then
        echo "Usage: $0 file1 [file2 ...]"
        exit 1
else
  # Check that each file exists and make it readable if it is not
  for f in "$@"
  do
    if [ -f "$f" ]; then
      if [ ! -r "$f" ]; then
        sudo chmod 755 "$f" || { echo "Failed to set 755 permission on $f"; exit 1; }
      fi
    else
      echo "$f does not exist"
      exit 1
    fi
  done

  # Upload file
  echo "Uploading files $* on dropbox.redhat.coms"

ftp -n <<EOD
open dropbox.redhat.com
user anonymous nasimuddin.ansari@company.com
bin
prompt
hash
passive
cd  /incoming
mput $*
bye
EOD
echo "Finish uploading of :"
echo $*|tr ' ' '\n'
exit 0
fi
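
A hypothetical invocation, assuming the script above is saved as upload_redhat.sh (the script name and file names are only placeholders):

$ ./upload_redhat.sh sosreport-host01.tar.xz spacewalk-debug.tar.bz2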



Script to upload file(s) on ftp.veritas.com

#!/bin/bash
#sh -n
# http://www.symantec.com/docs/TECH66995 - for the password - it changes every 3 months

if nc -zvw2 ftp.veritas.com 21; then
        if [ $# -eq 0 ]; then
            echo "Usage: $0 file1 [file2 ...]"
            exit 1
        else
        # Check that each file exists and make it readable if it is not
        for f in "$@"
        do
          if [ -f "$f" ]; then
            if [ ! -r "$f" ]; then
              sudo chmod 755 "$f" || { echo "Failed to set 755 permission on $f"; exit 1; }
            fi
          else
            echo "$f does not exist"
            exit 1
          fi
        done

          echo "Uploading files $* on ftp.veritas.com"
          ftp -n <
          open ftp.veritas.com
          user iosupport YHJ8YLJpuUN8cY-t
          bin
          prompt
          hash
          passive
          cd  /incoming
          mput $*
          bye
EOD
          echo "Finish uploading of :"
          echo $*|tr ' ' '\n'
          exit 0
        fi
else
        echo 'ftp.veritas.com is not accessible from this host'
        exit 1
fi

Tuesday, July 15, 2014

An example to configure ganglia module (jenkins) thru puppet

This page illustrates how to configure the Jenkins Ganglia module (using the standard ganglia Puppet module available on Puppet Forge) via Puppet.


  • Copy the Jenkins Ganglia Python module (jenkins.py) into jenkins_ganglia_module/files/jenkins.py
  • You may need to make some changes in jenkins.py if the Python version is < 2.6


Here is the jenkinsganglia.pp Puppet file.

# puppet class jenkins_ganglia_module/manifests/jenkinsganglia.pp
class jenkins_ganglia_module::jenkinsganglia{

  file{

      '/usr/lib64/ganglia/python_modules/jenkins.py':
        ensure => file,
        owner  => 'root',
        group  => 'root',
        mode   => '0644',
        source => "puppet:///modules/jenkins_ganglia_module/jenkins.py",
        notify => Class['ganglia::gmond::service'];
  }

# create config file

    ganglia::gmond::module{ 'jenkins':
            language => 'python',
            params   => {
                base_url       => 'http://127.0.0.1:8080',
                username       => 'hudson',
                apitoken       => '123456123456123456123456'
            }
    }

    ganglia::gmond::collection_group { 'jenkins':
            collect         => 10,
            time_threshold  => 20
    }

    Ganglia::Gmond::Metric {
        collection_group => 'jenkins',
        value_threshold => 1.0
    }

    ganglia::gmond::metric {
        'jenkins_overallload_busy_executors':
            user_title => 'Number of busy executors on master and slaves';

        'jenkins_overallload_queue_length':
            user_title => 'Length of the queue on master and slaves';

        'jenkins_overallload_total_executors':
            user_title => 'Number of executors on master and slaves';

        'jenkins_jobs_total':
            user_title => 'Total number of jobs';

        'jenkins_jobs_blue':
            user_title => 'Number of jobs with status blue';

        'jenkins_jobs_red':
            user_title => 'Number of jobs with status red';

        'jenkins_jobs_yellow':
            user_title => 'Number of jobs with status yellow';

        'jenkins_jobs_grey':
            user_title => 'Number of jobs with status grey';

        'jenkins_jobs_aborted':
            user_title => 'Number of jobs with status aborted';

        'jenkins_jobs_notbuilt':
            user_title => 'Number of jobs with status notbuilt';

        'jenkins_jobs_disabled':
            user_title => 'Number of jobs with status disabled';
    }
}


  1. Check jenkins.conf in the ganglia module.d directory (Jenkins ganglia module configuration)
  2. Check jenkins.conf in the ganglia configuration directory (Jenkins metrics configuration)
  3. Check jenkins.py in the ganglia python_modules directory
  4. Run the puppet agent and restart gmond if required (see the sketch below)
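
A minimal verification sketch; the /etc/ganglia/conf.d path below is an assumption for a typical RHEL-style gmond installation and may differ on your system:

# puppet agent --test
# ls -l /usr/lib64/ganglia/python_modules/jenkins.py
# grep -rl jenkins /etc/ganglia/conf.d/
# service gmond restart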

Wednesday, July 9, 2014

Steps to extend a SAN disk under Linux multipath

Suppose you are using a 250 GB SAN disk under multipath and you need to extend it to 500 GB. Extend the LUN on the storage side, then follow the steps below on Linux to extend it at the multipath level.

 # multipath -ll mpath111
mpath111 (01234567890123456789) dm-122 MSA2012i
[size=250G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 8:0:0:99  sdab       88:888   [active][undef]
 \_ 9:0:0:99  sdbm       77:777     [active][undef]


>> Rescan all paths of the disk

# echo 1 >/sys/block/sdab/device/rescan
# echo 1 >/sys/block/sdbm/device/rescan
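
If the map has many paths, a small loop saves typing. This is a sketch; it assumes the path devices appear as sdXX names in the multipath -ll output, as in the example above:

# for p in $(multipath -ll mpath111 | grep -o 'sd[a-z]*'); do echo 1 > /sys/block/$p/device/rescan; done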

>> Confirm the new size on each path
# fdisk -l /dev/sdab
# fdisk -l /dev/sdbm

>> Reread device map  and create/extend partition

# kpartx -a -v /dev/mapper/mpath111
# multipath -v2 mpath111   OR   service multipathd restart
# fdisk /dev/mapper/mpath111 (delete the partition and create it again, without exiting fdisk, then write)
# kpartx -a -v /dev/mapper/mpath111
# dmsetup info /dev/mapper/mpath111 | grep -i state (should be ACTIVE)
# multipath -ll mpath111
mpath111 (01234567890123456789) dm-122 MSA2012i
[size=500G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 8:0:0:99  sdab       88:888   [active][undef]
 \_ 9:0:0:99  sdbm       77:777     [active][undef]


Notes
1- Run kpartx with the -a -p p -v options on RHEL >= 6
2- If kpartx fails with 'device-mapper: resume ioctl failed: Invalid argument', upgrade device-mapper-multipath to >= 0.4.7-59 (see the check below)
3- multipathd can be restarted even when paths are online and in use.
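
To confirm the installed version against note 2, a trivial check:

# rpm -q device-mapper-multipath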

Curl error with Self Signed certificate - curl: (60) SSL certificate problem - SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

If you have decided to use a self-signed certificate on your internal-only web server to save the few hundred dollars per year that a public certificate costs (Thawte, Verisign, Go Daddy, etc.) and did not address the CA root certificate on the client side, you will end up with the following message.

$ curl https://YourWebServer.company.com/
curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). The default
 bundle is named curl-ca-bundle.crt; you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.

What does this message say?

The client requested a secure connection to the web server, and the web server sent its public certificate. The client is not able to verify the integrity of the web server certificate against the public certificate of the Certificate Authority (the root certificate, PKIRootCA.crt) that was used to sign it.
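
The same verification can be seen outside curl with openssl s_client. A sketch, with the server name taken from the example above:

$ openssl s_client -connect YourWebServer.company.com:443 -CAfile /etc/pki/tls/certs/ca-bundle.crt </dev/null | grep 'Verify return code'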

What do you need to do?

curl uses the openssl ca-bundle for verification. Other clients (say Java, IE, Firefox, etc.) keep their CA bundle in different locations. On the client, you need to append PKIRootCA.crt to the ca-bundle. The ca-bundle location is defined by the certs variable in the openssl config file /etc/pki/tls/openssl.cnf

$ grep certs /etc/pki/tls/openssl.cnf
certs           = $dir/certs    # Where the issued certs are kept

Take a backup, append the CA root certificate, and test.

# cp -a /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt.org
$ curl https://YourWebServer.company.com/  << will fail on client
# cat companyPKIRootCA.crt >> /etc/pki/tls/certs/ca-bundle.crt
$ curl https://YourWebServer.company.com/  << should work fine on client
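
If you only want to test against the new root certificate without touching the system bundle, curl's --cacert option (mentioned in the error text above) can point at the file directly:

$ curl --cacert companyPKIRootCA.crt https://YourWebServer.company.com/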

Additional Notes

- Upgrading/downgrading the openssl or ca-certificates rpms does not overwrite ca-bundle.crt
- You need to append the root certificate on all clients

Reference
Article - "How do I configure a CA and sign certificates using OpenSSL in Red Hat Enterprise Linux?"

Sunday, July 6, 2014

Using Websphere MQ Veritas Agent on Linux

> Assuming WebSphere MQ has already been installed, download the VCS MQ agent from https://sort.symantec.com/agents


> Install the agent and related rpms on all nodes (unpack the tarball first)

# rpm -ivh VRTSacclib-5.2.4.0-GA_GENERIC VRTSmq6-5.1.15.0-GA_GENERIC_noarch


> Test MQ start and stop outside of Veritas

# su - mqm
$ mq_qmgr_start.sh  VCSTSTQM   # start queue Manager
$ dspmq                        # List queue manager and see status
$ dspmqver -i  # To see version information. It is required to configure MQ Agent
  Name:        WebSphere MQ
  Version:     7.5.0.1
$ mq_qmgr_stop.sh  VCSTSTQM 1  # stop queue manager


> Import agent type - run below on any one node if VCS is already running

# /etc/VRTSagents/ha/conf/WebSphereMQ/WebSphereMQTypes.cmd


> If VCS is not running, copy the agent types file to all nodes

# cp -a /etc/VRTSagents/ha/conf/WebSphereMQ/WebSphereMQTypes.cf  /etc/VRTSvcs/conf/config


> Add the line below to /etc/VRTSvcs/conf/config/main.cf if it has not been added by the above steps

include "WebSphereMQTypes.cf"


> Stop VCS and add the following to main.cf to define a failover group

group MQ_GROUP (
        SystemList = { Node1 = 0, Node2 = 1 }
        AutoStartList = { Node1, Node2 }
        )
        WebSphereMQ Q_Manager (
                ResLogLevel = ERROR
                QueueManager = VCSTSTQM
                MQVer = "7.5"
                )
        requires group FileSystemGroup online local firm


> Start VCS and test MQ failover

hastart                     # on all nodes
hastatus -sum               # group should be online on Node1
ps -ef|grep mqm             # On Node1
hagrp -switch MQ_GROUP -sys Node2 # Move MQ to Node2

> Group should be online on Node2 and MQ should be running.
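
To confirm where the group and resource ended up after the switch, the standard VCS state queries can be used. A sketch, with the group and resource names from the main.cf snippet above:

hagrp -state MQ_GROUP             # should be ONLINE on Node2
hares -state Q_Manager            # the WebSphereMQ resource itself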


Ref : Symantec Document

fdisk warning - WARNING: Re-reading the partition table failed with error 22: Invalid argument

What is the correct way to create a partition on a multipath device?


# fdisk /dev/mapper/mpath99

The number of cylinders for this disk is set to 130541.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-130541, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-130541, default 130541):
Using default value 130541

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

# kpartx -a -v /dev/mapper/mpath99   (RHEL < 6)
    OR
# kpartx -a -p p -v /dev/mapper/mpath99   (RHEL >= 6)
# fdisk -l /dev/mapper/mpath99


Why does it show the warning - WARNING: Re-reading the partition table failed with error 22: Invalid argument?

The cause of the above warning: the multipath device map (i.e. the mpathX device) is in use in the device-mapper tables, so the kernel cannot re-read the partition table of the device. It is safe to ignore. Just make sure to run the "kpartx" command on the multipath device after creating the partition.


How do you quickly create a single partition on the whole disk, or do it from a script?

# echo -e 'n\np\n1\n\n\nw'  | fdisk /dev/mapper/mpath99
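
Putting the pieces together, a non-interactive sketch of the whole flow (mpath99 is just the example map name used above):

# echo -e 'n\np\n1\n\n\nw' | fdisk /dev/mapper/mpath99
# kpartx -a -p p -v /dev/mapper/mpath99   (use -a -v without -p p on RHEL < 6)
# fdisk -l /dev/mapper/mpath99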

Refer : "How can I create partitions on multipath devices in RHEL?"  & fdisk warning