On Wed, Mar 31, 2010 at 11:04 AM, Stuart Andrews wrote:
Collin,
OK, I see you are using enclosure-based naming. By the way, the /dev/vx/dmp devices are just the same as the disks, but with the added provision of DMP to keep the device online.
How many paths are there to these devices?
# vxdmpadm getsubpaths
or
# vxdisk path
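If you want to look at just one of these LUNs, for example the emc_clariion0_10 device from your listing, getsubpaths also takes a DMP node name (I am assuming the enclosure-based name is the dmpnode name, which it normally is):
# vxdmpadm getsubpaths dmpnodename=emc_clariion0_10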
If this is a CLARiiON and there are only 2 paths, set the array iopolicy to singleactive. This is the most likely case; see further down for the DMP path-switch behaviour.
# vxdmpadm listenclosure all
# vxdmpadm setattr enclosure ENC_name iopolicy=singleactive
If this is a CLARiiON and there are more than 2 paths, set the array iopolicy to balanced; DMP does know how to stop I/O to the secondary paths. Note that this is against EMC recommendations, but it works.
# vxdmpadm listenclosure all
# vxdmpadm setattr enclosure ENC_name iopolicy=balanced
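For example, assuming listenclosure reports the enclosure as emc_clariion0 (which is what your disk names suggest), the check-and-set sequence would look like this; the getattr calls are only there to show the current policy and to confirm the change took:
# vxdmpadm getattr enclosure emc_clariion0 iopolicy
# vxdmpadm setattr enclosure emc_clariion0 iopolicy=singleactive
# vxdmpadm getattr enclosure emc_clariion0 iopolicy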
I also notice that the enclosure names are lower case, indicating a 5.x release of VxVM is installed. Are the CLARiiON APMs running?
# vxdmpadm listapm all
Check that the CLARiiON APM is in the Active state.
If this is a fencing cluster, then these are local LUNs as far as the /dev/vx/dmp device names are concerned, and yes, SCSI-3 keys will be placed on them in a fencing cluster.
# gabconfig -a
Check whether there is port b membership; if so, then yes, you have a fencing cluster.
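If port b is there and you want the fencing driver's own view of things, vxfenadm has a display option (from memory, -d prints the I/O Fencing Cluster Information, including the mode and membership):
# vxfenadm -d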
Also check the DMP path-switch block limit; it may be that, with the iopolicy incorrect and the I/O load low, you did not reach the limit set by the dmp_pathswitch_blks_shift tunable:
# vxdmpadm gettune all
Now, when the system is busy and the I/O chunks are bigger than the path-switch level (and with the iopolicy incorrect), a path switch will cause a trespass (check the SAN logs) AND a block, drain, resume on the DMP path. There will be a failover message logged in /etc/vx/dmpevents.log; check there as well.
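If the tunable does need changing, settune should take the same name in place; I am going from memory on the exact 5.0MP3 syntax, so check vxdmpadm(1M) first, and treat the value below as a placeholder only:
# vxdmpadm settune dmp_pathswitch_blks_shift=<new_value>
# vxdmpadm gettune all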
Stuart
------------------------------
*Sent:* Wednesday, 31 March 2010 2:09 AM
*To:* William Havey
*Subject:* Re: [Veritas-vx] VxVm
Sorry for any confusion...
I've got several PowerPath devices from a dead system that I'm mounting temporarily on one node in my cluster. I run devfsadm -Cv and vxdctl enable. After that I can see the PowerPath devices listed as...
emc_clariion0_10 auto:none - - online invalid
I modified my /etc/vfstab file to mount the devices...
/dev/vx/dmp/emc_clariion0_10s6 /dev/vx/rdmp/emc_clariion0_10s6 /u10 ufs 3 yes -
The device mounts and I can access the file system with all my data. When the activity starts to increase on these temporary mount points, I see a countdown on the console saying that port H has lost connectivity. After the 16 seconds, the node panics and of course reboots. However, if I mount the PowerPath devices using a single path...
/dev/dsk/c1t5006016100600432d10s6 /dev/rdsk/c1t5006016100600432d10s6 /u10 ufs 3 yes -
I never see port H lose connectivity.
I want to use the DMP name in case I lose a path to these disks.
Any reason why using the dmp name causes port H to lose connectivity vs.
using a single path?
Thanks,
Collin
The original message states "mount these disks as /dev/vx/dmp/<emc_array>_Xs6". Perhaps this is normal behavior. Mounts are of devices which receive I/O, and a "/dev/vx/dmp/..." device entry isn't I/O capable.
I think a clearer statement of what Collin intends to do is needed.
Bill
Hello,
The panic string and the messages preceding it usually help to understand the cause. The release notes for RP2 and RP3 also provide short descriptions of fixed issues, for example: "Fixed the cause of a system panic when mutex_panic() was called from vol_rwsleep_wrlock()."
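For example, assuming savecore is enabled and the dump landed in the default /var/crash/<hostname> directory, something like this should print the panic string, the panic stack and the last console messages (adjust unix.0/vmcore.0 to your newest dump):
# cd /var/crash/`hostname`
# echo "::panicinfo" | mdb unix.0 vmcore.0
# echo "::stack" | mdb unix.0 vmcore.0
# echo "::msgbuf" | mdb unix.0 vmcore.0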
Dmitry Glushenok
Jet Infosystems
------------------------------
I've got the following:
Solaris 10
VxVM 5.0MP3RP1HF12
I have a number of mount points that are being migrated from /dev/dsk/cXtXdXsX to clustered mount points. The problem I'm having is that if I mount these disks in the /dev/dsk/cXtXdXsX format, I run the risk that if something were to cause the direct path to go down, I would lose the databases on these mount points. But when I mount these disks as /dev/vx/dmp/<emc_array>_Xs6, my system panics and core dumps.
Does VxVM have any issues mounting /dev/vx/dmp/<emc_array>_Xs6?
Thanks,
Collin
------------------------------
Guys,
A common misconception: I'm pretty sure PowerPath will still fail paths over even if you're using one of the native OS path devices as the filesystem mount device. This is explicitly stated in the PP Admin guide and happens transparently through the driver stack. The question should be which one of VxDMP or PP is actually actively doing the multipathing; I'm guessing VxDMP goes into passive mode when it sees PP.
Also, there's a script which comes with VCS that'll probe your LUNs and check whether they support the SCSI reservations necessary for the cluster to operate.
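If you want to see which layer actually owns the paths right now, comparing the two views should settle it. This assumes powermt is actually installed on that node; if it isn't, the DMP view is all there is:
# powermt display dev=all
# vxdmpadm getsubpaths
The VCS utility I'm thinking of is, if I remember the name correctly, vxfentsthdw; it runs read/write/reservation tests against a LUN to confirm it supports the SCSI-3 persistent reservations that fencing needs.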
--
sengork