Discussion:
[Veritas-vx] VxVm
Collin
2010-03-29 15:02:36 UTC
I've got the following....

Solaris 10
VxVM 5.0MP3RP1HF12

I have a number of mount points that are being migrated from
/dev/dsk/cXtXdXsX to clustered mount points. The problem I'm having is
that if I mount these disks in the /dev/dsk/cXtXdXsX format, I run the
risk that if something were to cause the direct path to go down, I would
lose the databases on these mount points. But when I mount these disks as
/dev/vx/dmp/<emc_array>_Xs6, my system panics and core dumps.

Does VxVM have any issues mounting /dev/vx/dmp/<emc_array>_Xs6??

Thanks,
Collin
Brian Wilson
2010-03-29 18:26:46 UTC
Post by Collin
I've got the following....
Solaris 10
VxVM 5.0MP3RP1HF12
I have a number of mount points that are being migrated from
/dev/dsk/cXtXdXsX to clustered mount points.
I'm guessing you mean 'clustered' in the sense of a failover cluster?
Not clustered as in Veritas Clustered Filesystem, which is
multi-writer? I'm going to go with that assumption.
Post by Collin
The problem I'm having is that if I mount these disks in the
/dev/dsk/cXtXdXsX format, I run the risk that if something were to
cause the direct path to go down, I would lose the databases on these
mount points. But when I mount these disks as /dev/vx/dmp/<emc_array>_Xs6,
my system panics and core dumps.
I've never tried to use Veritas DMP without creating Veritas disk
groups and volumes - which would mount up from /dev/vx/dsk/
(diskgroupname)/(volumename). It almost looks like you're trying to
take raw UFS mounts that maybe used to sit on emcpower pseudo devices,
and mount them up through DMP like it's PowerPath? I'm not sure that
works....
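
For reference, a minimal sketch of the usual VxVM approach Brian
describes - initialize the disk, put it in a disk group, build a volume,
and mount from /dev/vx/dsk. The names here (tempdg, vol01, the 10g size)
are placeholders, and note that vxdisksetup is destructive, so this only
applies to fresh LUNs, not to disks that already hold UFS data:

# vxdisksetup -i emc_clariion0_10
# vxdg init tempdg tempdg01=emc_clariion0_10
# vxassist -g tempdg make vol01 10g
# newfs /dev/vx/rdsk/tempdg/vol01
# mount -F ufs /dev/vx/dsk/tempdg/vol01 /u10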
Post by Collin
Does VxVM have any issues mounting /dev/vx/dmp/<emc_array>_Xs6??
Thanks,
Collin
Aleksandr Nepomnyashchiy
2010-03-29 19:25:04 UTC
Collin,
Do you have an ASL (Array Support Library) installed?
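
One way to check (a hedged sketch; the exact listing format varies by
VxVM release and ASL version):

# vxddladm listsupport | grep -i clariion
# vxdmpadm listenclosure all

If the CLARiiON ASL is missing, the LUNs typically show up under the
generic Disk enclosure instead of emc_clariion0.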

Aleksandr
Post by Collin
I've got the following....
   Solaris 10
   VxVM 5.0MP3RP1HF12
I have a number of mount points that are being migrated from
/dev/dsk/cXtXdXsX to clustered mount points.
I'm guessing you mean 'clustered' in the sense of a failover cluster?
Not clustered as in Veritas Clustered Filesystem, which is multi-writer?
I'm going to go with that assumption.
Post by Collin
The problem I'm having is that if I mount these disks in the /dev/dsk/cXtXdXsX
format, I run the risk that if something were to cause the direct path to go
down, I would lose the databases on these mount points. But when I mount
these disks as /dev/vx/dmp/<emc_array>_Xs6, my system panics and core dumps.
I've never tried to use Veritas DMP without creating Veritas disk groups and
volumes - which would mount up from
/dev/vx/dsk/(diskgroupname)/(volumename).  It almost looks like you're
trying to take raw UFS mounts that maybe used to sit on emcpower pseudo
devices, and mount them up through DMP like it's PowerPath?  I'm not sure
that works....
Post by Collin
Does VxVM have any issues mounting /dev/vx/dmp/<emc_array>_Xs6??
Thanks,
Collin
Dmitry Glushenok
2010-03-30 07:01:58 UTC
Hello,

The panic string and the preceding console messages usually help to identify the cause. The release notes for RP2-RP3 also provide short descriptions of fixed issues, such as "Fixed the cause of a system panic when mutex_panic() was called from vol_rwsleep_wrlock()."
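
On Solaris 10 the panic string and the panicking thread's stack can
usually be pulled from the saved crash dump with mdb (a hedged sketch;
the dump directory and the 0 sequence number depend on your dumpadm
settings):

# cd /var/crash/`hostname`
# mdb unix.0 vmcore.0
> ::status
> ::stack
> $q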
Post by Collin
I've got the following....
Solaris 10
VxVM 5.0MP3RP1HF12
I have a number of mount points that are being migrated from /dev/dsk/cXtXdXsX to clustered mount points. The problem I'm having is that if I mount these disks in the /dev/dsk/cXtXdXsX format, I run the risk that if something were to cause the direct path to go down, I would lose the databases on these mount points. But when I mount these disks as /dev/vx/dmp/<emc_array>_Xs6, my system panics and core dumps.
Does VxVM have any issues mounting /dev/vx/dmp/<emc_array>_Xs6??
Thanks,
Collin
--
Dmitry Glushenok
Jet Infosystems
William Havey
2010-03-30 14:48:50 UTC
The original message states "mount these disks as
/dev/vx/dmp/<emc_array>_Xs6 ". Perhaps this is normal behavior. Mounts are
of devices which receive I/O. A "/dev/vx/dmp/..." device entry isn't I/O
capable.

I think a clearer statement of what Collin intends to do is needed.

Bill
Post by Dmitry Glushenok
Hello,
The panic string and the preceding console messages usually help to
identify the cause. The release notes for RP2-RP3 also provide short
descriptions of fixed issues, such as "Fixed the cause of a system panic
when mutex_panic() was called from vol_rwsleep_wrlock()."
Post by Collin
I've got the following....
Solaris 10
VxVM 5.0MP3RP1HF12
I have a number of mount points that are being migrated from
/dev/dsk/cXtXdXsX to clustered mount points. The problem I'm having is that
if I mount these disks in the /dev/dsk/cXtXdXsX format, I run the risk that
if something were to cause the direct path to go down, I would lose the
databases on these mount points. But when I mount these disks as
/dev/vx/dmp/<emc_array>_Xs6, my system panics and core dumps.
Post by Collin
Does VxVM have any issues mounting /dev/vx/dmp/<emc_array>_Xs6??
Thanks,
Collin
--
Dmitry Glushenok
Jet Infosystems
Collin
2010-03-30 15:09:02 UTC
Sorry for any confusion...

I've got several PowerPath devices from a dead system that I'm mounting
temporarily on one node in my cluster. I run devfsadm -Cv and vxdctl
enable. After that I can see the PowerPath devices listed as...

emc_clariion0_10 auto:none - - online invalid

I modified my /etc/vfstab file to mount the devices..

/dev/vx/dmp/emc_clariion0_10s6 /dev/vx/rdmp/emc_clariion0_10s6 /u10
ufs 3 yes -

The device mounts and I can access the file system with all my data. When
the activity starts to increase on these temporary mount points, I see a
countdown on the console that port H has lost connectivity. After the 16
seconds, the node panics and of course reboots. However, if I mount the
PowerPath devices using a single path..

/dev/dsk/c1t5006016100600432d10s6 /dev/rdsk/c1t5006016100600432d10s6 /u10
ufs 3 yes -

I never get the port H losing connectivity.

I want to use the dmp name in case I lose a path to these disks.

Any reason why using the dmp name causes port H to lose connectivity vs.
using a single path?

Thanks,
Collin
Post by William Havey
The original message states "mount these disks as
/dev/vx/dmp/<emc_array>_Xs6 ". Perhaps this is normal behavior. Mounts are
of devices which receive I/O. A "/dev/vx/dmp/..." device entry isn't I/O
capable.
I think a clearer statement of what Collin intends to do is needed.
Bill
Post by Dmitry Glushenok
Hello,
The panic string and the preceding console messages usually help to
identify the cause. The release notes for RP2-RP3 also provide short
descriptions of fixed issues, such as "Fixed the cause of a system panic
when mutex_panic() was called from vol_rwsleep_wrlock()."
Post by Collin
I've got the following....
Solaris 10
VxVM 5.0MP3RP1HF12
I have a number of mount points that are being migrated from
/dev/dsk/cXtXdXsX to clustered mount points. The problem I'm having is that
if I mount these disks in the /dev/dsk/cXtXdXsX format, I run the risk that
if something were to cause the direct path to go down, I would lose the
databases on these mount points. But when I mount these disks as
/dev/vx/dmp/<emc_array>_Xs6, my system panics and core dumps.
Post by Collin
Does VxVM have any issues mounting /dev/vx/dmp/<emc_array>_Xs6??
Thanks,
Collin
--
Dmitry Glushenok
Jet Infosystems
v***@xanthia.com
2010-03-30 15:31:05 UTC
Couple questions:

Are the underlying volumes VxVM volumes? If so, why are you not mounting
the /dev/vx/dsk/DGNAME/volname objects?

If the devices actually /are/ powerpath devices (i.e., they're
metadevices coalescing the underlying EMC LUN paths), why aren't you
mounting the powerpath device node rather than the DMP device node?

Overall, I guess what's not clear, here, is what role you are actually
wanting Storage Foundation to have. Are you attempting to use it
strictly for multi-path support? If so, there are likely more
cost-effective ways of doing multi-pathing (it sounds like you're already
paying for PowerPath anyway, and, even if you weren't, MPxIO should
be available to you).
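
If MPxIO is the route you take, it's enabled on Solaris 10 with
stmsboot (a hedged sketch: -e rewrites /etc/vfstab to the new scsi_vhci
device names and requires a reboot, so test on a non-production host
first; -L lists the old-to-new device name mappings afterward):

# stmsboot -e
# stmsboot -L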
Post by Collin
Sorry for any confusion...
I've got several PowerPath devices from a dead system that I'm
mounting temporarily on one node in my cluster. I run devfsadm -Cv
and vxdctl enable. After that I can see the PowerPath devices listed
as...
emc_clariion0_10 auto:none - - online invalid
I modified my /etc/vfstab file to mount the devices..
/dev/vx/dmp/emc_clariion0_10s6 /dev/vx/rdmp/emc_clariion0_10s6
/u10 ufs 3 yes -
The device mounts and I can access the file system with all my data.
When the activity starts to increase on these temporary mount points,
I see a countdown on the console that port H has lost connectivity.
After the 16 seconds, the node panics and of course reboots. However,
if I mount the PowerPath devices using a single path..
/dev/dsk/c1t5006016100600432d10s6 /dev/rdsk/c1t5006016100600432d10s6
/u10 ufs 3 yes -
I never get the port H losing connectivity.
I want to use the dmp name in case I lose a path to these disks.
Any reason why using the dmp name causes port H to lose connectivity
vs. using a single path?
--
"You can be only *so* accurate with a claw-hammer" --me
William Havey
2010-03-30 15:41:51 UTC
Is I/O fencing in place in the cluster? If so, and those devices you
are attempting to mount have registration keys on them which are unknown
to GAB, then GAB tells had about the unknown keys; had stops, bringing
down port h, and the system panics.
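
One way to check for stale keys from the dead system (a hedged sketch;
the vxfenadm option letters changed between VCS releases, so check the
usage output of vxfenadm on your version first):

# gabconfig -a
  (port b membership means fencing is configured)
# vxfenadm -g /dev/rdsk/c1t5006016100600432d10s2
  (reads the SCSI-3 registration keys on the LUN)

If stale keys from the old host are present, Symantec documents
vxfenclearpre for clearing pre-existing keys; only run it with the
cluster down.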
Post by Collin
Sorry for any confusion...
I've got several PowerPath devices from a dead system that I'm mounting
temporarily on one node in my cluster. I run devfsadm -Cv and vxdctl
enable. After that I can see the PowerPath devices listed as...
emc_clariion0_10 auto:none - - online invalid
I modified my /etc/vfstab file to mount the devices..
/dev/vx/dmp/emc_clariion0_10s6 /dev/vx/rdmp/emc_clariion0_10s6 /u10
ufs 3 yes -
The device mounts and I can access the file system with all my data. When
the activity starts to increase on these temporary mount points, I see a
countdown on the console that port H has lost connectivity. After the 16
seconds, the node panics and of course reboots. However, if I mount the
PowerPath devices using a single path..
/dev/dsk/c1t5006016100600432d10s6 /dev/rdsk/c1t5006016100600432d10s6
/u10 ufs 3 yes -
I never get the port H losing connectivity.
I want to use the dmp name in case I lose a path to these disks.
Any reason why using the dmp name causes port H to lose connectivity vs.
using a single path?
Thanks,
Collin
Post by William Havey
The original message states "mount these disks as
/dev/vx/dmp/<emc_array>_Xs6 ". Perhaps this is normal behavior. Mounts are
of devices which receive I/O. A "/dev/vx/dmp/..." device entry isn't I/O
capable.
I think a clearer statement of what Collin intends to do is needed.
Bill
Post by Dmitry Glushenok
Hello,
Panic string and previous messages usually helps to understand cause..
Release notes to RP2-RP3 also provides short descriptions of fixed issues
like "Fixed the cause of a system panic when mutex_panic() was called from
vol_rwsleep_wrlock()."
Post by Collin
I've got the following....
Solaris 10
VxVM 5.0MP3RP1HF12
I have a number of mount points that are being migrated from
/dev/dsk/cXtXdXsX to clustered mount points. The problem I'm having is that
if I mount these disks in the /dev/dsk/cXtXdXsX format, I run the risk that
if something were to cause the direct path to go down, I would lose the
databases on these mount points. But when I mount these disks as
/dev/vx/dmp/<emc_array>_Xs6, my system panics and core dumps.
Post by Collin
Does VxVM have any issues mounting /dev/vx/dmp/<emc_array>_Xs6??
Thanks,
Collin
--
Dmitry Glushenok
Jet Infosystems
Stuart Andrews
2010-03-31 00:04:52 UTC
Collin



OK - I see you are using enclosure naming - and BTW, the /dev/vx/dmp
devices are just the same as the disks, BUT with the added provision of
DMP to keep the device online.

How many paths are there to these devices?

# vxdmpadm getsubpaths

or

# vxdisk path



If it's a CLARiiON and there are only 2 paths, set the array iopolicy to
singleactive - this is the most likely fix - see the DMP block switch
notes below.

# vxdmpadm listenclosure all

# vxdmpadm setattr enclosure ENC_name iopolicy=singleactive

If it's a CLARiiON and there are more than 2 paths, set the array iopolicy
to balanced - DMP does know how to stop IO to the secondary paths.
Note - this is against EMC recommendations, but it works.

# vxdmpadm listenclosure all

# vxdmpadm setattr enclosure ENC_name iopolicy=balanced



I also notice that the enclosure names are lower case - indicating a
V5.x release of VxVM is installed - are the CLARiiON APMs running?

# vxdmpadm listapm all

Check that the CLARiiON APMs are Active.



If this is a fencing cluster - then, as far as the /dev/vx/dmp names for
the devices are concerned, these are local LUNs. And yes, SCSI-3 keys
will be placed on them in a fencing cluster.

# gabconfig -a

Check if there is Port b membership - if so then yes you have a fencing
cluster.



Check also the DMP block switch - it may be that with the iopolicy
incorrect, and with low IO, you never reached the limit set by the
dmp_pathswitch_blks_shift tunable:

# vxdmpadm gettune all

Now - when busy and IO chunks bigger than path switch level ( and with
iopolicy incorrect ) a path switch will cause a trespass (check SAN
logs) AND a block, drain, resume on the DMP path. There will be a
failover message logged in /etc/vx/dmpevents.log - check here also.
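
A hedged sketch of checking and adjusting that tunable (the value is a
power-of-two shift, so 11 means 2^11 blocks; confirm the current value
and your release's settune support before changing anything):

# vxdmpadm gettune dmp_pathswitch_blks_shift
# vxdmpadm settune dmp_pathswitch_blks_shift=11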



Stuart



________________________________

From: veritas-vx-***@mailman.eng.auburn.edu
[mailto:veritas-vx-***@mailman.eng.auburn.edu] On Behalf Of Collin
Sent: Wednesday, 31 March 2010 2:09 AM
To: William Havey
Cc: veritas-***@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] VxVm



Sorry for any confusion...

I've got several PowerPath devices from a dead system that I'm mounting
temporarily on one node in my cluster. I run devfsadm -Cv and vxdctl
enable. After that I can see the PowerPath devices listed as...

emc_clariion0_10 auto:none - - online invalid

I modified my /etc/vfstab file to mount the devices..

/dev/vx/dmp/emc_clariion0_10s6 /dev/vx/rdmp/emc_clariion0_10s6 /u10
ufs 3 yes -

The device mounts and I can access the file system with all my data.
When the activity starts to increase on these temporary mount points, I
see a countdown on the console that port H has lost connectivity. After
the 16 seconds, the node panics and of course reboots. However, if I
mount the PowerPath devices using a single path..

/dev/dsk/c1t5006016100600432d10s6 /dev/rdsk/c1t5006016100600432d10s6
/u10 ufs 3 yes -

I never get the port H losing connectivity.

I want to use the dmp name in case I lose a path to these disks.

Any reason why using the dmp name causes port H to lose connectivity vs.
using a single path?

Thanks,
Collin

On Tue, Mar 30, 2010 at 10:48 AM, William Havey <***@gmail.com>
wrote:

The original message states "mount these disks as
/dev/vx/dmp/<emc_array>_Xs6 ". Perhaps this is normal behavior. Mounts
are of devices which receive I/O. A "/dev/vx/dmp/..." device entry isn't
I/O capable.

I think a clearer statement of what Collin intends to do is needed.

Bill



On Tue, Mar 30, 2010 at 3:01 AM, Dmitry Glushenok <***@jet.msk.su>
wrote:

Hello,

The panic string and the preceding console messages usually help to
identify the cause. The release notes for RP2-RP3 also provide short
descriptions of fixed issues, such as "Fixed the cause of a system panic
when mutex_panic() was called from vol_rwsleep_wrlock()."
Post by Collin
I've got the following....
Solaris 10
VxVM 5.0MP3RP1HF12
I have a number of mount points that are being migrated from
/dev/dsk/cXtXdXsX to clustered mount points. The problem I'm having is that
if I mount these disks in the /dev/dsk/cXtXdXsX format, I run the risk that
if something were to cause the direct path to go down, I would lose the
databases on these mount points. But when I mount these disks as
/dev/vx/dmp/<emc_array>_Xs6, my system panics and core dumps.
Post by Collin
Does VxVM have any issues mounting /dev/vx/dmp/<emc_array>_Xs6??
Thanks,
Collin
--
Dmitry Glushenok
Jet Infosystems


Sengor
2010-04-02 15:15:01 UTC
Guys,

A common misconception: I'm pretty sure PowerPath will fail paths over even
if you're using one of the native OS path devices as the filesystem mount
device. This is explicitly stated in the PowerPath Admin Guide and happens
transparently through the driver stack. The question should be which of
VxDMP or PowerPath is actually actively doing the multipathing; I'm guessing
VxDMP goes into passive mode when it sees PowerPath.
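
A hedged way to check which layer currently owns the path management
(powermt is the PowerPath CLI; vxdmpadm shows what DMP thinks it
controls):

# powermt display dev=all
# vxdmpadm getsubpaths

Typically, when DMP has deferred to PowerPath it shows the emcpower
pseudo-devices as single-path DMP nodes, whereas if it shows multiple
cXtXdX paths under one DMP node, DMP itself is doing the multipathing.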

Also, there's a script which comes with VCS that'll probe your LUNs and
inquire whether they support any necessary SCSI reservations for the
cluster.
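
If it helps, the script is most likely vxfentsthdw (an assumption on my
part; the path below is from VCS 5.0, and note that its read/write test
is destructive to data on the disks it probes):

# /opt/VRTSvcs/vxfen/bin/vxfentsthdw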