[SATLUG] evms command line...

Enrique Sanchez esanchezvela.satlug at gmail.com
Tue Sep 8 16:30:36 CDT 2009


On Tue, Sep 8, 2009 at 3:46 PM, John Pappas <j at jvpappas.net> wrote:
> On Tue, Sep 8, 2009 at 12:02, Enrique Sanchez <esanchezvela.satlug at gmail.com> wrote:
> So you have a standard active/passive (failover) cluster set up for
> Application and RAC (implies Active/Active) for DB, right?
>
>
>> >  I am thinking that standard LVM can be used via import/export scripts in
>> the failover policy.
>>
>> that's an idea, will explore it now. thank you.
>>


You nailed it. When I first began working on this problem I was
wondering why it couldn't be done the way it had been done in the
original HACMP (IBM's/CLAM's HACMP), and I wasn't aware of how
Heartbeat v2 worked at all. After asking a couple of questions and
reading a lot of information I got into EVMS, which frankly royally
s**ks. Then, after more reading into how HB works, I learned all
about the OCF resources/primitives, etc., which led me to check the
existing primitives. There I found a strangely named one, LVM, which
does nothing other than run vgexport/vgimport on a traditional LVM2
volume group. By then I had forgotten my initial question and wanted
to do it on an active-active cluster.
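For anyone curious, the failover that primitive wraps boils down to
something like this (just a sketch; the VG, LV and mount point names
are made up):

  # on the node releasing the volume group
  umount /app
  vgchange -an appvg      # deactivate the LVs in the VG
  vgexport appvg          # mark the VG as exported

  # on the node taking over
  vgimport appvg          # re-import the VG metadata
  vgchange -ay appvg      # activate the LVs
  mount /dev/appvg/applv /app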

>
> This would only apply to the Application cluster.  The RAC would have to be
> run differently.  From my cursory research, CLVM looks like it lets the VGs
> be shared amongst participating nodes (comm is done via CLVMD over the
> network).  Not sure what best practice is, since most clusters have at
> least 1 client-facing network and at least 1 cluster-only network, so not
> sure if cluster.conf handles nodes with more than 1 IP address.  Would
> assume so, but have not tried it.
>

Right, that's only for the App servers; the Database ones will be
another story. I guess I'll need to use EVMS there, even though I
don't like it.
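That said, if CLVM does pan out for the DB side, my understanding is
the shared VG setup would look roughly like this (sketch only; device
and VG names are placeholders, and it assumes clvmd is already running
on every node):

  pvcreate /dev/emcpowera
  vgcreate -cy racvg /dev/emcpowera   # -cy marks the VG as clustered
  lvcreate -n data01 -L 50G racvg
  vgchange -ay racvg                  # run on each node; clvmd handles locking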


>
>> I get a bunch of 8.43 GB EMC shares and I need to group them together
>> by class (Tier1, Tier2 and Tier3) into volume groups.
>>
>
> That makes sense, T1 VGs, T2 VGs, T3 VGs.  Would not have a MixTier VG (as
> that could lead to badness).
>
>
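Agreed. Something along these lines is what I have in mind (device
names are placeholders for the PowerPath pseudo-devices):

  pvcreate /dev/emcpowera /dev/emcpowerb /dev/emcpowerc
  vgcreate tier1vg /dev/emcpowera     # one VG per storage tier
  vgcreate tier2vg /dev/emcpowerb
  vgcreate tier3vg /dev/emcpowerc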
>> > That makes sense, AFAIK RAC (and most standard cluster types) requires
>> > shared block storage (FC/iSCSI/etc); most do not function well with shared
>> > file-level storage (NFS or similar) due to the locking mechanisms involved.
>>
>> Oracle RAC works well with GPFS (IBM's, but that costs an arm and a leg).
>>
>
> Some would argue that the licensing required to run RAC costs an Arm + Leg +
> Soul to begin with, so adding GPFS would leave neither legs nor arms.  Not
> sure if GFS (RedHat Global FS) could help here either.  Maybe even Polyserve
> (not sure of Oracle support, though).
>
>> I agree; besides, EVMS is only supported in SUSE10 and no longer
>> developed, so adopting it right now is not what I would call
>> brilliant.
>>
>
> Did not realize EVMS bit the dust.  I agree, building in another deprecated
> dependency would be poor planning.
>
>

GFS is not supported on SLES10. Jeremy Mann has suggested Lustre,
which seems like a good option, as I don't want to be stuck on
something that doesn't provide a migration path to anything else.
There is also Veritas Cluster Manager, but Lustre seems more
promising.




>> thank you again,
>> Enrique.
>> --
>>
>
> Sure.  <Shamelessness>  I freelance if there is room on the project
> </shamelessness>
>
> John P

