[SATLUG] evms command line...

Enrique Sanchez esanchezvela.satlug at gmail.com
Tue Sep 8 12:02:10 CDT 2009


On Tue, Sep 8, 2009 at 11:45 AM, John Pappas<j at jvpappas.net> wrote:
> Sorry for the latency (congestion occurred), but:
>

No problema, I appreciate your help.
I see lots of good ideas here, and I am stalling the project until I
have a firm footing on it.

> On Mon, Aug 31, 2009 at 03:09, Enrique Sanchez <
> esanchezvela.satlug at gmail.com> wrote:
>
>> On Sun, Aug 30, 2009 at 11:09 AM, John Pappas<j at jvpappas.net> wrote:
>> >
>> > Define cluster.  Are you using a cluster file system (GFS, Lustre, OCFS,
>> > CXFS), or a standard cluster where the disks are shared, but only one
>> mounts
>> > at a time?
>>
>> We need to provide highly available application and database servers;
>> the database servers will be running Oracle RAC on ocfs2
>>
>
> Ok, so native LVM cannot be used due to cluster locking issues (standard LVM
> has a single-system lock setup), and I am not sure if CLVM (Cluster LVM) is
> multi-active capable (reading...)
>
>

Right now I am not looking at the active-active scenario; that would
come later.  Thanks.
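
For when we do get there, my understanding is that clustered LVM means
turning on cluster locking in lvm.conf and flagging the VG as
clustered.  A rough, untested sketch (the VG and device names are
made up):

    # /etc/lvm/lvm.conf on every node: use the clustered locking
    # library so clvmd coordinates metadata changes
    locking_type = 3

    # create the VG as clustered so activation is cluster-aware
    vgcreate -c y oravg /dev/emcpowera /dev/emcpowerb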

>> and the application servers would be present on one node at a time, if
>> the node fails the application server must be restarted on the
>> surviving node ( including the IP address and filesystems), thus, the
>> filesystems can be mounted on both servers at the same time or one at
>> a time.
>>
>
> I assume that the Active/Passive Application server setup is due to lack of
> a load balancer or session persistence; regardless, CLVM is certainly
> acceptable for active-passive (quorum controlled resource) clusters, as only
> one node is expected to own the resource at a time.  I am thinking that
> standard LVM can be used via import/export scripts in the failover policy.
>

That's an idea; I will explore it now.  Thank you.
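
Something like this is what I have in mind (a rough, untested sketch;
appvg, apps_lv, and /apps are made-up names):

    # release script, run on the node giving up the resource group
    umount /apps
    vgchange -a n appvg      # deactivate the volume group
    vgexport appvg           # mark it as free for another host

    # acquire script, run on the surviving node
    vgimport appvg           # take ownership of the volume group
    vgchange -a y appvg      # activate it
    mount /dev/appvg/apps_lv /apps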


>> if the standard LVM manager can be used to mount the filesystems on
>> one server at a time and import them on the surviving one, I'll throw
>> EVMS out the window; however, as I found out, it cannot be used
>> to create ocfs2 filesystems mounted on both servers.
>>
>
> I am not sure which "it cannot be used" references, but LVM does not care
> what filesystem is on an LV, and EVMS requires a "plug-in" so that it can
> call the appropriate `mkfs.x` to format the created ELV.
>
>
>> I need to group disks into larger units (cluster aware), then slice
>> them into filesystems; the disks are 8.43 GB EMC shares, and the
>> filesystems can be any size.
>>
>
> I am not sure how OCFS2 reacts to LVM, but RAC uses multiple-instance,
> single-tablespace to allow Active/Active clustering with shared data.  CLVM
> would then have to allow multiple-node VG import for this to work.  The URL
> below implies that multiple-node VG access is allowed, but it is unclear, so
> more reading is needed.
> See:
> http://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/LVM_Cluster_Overview.html
>
> AFAIK most cluster software allows for the definition of a resource group
> that includes the physical disks belonging to that group.  RAC
> notwithstanding, is that the case here?  If so, I would guess that "best
> practice" would be to define and maintain the grouping there.  If not, LVM
> presumes similar physical disks, since a VG takes all disk blocks on all PVs
> and groups them into a pool of shared extents.  I am getting the feeling
> that your "grouping" need is not LVM style management.
>
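
Understood.  If I do go the VG route, I take it lvs can at least show
which PVs ended up backing each LV, something like (names invented):

    # show which physical volumes back each logical volume in the group
    lvs -o lv_name,vg_name,devices tier1vg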


>
>> >> check the "it could be
>> >> possible" part, the evmsgui seems to do the job just fine but I need
>> >> to work with 20 to 30 disks per cluster, with different sizes and RAID
>> >> levels, so it is just not feasible to get the job done with the GUI,
>> >> besides, I hate GUIs.
>>
>
> I assume that the RAID levels are being defined on the array, so the OS has
> no visibility into (nor real interest in) the RAID levels (unless
> PowerPath/NavCLI are installed on the nodes involved).  Even then, grouping
> dissimilar RAID PVs in a VG is not a fantastic idea.  The inferred
> assumption is that the creation of a VG implicitly pools all PVs therein,
> and LVs can be cut from any/all of them without regard to the
> underlying PVs.  This does not seem to be the grouping strategy for which
> you are aiming.  Many shops use "custom" config files and wrappers that
> "group" disks when there is not a classical cluster management tool involved
> to handle the resource groups.
>

I get a bunch of 8.43 GB EMC shares, and I need to group them together
by class (Tier 1, Tier 2, and Tier 3) into volume groups.
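
If plain LVM works out, roughly like this (a sketch; the
PowerPath-style device names are placeholders):

    # label the 8.43 GB LUNs as LVM physical volumes
    pvcreate /dev/emcpowera /dev/emcpowerb /dev/emcpowerc

    # one volume group per storage class
    vgcreate tier1vg /dev/emcpowera /dev/emcpowerb
    vgcreate tier2vg /dev/emcpowerc

    # filesystems of any size then come out of the tier's extent pool
    lvcreate -L 20G -n oradata_lv tier1vg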


>
>> > I assume that you are using a shared storage backend here (FibreChannel,
>> > Switched SAS, etc), as "Disks" don't have RAID levels, where LUNs do.
>>
>> We're using shared FC Storage (visible by both servers).
>>
>
> That makes sense.  AFAIK RAC (and most standard cluster types) requires
> shared block storage (FC/iSCSI/etc); most do not function well with shared
> file-level storage (NFS or similar) due to the locking mechanisms involved.
>

Oracle RAC works well with GPFS (IBM's, but that costs an arm and a leg).


>
>> >> The only problem is that I've been having more than the fun I
>> >> bargained for with the CLI, as it seems to have a mind of its own.  Has
>> >> anyone worked successfully with the CLI who can provide a good
>> >> reference point for it?  So far, I've found a couple of EVMS
>> >> user guides, but their information on the command line is very limited.
>> >
>> >
>> > We just use LVM and the appropriate FS tools to manage the disks;
>> > granted, you often have to do multiple operator steps (create LV, run
>> > mkfs.x; where EVMS does that in one step), but finding LVM expertise is
>> > MUCH easier than EVMS expertise.
>> >
>>


I agree; besides, EVMS is only supported on SUSE 10 and is no longer
being developed, so adopting it right now is not what I would call
brilliant.
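
For the record, the two-step LVM workflow described above would look
something like this on our end (a sketch only; names and sizes are
invented, and the mkfs.ocfs2 switches would need checking against the
ocfs2 docs):

    # step 1: carve a logical volume out of the pool
    lvcreate -L 50G -n ocfsdata_lv tier1vg

    # step 2: put the cluster filesystem on it, with a slot per node
    mkfs.ocfs2 -N 2 -L ocfsdata /dev/tier1vg/ocfsdata_lv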

thank you again,
Enrique.

