[SATLUG] evms command line...

John Pappas j at jvpappas.net
Tue Sep 8 14:30:22 CDT 2009


Found some Cluster LVM material (some of it is dated, but still relevant from
a concept/general-procedure perspective):

http://sources.redhat.com/cluster/doc/usage.txt
Most relevant, as GFS and OCFS are similar in nature.

http://iggi.mandriva.com/ch23.html
Seems thorough, but appears to document non-shared-storage configs.

On Tue, Sep 8, 2009 at 10:45, John Pappas <j at jvpappas.net> wrote:

> Sorry for the latency (congestion occurred) but:
>
> On Mon, Aug 31, 2009 at 03:09, Enrique Sanchez <
> esanchezvela.satlug at gmail.com> wrote:
>
>> On Sun, Aug 30, 2009 at 11:09 AM, John Pappas <j at jvpappas.net> wrote:
>> >
>> > Define cluster.  Are you using a cluster file system (GFS, Lustre, OCFS,
>> > CXFS), or a standard cluster where the disks are shared, but only one
>> > mounts at a time?
>>
>> We need to provide highly available application and database servers;
>> the database servers will be using Oracle RAC with ocfs2.
>>
>
> OK, so native LVM cannot be used due to cluster locking issues (standard
> LVM assumes a single-system lock setup), and I am not sure whether CLVM
> (Cluster LVM) is multi-active capable (reading...)
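>
> If CLVM does turn out to be the route, the usual steps (as I understand
> them; verify against the SLES 10 docs) are to switch LVM over to cluster
> locking and flag the VG as clustered, roughly:
>
>   # /etc/lvm/lvm.conf, global section: 3 = built-in clustered locking via clvmd
>   locking_type = 3
>
>   # start the cluster locking daemon (init script name may differ on SLES 10)
>   /etc/init.d/clvmd start
>
>   # flag an existing VG as clustered ("datavg" is just an example name)
>   vgchange -cy datavg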
>
>
>> and the application servers would be present on one node at a time; if
>> the node fails, the application server must be restarted on the
>> surviving node (including the IP address and filesystems), thus the
>> filesystems can be mounted on both servers at the same time or one at
>> a time.
>>
>
> I assume that the Active/Passive application server setup is due to lack of
> a load balancer or session persistence; regardless, CLVM is certainly
> acceptable for active/passive (quorum-controlled resource) clusters, as only
> one node is expected to own the resource at a time.  I am thinking that
> standard LVM can be used via import/export scripts in the failover policy.
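>
> Something along these lines in the failover scripts is what I have in mind;
> the VG/LV names and mount point below are made up, it is just the
> vgexport/vgimport flow:
>
>   # on the releasing node (if it is still up):
>   umount /srv/app
>   vgchange -an appvg        # deactivate all LVs in the VG
>   vgexport appvg            # mark the VG as exported
>
>   # on the surviving node, during failover:
>   vgimport appvg
>   vgchange -ay appvg        # activate the LVs
>   mount /dev/appvg/applv /srv/app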
>
>> If the standard LVM manager can be used to mount the filesystems on
>> one server at a time and import them on the surviving one, I'll throw
>> evms out of the window; however, it cannot be used, as I found out,
>> to create ocfs2 filesystems mounted on both servers.
>>
>
> I am not sure what "it cannot be used" refers to, but LVM does not care
> what filesystem is on an LV, and EVMS requires a "plug-in" so that it can
> call the appropriate `mkfs.x` to format the created ELV.
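>
> In plain LVM terms, the two-step equivalent would be something like the
> below (names and sizes are placeholders, and I have not personally run
> mkfs.ocfs2 on top of an LV, so treat it as a sketch):
>
>   lvcreate -L 16G -n oradata_lv datavg
>   mkfs.ocfs2 -L oradata -N 2 /dev/datavg/oradata_lv   # -N = node slots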
>
>
>> I need to group disks into larger units (cluster aware), then slice
>> them into filesystems; the disks are EMC shares of 8.43 GB and the
>> filesystems can be any size.
>>
>
> I am not sure how OCFS2 reacts to LVM, but RAC uses multiple-instance,
> single-tablespace to allow Active/Active clustering with shared data.  CLVM
> would then have to allow multiple-node VG import for this to work.  The URL
> below implies that multiple-node VG access is allowed, but it is unclear, so
> more reading is needed.
> See:
> http://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/LVM_Cluster_Overview.html
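>
> For what it is worth, the clustered flag is visible from the CLI, so it
> should be easy to see what CLVM thinks it is doing (VG and device names
> are examples only):
>
>   vgcreate -cy datavg /dev/sdb /dev/sdc   # create the VG as clustered
>   vgs -o vg_name,vg_attr datavg           # a 'c' in the attr string = clustered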
>
> AFAIK most cluster software allows for the definition of a resource group
> that includes the physical disks belonging to that group.  RAC
> notwithstanding, is that the case here?  If so, I would guess that "best
> practice" would be to define and maintain the grouping there.  If not, LVM
> presumes similar physical disks, since a VG takes all disk blocks on all PVs
> and groups them into a pool of shared extents.  I am getting the feeling
> that your "grouping" need is not LVM-style management.
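>
> To illustrate what I mean by a shared extent pool (device names are
> placeholders):
>
>   pvcreate /dev/sdb /dev/sdc /dev/sdd
>   vgcreate datavg /dev/sdb /dev/sdc /dev/sdd
>   pvs -o pv_name,vg_name,pv_size,pv_free   # every PV's extents feed one pool
>   lvcreate -L 20G -n lv01 datavg           # may land on any of those PVs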
>
>
>> >> check the "it could be
>> >> possible" part, the evmsgui seems to do the job just fine but I need
>> >> to work with 20 to 30 disks per cluster, with different sizes and raid
>> >> levels, so it is just not feasible to get the job done with the GUI,
>> >> besides, I hate GUIs.
>>
>
> I assume that the RAID levels are being defined on the array, so the OS has
> no visibility into (nor interest in, really) the RAID levels, unless
> PowerPath/NavCLI is installed on the nodes involved.  Even then, grouping
> dissimilar RAID PVs in a VG is not a fantastic idea.  The implied
> assumption is that the creation of a VG implicitly shares all PVs therein as
> a common pool, and LVs can be cut from any/all of them without regard to the
> underlying PVs.  This does not seem to be the grouping strategy for which
> you are aiming.  Many shops use "custom" config files and wrappers that
> "group" disks when there is no classical cluster management tool involved
> to handle the resource groups.
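>
> As a purely hypothetical example of the wrapper approach: a config file
> mapping group names to devices, plus a trivial script to read it, is about
> all I have seen at some shops (file path, group names, and devices below
> are invented):
>
>   # /etc/diskgroups.conf, one group per line, e.g.:
>   #   rac_data  /dev/emcpowera /dev/emcpowerb
>   #   rac_logs  /dev/emcpowerc
>
>   #!/bin/sh
>   # print the devices belonging to the named group
>   group="$1"
>   awk -v g="$group" '$1 == g { for (i = 2; i <= NF; i++) print $i }' \
>       /etc/diskgroups.conf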
>
>
>> > I assume that you are using a shared storage backend here (FibreChannel,
>> > Switched SAS, etc), as "Disks" don't have RAID levels, whereas LUNs do.
>>
>> We're using shared FC Storage (visible by both servers).
>>
>
> That makes sense.  AFAIK RAC (and most standard cluster types) requires
> shared block storage (FC/iSCSI/etc.); most do not function well with shared
> file-level storage (NFS or similar) due to the locking mechanisms involved.
>
>
>> >> The only problem is that I've been having more than the fun I
>> >> bargained for with the CLI, as it seems to have a mind of its own.  Has
>> >> anyone successfully worked with the CLI who can provide a good
>> >> reference point to work with it?  So far, I've found a couple of EVMS
>> >> user guides, but their information on the command line is very limited.
>> >
>> >
>> > We just use LVM and the appropriate FS tools to manage the disks;
>> > granted, you often have to do multiple operator steps (create LV, run
>> > mkfs.x, where EVMS does that in one step), but finding LVM expertise is
>> > MUCH easier than EVMS expertise.
>> >
>>
>> I think I am familiar enough with Linux LVM to be able to create VGs,
>> and I am breaking ground with evms; too bad it is dead beyond SLES 10.
>>
>
> Active/Active clusters are complex beasts, and most rely on the cluster
> management tool to group cluster resources (disks, IPs, names, services,
> files, etc.) together, rather than on a subsystem like LVM.
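>
> For example, with Heartbeat-style HA (which I believe is what SLES 10
> ships), the grouping ends up as something like a haresources line; this is
> from memory and every name in it is invented:
>
>   # /etc/ha.d/haresources (Heartbeat v1-style syntax)
>   node1 IPaddr::192.168.10.50/24 Filesystem::/dev/appvg/applv::/srv/app::ext3 appservice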
>
>> >> Why SUSE and why 10SP2, because that is what the customer wanted to
>> >> use, no bargaining there.
>> >>
>> >
>> > This is like running RHEL 4.x over 5.x, so I assume that they have
>> > a dependency (custom software, or the like; or simple politics) that
>> > prevents using a new kernel or userspace tool set.
>>
>> I know and am aware of the situation; however, SAP, or Novell, has not
>> certified SAP for SLES 11 (or vice versa).
>>
>
> Yep, the aforementioned assumed dependency.  In this case, it is a "vendor
> support" dependency; sometimes it is hardware, sometimes software,
> sometimes political, but always a "show stopper".
>
> HTH,
> jp
>

