[SATLUG] evms command line...

Enrique Sanchez esanchezvela.satlug at gmail.com
Mon Aug 31 03:09:13 CDT 2009

On Sun, Aug 30, 2009 at 11:09 AM, John Pappas <j at jvpappas.net> wrote:
> I am not sure I can help here but:
> On Fri, Aug 28, 2009 at 16:07, Enrique Sanchez <
> esanchezvela.satlug at gmail.com> wrote:
>> Hey folks,
>> I need to set up a few SUSE 10 SP2 clusters
> Define cluster.  Are you using a cluster file system (GFS, Lustre, OCFS,
> CXFS), or a standard cluster where the disks are shared but only one node
> mounts at a time?

We need to provide highly available application and database servers.
The database servers will use Oracle RAC on OCFS2, and the application
servers will run on one node at a time; if that node fails, the
application server must be restarted on the surviving node (including
its IP address and filesystems). Thus, the filesystems must be
mountable either on both servers at the same time or on one at a time.
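For the active/passive half, the manual takeover on the surviving node is only a few commands. A minimal sketch, assuming a volume group named appvg, a logical volume lvapp, and placeholder mount point, interface, and IP (none of these names come from the thread); RUN=echo makes it a dry run that only prints the commands:

```shell
#!/bin/sh
# Dry-run sketch of an active/passive takeover on the surviving node.
# appvg, lvapp, /srv/app, eth0 and the IP are illustrative placeholders.
RUN="echo"   # set RUN="" to actually execute the commands

takeover() {
    $RUN vgchange -a y appvg                  # activate the shared VG on this node
    $RUN mount /dev/appvg/lvapp /srv/app      # mount the application filesystem
    $RUN ip addr add 10.0.0.50/24 dev eth0    # take over the service IP
}

takeover
```

In production a cluster resource manager (e.g. Heartbeat, which SLES 10 ships) would drive these steps, and the VG must be deactivated on the failed node (`vgchange -a n appvg`) so the same non-cluster filesystem is never mounted twice.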

If the standard LVM manager can be used to mount the filesystems on
one server at a time and import them on the surviving one, I'll throw
EVMS out of the window. However, as I found out, LVM cannot be used
to create OCFS2 filesystems mounted on both servers at once.
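For the concurrently mounted side, the OCFS2 filesystem is created once with mkfs.ocfs2 and then mounted on both nodes; LVM need not be involved. A dry-run sketch (RUN=echo only prints the commands; the device path and label are placeholders, and the o2cb cluster stack must already be configured in /etc/ocfs2/cluster.conf):

```shell
#!/bin/sh
# Dry-run sketch: create an OCFS2 filesystem and mount it (mount on BOTH nodes).
# /dev/mapper/oradata, the label and /u02 are illustrative placeholders.
RUN="echo"   # set RUN="" to actually execute

ocfs2_setup() {
    $RUN mkfs.ocfs2 -N 2 -L oradata /dev/mapper/oradata   # -N 2: one node slot per cluster node
    $RUN mount -t ocfs2 /dev/mapper/oradata /u02          # repeat this mount on the other node
}

ocfs2_setup
```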

>> and evms seems to be the
>> only option to bundle several disks together,
> Define bundle.  LVM "bundles" disks into Volume Groups, and is much more
> common than EVMS.  AFAIK, EVMS combines operations so that you don't have to
> manipulate LVM and then manipulate the FS.  I think that EVMS is kind of a
> "wrapper" around LVM+FS.

I need to group disks into larger, cluster-aware units and then slice
them into filesystems; the disks are EMC LUNs of 8.43 GB each, and the
filesystems can be any size.
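For the failover (one-node-at-a-time) filesystems, plain LVM can do the grouping and slicing. A dry-run sketch, assuming two of the 8.43 GB LUNs show up as /dev/mapper/emc01 and /dev/mapper/emc02 (placeholder names, as are the VG/LV names and sizes):

```shell
#!/bin/sh
# Dry-run sketch: bundle two LUNs into a VG, then slice out a filesystem.
RUN="echo"   # set RUN="" to actually execute

build_vg() {
    $RUN pvcreate /dev/mapper/emc01 /dev/mapper/emc02     # initialize the LUNs as PVs
    $RUN vgcreate appvg /dev/mapper/emc01 /dev/mapper/emc02
    $RUN lvcreate -L 5G -n lvapp appvg                    # any size up to the VG's free space
    $RUN mkfs -t ext3 /dev/appvg/lvapp                    # two steps: LV first, then mkfs
}

build_vg
```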

>> check the "it could be
>> possible" part, the evmsgui seems to do the job just fine but I need
>> to work with 20 to 30 disks per cluster, with different sizes and raid
>> levels, so it is just not feasible to get the job done with the GUI,
>> besides, I hate GUIs.
> I assume that you are using a shared storage backend here (FibreChannel,
> Switched SAS, etc), as "Disks" don't have RAID levels, whereas LUNs do.

We're using shared FC Storage (visible by both servers).
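With 20 to 30 LUNs per cluster, the repetitive part scripts easily whether EVMS or LVM does the work underneath. A small generator that emits the LVM commands for review instead of running them (the VG name and device names are placeholders):

```shell
#!/bin/sh
# Emit (don't run) the pvcreate/vgcreate commands for a list of LUNs,
# so the batch can be reviewed before piping it to sh.
gen_vg_cmds() {
    # $1 = VG name, remaining args = device names under /dev/mapper
    vg=$1; shift
    devs=""
    for dev in "$@"; do
        echo "pvcreate /dev/mapper/$dev"
        devs="$devs /dev/mapper/$dev"
    done
    echo "vgcreate $vg$devs"
}

gen_vg_cmds datavg emc01 emc02 emc03
```

Once the printed batch looks right, `gen_vg_cmds datavg emc01 emc02 emc03 | sh` executes it.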

>> The only problem is that I've been having more than the fun I
>> bargained for with the CLI, as it seems to have a mind of its own. Has
>> anyone successfully worked with the CLI who can provide a good
>> reference point for working with it?  So far, I've found a couple of EVMS
>> user guides, but their information on the command line is very limited.
> We just use LVM and the appropriate FS tools to manage the disks; granted,
> you often have to do multiple operator steps (create the LV, run mkfs.x, where
> EVMS does that in one step), but finding LVM expertise is MUCH easier than
> EVMS expertise.

I think I am familiar enough with Linux LVM to create VGs, and I am
breaking ground with EVMS; too bad it is dead beyond SLES 10.

>> Why SUSE and why 10SP2,  because that is what the customer wanted to
>> use, no bargaining there.
> This is like running RHEL 4.x over 5.x, so I assume that they have
> a dependency (custom software, or the like; or simple politics) that
> prevents using a new kernel or userspace tool set.  I am not aiming for a
> distro flame war, and I am biased, but SLES 10 is a reasonable enterprise
> OS, as much so as RHEL4, but RHEL is marketed much better, thus is more
> common.  Both wrap "linux" (the kernel) with common GNU tools (Bash,
> binutils, etc.) but are "built" differently and "feel" different (i.e. the
> /etc organization is slightly different, and the bundled "admin" tools are
> different).

I know and am aware of the situation; however, SAP, or Novell, has not
certified SAP for SLES 11 (or vice versa).

