[SATLUG] Filesystem/RAID advice
travis+ml-satlug at subspacefield.org
Sun Oct 24 19:30:09 CDT 2010
On Sun, Oct 24, 2010 at 06:28:28PM -0500, Brad Knowles wrote:
> Actually, it depends on the hardware and software. If the hardware
> is slow enough and the software is fast enough, it may actually be
> faster to emulate it in software than to run it in hardware.
Sure, but if they're both on the same CPU, as is the case with
hardware virtualization, then emulation is usually slower than native
execution.
> But the big trick to a lot of emulation is that you're not actually
> emulating the hardware itself, you're emulating many layers of
> software that is running on top of the hardware. If you can
> translate those layers of software and intercept the calls earlier
> in the stack and replace the layers underneath with other software
> that has been tuned to execute faster, then you can make it seem
> like emulating hardware in software on the same machine is actually
> faster than running the software on the bare hardware -- what you've
> really done is cut out many layers of crappy software and replaced
> them with more efficient software that runs faster.
> See above. There are plenty of cases where Microsoft has shipped
> really, really crappy software, and by chopping out many of the
> layers of crappy software that Microsoft has shipped, it is quite
> possible that you can make the same code run faster in a "virtual
Now, I'm not a specialist in virtualization, but I assumed we were
all talking about full hardware virtualization here - virtualizing at
the hardware-software interface, a la VMware.
As such, it's difficult for "do some stuff in software and then do it
in hardware" (s+h1) to be faster than "do it in hardware" (h2).
Possible, but difficult: it requires s+h1 < h2, which can only happen
if h1 < h2 - that is, if you can use the (same) hardware more
efficiently.
I think what you're discussing is paravirtualization.
In that case, yeah, you're replacing s1+h1 with s2+h2, and you can
see bigger gains, since you're replacing more - not just the hardware
access, but potentially inefficient software as well.
OP: You can deal with the "buggy file system" issue, if corruption is
very rare, by detecting it quickly - before copying data to another
location where you keep periodic backups.
I don't really recommend those consumer NAS devices; my impression is
that most of them have underpowered CPUs, limited RAM, slow
interfaces, limited upgradeability, and limited file system choices,
though I'd be happy to be proved wrong. It's certainly not a design
requirement, just market forces, I think ("cost per terabyte" thinking).
I dumped about $2k into my file server, which is a real unix system.
I think I posted my performance figures here a while back.
A couple of surprising things:
* With a multi-TB system, you want a file system that doesn't require
costly fscks at startup
* All the discussion of NVRAM and stuff matters very little if you
have a UPS on the server and can do clean shutdowns
* The UPS on your file server will limit the uptime of everything
that depends on it.
* A little thing like a filesystem oops on shutdown (which I had with
xfs quite often) can ruin your whole "shut down cleanly" plan.
* If your boot CDs don't have the right LVM and FS tools, you could
find yourself in a very annoying situation.
* Buggy file systems are super, super annoying, and RAID only
makes the data corruption more efficient. ;-)
* Hardware RAID is rare, and Linux support for it is limited. I ended
up going with 3ware and haven't had a problem except getting used
to the setup and management interface.
On file systems, I ended up using reiserfs, though I look forward to
Good code works on most inputs; correct code works on all inputs.
My emails do not have attachments; it's a digital signature that your mail
program doesn't understand. | http://www.subspacefield.org/~travis/
If you are a spammer, please email john at subspacefield.org to get blacklisted.