[SATLUG] 67 TB for less than $8K

Brad Knowles brad at shub-internet.org
Tue Oct 9 20:48:34 CDT 2012


On Oct 9, 2012, at 6:16 PM, Bruce Dubbs <bruce.dubbs at gmail.com> wrote:

> But the beauty of the solution is that parts are cheap.  There's no expensive Ferrari engine or transmission.

But the O&M costs are still very high, especially since the probability of failure skyrockets each time a human being actually touches the device, or a drive fails and the array drops into degraded mode.

Those are facts of life for anyone who is operating a drive array.

> Also, the follow-up report was that Hitachi drive failures were at 1%, not 5%.  The failure rate included infant mortality, so the burn-in time provides a good screening of bad drives.

The way I read the second post, the overall failure rate was 5%; initial data indicated that they were seeing lower failure rates with the Hitachi drives, but they did not yet have enough experience with them to know what the lifetime failure rate might be.

> In the case of single unit, there are some places without redundancy (e.g. the motherboard), but the power supplies are redundant and a single unit could be set up with multiple RAID arrays.

There are two power supplies, but I don't think that they are actually redundant.  I think that's just how much power you have to provide for all the drives in the case.

If you can take a look at the wiring diagram and decipher what is connected where, maybe you can confirm or deny whether there is actually any power redundancy here.  I can tell that there are five molex power connectors coming from one power supply and four from the other, so I'm pretty sure there isn't any real power redundancy, but I'm not a mechanical or electrical engineer.


Of course, you could set up multiple RAID arrays per device.  If it were me, I'd be doing four- or five-disk RAID-6 groups with three (or six) hot spare drives per chassis (one or two per controller), but you lose a hell of a lot of storage that way.
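Just to put a hedged number on how much storage you'd give up, here's a quick back-of-the-envelope sketch in Python.  The drive counts are my assumptions (45 drives of 1.5 TB each, 15 per controller, per the Backblaze-style design this thread is about), not anything from the original post:

    # Back-of-the-envelope capacity comparison.  Assumes a 45-drive pod with
    # 1.5 TB drives and 15 drives per controller -- my assumptions, for
    # illustration only.
    DRIVE_TB = 1.5

    def raid6_usable(group_sizes, spares, drive_tb=DRIVE_TB):
        """Usable TB for a set of RAID-6 groups; each group gives up 2 drives to parity."""
        data_drives = sum(n - 2 for n in group_sizes)
        return data_drives * drive_tb, sum(group_sizes) + spares

    # Option A: one big 15-drive RAID-6 per controller, no hot spares
    big_tb, big_drives = raid6_usable([15], spares=0)

    # Option B: two 5-disk groups + one 4-disk group + 1 hot spare per controller
    small_tb, small_drives = raid6_usable([5, 5, 4], spares=1)

    print(f"per controller, one big RAID-6: {big_tb:5.1f} TB from {big_drives} drives")
    print(f"per controller, small groups:   {small_tb:5.1f} TB from {small_drives} drives")
    print(f"whole pod: {3 * big_tb:.1f} TB vs {3 * small_tb:.1f} TB usable")

With those assumed numbers you end up around 36 TB usable per pod instead of roughly 58 TB, which is what I mean by losing a lot of storage.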

Due to the way the port multipliers are connected to the PCIe cards, you're going to get device/channel imbalances pretty much any way you do it, and that's going to create significant bottlenecks simply because of how much storage you'd be providing per box and how few channels it would all flow through.
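To put a rough number on the channel side of it: assuming five drives hang off each port multiplier on a single SATA-II channel, and each drive can stream roughly 100 MB/s (both figures are assumptions for illustration), the channel is oversubscribed before you ever add rebuild traffic:

    # Rough oversubscription estimate for one port-multiplied SATA channel.
    # All figures are assumptions: ~300 MB/s SATA-II link, ~100 MB/s per drive,
    # five drives behind each port multiplier.
    CHANNEL_MB_S = 300
    DRIVE_MB_S = 100
    DRIVES_PER_PM = 5

    demand = DRIVES_PER_PM * DRIVE_MB_S
    print(f"aggregate drive demand:   {demand} MB/s")
    print(f"channel bandwidth:        {CHANNEL_MB_S} MB/s")
    print(f"oversubscription factor:  {demand / CHANNEL_MB_S:.2f}x")
    print(f"per-drive share when all five are busy: {CHANNEL_MB_S / DRIVES_PER_PM:.0f} MB/s")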


Of course, you'd also be left looking for a network card that could handle that much bandwidth at a reasonable latency, and the three drive controllers would already have filled the three available PCIe slots on the motherboard.

Ideally, you'd want two dual-interface NICs, each set up in LACP bonding mode and each connected to two different switches, so that you could have higher bandwidth but also survive either switch or NIC failure -- or both.  These could be 10GigE, if you had a place to plug them in.

OTOH, you're going to have such serious performance problems arising from the way the drives are laid out and spread across so few SATA channels that maybe it wouldn't make a difference if you had just a single Gig-E interface per box.
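For what it's worth, here's the same kind of hedged arithmetic on the network side.  The pod-side throughput figures are pure assumptions (nine port-multiplied SATA-II channels at best, a guessed degraded/random-I/O floor at worst):

    # Hedged comparison of network payload capacity vs. what the pod might sustain.
    # All throughput numbers are assumptions for illustration only.
    GIGE_MB_S = 118                 # ~1 Gb/s after Ethernet/TCP overhead
    CHANNELS = 9                    # assumed port-multiplied channels per pod
    CHANNEL_MB_S = 300              # assumed SATA-II bandwidth per channel

    pod_best = CHANNELS * CHANNEL_MB_S   # streaming reads, nothing degraded
    pod_worst = 50                       # assumed degraded / random-I/O figure

    print(f"pod best case:    {pod_best} MB/s")
    print(f"pod worst case:   {pod_worst} MB/s (assumed)")
    print(f"1 x GigE:         {GIGE_MB_S} MB/s")
    print(f"2 x GigE (LACP):  {2 * GIGE_MB_S} MB/s")
    print(f"10GigE:           {10 * GIGE_MB_S} MB/s")

On paper the network is the limit for streaming workloads, but once the arrays are degraded or the I/O goes random, a single Gig-E port may well be more than the drives can feed.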

> I'm not going to build one of these things because I don't need it, but I would suggest it to an employer with a limited budget but a need for a large amount of disk space to try out and develop actual experience with the concept from both a technical point of view and from a cost perspective.

There are definitely lots of lessons that you'd learn from building a box like this.  I'm not sure that you necessarily want to try to learn those lessons the hard way, however.

Even if it was your goal to learn those lessons the hard way, this seems like a pretty expensive way to learn.

--
Brad Knowles <brad at shub-internet.org>
LinkedIn Profile: <http://tinyurl.com/y8kpxu>


