[SATLUG] 67 TB for less than $8K

Brad Knowles brad at shub-internet.org
Tue Oct 9 23:14:47 CDT 2012


On Oct 9, 2012, at 8:47 PM, Bruce Dubbs <bruce.dubbs at gmail.com> wrote:

> I think that your biggest issue is with the case design.  I found an alternative at
> 
> http://www.amazon.com/Supermicro-CSE-847A-R1400LPB-Redundant-Rackmount-Chassis/dp/B0036F1ROE
> 
> It allows 48 drives and has redundant power supplies.  It's probably a little more expensive overall, but seems interesting.

No, I don't have that much of an issue with the case design.  I think they could use more vibration isolation between the drives and the case, but as far as the case itself is concerned, they seem to have fairly faithfully followed the Sun X4500/"Thumper" design, which I think is not bad for what it is.

You do have to slide the whole chassis out in order to replace drives, and that's one bloody heavy system to be hanging that far out on sliding rails -- but then the "Thumper" had the same problem, which is why everyone I know who actually had any Thumpers mounted them only at the bottom of the rack and never anywhere else.


I have much bigger problems with the way they specified their SATA-to-SATA controllers, the way they designed their non-redundant and imbalanced power supplies, the fact that they didn't specify ECC RAM or more than 8GB of RAM for a machine with this much storage, the fact that they didn't allow for interface bonding/load balancing or for swapping out Gig-E for 10Gig, and a variety of other factors.

It wouldn't cost that much more to use SAS-to-SATA controllers, which would give you a hell of a lot more bandwidth to the disks and I think it would even free up one PCIe slot.  In fact, two SAS-to-SATA controllers might end up being less expensive than three SATA-to-SATA controllers.
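
To put rough numbers on that -- and these are back-of-envelope assumptions, not measurements of the actual pod -- here's a quick Python sketch comparing host-side bandwidth per drive for a port-multiplier layout versus a SAS-2 HBA fanned out through expanders:

    # Back-of-envelope: host-side bandwidth per drive, SATA port
    # multipliers vs. a SAS HBA.  All figures are assumptions for
    # illustration, not measurements of the actual pod.

    GBIT = 1e9 / 8  # bytes/sec per Gb/s, ignoring encoding overhead

    # Assumed pod-style topology: SATA II 3 Gb/s host ports, each
    # feeding a 5-drive port multiplier.
    sata_link = 3 * GBIT           # ~375 MB/s per host port
    drives_per_port = 5
    sata_per_drive = sata_link / drives_per_port

    # Assumed alternative: SAS-2 HBA, 8 lanes at 6 Gb/s, fanned
    # out to all 45 drives through expanders.
    sas_uplink = 8 * 6 * GBIT      # ~6 GB/s aggregate to the HBA
    sas_per_drive = sas_uplink / 45

    print(f"SATA PM: ~{sata_per_drive/1e6:.0f} MB/s host bandwidth per drive")
    print(f"SAS HBA: ~{sas_per_drive/1e6:.0f} MB/s host bandwidth per drive")

Even with generous assumptions on the SATA side, the SAS uplink has enough headroom that the disks, not the controllers, become the limit.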

Going to redundant power supplies would be a more significant engineering change, and would probably make the unit considerably more expensive.  So I'm more equivocal about this one -- the fewer of these units you can afford to have, the more important I think redundant power supplies become, but many people just don't understand how important this kind of thing is until it's too late.  At the very least, they could switch to a more common, modular power supply, so that you could get back online quickly and easily, even if it cost a few pennies more.
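
Just to illustrate why I care -- with completely made-up MTBF/MTTR numbers, a toy availability model in Python looks like this:

    # Toy availability model, single vs. redundant power supplies.
    # The MTBF/MTTR figures below are illustrative assumptions only.

    mtbf_hours = 100_000   # assumed PSU mean time between failures
    mttr_hours = 48        # assumed time to source and swap a PSU

    a_single = mtbf_hours / (mtbf_hours + mttr_hours)

    # With redundancy (and hot-swap), the unit is down only if the
    # second supply fails before the first is replaced.
    a_redundant = 1 - (1 - a_single) ** 2

    hours_per_year = 24 * 365
    print(f"single:    ~{(1 - a_single) * hours_per_year:.1f} hours down/year")
    print(f"redundant: ~{(1 - a_redundant) * hours_per_year:.3f} hours down/year")

Toy numbers, but the qualitative point stands: with a spare supply, a PSU failure becomes a maintenance event rather than an outage.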

Using ECC RAM would be more expensive, but I believe it would be worthwhile.  Either way, it would be relatively cheap to double or even quadruple the amount of RAM in the box.  That way you could at least keep all the filesystem meta-data in RAM, instead of being forced to constantly page in and out even that most basic information.
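
As a rough sanity check -- file counts and per-entry cache costs here are pure assumptions:

    # Rough estimate of RAM needed to keep filesystem meta-data hot
    # in cache.  Average file size and per-entry cost are assumed.

    usable_tb = 67
    avg_file_mb = 1.0                  # assumed average file size
    files = usable_tb * 1e12 / (avg_file_mb * 1e6)

    bytes_per_entry = 300              # assumed inode+dentry cache cost
    metadata_gb = files * bytes_per_entry / 1e9

    print(f"~{files/1e6:.0f}M files -> ~{metadata_gb:.0f} GB of metadata cache")

With only 8GB in the box, even a moderately small average file size pushes the metadata working set well past what fits in RAM.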

Building in room for a pair of extra dual-interface NICs would also greatly improve performance, especially if they were 10Gb and not just plain Gig-E.  You'd also get improved reliability, including the ability to survive a switch failure.  NICs with TCP Offload Engines (TOEs) are still damn bloody expensive, but I think this is a case where we could go with less expensive "plain" 10Gb NICs.
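
The arithmetic on why the network is the bottleneck is pretty stark, even with assumed per-disk throughput numbers:

    # Compare aggregate disk bandwidth against GigE and 10GigE.
    # All throughput numbers are assumptions for illustration.

    drives = 45
    mb_per_drive = 100        # assumed sequential MB/s per disk
    disk_bw = drives * mb_per_drive      # ~4500 MB/s internally

    gige = 118                # ~usable MB/s on one GigE link
    tengig = 1180             # ~usable MB/s on one 10GigE link

    print(f"disks:     ~{disk_bw} MB/s aggregate")
    print(f"1x GigE:   ~{gige} MB/s  ({disk_bw/gige:.0f}x short)")
    print(f"2x 10GigE: ~{2*tengig} MB/s ({disk_bw/(2*tengig):.1f}x short)")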

If you re-arranged the drives, you could squeeze in 50 disks instead of 45, and you could even put them on hot-swap sleds, if that was important to you.  Drives are the single class of component most likely to fail (and the largest in number), so it would make sense to engineer the system to make replacing them as painless as feasible.
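
Even a conservative assumed failure rate makes the point:

    # Expected drive replacements per year, assuming a 4% annualized
    # failure rate (AFR) -- an assumption, not a measured figure.

    drives = 50
    afr = 0.04
    print(f"expected failures/year: ~{drives * afr:.1f}")

    # Across a fleet of pods, the numbers add up quickly:
    pods = 20
    print(f"fleet of {pods} pods: ~{pods * drives * afr:.0f} swaps/year")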

Of course, I'm a big fan of ZFS, so I'd also like to see the design allow for putting in small SSDs for read and write caching, especially for the ZFS intent log.  The SSDs wouldn't have to be big, and would be a tiny fraction of the overall system cost, but they would be a huge, huge performance win.
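
A crude effective-latency model shows why (the latencies and hit rate below are assumed, not benchmarked):

    # Effective read latency with an SSD cache in front of the disks.
    # All three inputs are illustrative assumptions.

    hdd_ms = 12.0     # assumed average HDD random-read latency
    ssd_ms = 0.2      # assumed SSD read latency
    hit_rate = 0.80   # assumed cache hit rate for a warm working set

    effective = hit_rate * ssd_ms + (1 - hit_rate) * hdd_ms
    print(f"effective latency: ~{effective:.1f} ms vs {hdd_ms} ms uncached")

Even a modest hit rate takes a big bite out of average read latency, and a dedicated log device does the same for synchronous writes.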


Even if you didn't make all these changes, the result would be a system that doesn't cost all that much more, but which is at least 10x as fast as the Backblaze pod on which it would be based.

It would also be much more in line with the kind of performance and quantity of storage that you should be able to get from a reasonable vendor, if you were to spend on the order of $12-24k with them.

--
Brad Knowles <brad at shub-internet.org>
LinkedIn Profile: <http://tinyurl.com/y8kpxu>


