gabriel rosenkoetter on 11 Oct 2006 00:26:41 -0000



Re: [PLUG] RAID cards


On Sat, Oct 07, 2006 at 09:12:22PM -0400, Will Dyson wrote:
> The standard is called DDF. Sorry for being too lazy to look it up before.
> 
> http://www.snia.org/standards/home

My bogosity meter is pegged deep by "DDF v1.2 Intellectual Property
and Essential Claims". Patents are not standards, nor are the latter
usefully expressed in the language of the former... never mind
the parties involved (HPaq, IBM, Microsoft, oh my!). That said...

... the standard as written seems pretty reasonable and does
address the relevant concerns. They wank a bit longer than I think
they need to about how RAID sets should be laid out (and make some
algorithmic requirements on RAID sets... which may be fine, but
*requiring* them scares me, because somebody will eventually find a
better way, and if vendors are locked into supporting this standard,
they won't bother to go looking), but when they finally get down to
business in chapter 5, they're making sense.

> Given that a few minutes of searching have not located me any cards
> that even claim to support DDF, others seem to share your opinion on
> the need for such a thing.

I think the underlying trouble for DDF is that they're trying to fix
a problem nobody has in the real world. They're trying to make it so
that you can physically move hard drives from one piece of
equipment into another without having to transfer
the data, thereby saving you... um... the cost of buying new hard
drives? I guess?

But this is silly. By the time I want to move from one RAID device
(say, the one on a server motherboard, using hard drives physically
in that server) to another one (say, one in a fibre channel, NAS, or
even direct-attached external device) because I need more space or
need to offer LUNs to multiple hosts, I've depreciated the cost of
the original hardware purchase and the new hardware available
includes faster and larger disks, so I want to transfer the blocks
anyway.

What's more, if I'm physically transferring hard disks, I have to
take the server down completely for a non-trivial period of time. If
I'm transferring the data online, from one set of disks to another,
I can do most of that in the background by way of various hardware
or software snapshot methodologies, then quiesce/stop the
application very briefly, swap out the backing store, and start it
back up again with minimal downtime.
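Concretely, the pattern I'm describing is something like the sketch
below. The paths, service names, and the use of rsync are all
stand-ins for whatever snapshot/replication tools you actually run;
it's an outline of the flow, not a recipe.

  #!/usr/bin/env python
  # Rough sketch of an online migration: bulk-copy while the app runs,
  # then a brief outage for the final delta and the cutover. All paths
  # and service names here are made up for illustration.

  import subprocess

  OLD = "/mnt/old-array/data/"   # hypothetical current backing store
  NEW = "/mnt/new-array/data/"   # hypothetical replacement backing store

  def rsync(src, dst):
      # -a preserves ownership/perms/times; --delete keeps dst exact
      subprocess.check_call(["rsync", "-a", "--delete", src, dst])

  # 1. Long-running bulk copy in the background; the application stays up.
  rsync(OLD, NEW)

  # 2. Brief outage: quiesce the application so the data stops changing.
  subprocess.check_call(["service", "myapp", "stop"])

  try:
      # 3. The final incremental pass only moves what changed since the
      #    bulk copy, so it finishes quickly.
      rsync(OLD, NEW)

      # 4. Swap the backing store (a symlink flip here; in real life this
      #    might be an fstab, LVM, or array-side change instead).
      subprocess.check_call(["ln", "-sfn", NEW, "/srv/myapp/data"])
  finally:
      # 5. Bring the application back up on the new storage.
      subprocess.check_call(["service", "myapp", "start"])

The only downtime is steps 2 through 5, which is exactly the
"quiesce briefly, swap, restart" window I mean above.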

This is really a Solution in desperate search of its Problem. That
was a decent way to get venture capital in '93, but it's just a good
way to get sales slime laughed at these days.

I wouldn't be surprised to find HP- and IBM-branded (internal) RAID
hardware and Microsoft (software) RAID using DDF some time in the
next 5 years... but I really doubt anybody will care. EMC, HDS,
NetApp... none of those people will care, because there's no reason
for them to. Because even if I wanted to, I couldn't pull the drives
out of my HP DL535 and shove them into my EMC Symmetrix/DMX/Clariion:
the drives would be out of spec, I would need to go pay for drive
trays for them, and on and on, to the point where I might as well
just have paid for new drives.

In fact, the cost for the physical hard drives is functionally
irrelevant. EMC and Hitachi ship arrays with all drive trays filled...
if you've paid for less raw storage than the whole array, actual use
of those drives is software inhibited. This costs them next to
nothing, and means that when you outgrow the space, you just start
paying them more per annum and they don't have to send some tech out
to do any physical installs... but if individual hard drives were
actually a relevant cost, it wouldn't be that way.

Even from the point of view of an organization well below EMC and
HDS's target market... I can wander over to MicroCenter and buy a
250 GB SATA disk for under $100 right now, retail, as a single drive
(never mind in bulk). I mean, really. Even if it's a server in my
house, for my home business, I'd rather just build up new hardware,
test it, copy the data across. That's less painful than scrounging
the old hard drives out and connecting them into the new chassis.

I guess I can see this being kind of convenient on a workstation
with RAIDed disks when the user wants to swap the motherboard out
but keep their data... so, um, yeah. I have a very hard time caring
about that case. RAID for workstations is like getting a vault door
on your apartment. Sure, it provides greater reliability of your
data / physical security, but you could just do backups of the
data that actually matters / make friends with your neighbors.

Maybe I'm missing something... why would YOU want this?

> In a full-hardware-raid card, the on-disk description of array groups
> is a matter for the card's firmware alone. If such a card were to support
> DDF, then I would expect its management interface to work just like it
> does with the vendor's current proprietary format.
> 
> If the DDF specification failed to specify the byte sex (or whatever)
> of the on-disk data, then that would be a grave flaw in the standard.

Right, and because of the silliness of the concept I described
above, I kind of figured their Solution would be silly too. I was
wrong... but the whole thing's still silly.
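To be fair to them on the byte sex point, an on-disk format that's
supposed to be shared across controllers really does have to nail
that down. A toy illustration (the field layout below is something I
made up for the example, not the actual DDF header):

  # Why an on-disk metadata standard has to pin byte order: controllers
  # with different CPU endianness must read the same bytes the same way.
  # The field layout here is invented for illustration only.

  import struct

  def parse_header(raw):
      # '>' forces big-endian decoding regardless of the host CPU.
      # Without the standard fixing this, a block written by a
      # big-endian controller would parse as garbage on a
      # little-endian one, and vice versa.
      signature, version, disk_count = struct.unpack_from(">IHH", raw, 0)
      return {"signature": hex(signature),
              "version": version,
              "disk_count": disk_count}

  # The same 8 bytes decode identically on any host:
  print(parse_header(b"\xde\xad\xbe\xef\x00\x01\x00\x04"))
  # -> {'signature': '0xdeadbeef', 'version': 1, 'disk_count': 4}

So yes, specifying the byte order is table stakes for a spec like
this; my complaint is with the premise, not that level of detail.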

> Are we even talking about the same thing?

I think so.

-- 
gabriel rosenkoetter
gr@eclipsed.net


___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug