gabriel rosenkoetter on Sun, 12 Jan 2003 09:43:04 -0500



Re: [PLUG] Server Recommendation


On Sun, Jan 12, 2003 at 09:00:22AM -0500, Kam Salisbury wrote:
> I think the quality and capabilities of the motherboard are most important
> of all.

And I still say that a workstation-level motherboard cannot accept
ECC RAM. (Yes, it costs more, and yes, you don't *have* to use it,
but the pain of wanting or needing it and NOT being able to use it
is unfixable. The fact that you can't get a rack-mount server from
Dell with ECC RAM without stepping beyond the PowerEdge 2xx0 series
was enough to make my employer switch to IBM as a vendor. Of
course, IBM only puts a single power supply in their xSeries 3xx
systems--which are a royal pain to get mounted in an
industry-standard 19" rack--so I'm not too sure where we're going
next.)

> The ability of the motherboard to accept a certain type of RAM or
> processor or even storage device is not all that relevant if you do not
> intend to use that feature or capability.

If you've got a system where you need none of the assurances
provided by redundant power supplies, UPS, ECC RAM, and RAIDed disk,
then it's not a server in my book. :^>

> Keep in mind, those Compaq, Dell and etc. servers that you may buy fall into
> this same bucket.

Unfortunately often true. And those machines shouldn't really
qualify as servers. :^>

Dell's lower-end PowerEdge servers lack some features they really
ought to have (ECC RAM), as do IBM's lower-end eServers (redundant
power supplies).

Compaq has always produced truly server-quality machines in their
ProLiant line, and still does (two separate Mylex SCSI RAID
controllers for the in-system bays, redundant power supplies).
Their weakness is that they only update their model line once a
quarter (if that), and when they do, they don't jump straight to
the newest stuff available, which means that they *still* aren't
shipping systems with Xeons in them.

> Once you begin talking about 32 way IBM boxes on AS400 then we
> enter the realm of mainframe style computing.

Barry Roomberg has an appropriate quote here: "mainframe style
computing == not mainframe computing". ;^>

> Beyond quality and support guarantees, you 'can' make a standard desktop
> computer into a sub-server class machine. It really depends on how much
> money you are willing to invest.

You *can*, but if you're starting from scratch, right now, and you
want a reliable system, buy one with the right stuff up front
rather than replacing effectively everything but the case later;
you'll save money in the long run.

> ECC RAM is great if you actually put that in (it costs more) and if the case
> has redundant power supplies and you power the system with a clean UPS
> device.

I'd say those three things are orthogonal, though related.

ECC RAM helps, not just for a power failure, but also if a DIMM
develops errors under a certain threshold (and the way DIMMs are
mass-marketed these days, they're liable to eventually).
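
(For what it's worth, that "certain threshold" is precise: standard
ECC corrects any single-bit error per word and detects, but cannot
fix, double-bit errors. Below is a toy Python sketch of the
extended-Hamming "SECDED" scheme behind that, under the assumption
that a conceptual model is useful here; real ECC lives in the
memory controller and protects 64-bit words with 8 check bits, not
Python lists, and the 8-bit data word is just for brevity.)

    # Toy SECDED sketch: check bits at power-of-two positions plus one
    # overall parity bit. Conceptual only; real ECC runs in hardware.

    def ecc_encode(data_bits):
        """Encode data bits into an extended Hamming codeword
        (1-indexed; slot 0 holds the overall parity bit)."""
        m = len(data_bits)
        r = 0
        while (1 << r) < m + r + 1:   # need 2^r >= data + check + 1
            r += 1
        n = m + r
        code = [0] * (n + 1)
        j = 0
        for i in range(1, n + 1):     # data fills non-power-of-two slots
            if i & (i - 1):
                code[i] = data_bits[j]
                j += 1
        for p in range(r):            # each check bit covers the slots
            pos = 1 << p              # whose index has that bit set
            for i in range(1, n + 1):
                if (i & pos) and i != pos:
                    code[pos] ^= code[i]
        for i in range(1, n + 1):     # overall parity turns SEC into SECDED
            code[0] ^= code[i]
        return code

    def ecc_check(code):
        """Return ('ok'|'corrected'|'uncorrectable', position),
        fixing a single flipped bit in place."""
        syndrome = 0
        for i in range(1, len(code)):
            if code[i]:
                syndrome ^= i         # XOR of set-bit indices names the error
        overall = 0
        for bit in code:
            overall ^= bit
        if syndrome == 0 and overall == 0:
            return "ok", None
        if overall == 1:              # odd parity: exactly one bit flipped
            code[syndrome] ^= 1       # (syndrome 0 means slot 0 itself)
            return "corrected", syndrome
        return "uncorrectable", None  # two flips: detect, refuse to "fix"

    word = [1, 0, 1, 1, 0, 0, 1, 0]   # 8 data bits, for brevity
    cw = ecc_encode(word)
    cw[5] ^= 1                        # one flipped bit: fixed silently
    print(ecc_check(cw))              # ('corrected', 5)
    cw[5] ^= 1
    cw[9] ^= 1                        # two flipped bits: detected only
    print(ecc_check(cw))              # ('uncorrectable', None)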

Redundant power supplies don't need to be plugged into separate
UPSes to be useful; they can just be plugged into separate circuits
(so if an air conditioner/hair iron/whatever blows one, the system
doesn't go down).

A UPS's benefits are both regulating power (as a good--not just
any, but a *good*--power strip will too) and providing time for the
system to shut down cleanly in the event of a true power loss.
(Don't try to run off of them unless your UPS is actually
uninterruptible... you'll know if it is, because you'll have
installed a diesel generator.)
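
To make the "time to shut down cleanly" point concrete, here's
roughly what a UPS-monitoring daemon (apcupsd, NUT, and friends)
does with that window, sketched in Python. read_ups_status() is a
hypothetical stand-in for whatever serial or USB protocol your
particular UPS actually speaks, and the 20% threshold is just an
illustrative choice:

    import subprocess
    import time

    def read_ups_status():
        """Hypothetical: returns ('online' | 'on_battery',
        percent_of_battery_remaining)."""
        raise NotImplementedError("depends on your UPS's protocol")

    def monitor(poll_seconds=10, shutdown_at_percent=20):
        while True:
            state, battery = read_ups_status()
            if state == "on_battery" and battery <= shutdown_at_percent:
                # The whole point of the UPS: a *clean* shutdown
                # (needs root), not a crash when the battery dies.
                subprocess.run(["shutdown", "-h", "now"], check=False)
                return
            time.sleep(poll_seconds)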

> both IDE and SCSI that beat the pants off of many competitors' similar
> models. Yes... IDE RAID controllers. Yes... 5 disks. You really should check
> their site. For a server with 25 to 50 people asking of it all day it is
> enough. The money saved going the IDE route versus SCSI can be applied to
> larger disks and a really good tape drive (USB or firewire of course, we
> want to future proof ourselves right?).

I don't agree. SCSI has always been a better protocol for
many-disked machines, and it still is. I'm sure I know what 3ware's
doing (using a controller per disk in their array), and I'm sure
it's a bad idea (more moving parts to fail, and an expensive fix
when they do, since 3ware's controllers are almost definitely not
individually replaceable).

One point I'm wavering on is the IDE-to-SCSI disk adapters. I'd
never install one of these at work, but from a cost point of view
they're mighty attractive at home. I've read some reviews of the
couple available, though, which showed that drives attached this
way were both noticeably slower and more bursty than if they were
attached directly to an IDE bus, which sort of defeats the purpose,
when you think about it. (In this case, there's still an extra
moving part to fail, but a failure of the IDE controller attached
to each disk doing the conversion to SCSI signals on the bus can
more reasonably be viewed as a disk failure.)

Oh, and if you're "future-proofing" yourself, USB (and even
FireWire) isn't where you want to be; Fibre Channel is. (Though
precisely which kind of Fibre Channel you want is a bit unclear.)

> Need auto fail-over for network cards? Power supplies? Yep, they too can be
> added to a PCI slot just like a really kickin' stereo to mom's mini-van.

Sure, until you run out of externally-accessible PCI slots in that
bargain-basement case, and have to replace it too.

Also, how do you propose to run, out of a PCI slot, multiple power
supplies that can all power the motherboard and communicate with
each other?

> I have and still actually do perform requests for just what I am talking
> about. A client with an old box, let's say an old PII-333 workstation.

Hrm. I seem to recall that the original question here was about
purchasing a new computer, not about having an existing one to
upgrade. If you were buying new parts, would you still buy
workstation-class materials and then upgrade them? Don't you think
that would be more expensive?

> harddisks, max out the RAM at 384MB and install a supplemental harddrive

Nowhere *near* enough, in my book. Servers need to be running
with, at a bare minimum, 512 MB of RAM these days, especially if
they're serving users with X sessions. And I wouldn't put anything
new into production at work with less than 2 GB.

> install Redhat in a RAID1 configuration with Samba and a few extras. Voila!

Software RAID? Are you joking?
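
(For context on what's being objected to, here's a toy Python
sketch of the mirroring logic that software RAID1 performs: every
write is doubled by the host CPU and bus, where a hardware
controller would do it on the card. The DiskError class and the
two file-backed "disks" are illustrative only, not any real RAID
implementation.)

    import os

    class DiskError(Exception):
        pass

    class Raid1:
        """Toy mirror: writes go to both members, reads fall back."""

        def __init__(self, path_a, path_b):
            # Both backing files must already exist in this toy model.
            self.members = [open(path_a, "r+b"), open(path_b, "r+b")]

        def write(self, offset, data):
            # The host does double the I/O: one write per mirror member.
            for disk in self.members:
                disk.seek(offset)
                disk.write(data)
                disk.flush()

        def read(self, offset, length):
            # Read from the first member that answers; a failed member
            # degrades the array instead of losing data.
            for disk in self.members:
                try:
                    disk.seek(offset)
                    return disk.read(length)
                except OSError:
                    continue
            raise DiskError("both mirror members failed")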

I think the upgrades you do work for a certain category of user,
wholly separate from the category I serve (users who expect to
spin all the processors at peak for days, operating on several
terabytes of data across NFS), which may be the real source of
our difference of opinion here.

Despite this, my advice for someone purchasing new hardware
remains: figure out what you need from the system now, seriously
considering features like ECC RAM, redundant power supplies,
(hardware!) RAID, and (LVD!) SCSI rather than IDE for internal
disks, and purchase accordingly. Buying a new workstation and then
adding all the parts you need will quite probably result in
redundant purchases (not of the power-supply kind, but of the
motherboard kind) and cost you far more in the long run.

-- 
gabriel rosenkoetter
gr@eclipsed.net
