Ian Reinhart Geiser on Fri, 13 Jul 2001 09:20:06 -0400



Re: PLUG letterhead (was: Re: [PLUG] Fwd: plug meeting at usip.edu - fyi)


On Friday July 13, 2001 8:45 am, you wrote:
> On Thu, Jul 12, 2001 at 03:53:22PM -0400, Darxus@chaosreigns.com wrote:
> > You should've tied him up & drug him to the meeting.  Guillermo, the guy
> > who built the thing, was there, and told us a bit about it.  Actually,
> > you should probably get this person to talk to Guillermo.
>
> The professor in question is a she. ;^>
>
> I'm actually quite familiar with how beowulf clusters work, and I
> don't like them very much. I want something that's architecture-
> independent, that adapts appropriately to varying ability in its
> nodes, and that exports at least the majority of the Unix userland
> interface to the network level (so that connecting to the cluster
> places processes on various machines without the user having to do
> anything special to get them there).
>
> Beowulf's failings, in my eyes, are that it's pretty much i386 only
> (um, sure, so Linux runs on alphas and some powerpcs too... but go
> look at http://www.netbsd.org/Ports/; regardless, I've read nothing
> of anyone building a beowulf cluster out of non-i386 machines), that
> it presumes all the machines in the cluster are basically the same
> (with regard to process placement for load balancing), and that
> writing software for it is still an exercise in parallel
> programming.
>
> Which is not to say that it's not useful, but I think it's possible
> to do better.

Ummm, Beowulf is a wrapper term for PVM and MPI clusters on Linux.  I have 
extensive experience with PVM and MPI clusters that have had NT boxes running 
alongside very strange hardware.  I have actually built Beowulf clusters on 
embedded ARM hardware running over a local PCI bus.  It was purely academic, 
but we were hosting the mess with an Intel-based box.  The problem you run 
into is getting the correct binary built for each machine.
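To make that concrete, here is a minimal sketch of how a PVM master kicks off 
work across a mixed cluster.  It assumes PVM 3.x and a hypothetical "worker" 
program that you have compiled separately for each architecture and installed 
under $PVM_ROOT/bin/$PVM_ARCH on every host; that per-architecture build step 
is exactly the "correct binary for each machine" problem:

/* Minimal PVM spawn sketch (PVM 3.x, link with -lpvm3).  The "worker"
 * binary name is an assumption for illustration. */
#include <stdio.h>
#include <pvm3.h>

int main(void)
{
    int mytid = pvm_mytid();          /* enroll this process in PVM */
    int tids[4];

    /* PvmTaskDefault lets the PVM daemons pick hosts; each daemon runs
     * the "worker" executable built for its own architecture. */
    int started = pvm_spawn("worker", (char **)0, PvmTaskDefault,
                            "", 4, tids);

    printf("spawned %d of 4 worker tasks (my tid = 0x%x)\n",
           started, mytid);

    pvm_exit();                       /* leave the virtual machine */
    return 0;
}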

It has been my experience that PVM is the most hackable, while MPI seems to 
get the best performance.  MPI is also more commercially used and has some 
tasty options if you are building massive systems.  Most of the national 
labs use MPI.
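For comparison, MPI code looks like this; just a sketch using the standard 
MPI-1 calls, compiled with whatever wrapper your MPI implementation ships 
(mpicc or similar) and launched with its own mpirun:

/* Minimal MPI sketch: rank 0 sends a token to rank 1.
 * Run with at least 2 processes. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, token;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* who am I?       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many of us? */

    if (rank == 0) {
        token = 42;
        MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank %d of %d got token %d\n", rank, size, token);
    }

    MPI_Finalize();
    return 0;
}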

If anyone wants to come over, you can watch PVM POV-Ray have its way with a 
PPC running Linux, an AMD Athlon running NT, and a DEC Alpha running Tru64 at 
the same time :)  (Granted, this is a needlessly overcomplicated demo, but it 
proves the point.)

For more information on evil things to do with PVM and a few spare StrongARMs:
http://www.msoe.edu/~barnicks/courses/cs400/199900/beowulf/ 

Needless to say, all parallel processing needs some degree of special 
programming.  If you lack the skill to write parallel algorithms, stick with 
batch-based systems; the R&D time for developing scalable parallel systems 
will eat up any time you save in the end. :)

-ian reinhart geiser

P.S. As a side note, if anyone needs a parallel processing expert on site, I 
am willing to send a CV to anyone interested. :)

Fortune for the day:
---------------------------------------------------------------------
She sells cshs by the cshore.
---------------------------------------------------------------------
Ian Reinhart Geiser   -=<*>=-  Linux & KDE Developer


______________________________________________________________________
Philadelphia Linux Users Group       -      http://www.phillylinux.org
Announcements-http://lists.phillylinux.org/mail/listinfo/plug-announce
General Discussion  -  http://lists.phillylinux.org/mail/listinfo/plug