Jon Galt on Sun, 10 Feb 2002 00:03:26 +0100
On Fri, 8 Feb 2002 jaw+plug@tcp4me.com wrote:

> Typically, someone looking for geographically diverse servers has
> one or both of the following goals in mind:
>
> Redundancy in the event of a failure. If a fiber cut, natural
> disaster, terrorist attack, etc. causes one server to be unable
> to serve data, the other server(s) should take over and continue.
>
> Improved end-user experience. By serving data from a server closer
> to the end-user, performance is improved.
>
> Both of these are "hard problems" to do well.
>
> The former generally involves hardware or software that monitors the
> servers and redirects traffic, either by re-routing the packets or by
> changing the addresses returned in DNS queries.
>
> The latter is nearly always done with "smart" DNS servers that return
> the address of the "best" server to use. Determining the "best" server
> involves mapping the Internet topology and calculating metrics, often
> via BGP or ad-hoc pinging.

It seems that the latter is a reasonable answer to both problems, since
a down server (for whatever reason) can be seen as the lowest of low
performance: zero.

> There are also content-delivery companies that you can outsource to.
> They colocate thousands of servers scattered around the world, and
> have a staff of people that analyze BGP sessions mapping the Internet.

Wow, I would have thought that would be almost all automated.

> For further reading: f5.com, akamai.com

Excellent, thank you!

Wayne
______________________________________________________________________
Philadelphia Linux Users Group - http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mail/listinfo/plug-announce
General Discussion - http://lists.phillylinux.org/mail/listinfo/plug
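For what it's worth, the health-check half of the "smart DNS" idea above can be sketched in a few lines of Python: probe each candidate server and hand out the address of the best live one, treating a dead server as infinitely slow. This is only an illustration under assumed names; a real smart-DNS box would also fold in the BGP/topology metrics mentioned above, and the hosts and ports here are hypothetical.

```python
# Sketch of failover selection: a down server is just the worst
# possible performer, so picking the fastest live one covers both
# the redundancy goal and the performance goal described above.
import socket
import time

def probe(host, port=80, timeout=2.0):
    """Return the TCP connect time in seconds, or None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None  # refused, timed out, unroutable -- treat as down

def best_server(candidates, port=80):
    """Pick the candidate with the lowest connect time, skipping dead ones."""
    timings = [(probe(host, port), host) for host in candidates]
    live = [(t, host) for t, host in timings if t is not None]
    if not live:
        return None      # every server is down
    return min(live)[1]  # fastest responder wins
```

A DNS server wrapping this would simply answer queries with the address `best_server()` returns, re-probing on some interval so a cut fiber drops the dead host out of rotation automatically.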