gabriel rosenkoetter on Fri, 8 Feb 2002 18:20:26 +0100
On Thu, Feb 07, 2002 at 05:33:40PM -0500, Rebecca Ore wrote:
> Explain a bit. Dnews and some of the nntpcache programs ask only for
> articles on demand. There are suck feeds. Almost always the word among
> people who aren't the Dnews developers is that this is fine for smaller
> sites.

I'd hope what nntpcache does is ask for the articles on demand... and
then *keep* them. (Which is why I'd want INN to be doing this, just like
I want BIND 9 to do caching DNS rather than some flimsy caching-only
implementation.) All the local spool files (yes, yes, or cycbuffs)
exist, but get filled on a single-article (or, acceptably, single-group)
basis. After which time, the next requesting client gets the cached copy
of the article (without wasting the peer bandwidth on it).

> Ask in news.software.nntp about the feasibility of this. Most
> sites don't want to rely on their peers' idea of good retention.

Sure, but if I were, say, a national DSL ISP, I think I'd be very happy
if I could run a central news server on a fat (external) pipe that
peered with appropriate places, and then a separate, caching news server
in each of my COs, with stub DNS (or a transparent proxy) in place to
point users at the news server at their CO. That way, I'd only have to
pull, say, comp.sys.windows.sucks down once for all my DSL users to get
it. Cheaper for pretty much everyone, and faster for all but the first
reader in a given CO.

(My transit time to my CO is significantly better than that to
Speakeasy's main office... though I get to the latter over an internal
network, it's still all the way up in Seattle, across a few more
switches and such.)

-- 
gabriel rosenkoetter
gr@eclipsed.net

Attachment: pgpaZb88j3uDd.pgp