zuzu on 18 Oct 2007 03:10:12 -0000
On 10/17/07, Brian Stempin <brian.stempin@gmail.com> wrote:
> > sure, but as a cultural mindset, I am suggesting that "growing the
> > pie" (i.e. just adding more cheap bandwidth) is too often "off the
> > table" of consideration. this is the cultural problem at hand.
>
> This comment kinda sparked a thought in my head (mark the calendar!). Isn't
> one of the reasons that people bash Vista due to the fact that, as JP's
> signature would suggest, Vista has effectively nullified Moore's law?
> Didn't Vista do this by (a) providing <x> new functionality and (b) using
> <x>^n worth of computing power?

(don't forget to include sysadmining / developers in that equation. for
example, garbage collection is more costly in computer resources but saves
programmers from having to manually manage memory, and computers are getting
faster and cheaper at a rate much faster than humans are getting faster or
cheaper. so if the choice is between humans working and computers working,
always throw more computers at the problem.)

> Isn't that what would happen with bandwidth? Yeah, we'd have more
> throughput, but wouldn't we also (eventually?) have a disproportional growth
> in waste? It's one thing to justify a growth in bandwidth for accommodating
> new features, but what I see is the suggestion that we need to justify a
> growth in bandwidth to keep doing what we're already doing.

the difference is _who decides_, which is the fundamental epistemological
question of all economic calculation. individuals choose (or not) to run
Windows Vista (or XP or Linux or Plan9). the problem I'm attempting to
identify with QoS / tiering and blacklisting is that a cabal of "central
planners" decides for users, rather than users deciding for themselves.
(individuals choosing to use MAPS/RBL for their own mail server is an
interesting, debatable interpretation; I agree that much with you. I don't
like the idea of government forcing you not to use blacklisting with your
own private property, either.)

> Case in point:
> Why would I purchase twice the bandwidth if I can only do as much with it
> (throughput wise, I must note...) as I could before I doubled it? That just
> seems counterintuitive to me.

interestingly enough, "double the bandwidth" was exactly the scenario Dave
Isenberg examined in his analysis arguing that overprovisioning is less
expensive overall for a carrier than maintaining a QoS / deep packet
inspection system.

http://arstechnica.com/news.ars/post/20070709-neutral-net-needs-up-to-twice-the-bandwidth-of-a-tiered-network.html

A neutral 'Net needs up to twice the bandwidth of a tiered network
By Nate Anderson | Published: July 09, 2007 - 01:32PM CT

Recent research suggests the obvious: that building an undifferentiated
network requires far more capacity than one in which traffic is prioritized,
throttled, and controlled. But when AT&T researchers are involved in writing
the paper in question, the results seem a bit more sinister. Is the research
just another attempt by a major backbone Internet operator to justify a
non-neutral Internet?

Some observers think so. A recent piece in The Register on the paper was
titled "AT&T rigs net neutrality study" -- tell us how you really feel,
gents. But corporate sponsorship of research doesn't automatically invalidate
that research; what's needed is a close look at the actual results to
determine if they were done correctly. According to David Isenberg, a
long-time industry insider and proponent of "dumb" (neutral) networks, the
research itself is fine.
In his view, it's simply obvious that a dumb network will require more peak
capacity than a managed one. But extending that banal observation to make the
claim that running a managed network is cheaper is, to Isenberg, not at all
intuitive. For one thing, doubling the peak volume of a network does not mean
spending twice as much money as it cost to build the original network. "The
failure of the authors to extend the conclusions from capacity to raw costs
of capacity is deliberately misleading," Isenberg says, "especially when the
researchers invoked 'economic viability' and 'cost of capacity' in their
introduction to the work."

He presents other arguments, but the gist of his criticism is that the paper
is fine (Isenberg used to work at AT&T and knows some of the people involved
in the research) but simply leaves out important considerations. It cannot,
then, be used to make the claim that a non-neutral Net is a cheaper Net.

According to Isenberg, the cheapest and best alternative is simply to build
out dumb capacity: to "overprovision" by as much as 100 percent. The
"bandwidth is scarce" argument plays right into the hands of the major ISPs,
which can use it to start charging a premium for crucial services that run
across their networks. If they simply built out the networks to the point of
abundance, they couldn't make all this extra money.

Vendors who sell quality of service and deep packet inspection gear have been
arguing that bandwidth is constrained for some time now; in talks with Ars,
several of these companies have stressed that management is the only way to
head off a bandwidth crisis. Throw more capacity at the problem, they claim,
and P2P and YouTube will simply suck it up.

While this debate over network capacity can sound arcane, it's crucial to the
entire network neutrality argument. If Isenberg is right, then there's no
compelling reason for a non-neutral Net. ISPs should simply invest in more
capacity; it will be cheaper for them and it will allow customers to use any
services they want at full speed. If Isenberg's wrong, then get ready for the
wonders of a tiered Internet.
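(to make the "doubling capacity != doubling cost" point concrete, here's a
toy back-of-the-envelope model. every number in it is invented purely for
illustration -- none of them come from the AT&T paper or from Isenberg's
analysis -- but it shows the shape of the argument: if the cost of raw
capacity grows sublinearly, 100% overprovisioning can still come in under
"base capacity plus QoS/DPI gear plus the ongoing cost of running it".)

# back-of-the-envelope sketch of Isenberg's cost argument
# all figures below are made up for illustration only

def capacity_cost(gbps, base_cost_per_gbps=1000.0, scale_exponent=0.7):
    """Cost of building raw capacity.

    scale_exponent < 1 models an assumed economy of scale: doubling
    capacity costs less than double, because the fiber is already in
    the ground and only optics / line cards get upgraded.
    """
    return base_cost_per_gbps * (gbps ** scale_exponent)

def neutral_network(base_gbps):
    # "dumb" network, overprovisioned by 100% (twice the peak capacity)
    return capacity_cost(2 * base_gbps)

def tiered_network(base_gbps, qos_gear=200000.0,
                   qos_opex_per_year=50000.0, years=5):
    # managed network: base capacity plus QoS / DPI gear plus the
    # ongoing cost of operating and administering it
    return capacity_cost(base_gbps) + qos_gear + years * qos_opex_per_year

if __name__ == "__main__":
    for gbps in (100, 500, 1000):
        n = neutral_network(gbps)
        t = tiered_network(gbps)
        print(f"{gbps:5d} Gbps peak: neutral ${n:,.0f} vs tiered ${t:,.0f}")

with these (again, invented) numbers the overprovisioned network wins at
every scale, because the one-time and recurring costs of the management
layer swamp the marginal cost of extra raw capacity. the real argument is
over what the actual cost curves look like, which is exactly what Isenberg
says the paper fails to address.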
___________________________________________________________________________
Philadelphia Linux Users Group -- http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion -- http://lists.phillylinux.org/mailman/listinfo/plug