Malcolm on 15 Sep 2004 16:52:03 -0000
I'm trying to work out whether it's possible to limit bandwidth usage per process under Linux, but my searches haven't turned anything up, so I figured I'd ask here. I've found documentation on rate limiting per protocol (e.g. no more than 80% HTTP), on cumulative limits (e.g. don't exceed 1 Mb/s), and on other QoS-type things. What I need, though, is to run a collection of identical processes, each limited to a fixed maximum: I'm trying to simulate the load from what will be a significant number of machines on dialup connections, without actually having a few hundred machines to run the test with.

I do have control over the software, so if there isn't a system-level control available I can change the code. However, all the software-side solutions I've found are 'bursty': any one connection can exceed the limit for short periods (however long your accounting interval is), and too short an interval distorts the traffic as well, since you end up sending more, smaller packets. Because the communications in question are bursty by nature, this distorts the test.

Anyone got any pointers as to where to look? (Google has not helped, for once.)

--
We are the faith of your tomorrows
let us breathe, let us see, let us be.
   - 'Breathe', Cruxshadows

___________________________________________________________________________
Philadelphia Linux Users Group        --       http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug
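
A minimal sketch (in Python) of the kind of per-connection pacing described above, assuming the application's send path can be wrapped. The class name, rate figures, host, and port are made up for illustration; the point is that a token bucket with a small bucket size spreads sends out in time instead of allowing one large burst per accounting interval.

import socket
import time

class PacedSender:
    """Token-bucket pacer: caps one connection at roughly rate_bps bytes/sec."""

    def __init__(self, sock, rate_bps, burst_bytes=512):
        self.sock = sock                 # an already-connected socket
        self.rate = float(rate_bps)      # sustained limit, bytes per second
        self.burst = float(burst_bytes)  # bucket size; keep small for smooth traffic
        self.tokens = self.burst
        self.last = time.time()

    def _refill(self):
        # Add tokens for the time elapsed since the last send, capped at the bucket size.
        now = time.time()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def send(self, data):
        sent = 0
        while sent < len(data):
            self._refill()
            if self.tokens < 1:
                # Sleep just long enough for one token to accumulate.
                time.sleep((1 - self.tokens) / self.rate)
                continue
            chunk = data[sent:sent + int(self.tokens)]
            n = self.sock.send(chunk)
            sent += n
            self.tokens -= n

# Usage sketch: one simulated dialup client at ~56 kbit/s (~7000 bytes/sec).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("test-server.example.com", 9000))     # hypothetical test server
paced = PacedSender(s, rate_bps=7000, burst_bytes=512)
paced.send(b"x" * 100000)

Because the bucket never holds more than burst_bytes, no single write can exceed the limit by more than that amount, which avoids the bursty behaviour of purely interval-based limiters while still not chopping the traffic into an unrealistic number of tiny packets.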