Quote:
Originally Posted by roughbeast
As congestion increases, systems can, up to a certain threshold, maintain acceptably low latency, but beyond that point things have to give, or are allowed to give, and the floodgates open. Similarly, as congestion reduces, systems can just as rapidly restore acceptable levels.
Pretty much, yes. As Chrysalis pointed out earlier (though it wasn't entirely relevant at the time), fat pipes tend not to show any increase in latency until they are very close to full: you can gradually climb to 98%-99% utilization without any noticeable rise in latency. Core routers are fast enough to forward everything at line speed; 999Mbps comes in, and 999Mbps is immediately forwarded out the other end with no buffering or waiting. But once 1001Mbps arrives for a 1000Mbps outgoing interface, that extra 1Mbps has to be buffered, hence the large jump in latency. The buffers are usually no more than a few megabytes, so even 0.1% overutilization will fill them in a matter of seconds.
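To put rough numbers on that, here's a back-of-the-envelope sketch; the 1000Mbps line rate, 0.1% overload, and 4MB buffer are illustrative assumptions, not the specs of any particular router:

Code:
# Back-of-the-envelope arithmetic for a slightly over-full 1Gbps link.
# All three figures below are assumptions for illustration only.
LINK_BPS = 1_000_000_000         # outgoing interface: 1000Mbps
OFFERED_BPS = 1_001_000_000      # incoming traffic: 1001Mbps (0.1% over)
BUFFER_BYTES = 4 * 1024 * 1024   # assumed 4MB interface buffer

excess_bps = OFFERED_BPS - LINK_BPS            # 1Mbps that cannot drain
fill_seconds = BUFFER_BYTES * 8 / excess_bps   # time until buffer is full

# Once full, every arriving packet waits behind a whole buffer's worth of
# data, so the added latency is the buffer drain time at line rate.
added_latency_ms = BUFFER_BYTES * 8 / LINK_BPS * 1000

print(f"buffer full after ~{fill_seconds:.0f}s of 0.1% overload")
print(f"steady queuing delay once full: ~{added_latency_ms:.0f}ms")

With those numbers you go from zero queuing to a steady ~34ms penalty (plus packet loss) in about half a minute, which is why the graphs look like a switch being flipped rather than a gradual ramp.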
Quote:
Given that congestion inevitably builds up and reduces gradually, this is the only explanation I can think of. Or, is there perhaps some attribute of the TBB connection to VM or of their latency measuring method that makes the profile look so abrupt?
Your explanation is basically right: traffic builds up gradually, but queuing delay is essentially zero until the link actually hits 100%.
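If you want the textbook picture of why the knee is that sharp, the classic M/M/1 queue gives an average queuing delay proportional to rho/(1-rho), where rho is utilization. That model is my illustration, not anything VM or TBB have published, and it assumes infinite buffers (real ones are finite, hence the plateau plus packet loss above 100%):

Code:
# Average M/M/1 queuing delay: W_q = S * rho / (1 - rho), where S is the
# mean transmit time per packet and rho the utilization. Toy model only.
S_US = 12.0  # assumed: ~12us to serialize a 1500-byte packet at 1Gbps

for rho in (0.50, 0.90, 0.98, 0.99, 0.999):
    wait_us = S_US * rho / (1 - rho)
    print(f"utilization {rho:6.1%}: avg queuing delay ~{wait_us:8.1f}us")

Even in this idealized model, the delay at 98% utilization is still under a millisecond, invisible on a TBB graph; it only explodes in the last fraction of a percent.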
Quote:
Originally Posted by roughbeast
Here we go. Here comes the blip.
Now traceroute to TBB, as well as to everything else under the sun.
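If anyone wants to see which hop the jump appears at, here's a minimal sketch using Python's subprocess to drive the standard traceroute binary; the hostname is just an example target, not TBB's actual monitoring endpoint, and it assumes a Unix-like system with traceroute installed:

Code:
# Run traceroute and print the per-hop output; during congestion the
# round-trip times should jump at one hop and stay elevated afterwards.
import subprocess

TARGET = "www.thinkbroadband.com"  # example destination only

result = subprocess.run(
    ["traceroute", "-n", TARGET],  # -n: print bare IPs, skip DNS lookups
    capture_output=True, text=True, timeout=120,
)
print(result.stdout)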