Quote:
Originally Posted by Chrysalis
Their pings are fine; it's their speedtest. A few points I will make here, which only came out into the open after a lot of pushing from myself and someone else.
1 - NCUK peer over LINX on two networks, as well as LONAP. NCUK host TBB. VM don't peer over LONAP, so that leaves LINX or external transit; NCUK send traffic to VM over LINX. Since TBB initially blamed VM, they would appear to be saying VM's LINX link is congested; however, the ping graphs and other traffic going over LINX don't back this claim up.
2 - after the pushing, it became apparent that the bulk of the speedtest issues were down to one of the TBB speedtest servers; when this was bypassed, speeds shot up. Some issues remained, but they were much less severe.
Ultimately, the TBB monitor is just a test between two endpoints, so it may not truly reflect overall performance on the connection, but I would say that most of the time it should give a reasonable picture of what's going on.
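On Chrys's point 2, the quickest way to show that a speedtest bottleneck sits at the server end is to time the same-sized download from more than one server and compare. A minimal sketch of that idea in Python; the URLs are placeholders, not real TBB test hosts:

```python
# Compare throughput to several test servers; if one is consistently far
# slower, the bottleneck is likely that endpoint, not the line under test.
import time
import urllib.request

# Placeholder URLs for illustration only - not real TBB speedtest servers.
TEST_URLS = [
    "http://server-a.example.net/10MB.bin",
    "http://server-b.example.net/10MB.bin",
]

def measure_mbps(url: str) -> float:
    """Download the file and return the achieved throughput in Mbit/s."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        size = len(resp.read())  # bytes actually transferred
    elapsed = time.monotonic() - start
    return size * 8 / elapsed / 1e6

for url in TEST_URLS:
    print(f"{url}: {measure_mbps(url):.1f} Mbit/s")
```

If one server is consistently far slower than the others, the bottleneck is that endpoint rather than the connection under test, which is exactly what point 2 describes.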
I would also agree with you, Chrys.
From the evidence in my TBB results, VM have finally managed to track down the specific times when my modulation switches from 16QAM to QPSK. The TBB monitor clearly shows the point at which my connection becomes chronically saturated due to the reduced bandwidth available once the modulation drops to QPSK.
So in this instance I've found it a very useful tool.
However, what the TBB results don't and can't show is when the actual switch between modulations takes place, only when the connection is busy enough to saturate the reduced bandwidth. Either way, it's handy to give them a rough time slot to check against their diagnostic equipment.
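For anyone wondering why the QPSK fallback bites so hard: QPSK carries 2 bits per symbol against 16QAM's 4, so at the same symbol rate the raw channel capacity halves. A quick sketch of the arithmetic; the 2.56 Msym/s symbol rate is an assumed illustrative figure, not VM's confirmed setting:

```python
# Raw line rate = bits per symbol * symbol rate (FEC/framing overhead ignored).
import math

SYMBOL_RATE = 2.56e6  # symbols/sec - assumed example figure, not VM's actual config

def raw_bitrate(constellation_points: int, symbol_rate: float = SYMBOL_RATE) -> float:
    bits_per_symbol = math.log2(constellation_points)  # 16QAM -> 4, QPSK -> 2
    return bits_per_symbol * symbol_rate

qam16 = raw_bitrate(16)  # ~10.24 Mbit/s raw
qpsk = raw_bitrate(4)    # ~5.12 Mbit/s raw
print(f"16QAM: {qam16 / 1e6:.2f} Mbit/s, QPSK: {qpsk / 1e6:.2f} Mbit/s")
print(f"QPSK fallback cuts raw capacity by {(1 - qpsk / qam16):.0%}")
```

So traffic that fits comfortably under 16QAM can saturate the link the moment the modulation drops, which is exactly the pattern the monitor picks up.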