Quote:
Originally Posted by Ignitionnet
The speed tester's ping reporting is unreliable, ignore it. On the gaming machine I have, the ping time is 5ms; on this laptop it's 40ms+; both have the same latency on traceroutes.
Pinging gonzales.namesco.net [85.233.160.167] with 32 bytes of data:
Reply from 85.233.160.167: bytes=32 time=28ms TTL=52
Reply from 85.233.160.167: bytes=32 time=16ms TTL=52
Reply from 85.233.160.167: bytes=32 time=16ms TTL=52
Reply from 85.233.160.167: bytes=32 time=16ms TTL=52
Ping statistics for 85.233.160.167:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 16ms, Maximum = 28ms, Average = 19ms

|
I assumed it was a TCP ping on the speedtest, not a UDP one, which would make it affected by TCP window sizes, which are smaller on laptops; that was the explanation I gave myself for why my laptop got lower ping times on it. However, with your results my idea is out the window, unless your laptop is tuned for large TCP window sizes or has a dodgy wireless signal.
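For what it's worth, the reason window size would matter to any TCP-based measurement is the bandwidth-delay product: a stream can only have one window of unacknowledged data in flight per round trip, so its throughput tops out at roughly window / RTT. A rough sketch of the arithmetic is below; the 64KB window and the RTT figures are illustrative assumptions, not measurements from either machine:

# Back-of-the-envelope: a single TCP stream's throughput is bounded by window / RTT.
# The window and RTT figures here are illustrative assumptions only.

def max_tcp_throughput_mbit(window_bytes, rtt_seconds):
    """Upper bound on one TCP stream's throughput, in Mbit/s."""
    return window_bytes * 8 / rtt_seconds / 1_000_000

# Classic 64KB window (no RFC 1323 window scaling) at a 16ms round trip:
print(max_tcp_throughput_mbit(65535, 0.016))   # ~32.8 Mbit/s
# Same window at a 40ms round trip (the laptop's reported speedtest figure):
print(max_tcp_throughput_mbit(65535, 0.040))   # ~13.1 Mbit/s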
---------- Post added at 17:56 ---------- Previous post was at 17:52 ----------
Quote:
Originally Posted by qasdfdsaq
I disagree there. Statistical contention again, here thinking of the user's own connection. A single person is unlikely to use 200Mb on their own for any significant length of time - the oft-quoted fact that most webservers only have 100Mb, for example. Usage goes up, but nowhere near linearly - something like 20% higher usage when speed gets doubled, in the last study I saw.
Assuming all the conditions are favourable, yes, but they would only do so for half the amount of time. Again, reducing statistical contention.
---------- Post added at 15:47 ---------- Previous post was at 15:41 ----------
I believe upgrading from 750MHz to 1GHz was part of the 100Mb rollout upgrades.
|
They don't need to use it for a significant amount of time. Even a 200Mbit user doing a 10-second speedtest can cause 10 seconds of congestion. I am talking about burst rate demand, not overall usage. Incidentally, it's not too difficult to exceed 100Mbit assuming no congestion on VM's side: various servers now use gigabit interfaces, p2p uses multiple sources so it can exceed 100Mbit by collective means rather than via a single fast connection, and Giganews and the like are very unlikely to still be using 100Mbit interfaces. For example, every FTP file server I run has at least a gigabit interface, and a few are actually multi-gigabit bonded, as even a gigabit is considered small nowadays for file hosting. The only downloads that would struggle are HTTP downloads, as they are usually single-stream and web server software often doesn't use RFC 1323.
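To put rough numbers on that last point (the 20ms RTT and the line rates below are assumptions for illustration, not measurements): the data a single stream needs in flight to fill a pipe is rate x RTT, and without RFC 1323 window scaling the advertised window tops out at 64KB, which is why a lone HTTP stream can stall well below line rate while p2p simply opens more streams.

# Sketch: bandwidth-delay product vs the 64KB cap of an unscaled TCP window.
# The 20ms RTT and the line rates are assumed figures for illustration only.

RTT = 0.020                     # assumed round trip to the server, in seconds
MAX_UNSCALED_WINDOW = 65535     # bytes available without RFC 1323 window scaling

for rate_mbit in (10, 25, 100, 200):
    in_flight = rate_mbit * 1_000_000 / 8 * RTT    # bytes needed in flight
    verdict = "fits" if in_flight <= MAX_UNSCALED_WINDOW else "needs window scaling"
    print(f"{rate_mbit:3d} Mbit/s at 20ms RTT -> {in_flight / 1024:.0f} KB in flight ({verdict})")

# 10 and 25 Mbit/s fit in an unscaled window; 100 and 200 Mbit/s do not, so a
# single unscaled stream tops out around 26 Mbit/s at this RTT, while p2p can
# reach line rate by pulling from many sources in parallel.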
---------- Post added at 18:00 ---------- Previous post was at 17:56 ----------
Quote:
Originally Posted by Ignitionnet
Nah, most areas are 750MHz or 860MHz. Overbuilding has been done on areas running at 550MHz; areas at 750MHz have only had work done where needed for upload upgrades and downstream laser changes to permit use of 256QAM in areas previously doing 64QAM.
|
So those with the delayed uplift work for overbuilding are now in the best position, with more usable bandwidth?