Old 28-04-2011, 11:56   #109
Chrysalis
Re: Should Virgin Media Throttle p2p traffic?

Andy, you're not quite right.

Hard caps would work fine alongside iPlayer et al. unless either the caps are very low or the user is someone who uses iPlayer very heavily, e.g. 10 hours a day, every day. Even in the latter case I don't see that as a reason not to go ahead with hard caps; it is by far the most sensible solution, but it conflicts with marketing.

'Any' form of protocol shaping is pretty much doomed to fail. Its two main problems are: (a) it applies to too small a percentage of users to work, and as a result those who do get throttled can get throttled excessively (if an ISP has to throttle people down to dial-up speeds to manage its capacity, that is going too far); and (b) false positives: even Plusnet, who have been doing this for years, have regular issues with unidentified traffic.

Also, p2p/NNTP users will not forever soak up upgraded capacity. If capacity is upgraded in drip feeds, i.e. an extra 10% here and there, it may well seem that way, because at the point the ISP decides to upgrade they may already be massively under capacity; if a proper upgrade is done, e.g. a 10-fold increase (which would imply incredible over-subscription if even that were not enough), then things would be fine. Of course, VM do not do large upgrades: a large upgrade for VM is a doubling of bandwidth, i.e. a node split or a DOCSIS 2 port upgrade. In any case, if an upgrade is immediately saturated, all that shows is that the over-subscription level is severe. That is not too surprising when you consider that VM wait until ports are saturated before even starting to plan an upgrade, never mind starting the work. Even when a port is saturated they still may not do anything, and the CEO's office or tier-2 support need pushing to get things moving. If it then takes, say, 6 months to do the upgrade, there is 6 months of growth happening in the meantime, so the upgrade will likely only get them back to where they were 6 months earlier, and VM end up constantly playing catch-up with capacity. This method of capacity management is doomed to fail even without p2p users.
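To make the catch-up arithmetic concrete, here is a tiny illustrative calculation; the growth rate and lead time are assumed numbers for the sake of the example, not VM figures.

```python
# Illustrative arithmetic for the "constant catch-up" argument above.
# The 3% monthly growth and 6-month lead time are assumptions.

MONTHLY_GROWTH = 0.03   # assumed demand growth per month
LEAD_TIME_MONTHS = 6    # assumed time from planning to delivery

demand_at_plan = 100.0  # demand (arbitrary units) when the port saturates
# By the time the upgrade lands, demand has kept compounding:
demand_at_delivery = demand_at_plan * (1 + MONTHLY_GROWTH) ** LEAD_TIME_MONTHS
print(round(demand_at_delivery, 1))  # 119.4
```

So a capacity bump sized against demand at planning time is already about 19% short on the day it is delivered, which is why only a large step change (a node split or better) buys any real headroom.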

p2p may be largely copyrighted content (Ignition: it's copyright infringement, not theft), but ISPs are not the police; they should not be picking and choosing which types of traffic to cripple. I can understand, to an extent, crippling the heaviest users by usage across 'any' protocol, but not picking on specific protocols.

As I said in another post, I am curious why protocol shaping is so popular in this country. Our government is certainly one of the most aggressive in the world on copyright enforcement, as we have the most aggressive copyright laws, and I can't get it out of my head that copyright is a factor in why ISPs here have come to employ protocol shaping. Entanet's old, now unused, capacity management is by far the best I have seen in this country. For those who don't know, it worked like this.

Entanet had numerous BT Central pipes (their choke points) for their customers to use. At specific times of day, such as 10pm when unlimited usage started, high demand would push all the pipes to 100% utilisation. They had a system called ALT (Anti Loss Tool); as you can maybe guess from the name, its primary purpose was to prevent packet loss and so maintain QoS. Each pipe had a status colour: green, amber, red or black. Green meant utilisation below a certain threshold, which I think was around 95% (because those pipes were 655Mbit, much bigger than a UBR port, they could tolerate a higher percentage of usage before the service suffered); amber was up to maybe 98%, red up to 100%, and black over 100% (apparently it was possible to slightly exceed 655Mbit on the pipes, because they were physically gigabit links artificially throttled by BT). When a pipe was green, nothing was throttled; on amber there was no change; if it went red or black, the maximum possible speed for every single customer was dropped by 0.5Mbit. The pipes were polled at intervals; I cannot remember how often, so let's say every 5 minutes. If the status was still red or black, speeds went down a further 0.5Mbit (possibly more aggressively on black, I cannot remember fully), and this kept going until either the cap hit the minimum threshold set by Entanet, which was 2Mbit, so nobody was throttled below 2Mbit, or the status turned amber or green. On amber I cannot remember whether it held or increased by 0.5Mbit; on green it increased by 0.5Mbit every poll until full speed was restored or the status changed back to red. So it was constantly monitoring and adjusting the throttled speeds to manage spare capacity.
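The control loop described above can be sketched as follows. This is a minimal reconstruction from the post's recollection only: the 95%/98% thresholds are stated to be rough, and the function names, per-user line rate and example poll sequence are my own assumptions.

```python
# Hypothetical sketch of the ALT control loop as described above.
# Thresholds, step size and the 2Mbit floor come from the post;
# names and the 8Mbit full speed are illustrative assumptions.

GREEN, AMBER, RED, BLACK = "green", "amber", "red", "black"

PIPE_CAPACITY_MBIT = 655.0   # BT Central pipe size quoted in the post
FULL_SPEED_MBIT = 8.0        # assumed per-user full line rate
FLOOR_MBIT = 2.0             # Entanet's minimum cap
STEP_MBIT = 0.5              # adjustment per polling interval

def status(utilisation_mbit):
    """Classify pipe utilisation; 95%/98% are the post's rough figures."""
    pct = utilisation_mbit / PIPE_CAPACITY_MBIT
    if pct < 0.95:
        return GREEN
    if pct < 0.98:
        return AMBER
    if pct <= 1.00:
        return RED
    return BLACK  # the gigabit links could briefly exceed the 655Mbit cap

def next_cap(current_cap, pipe_status):
    """One polling step: lower the per-user cap on red/black,
    raise it on green, hold on amber (never below the floor)."""
    if pipe_status in (RED, BLACK):
        return max(FLOOR_MBIT, current_cap - STEP_MBIT)
    if pipe_status == GREEN:
        return min(FULL_SPEED_MBIT, current_cap + STEP_MBIT)
    return current_cap  # amber: hold

# Example: a pipe stays red for three polls, then goes green again.
cap = FULL_SPEED_MBIT
for s in (RED, RED, RED, GREEN):
    cap = next_cap(cap, s)
print(cap)  # 7.0
```

Note how little state the scheme needs: one utilisation reading per pipe and one current cap value, with no packet inspection anywhere.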
Now, in my view Entanet were over-subscribed. The system kicked in for 2 to 3 hours on average every weekday evening and nearly always went down to the 2Mbit floor, though not on every single pipe; some were often barely affected, due to the imbalance in how users were spread across the pipes. The weekends were where I saw the real issue: the system kicked in on both Saturday and Sunday for just about the entire day, from morning until around midnight. That was excessive to me and led me to leave Entanet, but the throttling system itself did its job very well. Packet loss and jitter rarely happened; SSH was fine when I used it, streaming was fine, so mainstream activities and latency-sensitive traffic kept working. p2p was ultimately affected, but only down to 2Mbit, not something silly like 5kB/sec. They even had exempt customers: all Entanet resellers who paid for their own usage were exempt from ALT, so as with Plusnet, if you were willing to pay extra it could be bypassed. Entanet still run ALT on BT's new 21CN network, but I think it is less effective now; the guy who designed it left the company. ALT in general is far less complex than protocol shaping: the only things that need configuring are the thresholds and cap levels. It uses significantly less processing power, does not need to inspect packets, produces no false positives because it does not discriminate, and is driven by the utilisation of the port in use, so when it is not needed it never kicks in. Of course it is less attractive to ISPs like VM, since people would get slower speed-test results and there would be more complaints, even though the overall service would be better quality. Which leads me to say that an ALT-type system that throttles the heaviest users first is probably the ideal solution, and apparently that is what Comcast use.