Quote:
Originally Posted by R Jones
Yes - the metatag is the obvious one, but I don't recall anything from Webwise/Phorm/BT that says they are looking at the metatags. They have been SO vague about robots.txt and unless I've missed it, there has been nothing about metatags. Please - if anyone has anything concrete explaining how they deal with noindex,nofollow metatags, please do post it.
I suspect they don't know about the meta tags. All I've seen is a vague claim that they will only profile where Google would index. That should mean they look at both robots.txt and meta tags, yet to my knowledge they have only ever mentioned robots.txt (and even then not very clearly).
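To make concrete what "profile where Google would index" would actually require, here is a minimal sketch (Python standard library only, with an invented HTML snippet) of the per-page check a Google-style crawler performs on the robots meta tag; an equivalent check is what Webwise has never described:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects directives from <meta name="robots" content="..."> tags."""
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if (attrs.get("name") or "").lower() == "robots":
            for token in (attrs.get("content") or "").split(","):
                self.directives.add(token.strip().lower())

def may_profile(html):
    """A crawler honouring the meta tag must not index a noindex page."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" not in parser.directives

# Hypothetical page that has opted out via the meta tag:
page = '<html><head><meta name="robots" content="noindex,nofollow"></head></html>'
print(may_profile(page))  # False - the page has opted out of indexing
```

Note that running this check at all requires fetching and parsing the page body, which is the crux of the interception problem discussed below.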
Quote:
Originally Posted by R Jones
I know that for some this is "not the issue" - but it seems that even if the way Phorm/Webwise/BT are looking at it has practical holes in it then it tends to cause more embarrassment for them and more pressure.
Yes. I touched on some of this in previous posts, but they got a bit lost in the flames. This seems to me to be a fundamental problem: if you assume webmasters have a choice about being profiled, you have a catch-22, because determining which HTTP requests belong to which websites itself requires an intercept. An opt-in or opt-out could operate at various levels (per IP address, per domain via robots.txt, or per page via meta tags), but all of them require intercepting some part of the communication. As you move down that list you gain finer-grained control, but you also have to look deeper into the packets, which very quickly starts to look like an illegal intercept.
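The three levels can be sketched as follows. This is illustrative only (the raw request below is an invented captured payload, not anything Phorm has documented), but it shows how each finer-grained opt-out level forces inspection deeper into the communication:

```python
# Hypothetical captured TCP payload of one HTTP request.
raw_request = (
    "GET /private/page.html HTTP/1.1\r\n"
    "Host: example.org\r\n"
    "User-Agent: Mozilla/5.0\r\n\r\n"
)

def ip_level(dst_ip):
    # Shallowest: only the packet header is read - arguably just routing.
    return {"ip": dst_ip}

def domain_level(payload):
    # Deeper: the Host header lives inside the TCP payload, so even a
    # per-domain opt-out (robots.txt) requires reading content of the
    # communication, not just addressing information.
    for line in payload.split("\r\n"):
        if line.lower().startswith("host:"):
            return {"host": line.split(":", 1)[1].strip()}
    return {}

def page_level(payload):
    # Deepest: a per-page opt-out (meta tags) needs the full request line,
    # and honouring the tag also means reading the response body.
    method, path, _ = payload.split("\r\n", 1)[0].split(" ")
    return {"method": method, "path": path}

print(domain_level(raw_request))  # {'host': 'example.org'}
print(page_level(raw_request))    # {'method': 'GET', 'path': '/private/page.html'}
```

The point is that anything beyond the IP level cannot be done without reading the payload, which is exactly what an interception is.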
Quote:
Originally Posted by R Jones
I agree that the real issue is the legality of the interception in the first place, and the need for explicit, informed, rather than implied consent, but I am trying to challenge the way even their "implied" consent model works.
Indeed. A strong legal argument doesn't have a single point of failure: even if they managed to argue that an implied opt-in for webmasters is acceptable, this remains a fallback argument.
Either they inspect all outgoing packets or they don't touch any of them. Unless they've invented a clairvoyant packet filter.
They might be able to argue that filtering on IP address is not an intercept - it's just routing. I actually think there is some merit in that argument. However, it falls far short of the level of control that Google offers: a single IP address can relate to thousands of websites with different webmasters and completely different types of content.
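As a toy illustration of that shortfall (the IP address and site names here are invented), any decision made purely on IP address necessarily applies to every virtually-hosted site behind it, so one webmaster's choice is imposed on all the others:

```python
# Hypothetical shared-hosting setup: one IP, many unrelated sites.
hosted_sites = {
    "203.0.113.7": ["blog.example", "shop.example", "forum.example"],
}

def affected_by_ip_optout(opted_out_ip):
    # An IP-level opt-out cannot distinguish between the sites on that
    # address - it sweeps up every one of them.
    return hosted_sites.get(opted_out_ip, [])

print(affected_by_ip_optout("203.0.113.7"))
# ['blog.example', 'shop.example', 'forum.example'] - all three, not just one
```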