The limits to transparency

Is the end (of the internet) nigh?

The recent EU Court ruling that Google must remove harmful material from its search results is very sensible. It was based on a finding that Google is responsible for its search results, and that in this case those results were harmful to an individual. But the reaction to the judgement has been somewhat hysterical: will we ever be able to trust the internet again? Will anyone who wants to hide their past be able to deceive the rest of us? And so on.

The answers to these questions are ‘yes’ and ‘no’. Yes, we will be able to trust the internet – after all, in how many cases will this sort of ruling actually apply? And we will also be able to trust the internet to do less harm. And no, this won’t be available to just anyone, but only where unfettered access to information causes greater harm than its concealment would.

So the principle that transparency should be the default assumption, unless it causes more harm than good, still stands (as I argued some time ago in Corporate Truth).

The interesting questions are about how it will be done, if it is needed on any scale. First of all, there is the question of the process for deciding the balance of harms. At the moment that process is the court system, which is likely to slow things down until progress is all but imperceptible. But no doubt internet companies and Information Commissioners will come up with something a little more streamlined – eventually.

Then there are the technical issues. If you are trying to monitor the data collected about a person, how can you find all of it? You can’t just type their name into the search box, as anyone who shares a name with someone else on earth will have discovered. So the ruling is likely to stimulate some serious innovation in how the internet works – maybe we will have an internet of people before we have an internet of things.

So in the end the response to the ruling will benefit all of us, not just those with something that should be concealed.


Technology cuts both ways – does that make it neutral?

Technology can facilitate freedom and civil rights. But it can also facilitate oppression. That doesn’t make it neutral; it just means the jury’s still out.

There are a lot of interesting things going on that build on the apparently anarchic style of new technology – at least in California, as April Dembosky’s FT article describes. But there are also real problems with the use of surveillance technologies, as Salil Tripathi relates.

Having good points and bad points does not make a person, a company or a technology ‘on balance’ neutral. It just means that our critical faculties should not be suspended.



When small may not be so beautiful

The UK government has just published its strategy for nanotechnology.

The approach is to encourage innovation and to support more research. Even though some of this research will be into the effects of ingesting and inhaling nanoparticles, there is still a big gap in the thinking.

What is missing is any real encouragement or guidance for industry or universities to adopt a precautionary approach in their research and development. The message seems to be: let a hundred nanoparticles bloom. And someone will pick up all those tiny particles later.