Criteria for Privacy-Enhanced Security
I was at a workshop in Copenhagen this week, organized by the EU-funded PRISE project (Privacy and Security in Europe). It is an interesting beast. The EU Commission has asked them to develop privacy criteria for the heap of "security technology" funding it will give out in its 7th Research Framework Programme. When I got the invitation, I first thought "ok, they take us as a fig leaf, but at least I get to Copenhagen". But then, the workshop really turned out to be interesting, open-minded, and even creative.
After the keynotes, I participated in the interactive session on "criteria for privacy-enhancing security technologies". Their idea was to develop some criteria for privacy impact assessments of future technologies, a bit similar to what is already being done for federal IT procurement in Canada, and even partially in the US under the "e-Government Act" of 2002. The latter also shows why these approaches are not really working. The general flaw is to first design the technology (even if only the rough architecture) and only then see if it is good or bad in terms of privacy.
What we came up with in Copenhagen was to embed privacy considerations very early on in the process of developing the system - ideally when developing the rough vision. This is conventional wisdom among sociologists of technology, who moved from technology impact assessment to shaping technology development as early as the 1980s. So we moved from criteria for the technology to criteria for the process of designing technology. This should also include institutional checks and criteria, like "were any privacy experts continuously involved in this process?"
But it went even further. We agreed that if you want to start building "privacy-enhanced security technologies", you should first check if they are actually security-enhancing technologies at all. Much of the stuff rolled out since 9/11 is just "security theater", as Bruce Schneier calls it. It does not enhance security, but it often infringes on privacy. The criteria should be designed in such a way that in cases like this, they trigger a clear "no". So, we had again moved from designing technology to assessing whether it is really an adequate security solution.
Sure, there are security problems out there. But the "solutions" (more correctly: the security strategies) can be quite diverse. One participant told us of a big company that had thought about ordering a grand identification scheme for access controls or something like that. In the end, they gave it up, because it was cheaper, easier, and even more privacy-friendly to just buy insurance. So, we had moved from criteria for working security solutions to criteria for assessing a security strategy.
But in the end, somebody mentioned that you still have so many security problems (or perceptions of them, at least), and only so little money. The same amount of money that is currently being spent on huge surveillance and data mining systems, with very little hope of maybe finding 40 terrorists in the EU, could also be spent on significantly reducing the number of traffic casualties (car accidents still kill tens of thousands annually in Europe) or HIV victims. The decision about what to focus on with your security strategy, and which strategy to take in the first place, is a political decision. It will always be a bit arbitrary (that is what makes it political by definition), but it is important to have the costs, benefits, and alternatives in mind when making these decisions. How do you make sure this is done, and done in a way that people are explicitly informed about these often hidden alternatives (hegemonic discourse and agenda-setting, you know it) and can have a reasoned debate? Well, the folks in the PRISE project will now have to think about it.
This is what I liked so much about the workshop: We started with criteria for privacy-enhanced security technologies, and we ended with criteria for rational government.