thoughts and observations of a privacy, security and internet researcher, activist, and policy advisor

Monday, February 19, 2007

CardSpace's Privacy Problems

From Ben Laurie:
If Microsoft are really serious about providing “non-audit” (i.e. unlinkable) modes for CardSpace, then they need to get with the program and stop trying to pretend that they can do this with RSA signatures. It’s a shame that they’re going to such lengths to make CardSpace good but can’t quite seem to go the last mile and make their claims actually true. Perhaps they don’t want to?
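Laurie's point is a technical one: a plain RSA signature is a single recognizable value, so the party that issued it can later match it wherever it shows up, which is exactly what makes tokens linkable. Unlinkability needs something like Chaum's blind signatures, where the issuer signs a blinded value and never sees the final signature. Here is a minimal sketch of the difference, with deliberately tiny, insecure parameters - an illustration of the general technique, not CardSpace's actual protocol:

```python
# Toy contrast (not secure!) between a plain, linkable RSA signature and a
# Chaum-style blind RSA signature. All numbers are teaching-sized.
import math
import secrets

# Toy RSA key pair for the identity provider.
p, q = 1009, 1013
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def sign(m: int) -> int:
    """Plain RSA signature: the issuer sees both m and s, so a relying
    party that later presents s can be linked back to this issuance."""
    return pow(m, d, n)

def blind_sign(m: int) -> int:
    """Blind signature: the issuer only ever sees m * r^e mod n, so it
    cannot recognize the unblinded signature s when it reappears."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n          # user blinds the message
    blinded_sig = pow(blinded, d, n)          # issuer signs without seeing m
    return (blinded_sig * pow(r, -1, n)) % n  # user strips the blinding

m = 4242
assert pow(sign(m), e, n) == m        # both verify identically...
assert pow(blind_sign(m), e, n) == m  # ...but only the second is unlinkable
```

Both signatures verify against the same public key; the difference is entirely in what the issuer learns during issuance.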

Friday, February 16, 2007

ID standards war is over - but what now?

So the heated exchange of "mine is safer" arguments between Kim Cameron from Microsoft and Dick Hardt from Sxip was just the PR prelude to a great romantic ending, played out in the full glare of public attention: Microsoft will be using OpenID and CardSpace together. It was announced as the next big thing at the RSA conference, and Verisign (the usual suspects of identity provision, aka "I tell others about you, and they pay for it") as well as JanRain also signed the joint announcement. Everybody was keen to stress that this is not some buy-out by Microsoft. Scott Kveton from JanRain announced:
"Microsoft did not cave in to the OpenID community and the OpenID community is giving nothing up to Microsoft."
Interestingly, just a day before that, some folks from Higgins, Bandit and Novell had demonstrated open source identity services that are interoperable with Microsoft's Windows CardSpace system and enable Liberty Alliance-based identity federation via Novell Access Manager. Microsoft CardSpace and Liberty specifications interoperating. Wow. But they were not Bill Gates, so it was not as widely reported. The effect, in any case, is that the three biggest players in the field now cooperate (or "co-opete", as some call it).

Today, AOL announced that they will also use OpenID for AIM. It looks like the standards wars are over. But what will follow from this?

The core problem with CardSpace will remain: It may help against phishing, but it can also be used to track your movements through the web via the identity provider. At least our governments won't have such difficulty anymore deciding which identity technology to use for your online "show your ID please" experience. I have not looked into OpenID enough to really see what its problems are, but my computer science friends tell me it has big holes, and you can read about man-in-the-middle attacks as well as phishing possibilities. A recent white paper by Ping Identity therefore concludes:
"While not necessarily a concern for the use cases that initially motivated OpenID, such a privacy risk will limit OpenID’s success in more sensitive use cases (e.g. Internet banking, eCommerce, health care, etc)."
Gerry Gebel from the Burton Group also has a very sober perspective on the convergence buzz and the visions of an internet-wide identity system:
"In his keynote, Bill Gates described a world in which every device, person, and datum will have a unique identifier, the network address space will vastly expand, and policies will be much more granular and specific than they are today. The scale of the policy management problem in that world will be orders of magnitude larger than it is today; where are the models which will support a solution?"
One thing that gives me hope is this: Credentica has just released its "U-Prove" ID management kit, which works with SAML, Liberty ID-WSF, and CardSpace while, according to the press release, massively enhancing the privacy of its users. Among other things, it allows for "sharing information without revealing source data". While I am not cryptographer enough to really understand zero-knowledge proofs and related fancy (and fuzzy) algorithms, Stefan Brands and his colleagues certainly know their stuff. Hopefully this or similar technology will also find widespread adoption.
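For the curious: the flavor of cryptography behind "sharing information without revealing source data" is the zero-knowledge proof. The following is only a toy Schnorr-style proof of knowledge with insecure parameters, not U-Prove itself, but it shows the basic trick of convincing a verifier that you know a secret without ever handing it over:

```python
# Toy Schnorr-style zero-knowledge proof (insecure parameters; real
# systems use 2048-bit groups or elliptic curves).
import secrets

p, q, g = 2267, 103, 354    # q divides p - 1; g has order q mod p
assert pow(g, q, p) == 1

x = secrets.randbelow(q)    # prover's secret
y = pow(g, x, p)            # public value: y = g^x mod p

k = secrets.randbelow(q)    # prover: random nonce
t = pow(g, k, p)            # prover -> verifier: commitment
c = secrets.randbelow(q)    # verifier -> prover: random challenge
s = (k + c * x) % q         # prover -> verifier: response

# The verifier checks g^s == t * y^c (mod p). The transcript (t, c, s)
# convinces them that the prover knows x, yet reveals nothing about x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

Roughly speaking, Brands' credentials layer selective disclosure on top of proofs like this, which is what lets a user show a property of an attribute without showing the attribute itself.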

Saturday, February 03, 2007

Criteria for Privacy-Enhanced Security

I was at a workshop in Copenhagen this week, organized by the EU-funded PRISE project (Privacy and Security in Europe). It is an interesting beast. The EU commission has asked them to develop privacy criteria for the heap of "security technology" funding it will give out in its 7th research framework programme. When I got the invitation, I first thought "ok, they're taking us as a fig leaf, but at least I get to Copenhagen". But then, the workshop really turned out to be interesting, open-minded, and even creative.

After the keynotes, I participated in the interactive session on "criteria for privacy enhancing security technologies". Their idea was to develop criteria for privacy impact assessments of future technologies, a bit similar to what is already being done for federal IT procurement in Canada, and partially in the US under the "e-Government Act" of 2002. The latter also shows why these approaches are not really working: the general flaw is to first design the technology (even if only the rough architecture) and only then check whether it is good or bad in terms of privacy.

What we came up with in Copenhagen was to embed privacy considerations very early in the process of developing the system - ideally already when developing the rough vision. This is conventional wisdom among sociologists of technology, who moved from assessing the impacts of finished technologies to shaping their development as early as the 1980s. So we moved from criteria for the technology to criteria for the process of designing the technology. This should also include institutional checks and criteria, like "were any privacy experts continuously involved in this process?"

But it went even further. We agreed that if you want to start building "privacy-enhanced security technologies", you should first check whether they are actually security-enhancing technologies at all. Much of the stuff rolled out since 9/11 is just "security theater", as Bruce Schneier calls it: it does not enhance security, but it often infringes on privacy. The criteria should be designed in such a way that cases like this trigger a clear "no". So we had again moved, this time from designing technology to assessing whether it is really an adequate security solution at all.

Of course there are real security problems out there. But the "solutions" (more correctly: the security strategies) can be quite diverse. One participant told us of a big company that had considered ordering a grand identification scheme for access control or something like that. In the end, they gave it up, because it was cheaper, easier, and even more privacy-friendly to just buy insurance. So we had moved from criteria for working security solutions to criteria for assessing a security strategy.

But in the end, somebody pointed out that you still have so many security problems (or perceptions of them, at least), and only so little money. The same amount of money that is currently being spent on huge surveillance and data mining systems, with very little hope of maybe finding 40 terrorists in the EU, could also be spent on significantly reducing the number of traffic casualties (car accidents still kill tens of thousands of people annually in Europe) or HIV infections. The decision about what to focus on with your security strategy, and which strategy to take in the first place, is a political decision. It will always be a bit arbitrary (that is what makes it political by definition), but it is important to have the costs, benefits, and alternatives in mind when making these decisions. How do you make sure this is done, and done in a way that people are explicitly informed about these often hidden alternatives (hegemonic discourse and agenda-setting, you know it) and can have a reasoned debate? Well, the folks in the PRISE project will now have to think about it.

This is what I liked so much about the workshop: We started with criteria for privacy-enhanced security technologies, and we ended with criteria for rational government.