Mobile devices and privacy: Should we focus on changing the behaviour of people OR changing the behaviour of devices?

Guest blog from Ajit Jaokar.   Original post is here



The many privacy related issues raised by the Web will be amplified in the world of mobility and even more so, in a world dominated by sensor networks. Current thinking seems to converge on one important conclusion: through the combined interaction of law, technology and Internet literacy, people should be in a position to control how their own personal information is made available and used for commercial (or other) purposes.
In this post, we explore the feasibility of users managing their own data, i.e. if we indeed want users to manage their own data, what are the issues involved in making this happen? We also look at an alternative: allowing devices to mirror social privacy norms. Hence, I see the discussion as ‘Changing user behaviour to incorporate new device functionality’ OR ‘Changing device behaviour to mirror privacy expectations in human interactions’.

Privacy and management of data – A background

Today Facebook has become the lightning rod for privacy, and it continues to push the issue with new products like “check-ins”, where Facebook allows others to “tag” or check you in at a location, provided you are Facebook friends. Predictably, this has drawn fire from organizations like the ACLU (American Civil Liberties Union), for example in its post “Facebook Places: Check This Out Before You Check In”. We also see new products and services launched to protect user privacy. For example, The Fridge aims to be a service that shares content within a group: if you belong to the group, everyone in it can see the content. You don’t have to ‘friend’ everyone, and by the same token, no one outside the group can see it. Cataphora’s freeware “Digital Mirror” helps users gain an understanding of what they might look like to other people online.

The complexity and benefits of social networking data

Discussions about privacy generate a lot of ‘heat but little light’. The concerns around data management are well known and everyone has a view on them. Everyone wants to be protected, and most people have a perception of being ‘exploited’ by companies. But social network data is complex. Noted security expert Bruce Schneier recently published a revised taxonomy of social networking data. It can be summarized as:
Service data is the data you give to a social networking site in order to use it. Such data might include your legal name, your age, and your credit-card number.
Disclosed data is what you post on your own pages: blog entries, photographs, messages, comments, and so on.
Entrusted data is what you post on other people’s pages. It’s basically the same stuff as disclosed data, but the difference is that you don’t have control over the data once you post it — another user does.
Incidental data is what other people post about you: a paragraph about you that someone else writes, a picture of you that someone else takes and posts. Again, it’s basically the same stuff as disclosed data, but the difference is that you don’t have control over it, and you didn’t create it in the first place.
Behavioral data is data the site collects about your habits by recording what you do and who you do it with. It might include games you play, topics you write about, news articles you access (and what that says about your political leanings), and so on.
Derived data is data about you that is derived from all the other data. For example, if 80 percent of your friends self-identify as gay, you’re likely gay yourself.
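Derived data is the category users can least control, because it never exists as something they posted: it is only an inference. Schneier’s 80 percent example can be sketched in a few lines of Python (the function name and the threshold parameter are my own illustration, not anything from his taxonomy):

```python
# Hypothetical sketch of "derived data": the site never asks a user for an
# attribute, but infers it from how often friends self-identify with it.
def infer_attribute(friend_attributes, threshold=0.8):
    """Return a value if at least `threshold` of the friends who declared
    an attribute share the same value, else None (Schneier's 80% example)."""
    declared = [a for a in friend_attributes if a is not None]
    if not declared:
        return None
    for value in set(declared):
        if declared.count(value) / len(declared) >= threshold:
            return value
    return None

friends = ["gay", "gay", "gay", "gay", None]  # four of five self-identify
print(infer_attribute(friends))  # prints "gay": derived, never disclosed
```

The point of the sketch is that the inferred value appears in no profile field the user can edit or delete, which is why derived data sits outside any “manage your own data” scheme.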
There are other ways in which data benefits society. “10 ways data is changing how we live” lists the benefits as: Shopping, Relationships (dating), Business deliveries (e.g. courier services), Maps, Education (schools), Politics (openlylocal), Society (social and spatial relationships through location data), War (Wikileaks), Advertising, and Linked Data and the future.
And as I have said before in “The fallacy of the better mousetrap: Privacy advocates want to have their cake and eat it too”, you can’t have it both ways, i.e. publish your content/data and then ask for a share of the profits. The future is likely to get more complex in a world dominated by mobility and sensor networks, as I point out in “The Silence of the Chips”.

Changing user behaviour vs. changing device behaviour

How realistic is the idea of people maintaining their own data, i.e. changing user behaviour?
This sounds very seductive until you realize that:
a) There is an extra step (inertia) to overcome in managing my data, which is spread across multiple sites (Facebook, MySpace etc.).
b) Much of the data about me is not owned by me (e.g. comments about me created by other people).
c) The real concern is often metadata, i.e. insights derived by a site from the collective analysis of many people, which are then retrospectively applied to individuals. Data is owned by individuals; metadata is owned by the site.
d) In a world of mobility and sensor networks (see “The Silence of the Chips” above), the ability to individually permit or deny sensors that monitor information about people is probably unfeasible. What are the implications in that case?
The alternative is for us to keep our behaviour as it is, but to have devices change according to society’s privacy norms.
Danah Boyd raises an important point in “Privacy Is Not Dead”: the way privacy is encoded into software doesn’t match the way we handle it in real life. The reason for this disconnect is that in a computational world, privacy is often implemented through access control. Yet privacy is not simply about controlling access. It’s about understanding a social context, having a sense of how our information is passed around by others, and sharing accordingly. As social media mature, we must rethink how we encode privacy into our systems. Instead of forcing users to adapt, why not make our social software support the way we naturally handle privacy?
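The disconnect Boyd describes is visible in how access control is actually coded: a binary allow/deny check in which social context never appears. A minimal sketch of that model (the function and names are illustrative, not any real site’s API):

```python
# How privacy is typically implemented in software: a binary
# access-control check. Who is asking, why, and where the item might
# travel next never enter the decision.
def can_view(item_audience: set, viewer: str) -> bool:
    """Allow viewing only if the viewer is in the item's audience set."""
    return viewer in item_audience

photo_audience = {"alice", "bob"}  # the "friends" allowed to see a photo
print(can_view(photo_audience, "bob"))      # True: access granted
print(can_view(photo_audience, "mallory"))  # False: access denied
```

Note what the check cannot express: once “bob” has the photo, nothing in this model constrains his re-sharing it. Control of access is not control of flow, which is the gap between software privacy and social privacy.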
Thus the question for me is: Is it realistic to expect users to take responsibility for their own data? OR should we make our social software support the way we naturally handle privacy? In other words, should we focus on changing the behaviour of people OR changing the behaviour of devices? The privacy concerns we are seeing are just the tip of the iceberg, and I think this question will apply even more to mobile and sensor data going forward.
I realise, of course, that this could be a false dichotomy, but I feel that if we spent more effort on making our devices mirror the social norms of privacy, we would have a greater chance of success than if we tried to change the behaviour of people.