Sunday, June 1, 2014

1973, meet 2014. Don't Wait for Godot.

In an odd but timely juxtaposition of the worlds of 1973 and 2014, a statement I had rediscovered a couple of weeks ago during a quick review of the HEW Report (more formally, the report of the Secretary's Advisory Committee on Automated Personal Data Systems, which foreshadowed the U.S. Privacy Act of 1974) took on particular poignance when I read this week's May 27 Washington Post article, "Brokers use 'billions' of data points to profile Americans."

Here's what the HEW authors wrote in 1973:

A permanent (Unique Identifier) issued at birth could create an incentive for institutions to pool or link their records, thereby making it possible to bring a lifetime of information to bear on any decision about a given individual. American culture is rich in the belief that an individual can pull up stakes and make a fresh start, but a universally identified man might become a prisoner of his recorded past.

The irony is that, over the past several years, the way we use email addresses and mobile phone numbers has turned them into de facto universal identifiers, enabling what the Washington Post writer described as follows:

Are you a financially strapped working mother who smokes? A Jewish retiree with a fondness for Caribbean cruises? Or a Spanish-speaking professional with allergies, a dog and a collection of Elvis memorabilia?

All this information and much, much more is being quietly collected, analyzed and distributed by the nation’s burgeoning data-broker industry, which uses billions of individual data points to produce detailed portraits of virtually every American consumer, the Federal Trade Commission reported Tuesday.

The FTC report provided an unusually detailed account of the system of commercial surveillance that draws on government records, shopping habits and social-media postings to help marketers hone their advertising pitches. Officials said the intimacy of these profiles would unnerve some consumers who have little ability to track what’s being collected or how it’s used — or even to correct false information. The FTC called for legislation to bring transparency to the multibillion-dollar industry and give consumers some control over how their data is used.

Data brokers’ portraits feature traditional demographics such as age, race and income, as well as political leanings, religious affiliations, Social Security numbers, gun-ownership records, favored movie genres and gambling preferences (casino or state lottery?). Interest in health issues — such as diabetes, HIV infection and depression — can be tracked as well.

With potentially thousands of fields, data brokers segment consumers into dozens of categories such as “Bible Lifestyle,” “Affluent Baby Boomer” or “Biker/Hell’s Angels,” the report said. One category, called “Rural Everlasting,” describes older people with “low educational attainment and low net worths.” Another, “Urban Scramble,” includes concentrations of Latinos and African Americans with low incomes. One company had a field to track buyers of “Novelty Elvis” items.

There's something deeply dehumanizing about commercial surveillance as it's articulated in the Post article. It's very much a characterization that makes individuals — at least when it comes to marketing — "prisoners of their past," albeit prisoners of a very narrow past, defined by a history of consumption. One cannot help but wonder whether, in doing so, the data brokers are using "big data" to address a marketing "problem" that made sense in the twentieth century... how to target one's advertising in a cookie-cutter, factory-like way... but that makes (increasingly?) less sense today. The Cluetrain Manifesto (1999) anticipated the still-emerging world of markets as conversations; within that framework, the "push" mentality seems outdated.

How did we get here?

Where the private sector has succeeded, for its own purposes, the U.S. government has tied itself in knots for more than 40 years trying to avoid creating a national ID system. As a result, almost every relationship we create begins with an (enrolment) initiation ritual in which we share a certain amount of personally identifying information so that the Relying Party can determine whether we are who we say we are (identity assurance). That process has scattered our PII across hundreds, perhaps thousands, of databases... and, given the lack of governance over how that data is shared and used, has essentially crushed the practical notion of privacy. The scattering of PII makes it easy for others to collect, assemble, and use it for purposes to which we as individuals do not, or would not, consent. Whatever privacy "tools" we have were forged as compliance instruments, designed from an enterprise perspective.

The European Court's decision regarding "the right to be forgotten" comes from a legitimate set of concerns about privacy. Somewhere, we need to find the balance between maintaining the historical record -- and making it discoverable -- and giving individuals a degree of control (or better, agency) over their reputation in cyberspace. For instance, it would help to know when what you are about to do is "on the record," instead of constantly having to keep your guard up. It would help if, when a college prankster posts a video of a classmate doing something stupid but inconsequential, the data subject could say "no" to that becoming part of their digital reputation. It's also problematic when something negative from 15 years ago dominates search results because... it's the only data point out there, or one of very few. The Court calls that "excessive impact."

Unfortunately — along with an explosion of other issues that +Jeff Jarvis articulates — the horrible irony of the Court's decision is that it brings to mind Orwell's 1984 and the job of erasing aspects of the past. It also brings to mind that anachronistic sense — perhaps possible in America's infancy, in the days of the Wild West, before the technological revolution — that "an individual can pull up stakes and make a fresh start," as the HEW Report puts it. That reads like an implied "right," or "entitlement," to be forgotten. Instead of trying to erase history, perhaps we need to cultivate — or perhaps migrate from our traditional notion of community to a broader concept of community that supports — a richer sense of tolerance for mistakes, for not quite getting it right, and a sensibility that we should not define others by their mistakes (just as we wouldn't want to be defined by our own).

That's a cultural challenge, not a legal one. One wonders whether this culture is already emerging among digital natives, and within practices like agile development, which eschews the factory model of software production (the waterfall methodology) in favour of rapid iterations, in which it is perfectly acceptable to present products that are less than finished, less than perfect. The approach is humbler, more effective, and more humanistic. That iterative approach is also emerging in the practices of co-creation, where value is created through dialogue and experimentation; mistakes and missteps are very much part of that. The acceptance of imperfection has always been part of one of the safest domains in the world — the airline industry — which makes a rigorous point of reporting errors for the sake of improving practices. Airline regulators have long recognized that what matters is what a party does after making a mistake. As problematic as the European Court's decision is, we could cut the Court some slack for interpreting the law as it stands today, and contribute by having the conversation about how to cope with the fact that everyone has a global printing press in their pocket.

What's clear is that we're in the middle of an invigorating conversation about how to live in a data-driven society. That conversation, combined with our rapidly changing world of technology, will give rise to innovations that help individuals regain appropriate measures of control and governance over personal information. This isn't just a philosophical issue, or some academic concept of privacy. In an increasingly digital world, where decisions about individual requests for rights, benefits, entitlements, and privileges will increasingly be mediated by "smart" software with access to reputational data, there is a compelling need for individuals to be able to govern their own digital reputation, to exercise cyber agency. Liberty hangs in the balance.

+Adrian Seccombe has an excellent new post exploring our emerging consciousness of the importance of "cyber agency," or the "degree of control that an entity has over all the elements of Cyber Space that they interact with, be they: Data, Things or Services." In it, he draws on the metaphor of the stages of cognition to explain where we are in coping with cyber agency, suggesting that the majority of the population is currently in a "don't know they don't know" state. He poses the challenge: how do we get individuals to the next state without them freaking out?

Perhaps the answer to this question is right in front of us. The power of the network is such that we don't need to wait for Godot. By continuing the discussion, and by sharing our concerns about lost privacy and lost agency with our personal networks, we can cultivate and strengthen global concern and demand for better solutions than are offered today. We can also counter the mood of resignation -- that nothing can be done, that a dystopian future is inevitable -- with a mood of optimism and ambition... because the technologies to shape a better future are already around us; it's a matter of discovering how to orchestrate them to serve new purposes.