Sunday, December 28, 2014

Security and Privacy Anomaly Ground Zero

When it comes to security and privacy, one of the starkest anomalies of our social common sense lies in this contradiction. On the one hand, government and industry are intensely concerned about cybersecurity: breaches have to date caused tremendous havoc, even if they have not yet produced the mass casualties that targeted attacks on elements of critical infrastructure could yield.  The pressure on enterprises and government operations is to “better secure” their networks, systems and data.  In some cases, government will even sue enterprises for data breaches.

On the other hand, the government agencies at the epicenter of the cybersecurity concern have been shown to participate in the very activities they are trying to defend against. They have engaged in massive breaches and compromises of internet security.  When faced with the commercial reaction — Apple’s security on iOS, for example — of implementing strong encryption and enabling user control over the keys (i.e., no corporate back-door), security agencies cry foul and demand that these firms create a weakness for government exploitation.

Add to this the fact that these agencies cannot themselves control the actions of the individuals working for them.  Edward Snowden is the most obvious example of an individual who — to draw on Mikko Hyppönen’s analogy in The Internet is On Fire — pulled a fire alarm and walked away with a trove of classified data.  But the better examples of the challenges of insider security come from the far more mundane abuses driven by personal concerns and errors.  These may not occur “every day,” but they demonstrate that even the most security-conscious organizations on the planet suffer failures.  The logical argument against back-doors is that it is only a matter of time before some individual reveals the secret, using it or allowing it to be used in an inappropriate way.

The madness will continue until we break away from this broken conversation.

Postscript: http://m.spiegel.de/international/germany/a-1010361.html

Thursday, December 4, 2014

Digital Identity: Our Fetish for Instant Gratification Blinds us to Something Fundamental

+Chris Messina’s Thoughts on Google+ piece raises a number of interesting issues, particularly with respect to digital identity and privacy.  Messina notes that he joined Google in part to ensure that “the future of digital identity should not be determined by one company (namely Facebook).” He goes on to give his sense of why digital identity matters: “Digital identity unlocks universal personalization (i.e. better ads), payments and commerce (i.e. Snapcash), environmental adaptation (i.e. an Uber that plays your Spotify music), communications (i.e. Path Talk), and access (i.e. Sosh Concierge). Today’s most exciting apps are barely scratching the surface of what will be possible when there are years of preferences data stored up on each of us, that we can leverage at a moment’s notice, in any context.”

Explained below: We need to separate the management of individual identity from the operation of digital services that manage status functions, and we need to find a way to place the individual in firm, confident and inalienable control of their digital identity.

The Candy of Convenience
Our collective fascination with use cases that highlight convenience is blinding us to something far more important: digital identity is the key to liberty. Our drift into a world in which we have little choice but to allow enterprises — Google, Facebook, Wyndham, Home Depot, your employer, the government — to manufacture and set the terms of use for the identities we use when we engage with them puts both the enterprise and the individual in a bad place. The drift, that seemingly inexorable movement in a direction with which many are uncomfortable, is driven by the daily compromises we make to offer and take the candy of convenience and instant gratification.

The Fundamentals (How Identity Connects with Liberty)
In its simplest terms, an individual’s identity is a collection of status functions, and the history of transactions associated with those status functions.  As John Searle says in a 2013 lecture on consciousness (see Note 1 and the YouTube link below), “we live in a sea of status functions.”  Oversimplifying… a status function reflects that the individual associated with the identity has been granted, or has granted themselves, permission to do something, or has been proscribed from doing it.  The most obvious and pervasive status functions are rights, entitlements, benefits, capacities, etc.; these form a “skeleton” of identity, and enable us to negotiate our way in the world. They constitute individual elements of social power (Searle calls this “deontic power”).  A few simple examples: a driver’s license is an indicator that the state has entitled the individual to drive a car; a credit score is an indicator of financial capacity, and the ability to borrow; a Facebook account is an indicator of membership in a publishing community, and the capacity to build and project a social identity on a ubiquitous platform.  Take the driver’s license away, and the individual can no longer drive.  Take the credit score away, and the individual can no longer borrow at preferential interest rates.  Take the Facebook account away, and the individual can no longer be part of a ubiquitous platform.  When status functions are taken away, or degraded, so too is the individual’s liberty.
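
For readers who prefer a concrete rendering, here is a minimal sketch (in Python, with purely illustrative names; no real registry or schema is implied) of the framing above: an identity as a set of status functions, each conferred by some institution, plus the transaction history that accrues to them. Revoking a status function directly shrinks what the holder may do, which is the sense in which liberty is at stake.

from dataclasses import dataclass, field

@dataclass
class StatusFunction:
    # A right, entitlement, benefit or capacity conferred (or revoked) by an institution.
    name: str       # e.g. "drive", "borrow", "publish"
    grantor: str    # the institution that confers the status, e.g. "DMV"
    granted: bool = True

@dataclass
class Identity:
    # An individual's identity as framed above: status functions plus the
    # transaction history associated with them (the raw material of reputation).
    subject: str
    status_functions: list = field(default_factory=list)
    transactions: list = field(default_factory=list)

    def may(self, status_name):
        # What the holder is currently entitled to do.
        return any(s.name == status_name and s.granted for s in self.status_functions)

alice = Identity("alice", [StatusFunction("drive", grantor="DMV")])
print(alice.may("drive"))                  # True
alice.status_functions[0].granted = False  # the state takes the license away
print(alice.may("drive"))                  # False: a loss of status is a loss of liberty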

The Importance of Managing Your Identit(ies)
An individual’s capacity to govern and control the use of digital identity is central to the cultivation and maintenance of reputation.  Each time a digital identity is used — to withdraw funds, to publish something in a public or less-than-public space, to authorize something — the resulting transaction becomes part of the identity’s reputation.  The individual’s eligibility for new status functions (or the continuation of existing ones) increasingly depends on that reputation.  A simple example: A few years ago, I spoke with a visa adjudicator at the U.S. Department of State who reported a new adjudication practice of reviewing social media records.  One of her anecdotes included a case where a young European applicant for a nanny visa wrote on her Facebook page that she had no intention of being a nanny, but would instead travel and work elsewhere in the U.S.  Visa denied.

Why This Matters
At one level, within the last ten years, we have shifted into a new world where we have little choice but to participate in social media, learn new skills for actively cultivating and managing the projection of our identities in the digital world, and adopt new principles of accountability.  We’re living through a historical shift in the way we understand and self-consciously define the self, a lifelong activity in which we require secure means of claiming and managing our identity. 

At another level, we have drifted into this new world with a pattern in which every enterprise requires that we identify ourselves when we engage with it.  Its concern isn't so much identity as it is controlling access to the value it provides and creates. Since individuals don’t generally “bring their own identity” (or are uncomfortable using social login… see Note 2 below), enterprises have established practices of enrolling individual identity claims, creating accounts, and imposing ways that the individual will subsequently authenticate.  And so we have ended up in a world where we have dozens, and sometimes hundreds, of digital identities.  In each case, the enterprise decides what risk it is willing to take with respect to identity fraud, and what compromises it is willing to make for the sake of making its offering easy to access.  In many cases, this means minimizing identity assurance, and using lowest-common-denominator authentication practices.  Those choices put the individual, their digital identity and their reputation at risk.  Even if we follow so-called best practices for password hygiene, even if we use tools like 1Password or LastPass, the resulting mess is too complicated and time-consuming for individuals to manage effectively.  When one of those dozens or hundreds of user accounts is compromised, and someone else is able to impersonate us, our reputation is placed at risk in ways that jeopardize liberty.

Another basic problem with the notion that the Googles/Facebooks/Twitters of the world could or should be identity providers is that their core business is about enabling convenience and instant gratification (universal personalization) in a way that is economically driven by its connection to identity. They monetize what you do with your identity, and so there is a financial incentive to maximize transparency.  And the more you rely on their platform, the harder it is to say "no" to a change in the terms of service.  Facebook is a great example.  Sure, you can say "no"... just by terminating your use of the service.  Saying "no" in that way becomes much harder if Facebook is both your identity provider and your publishing platform.

Our lack of effective control over digital identity reflects a situation in which we have little ability to govern privacy — how we share information about ourselves, or how we govern how others may use the information we have shared.  The resulting lack of practical privacy facilitates distortions that place our reputation at risk.  Since our reputation is pivotal to establishing and maintaining eligibility for status functions, the lack of an effective capacity to manage identity and privacy is a threat to individual liberty.

What We Need
What is needed is an identity platform operated on behalf of its users by an enterprise whose sole mission focuses on enabling secure individual control of digital identity.  The challenge is that such an enterprise must avoid the business conflicts of interest that emerge when it has a financial interest in monetizing personal data and individual transaction histories through advertising or other third-party oriented services.  We need to separate the management of individual identity from the operation of digital services that manage status functions, and we need to find a way to place the individual in firm, confident and inalienable control of their digital identity, and of the identity and agency of their things.

Notes

Note 1… on Status Functions

I’d like to express my gratitude to Fernando Flores and the Pluralistic Networks team for bringing to my attention a YouTube-captured John Searle lecture entitled “The Normative Structure of Civilization.”  It’s a crisp articulation of the role of language (language is the platform of civilization) in the creation of consciousness and culture, and a convincing story about the connection between the domain of physical science and the domain of social science. One of my take-aways from this lecture is the concept of “status functions,” which I used to clumsily describe as “rights, benefits, entitlements, permissions, capacities, etc.”  In Searle’s account, status functions are actually richer than this: we “live in a sea of status functions” and of status function indicators, which include wedding rings, uniforms, passports and driver’s licenses.


(If the introduction doesn't hold your attention, please skip to 8:12, where the lecture begins.)


Note 2… on Social Login

While I appreciate what I assess as good intent on the part of some of the firms who have created social login as a “free” component of the range of services from which they profit, my sense is that this will forever be great for convenience, and compromising for liberty.  While there exist technical means of “blinding” (so that the identity provider cannot see who the relying party is), and while OAuth mechanisms are a giant leap forward, there are numerous reasons for discomfort with respect to popular implementations.  These include sometimes not-so-subtle little power grabs by Relying Parties (let us post to your Facebook account); unclear-as-to-purpose requests for access to your contacts; and the sense that the Identity Provider can maintain a record of where you logged in.  More importantly, as we see in the periodic revisions to Terms of Service by firms like Facebook, the enterprise retains the power to unilaterally impose new conditions and terms.  The user has a choice of accepting… or terminating their use of the service. That means that if we become overly dependent on a particular social login, the consequences of objecting to a change in Terms of Service become widespread, damaging and disruptive. This imbalance of power between enterprises and individuals is unsustainable in a culture in which power is being consumerized.
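
To make the “record of where you logged in” point concrete, here is a minimal sketch (Python, with purely illustrative endpoint and client names; no real provider or relying party is implied) of a standard OAuth 2.0 authorization-code request. The user’s browser carries this request to the identity provider, so the provider necessarily sees which relying party is asking (client_id) and where it will send the user back (redirect_uri), unless a blinding scheme sits in between.

from urllib.parse import urlencode

# Illustrative values only; no real identity provider or relying party is implied.
AUTHORIZE_ENDPOINT = "https://idp.example.com/oauth2/authorize"

params = {
    "response_type": "code",                        # standard authorization-code flow
    "client_id": "news-site-example",               # names the relying party to the provider
    "redirect_uri": "https://news.example.org/cb",  # where the provider returns the user
    "scope": "openid profile email",                # what the relying party is asking for
    "state": "xyz123",                              # anti-CSRF value chosen by the relying party
}

# The user's browser is redirected to this URL, so the identity provider
# learns, as a matter of course, where the user is signing in.
print(AUTHORIZE_ENDPOINT + "?" + urlencode(params))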

Friday, July 4, 2014

Privacy Emergence-y

In the last month or so, we have seen a number of events that constitute bright flashes in the continued emergence of a new way of dealing with privacy. We have also seen the launch of a new initiative that anticipates a future in which individuals might be able to govern how their personal data is used.  Despite its incompleteness, the Respect Network is worthy not just of our attention, but also of its sign-up fee. Think of it as a $25 action towards an optimistic future of privacy.

Back in May, the European Court of Justice issued a controversial interpretation of law holding that individuals have a right to request that search providers exclude certain results about them if those results are outdated or no longer relevant. The particular case involved an individual who sought to suppress a search result dating to the late 1990s, arguing that while the underlying issue had long since been resolved, the fact that it kept appearing at the top of searches continued to harm his reputation. The decision generated great consternation, some of it not unreasonable. And indeed, we have already started to see what could be viewed as unintended consequences -- the apparent use of the ruling by wealthy and powerful individuals (or someone representing them?) to suppress negative results about them. We might conclude from this that current EU laws are a blunt instrument when it comes to privacy protection in the digital age... and that what's needed is a step back, a deep breath, a new philosophy of privacy, and new legislation and practices.

Another flash came in mid-June, when an Irish judge referred to the European Court of Justice (in Schrems v Data Protection Commissioner) the question of whether the United States deserves to be considered a "Safe Harbor" for the private data of European citizens. Put simply, the judge believed there was enough evidence in the public domain that privacy safeguards in the United States are insufficient to meet the standard required by the European "right to privacy." Passing that ball to the ECJ probably ensures a header into goal.

In late June, with impeccable timing, the United States Supreme Court issued its unanimous Riley v California judgement.  The issue centered on whether law enforcement could search smart phones when arresting an individual, in the same way that officers are allowed to search the person for weapons and evidence.  In essence, the answer was "No... Get a warrant, first." In its explanation, the Court noted that the quantity and range of information on smart phones typically exceeds what could be found in a search of an individual's home. Though the decision anchors on -- and seems like a pretty clear interpretation of -- the 4th Amendment, Justice Alito ends his concurrence with the following call to legislators:

"...Because of the role that these devices have come to play in contemporary life, searching their contents implicates very sensitive privacy interests that this Court is poorly positioned to understand and evaluate.  Many forms of modern technology are making it easier and easier for both government and private entities to amass a wealth of information about the lives of ordinary Americans, and at the same time, many ordinary Americans are choosing to make public much information that was seldom revealed to outsiders just a few decades ago.
  In light of these developments, it would be very unfortunate if privacy protection in the 21st century were left primarily to the federal courts using the blunt instrument of the Fourth Amendment.  Legislatures, elected by the people, are in a better position than we are to assess and respond to the changes that have already occurred and those that almost certainly will take place in the future."

Alito appreciates the -- irony? anomaly? emerging tension? incompatibility? -- of constraining the state's law enforcement agents to protect a fundamental human right while the commercial sector, for the sake of profit, remains virtually unfettered in performing its own manipulation and searches of the trove of data individuals deposit in a confusing array of virtual places, helped by the little spies (cookies) that websites deposit on our personal devices. (If that sounds like an extreme assessment, see this piece about Target, and this one about the data brokerage business.) 

As if to help us understand this better, because we so easily forget what happened in the past... Earlier this week, we were treated to the publication of an article by Facebook data scientists, demonstrating the social network firm's ability to manipulate user moods by designing algorithms that control what appears in a user's news feed.  As Simon Wardley points out, it is difficult to imagine that the over 600,000 users selected for this experiment would have understood they were giving Facebook the right to manipulate their emotional states when they clicked their consent to the service's terms of service. Setting aside the interesting outcome of the study -- which probably goes a long way to explain how US television networks have contributed to the polarization of politics currently debilitating the nation -- the potential for easy and casual abuse of this power is staggering, and the risk is high. What Facebook has done demonstrates a disrespect for its users, arguably analogous to the college-boy humour of its origins, which published pictures of women with invitations to match their faces to those of animals. By so disrespecting its users, it has also handed the plaintiffs in the Irish Schrems v Data Protection Commissioner referral to the European Court of Justice a perfect example of the abuses of personal information that underscore their argument.

Is there a light at the end of this tunnel? Perhaps. The Respect Network https://www.respectnetwork.com/ -- launched in mid-June -- provides another flash, envisioning a new infrastructure for privacy, and beginning to provide some of the technology needed to enable new practices for the handling of personal information in ways that respect the dignity and agency of the individual user.  In essence, the Respect Network enables individuals to purchase -- yes, purchase... it doesn't pretend to be free, like the plethora of services that cost nothing but extract a high price in other, sometimes opaque and shifting, ways -- a unique address for personal data services, and a promise that you can move your data between personal data providers as desired.  This is an important first step toward enabling individuals to control and manage their personal data. Important elements of a comprehensive solution to individual privacy remain to be instantiated -- Adrian Seccombe has shared some thoughts on this -- and ultimately even require an overhaul of the way in which our laws articulate the rules necessary for individuals to enjoy the right to privacy, and what it entails, in a digital age.  Nevertheless, I encourage my friends and network to place their $25 vote with the crazies who believe that we can use technology to usher in a new age of privacy, and respect for individuals.  There are enough of us in this network to create an insurgency around privacy, educate our peers about why this matters, and establish a new order.

Links in this piece:
http://curia.europa.eu/jcms/upload/docs/application/pdf/2014-05/cp140070en.pdf
http://www.bbc.com/news/business-28130581
http://www.washingtonpost.com/blogs/monkey-cage/wp/2014/06/20/the-case-that-might-cripple-facebook/
http://www.supremecourt.gov/opinions/13pdf/13-132_8l9c.pdf
http://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/
http://www.washingtonpost.com/business/technology/brokers-use-billions-of-data-points-to-profile-americans/2014/05/27/b4207b96-e5b2-11e3-a86b-362fd5443d19_story.html
http://www.pnas.org/content/111/24/8788.full.pdf
http://www.europe-v-facebook.org/hcj.pdf




Sunday, June 1, 2014

1973, meet 2014. Don't Wait for Godot.

In an odd but timely juxtaposition of the worlds of 1973 and 2014, I found particular poignance in a statement I re-discovered a couple of weeks ago during a quick review of the HEW Report (more formally, the 1973 report of the Secretary's Advisory Committee on Automated Personal Data Systems, which foreshadowed the U.S. Privacy Act of 1974) when reading this week’s May 27 Washington Post article, “Brokers use ‘billions’ of data points to profile Americans.”

Here’s what the HEW authors wrote in 1973:

A permanent (Unique Identifier) issued at birth could create an incentive for institutions to pool or link their records, thereby making it possible to bring a lifetime of information to bear on any decision about a given individual. American culture is rich in the belief that an individual can pull up stakes and make a fresh start, but a universally identified man might become a prisoner of his recorded past.

The irony is that over the last several years, the ways in which we use email addresses and mobile phone numbers have combined to enable what the Washington Post writer described as follows:

Are you a financially strapped working mother who smokes? A Jewish retiree with a fondness for Caribbean cruises? Or a Spanish-speaking professional with allergies, a dog and a collection of Elvis memorabilia?

All this information and much, much more is being quietly collected, analyzed and distributed by the nation’s burgeoning data-broker industry, which uses billions of individual data points to produce detailed portraits of virtually every American consumer, the Federal Trade Commission reported Tuesday.

The FTC report provided an unusually detailed account of the system of commercial surveillance that draws on government records, shopping habits and social-media postings to help marketers hone their advertising pitches. Officials said the intimacy of these profiles would unnerve some consumers who have little ability to track what’s being collected or how it’s used — or even to correct false information. The FTC called for legislation to bring transparency to the multibillion-dollar industry and give consumers some control over how their data is used.

Data brokers’ portraits feature traditional demographics such as age, race and income, as well as political leanings, religious affiliations, Social Security numbers, gun-ownership records, favored movie genres and gambling preferences (casino or state lottery?). Interest in health issues — such as diabetes, HIV infection and depression — can be tracked as well.

With potentially thousands of fields, data brokers segment consumers into dozens of categories such as “Bible Lifestyle,” “Affluent Baby Boomer” or “Biker/Hell’s Angels,” the report said. One category, called “Rural Everlasting,” describes older people with “low educational attainment and low net worths.” Another, “Urban Scramble,” includes concentrations of Latinos and African Americans with low incomes. One company had a field to track buyers of “Novelty Elvis” items.

There’s something deeply de-humanizing about commercial surveillance as it’s articulated in the Post article.  It’s very much a characterization that makes individuals — at least when it comes to marketing — “prisoners of their past,” albeit “prisoners” of a very narrow past, defined by a history of consumption.  One cannot help but wonder whether, in so doing, the data brokers are using “big data” to address a marketing “problem” that made sense in the twentieth century… how to target one’s advertising in a cookie-cutter, factory-like way… but makes (increasingly?) less sense today.  The Cluetrain Manifesto (1999) anticipated the still-emerging world of markets as conversations. Within that framework, the "push" mentality seems outdated.

How did we get here?

Where the private sector has succeeded, for its own purposes, the U.S. government has tied itself in knots for over 40 years trying to avoid the creation of a national ID system. As a result, almost every relationship we create is characterized by an (enrolment) initiation ritual in which we share a certain amount of personally identifying information for the purpose of helping the Relying Party determine whether we are who we say we are (identity assurance).  That process has scattered our PII into hundreds and perhaps thousands of databases… and has, by virtue of the lack of governance control over how that data is shared and used, essentially crushed the practical notion of privacy.  The scattering of PII means that it has become easy for others to collect, assemble, and use for purposes for which we as individuals do not, or would not, consent.  Whatever privacy "tools" we have, have been forged as compliance instruments, designed from an enterprise perspective.

The European Court’s decision regarding “the right to be forgotten” comes from a legitimate set of concerns about privacy.  Somewhere, we need to find the balance between maintaining the historical record -- and making it discoverable -- and providing individuals with a degree of control (or better, agency) over their reputation in cyberspace.  For instance, it would help to know when what you are about to do is "on the record," instead of constantly having to keep one's guard up.  It would help, when a college prankster posts a video of a classmate doing something stupid but inconsequential, for the data subject to be able to say "no" to that becoming part of their digital reputation. It's also problematic when something negative from 15 years ago dominates search results because... it's the only data point out there, or one of very few data points.  The Court calls that "excessive impact."

Unfortunately — along with an explosion of other issues that +Jeff Jarvis articulates — the horrible irony of the Court's decision is that it brings to mind Orwell’s 1984, and the job of erasing aspects of the past.  It also brings to mind that anachronistic sense — perhaps possible in America’s infancy, during the days of the Wild West, before the Technological Revolution — that “an individual can pull up stakes and make a fresh start,” as the HEW Report puts it.  That seems kind of like an implied "right" or "entitlement" to be forgotten. Instead of trying to erase history, perhaps we need to cultivate — or perhaps migrate from our traditional notion of community to a broader concept of community — a richer sense of tolerance for mistakes, for not quite getting it right, and a sensibility that we should not define others by their mistakes (just as we wouldn’t want to be defined by our own).

That’s a cultural challenge, not a legal one.  One wonders whether this culture is already emerging among digital natives, and within practices like agile development, which eschews the factory model of production of software (waterfall methodology) in favour of rapid iterations, in which it is perfectly acceptable to present products that are less than finished, less than perfect.  The approach is humbler, more effective, and more humanistic.  That iterative approach is also emerging in the practices of co-creation, where value is created through dialogue and experimentation.  Mistakes and mis-steps are very much part of that.  The acceptance of imperfection has always been part of one of the safest domains in the world — the airline industry — which makes a rigorous point of reporting errors, for the sake of improving practices. Airline regulators have long recognized that what matters is what a party does after making a mistake.  As problematic as the European Court's decision is, we could cut them some slack for interpreting the law as it stands today, and contribute by having the conversation about how to cope with the fact that everyone has a global printing press in their pocket.

What’s clear is that we’re in the middle of an invigorating conversation about how to live in a data-driven society. That conversation, when combined with our rapidly changing world of technology, will give rise to innovations that help individuals regain appropriate measures of control and governance of personal information.  This isn't just a philosophical issue, or some academic concept of privacy.  In an increasingly digital world, where decisions about how to treat individual requests for rights, benefits, entitlements, privileges will increasingly be mediated by "smart" software with access to reputational data, there's a compelling need for individuals to be able to govern their own digital reputation, to exercise cyber agency.  Liberty hangs in the balance.

+Adrian Seccombe has an excellent new post exploring the emerging consciousness of the importance of "cyber agency," or the "degree of control that an entity has over all the elements of Cyber Space that they interact with, be they: Data, Things or Services."  In it, he draws on the metaphor of the stages of cognition to explain where we are with respect to coping with cyber agency, suggesting that the majority of the population is currently in a "don't know they don't know" state.  He then poses the challenge: how do we get individuals to the next state without them freaking out?

Perhaps the answer to this question is right in front of us.  The power of the network is such that we don't need to wait for Godot.  By continuing the discussion, and by sharing our concerns about lost privacy and agency with our personal networks, we can cultivate and strengthen global concern and demand for better solutions than are offered today.  We can also counter the mood of resignation -- that nothing can be done, that a dystopian future is inevitable -- with a mood of optimism and ambition... because the technologies to shape a better future are already around us... it's a matter of discovering how to orchestrate them to address new purposes.

Wednesday, May 21, 2014

Outside-In, Co-Creation and Mapping: Notes on LEF Executive Forum (Washington, May 15, 2014)

Last Thursday, CSC’s Leading Edge Forum (LEF) hosted its annual Executive Forum at Washington, DC’s W Hotel.  The LEF’s current research focuses on how enterprises can best cope with the disruptive changes roiling the business environment.

Here are some thoughts and take-aways from presentations by Erik Brynjolfsson (co-author, The Second Machine Age), Venkat Ramaswamy (co-author, The Co-Creation Paradigm), Liam Maxwell (CTO, UK Government) and Simon Wardley (LEF Researcher, Learning from Web 2.0).  In essence:
  • business and IT worlds are transforming in ways that are driven by the digitization of just about everything, the exponential improvement and accessibility of computing power, and a tsunami of recombinant innovation.  The effect is deeply disruptive;
  • the consumerization and proliferation of technology has shifted the balance of power between enterprises and individuals in ways that are changing the way business is done, and driving the adoption of new practices in which individual stakeholders co-create value for each other. Increasingly, this has to take place within an Outside-In mindset, where enterprises not only recognize but effectively draw on the capacities that exist outside of their organizations;
  • governments like that of the United Kingdom are taking a “Digital By Default” approach that drives citizen adoption by delivering digital services that are “so good, people no longer want to use” non-digital pathways.  The effect of this version of co-creation, which focuses obsessively on the question “what does the user need,” has been to visibly demonstrate close to a billion dollars in IT efficiency savings within two years;
  • anyone seeking to take advantage of the possibilities opening up in the Second Machine Age and the Co-Creation Paradigm would benefit from the situational awareness afforded by the Mapping techniques developed by the LEF's Simon Wardley.  Simply copying your competitors puts your future unnecessarily at risk. Instead, it has become increasingly critical to understand value chains, and to map their components into the appropriate categories of maturity. Once the terrain is understood, it becomes clearer how to set objectives that make sense, and to design the strategies needed to achieve them. (A toy sketch of the idea follows this list.)
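
For readers unfamiliar with the technique, here is a toy sketch (in Python, with invented components; it is not one of Wardley’s actual maps) of the underlying idea: each element of a value chain is placed on two axes, how visible it is to the user and how evolved it is (genesis, custom-built, product, commodity), and objectives and strategy follow from seeing that terrain.

# Toy sketch of a value chain mapped by visibility and evolution, in the spirit
# of Wardley's technique.  Components and positions are invented for illustration.
EVOLUTION = ["genesis", "custom-built", "product", "commodity"]

value_chain = {
    # component: (visibility to the user: 0 = invisible, 1 = user-facing; evolution stage)
    "online visa application": (1.0, "product"),
    "identity assurance":      (0.7, "custom-built"),
    "risk analytics":          (0.4, "custom-built"),
    "compute infrastructure":  (0.1, "commodity"),
}

for component, (visibility, stage) in sorted(value_chain.items(), key=lambda kv: -kv[1][0]):
    print(f"{component:25s} visibility={visibility:.1f}  evolution={stage} "
          f"({EVOLUTION.index(stage) + 1}/{len(EVOLUTION)})")
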
What could this mean for government clients?
  • You live in a new world where other governments are coming to understand that the efficiency of their core services is a global, competitive differentiator.  More efficient government, easier-to-deal-with-government, and cheaper government all combine to reduce operational and economic burden on business.  Capital, innovation and high value jobs will flow to countries that understand the value of making it easy for business to do business.
  • It’s no secret that budget constraints are forcing government agencies to take a long, hard look at how they operate.  But government executives will have to overcome the tendency to simply find ways of automating what their agencies have always done, because those methods of operation come from an industrial era in which citizens are seen as the subjects of bureaucratic power, and employees are seen as human “cogs” in a corporate machine.  This traditional mentality is often colored by “us versus them”, “management versus employees,” “employees versus customers” frictions that are increasingly at odds with the characteristics of the modern, technologically-empowered individual who prefers to engage at a peer-to-peer level.  Next-generation government programs will adapt to the new mentality, and engage their clients or stakeholders as potential co-creators and collaborators in achieving a particular mission.  By leveraging the power of the human network — and the connected- and tech-enabled citizen — these programs will simultaneously find ways of improving their delivery of value, while reducing its cost.
  • Outside-In and Co-Creation thinking are relevant to the Homeland Security domain.  Take the visa, border and immigration domain as one example.  Treating every individual as a potential threat requires an inordinate infrastructure oriented towards “keeping threats out.”  Treating every individual as a potential co-creator of visa, border and immigration integrity creates a human network aimed at a positive, shared concern.  Inviting individuals to co-create the border experience creates options for the agency to learn more about its customers, and to establish a win-win relationship in which transactions become part of a shared history and build trust.  Enabling a conversation with those customers — encouraging feedback, suggestions, challenges — allows both customers and the border agency to learn from each other.  That’s not to say it’s all about getting along and singing Kumbaya.  There’s no secret in the fact that individuals with undesirable characteristics and histories try to cross the border, and need to be stopped.  But the numbers do speak for themselves: on a typical day (in 2013), U.S. Customs and Border Protection (CBP) processes almost a million (992,210) people at Ports of Entry.  Within these numbers… 137 (0.014%) are identified as national security concerns… and 366 are refused admission (0.04%).  These are small numbers, proverbial needles in the haystack (see the quick arithmetic check after this list).  The question for border security is whether it would be more or less efficient to target such individuals in a co-creative, digital approach that leverages voluntary exchange of information, contextually appropriate transaction histories, and analytics... and benefits 99.96% of participants by creating a friendly, seamless experience.  That question is equally appropriate for the broader homeland security enterprise, which extends across multiple Departments and agencies.
  • One way of starting on the path to Outside-In and Co-Creation is to follow the UK government’s example in addressing the inefficiency with which traditional government approaches the question of identifying its clients.  Rather than continuing the pattern of having every agency and every program solve the problem of identification for its own purposes, the UK Cabinet Office has catalyzed commercial-sector identity services that can be leveraged by any government program.  Having tried and failed with a National ID approach, the UK now asks individuals to establish their own identity with a third-party service, which can then be used to log into government services.  That's a move in the right direction.
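
A quick arithmetic check of the CBP figures cited in the border bullet above (a rough sketch in Python using the quoted 2013 “typical day” numbers; nothing here is an official calculation):

# Rough check of the 2013 "typical day" CBP figures quoted above.
daily_travelers = 992_210
national_security_concerns = 137
refused_admission = 366

print(f"national security concerns: {national_security_concerns / daily_travelers:.3%}")  # ~0.014%
print(f"refused admission:          {refused_admission / daily_travelers:.3%}")           # ~0.037%, roughly the 0.04% cited
print(f"admitted without issue:     {1 - refused_admission / daily_travelers:.2%}")       # ~99.96%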