What if your identity ecosystem caused pollution? 

A new paper describes harms arising from digital identity, and accountable responses for 2023. 

Video of Nicky Hickman and Phil Wolff introducing the paper at the Trust Over IP All Hands meeting on 25 January 2023. 20 minutes of presentation and 20 minutes of Q&A.

Trust Over IP today announced its “Overcoming Human Harm Challenges in Digital Identity Ecosystems” paper for review. Wider Team’s Phil Wolff and Come To The Edge’s Nicky Hickman wrote it with a large team of contributors, including Pyrou Chung of the East-West Management Institute.

Six case studies track ways people have been hurt when digital identity ecosystems were abused, poorly built, or attacked.

Many of these harms are preventable by standards bodies, developers, and makers of identity systems. Preventable by customers, the enterprises who configure, deploy, and use identity ecosystems. Preventable by public policy makers and the regulators who enforce their policies.

Despite human harms being preventable, few are doing anything about them. 

So what? 

  • The coming backlash may devastate open standards efforts. The identity community risks fallout as public cases of outrageous harm pile up. We all rely on vigorous open standards bodies.
  • Backlash hurts identity businesses. When biometrics and AI hit their trust and ethics crises a few years ago, they faced regulatory interventions, stalled sales, and investor skepticism.
  • Blamestorms hurt identity ecosystem cohesion. Media outrage blames the closest business or agency. A hospital gets blamed for a vendor’s patient data breach, for example. So shaming and blaming of any ecosystem member will tarnish the rest, and drive abandonment and defection from that ecosystem.

However…

  • Accountability mitigates backlash. The more we build accountability into ecosystem governance and technology, the smaller the backlash, hopefully. Accountability By Design could be our watchword.
  • Identity can use proven accountability tools and practices. Other technologies and industries dealt with their negative externalities. Drug tampering led to safety caps. Toxic food led to traceability through food supply chains. Racist AI led to ethical AI standards for training ML systems. Environmental pollution led to… well, we’re still working on that. So let’s adapt effective ideas and practices. 
  • Next? What can identity technologists, industry, regulators, and standards groups do now? The paper lists low-hanging fruit and suggests first steps. 

Preventable side effects offend me. 

My head hurts over these six identity harms case studies, and the many we didn’t include in the paper. I can’t count the number of aspirational manifestos and value statements by folks who work on self-sovereign identity. We frame identity as a human right, a civil right, a duty of good government, a way to empower the excluded, the poor, and the dispossessed. We champion our digital identity technologies and governance in the name of serving humanity, and business progress, and enabling society, and healing our planet. 

Despite our pure motives and hard work… 

These new, richer identity tools can be abused just as much as old ones. It is human nature that every identification technology will be abused. My cynical take on human nature is grounded in history.

Today’s widely used digital identity tools suffer from cultural blindness and dissonance. ToIP’s Asia Pacific Human Experience working group shared stories where Western models of personhood, encoded in nearly all digital identity systems, exclude and distort how identity works for millions of people. The paper says more about this. 

We looked at identity systems that enabled predators to hound a child to suicide. That facilitated gambling addiction. That left a refugee in limbo. All without a responsible person to blame. Without even a single company or policy or technical artifact to blame. 

These harms don’t arrive with accountability. 

Digital identity systems are fundamentally “ecosystems.” 

They are made from many parties, in a complex, diverse, fluid, multijurisdictional web. 

Like cities, identity ecosystems are built up over time, with legacies of cultural assumptions and prejudices, of old choices and narrow contexts. 

And those of us who have worked with trust frameworks, who bring identity ecosystems into being or refine them, have not held ourselves morally at fault when they do harm. No more than those who make cars or airplanes or firearms are responsible when those goods result in injury and death. 

You won’t trace those harms back to an open standard committee meeting or to an identity ecosystem rollout any more than you’d trace a spent bullet back to the factory operator that machined it. 

When chains of causality are hard to map, or are untraceable, the root causes of these harms are unaccountable. 

Inciting action, building in accountability, will be difficult. 

  • The identity technology community is in denial about downstream risks. Few in the identity professions and industries believe these human problems exist. 
  • Untested business cases. Nobody believes this matters enough or matters to them personally or matters urgently.  
  • Anecdotes, not statistics. These threats are mostly uncounted and unmeasured; a weakness in quantified policy and management circles. 
  • No ownership. Nobody owns these problems outright. Neither the toolmakers nor any of an ecosystem’s parties own the side effects of a system’s broken trust. 
  • Scopes of responsibility are too narrow. Our definitions of “ecosystem” leave out many kinds of people affected by the ecosystem, and harms inflicted on them. Although governance practices include responsibility to other members of an ecosystem, they rarely name negative externalities or speak to accountability for them. 
  • Ethics are unevenly distributed. The abilities to discover harms swiftly, make sense of them, and respond well are unequal within an identity ecosystem. They will continue to be. Those who are most ready and able to respond are unlikely to be the only parties generating those harms. And some ecosystem parties will trigger harms emitted by others. “Ethics are unevenly distributed” seems like a universal law that begs for corollaries.
  • IAM industry consolidation reduces individual companies’ fear of side effects. Thoma Bravo buying ForgeRock, Ping, and SailPoint cuts the odds these issues make it onto any of their executive OKRs. 

This is blue ocean territory: I haven’t found anyone in our space investing to manage these business risks. No companies, government agencies, or NGOs are actively working on human harm reduction, response, or regulation. After my year on the Trust Over IP Human Experience Working Group’s Harms Task Force, I hope this paper helps folks understand the range of risks digital identity brings. And that we act now, in concert. 

https://trustoverip.org/wp-content/uploads/Overcoming-Human-Harm-Challenges-in-Digital-Identity-Ecosystems-V1.0-2022-11-16.pdf

Wider Team are experts in decentralised identity, helping clients assess risks, identify opportunities, and map a path to digital trust. For more information please connect on LinkedIn or drop us a line at hello@wider.team.
