Blog

Product Safety as a Human Right: Thoughts on Elizabeth Renieris’ “Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse”

Written by Lisa LeVasseur
March 23, 2023

Familiar with Elizabeth Renieris’ keen mind and exceptional writing skills, I was excited to read her recently published book, “Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse”. Here are my key takeaways. 

  • Much of the first part of the book is an overview of the global legal landscape. For those of us in the privacy space, this will be old hat, but she does a masterful job of distilling just the right inflection points in the history of privacy-related regulation.
  • She identifies with clarity a few fatal flaws in the current state of technology governance, namely:
      • The interpretation of privacy, in both intent and focus, as data “confidentiality and control” mechanisms, i.e. trying to corral the horse after it has left the barn.
      • The current movement toward so-called privacy-enhancing technologies, which again focus on protecting consumers’ data but not consumers themselves.
      • The looming, thornier privacy issues with neurotechnologies and affect recognition.
      • The false dichotomy of sensitive vs. non-sensitive data.
  • She also has incisive remarks insisting that human rights must be the original and preemptive consideration in any technology behavior [mainly with respect to datafication]:
      • “imposing limits on things that cannot or should not be turned into data at all.”  

Renieris is correct—the fixation on data alone takes us—and regulation—to strange places that ultimately fail to have the desired effect. I’m left with two observations: 

First, while I completely agree that human rights must be the foundation and source guiding all action, human or machine, a key facet of Renieris’ discussion, as I note above, is about putting limits on datafication. It’s not really “beyond data” after all; it’s more like “before [we] data[fy]”. She acknowledges that we can’t totally divorce from data governance as a necessary piece of the puzzle. And Renieris’ point on preemptive human rights considerations is crucial. We need a lot more stopping before we start [building, standardizing] and asking, “just because we can, should we?”

Secondly, I’ve been contemplating how human rights relate to law and regulation for several years now, and it strikes me that laws are often (always?) embodiments of human rights. Human rights are unquestionably the north star, but they seem to need to be interpreted into acceptable social contracts via law to be actionable, measurable, and enforceable. Maybe not. I’m certainly not any kind of legal scholar. But to a large degree, when I read the call for human rights first, I’m inclined to think: isn’t that how it [the law] works? Isn’t that how it’s always worked? (Or should work?)

The law is failing in the tech realm for myriad reasons, but a crucial one is that our laws are myopically focused on the behavior of people, not the behavior of technology. Not to be too Asimovian here, but we are long overdue to establish and govern the acceptable behavior of technology, and not just the acceptable behavior of the entities producing technology. This is the heart of software product safety: a discipline that has completely escaped the collective attention of software creators for 40 years, i.e. for as long as we’ve had consumer software.

Safety is a human right—safety from both the natural and the man-made.  

Thus, product safety is a human right. Countries such as India have mandated this in principle and practice through consumer protection law. The US, too, has such a law, but it hasn’t been updated, and the US Consumer Product Safety Commission seems to be ceding responsibility for software product safety to the Federal Trade Commission and the Consumer Financial Protection Bureau.

The point is, we go to great lengths to ensure that the physical products we use are reasonably safe, but we have utterly dropped the ball on software and software-driven technology. The privacy harms, the manipulation/coercion harms—all of the current and looming harms that Renieris points out are just that: product safety risks and harms. These connected, software-driven products that we unceasingly use are simply not reasonably safe for people.

Rallying around human rights alone will not make products safer for humans and humanity. We must agree on what is reasonably safe, and we must measure and enforce it. Let me put an even finer point on it: we must agree on, measure, and enforce the behavior of safe/unsafe technology. Industry has rarely, if ever, been a first mover on product safety; it’s a demand-side force that’s been applied externally, often through insurance providers and government. The recent White House National Cybersecurity Strategy, the EU’s Cyber Resilience Act, the FTC’s enforcement of AI labeling/claims, and the pending California case citing mental health harms from social media are all setting the stage for increasing software product liability, which can only mean software product safety isn’t far behind.

Thank goodness.