This week, US Surgeon General Dr. Vivek Murthy has called for warning labels for social media platforms. If you know anything about Internet Safety Labs (ISL) you’ll know that our primary mission is the development of free and accurate safety labels for technology. So naturally, we heartily agree with Dr. Murthy that technology needs labels—but perhaps not warning labels and definitely not just for social media platforms.
Various experts have written thoughtful responses1 to this week’s call for warning labels, and their concerns underscore the fact that a warning label may be inappropriate.
But of course we need safety labels on technology.
History of Product Labels
There are several types of product labels: ingredient labels (food), test result labels (automobile crash test ratings), warning labels from the surgeon general (cigarettes) and from other entities (OSHA’s Hazard Communication System for chemicals).
Safety labels have a long-standing history in the US as a core component of product safety and product liability. The purpose of safety labels is to illuminate innate—i.e. unavoidable—risks in a product whether it is food, vehicles, cleaning solvents, toys for children, or the technology that we use with increasing reliance for all facets of living our lives.
Safety labels almost always lag commercial product introduction, and in at least a few cases, product safety can lag by decades. For instance, for cars, product safety awareness and measures (like seatbelts) emerged 50-plus years after their mass availability. Consumer computing has been around for about 40 years now, and it will likely be another 10 years before we see product safety in full swing for software-driven technologies.
According to InComplianceMag.com, US and Canadian tort-based law makes manufacturers’ product safety obligations clear (emphasis is mine):
"Manufacturers have an obligation to provide safe products and to warn people about any hazards related to the product. Those requirements have risen up out of the original product safety/liability cases, some of which happened in the same timeframe as the Chicago World's Fair, the middle to late 19th century, with many more to follow.
The assumption in U.S. liability law, and also typically if a case is brought in Canada, is that the manufacturer of the product is guilty and has to prove that they did everything necessary to provide a safe product. That includes warnings, user instructions, and other elements. Today, that continues to be the basic concept in product liability, that the burden lies on the manufacturer to prove that they did everything possible to make their product safe."
Source: https://incompliancemag.com/product-safety-and-liability-a-historical-overview/
If tech were food, we would never have stood for the absence of product information for as long as we have. Never. We use tech with little to no visibility into or awareness of what it's actually doing. That simply must change.
We need a science of product safety for software and software-driven technology. And that's exactly what we've been building for five years at ISL. The current attitude of placing the onus on consumers to somehow gird themselves against invisible risks that not even vendors fully understand is absurd. Of course we need labels.
And the good news is we've got them started on over 1,300 EdTech-related apps. Here's an example: https://appmicroscope.org/app/1614/. The image below shows just the label header and the safety facts summary.
Labels for Technology
What type of label is appropriate for technology? A warning label is appropriate when the science is irrefutable. Are we there with the physical and mental health risks due to the use of technology? Maybe. Depends on who you ask. But maybe a label more like chemical warning labels is appropriate. Or perhaps just a test results label.
In our work at Internet Safety Labs, our intention since day one was to expose invisible or difficult to recognize facts about risky behaviors of technology. As can be seen from the design of our app safety labels, we chose to emulate food nutrition labels that report measured findings. This approach of reporting measured findings works very well for this early stage of the science of product safety for technology.
For instance, in our safety labels, you can see the category averages for most of the measures in the label. Why did we do that? Because there is no concrete threshold that distinguishes safe from unsafe ranges. There’s no industry standard that says, “more than ten SDKs is bad” for example. Moreover, technology norms vary by industry, such that personal information collection in fintech and medical apps is quite different than personal information collection in retail (at least one hopes). Thus, the category averages displayed in our labels don’t necessarily mean “safe”, they just provide context as we continue to measure and quantify technology behavior. An example of the shortcomings of this approach is when, for instance, the category average number of data brokers is greater than zero for apps typically used by children. (We advocate for no data brokers in technology used by children.) But we need to start with understanding the norms. We can’t change what we can’t see.
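To make the reporting approach concrete, here is a minimal sketch in Python of how a single label entry can pair an app's measured value with its category average for context, without rendering a safe/unsafe verdict. The metric names and numbers here are hypothetical, not real label data.

```python
# Hypothetical sketch: a label entry pairs an app's measured value
# with the category average for context -- not a safe/unsafe verdict.
from dataclasses import dataclass


@dataclass
class LabelMetric:
    name: str
    measured: float          # value observed for this app
    category_average: float  # average across apps in the same category

    def context(self) -> str:
        """Describe the measurement relative to its category norm."""
        if self.measured > self.category_average:
            relation = "above"
        elif self.measured < self.category_average:
            relation = "below"
        else:
            relation = "at"
        return (f"{self.name}: {self.measured:g} "
                f"({relation} the category average of {self.category_average:g})")


# Example with hypothetical numbers.
sdk_count = LabelMetric("Third-party SDKs", measured=12, category_average=9.5)
print(sdk_count.context())
# -> "Third-party SDKs: 12 (above the category average of 9.5)"
```

Note that the comparison only says where an app sits relative to its peers; as described above, the average itself carries no claim of safety, which is exactly why a data-broker average above zero for children's apps is still a problem.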
The Devil is in the Details
The call for a congressional mandate for something (not necessarily a warning label) is a step in the right direction. Why? Because it treats software as a product and tacitly places product safety requirements on it. This is an advancement in our eyes.
Moreover, product safety is almost always the domain of government (or insurance). In the absence of a government mandate for product safety for technology, we see fragmented efforts with the FTC boldly championing privacy risks in technology, and the FCC advocating for a different type of label. So indeed, it’s encouraging that we’re starting to talk about technology in product safety terms.
But the devil is in the details of any labeling program. In the words of Shoshana Zuboff, “who decides and who decides who decides?” As in, who decides what goes on the labels? Also who oversees the integrity of the labels? The US government is a customer of data obtained by surveillance capitalism2. When it comes to technology can the government be trusted to keep people safe? (When it comes to food can the government be trusted to keep people safe? When you dig into it, the track record is spotty.)
Product safety exists in natural opposition to the industry status quo, and any kind of regulation is already facing, and will continue to face, strong opposition3. In the early 1900s, when chemist Dr. Harvey W. Wiley began a crusade for the labeling of ingredients and the identification of toxic elements in food, industries that relied on the opacity of ingredients (snake-oil salesmen) or that simply didn't want to incur the cost of change (whiskey distillers) opposed such a mandate.
“Strenuous opposition to Wiley’s campaign for a federal food and drug law came from whiskey distillers and the patent medicine firms, who were then the largest advertisers in the country. Many of these men thought they would be put out of business by federal regulation. In any case, it was argued, the federal government had no business policing what people ate, drank, or used for medicine. On the other side were strong agricultural organizations, many food packers, state food and drug officials, and the health professions. But the tide was turned, according to historians and Dr. Wiley himself, when the activist club women of the country rallied to the pure food cause.”
Product safety challenges the status quo and creates necessary growing pains for industry. But industry always survives. And more often than not, new industries emerge, such as the ongoing development of safety features for vehicles.
Let's return to the challenge of deciding what goes in the labels. We at ISL know quite a lot about what it takes to develop safety labels in a space where the naming and measurement of risk isn't fully baked (or worse, non-existent). Determining what goes into a previously uncharted, unmeasured safety label is extraordinarily challenging. It's even more challenging if the measurement tools don't exist. But our situation is even worse than that: we don't even have agreement on what the risky behaviors in technology are. And note that we are talking about behaviors here, which is not language we typically associate with products. Products don't typically behave. From our several years of development work, we know that reaching consensus on labels for technology is a highly iterative process.
As far as presentation of the data, in our case, we decided to aggregate the data into clusters based on riskiness, and we also ultimately decided to provide a single app score. This was done with some reluctance, and it will no doubt be a much-evolving scoring rubric for the next few years.
For now, we believe the best thing the labels can do is objectively report the invisible (or poorly understood) behaviors of the products until such time as definitive harm thresholds can be derived.
There's a final vital detail regarding the establishment of any labels, and that's having what I would characterize as exceptional diversity of participants in establishing safety standards. This isn't lip service. A few years ago, I started to see that what was risky for me was very different from what was risky for people who are different from me, such as a person of color, a person with a disability, or an incarcerated person. I woke up one night from a deep sleep with the awareness that any attempt at standardization or consensus is doomed if it doesn't involve full diversity6. Why this is so is a long and complicated matter. On the one hand, everything ever done should endeavor to include an exceptionally diverse set of participants.
But it also has to do with the fact that software and software-driven tech is "alive" and interactive in a way that other products in our lives aren't. We have a special duty when it comes to product safety for software-animated products. We may even need to reconsider what a "product" is. We have seen evidence of the hazards of animated technology built without adequate understanding of the diversity of users: the embodiment of human bias in automated decision-making, or hand dryers that don't activate for people of color. The point is that technology acts on and with us in a different (and constantly changing) way than other products. So labeling is both harder and matters more than ever.
Conclusion
Overall, I remain optimistic that the lens is happily starting to focus on product safety, implicit though it may be. People will be thinking more about labels for technology. And they will see that ISL is already providing labels covering privacy risks. We can call out the presence of infinite scroll, like buttons, and other widely recognized addictive user-interface patterns in labels today.
As I mentioned above, confusion stems from Dr. Murthy's call for a "warning label" instead of a safety or ingredients label. Technology is cigarettes7. We use the metaphor all the time. Technology today is cigarettes in the 1940s and 1950s, when just about everybody chain-smoked and the harms were largely anecdotal and pooh-poohed. It took decades to assemble causal evidence. But tech is also much more complicated than cigarettes, and a warning label is premature. None of this, however, is an argument that we don't deserve accurate information on tech's risky behaviors. As it is right now, we don't even have an ingredient label for technology. We are flying (tech-ing?) blind.
Of course we need labels. Industry would do well to proactively embrace label enablers like software bills of material, consent receipts, and machine-readable record of processing activities (ROPAs). Because there can be no doubt that labels are imminent.
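As an illustration of the kind of machine-readable enabler meant here, a software bill of materials is, at its core, a structured inventory of a product's components. The sketch below builds a minimal CycloneDX-style SBOM document in Python; the component names and versions are hypothetical, and a real SBOM would be generated by build tooling rather than written by hand.

```python
import json

# Minimal CycloneDX-style SBOM sketch (hypothetical components).
# A real SBOM would be produced automatically by build tooling and
# would carry much richer metadata (hashes, licenses, suppliers, etc.).
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "example-analytics-sdk", "version": "4.2.0"},
        {"type": "library", "name": "example-ads-sdk", "version": "1.9.3"},
    ],
}

# Serialize for exchange; a label generator could consume this directly.
print(json.dumps(sbom, indent=2))
```

Even a bare inventory like this would let a label report, for example, how many third-party SDKs ship inside an app, which is exactly the kind of measured finding our labels surface today.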
Earlier, I said that we’ve “started”. I say that because our labels only include privacy risks at present. Our labels are deliberately modular and we’ve scoped additional sections:
- Risky UI Patterns – like the deliberately addictive UI patterns Dr. Murthy is calling to be exposed. Our Safe Software Specification for Websites and Mobile Apps already describes measurement of these kinds of risks.
- Automated Decision-Making Risks
- Security [client side only] Risks
- Differences between observed tech behavior and the privacy policy and/or terms of service.
All of these are on our roadmap. We know exactly how to add these sections to the label; it's strictly a resource and funding issue. If they sound good to you, please consider supporting our mission.
Because of course we need labels.
Footnotes:
1. https://www.wsj.com/us-news/u-s-surgeon-general-calls-for-warning-labels-on-social-media-platforms-473db8a8?st=gmnjmhotka7febm&reflink=desktopwebshare_permalink; https://technosapiens.substack.com/p/should-social-media-have-a-warning
2. https://arstechnica.com/tech-policy/2024/01/nsa-finally-admits-to-spying-on-americans-by-purchasing-sensitive-data/; https://www.nbcnews.com/tech/security/us-government-buys-data-americans-little-oversight-report-finds-rcna89035; https://www.vice.com/en/article/jgqm5x/us-military-location-data-xmode-locate-x
3. https://www.politico.com/news/2023/08/16/tech-lobbyists-state-privacy-laws-00111363
4. We have ongoing work with our Digital Harms Dictionary.
5. They will be wrong, and you will have to find a different measure.
6. We welcome everyone, whether you are technical or not, to participate in our open Software Safety Standards Panel, where we define the content of the safety labels and name hazards and harms.
7. Tech may actually be worse than cigarettes because it has the capability of inflicting every kind of harm people can experience, either directly or indirectly, in a multitude of increasingly creative ways: financial, reputational, social, emotional/psychological, and even physical.