While we have seen unparalleled innovation in the 21st century, we have also witnessed the damaging unintended consequences of unchecked technologies.
Mark Zuckerberg, for example, didn’t start Facebook intending for third-party abuse and political interference to run rampant on the platform. Yet, fueled by the mantra of “move fast and break things,” a platform intended to “give people the power to share and make the world more open and connected” ended up having devastating unintended consequences, such as its role in the recent storming of the Capitol.
The unintended consequences of technology are not a 21st-century revelation. In the 1930s, Robert Merton proposed a framework for understanding the different types of unintended consequences — unforeseen benefits, perverse results, and unexpected drawbacks. Indeed, throughout history, we have seen how significant advances, such as the Industrial Revolution or high-fructose corn syrup, can have lasting harmful effects on society, like air pollution and diabetes. However, the consequences of today’s technologies are more nefarious because the rate at which they compound has increased exponentially. The rapid scaling catalyzed by Moore’s and Metcalfe’s laws has both benefited the technology industry and undermined it by exacerbating its unintended consequences.
Are these unintended consequences inevitable, a necessary cost of human progress on other fronts? Or, can we anticipate and mitigate them?
The very word “unintended” suggests consequences we simply can’t envision, hard as we may try. Our natural limits in predicting the future imply there may not be much we can practically do in advance. It seems we have to settle for a utilitarian trade-off, hoping that any novel benefits of technology, both anticipated and unintended, are greater than its costs. We may be dismayed by Google’s unintended consequences, like search bias, but do we really want to relinquish the ability to access the world’s information at our fingertips? Although a utilitarian calculus has pragmatic appeal, it begets its own unintended consequence — distancing entrepreneurs and their investors from taking responsibility. Why should they be held accountable for a harmful consequence they did not intend to create, especially when their business also generated a lot of social good?
Despite its difficulty, we believe entrepreneurs and investors must step up and own the unintended consequences of their businesses. As Hemant has previously written, a founder’s mindset is integral to changing how companies think about intended and unintended consequences. Without a founder’s upfront willingness to confront these challenging questions — and to surround themselves with diverse thinkers who buffer their blind spots — it is unlikely that an organization will see the ways its products may affect society or have the wherewithal to build appropriate checks and balances.
Leveraging Algorithmic Canaries
In the past, preventing the unintended adverse effects of innovation was challenging. Without computers to assist them, businesses could rely only on human foresight to predict what would happen and build appropriate guardrails. Or, they had to assign teams to closely monitor how their technology’s consequences evolved as it proliferated. In most cases, neither that premonition nor that job function sufficed. Course corrections came too late because problems surfaced only when they became headline news. Moreover, once a technology became deeply entrenched, the businesses operating it had calcified economic interests that were challenging to unwind.
While today’s technologies are more complex and their harms potentially harder to mitigate, we finally have a tool that allows us to identify issues that risk spiraling out of control — artificial intelligence.
Deep learning can help identify patterns that humans may not readily discern, giving us newfound predictive ability. Unleashing algorithmic canaries into our technologies is the first step we must take to anticipate and mitigate unintended consequences.
We’ve seen, for example, the development of AI models, such as the Allen Institute for AI’s Grover, that search for “fake news” and block misinformation before it reaches a mass audience. The Brookings Institution recently profiled several other examples of AI models that can generate and spot fake news. Their studies concluded that Grover had 92% accuracy in distinguishing human-written from machine-written news.
We suggest that similar algorithmic canaries can be developed to mitigate a broad range of unintended consequences. The issue is that, currently, we create these AI algorithms retrospectively. Going forward, we believe that founders must incorporate these systems at the earliest stages of the product development process. Taking a systems design approach to responsibility and clearly articulating it as an OKR (Objective and Key Result) allows engineering teams to embed canaries deeply into their technologies and track them as KPIs (Key Performance Indicators). In this way, companies can begin to measure what really matters beyond their own business success — the potential unintended consequences of their technologies and their leaders’ responsibility to mitigate them.
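To make this concrete, here is a minimal sketch of what embedding a canary into a product pipeline and tracking its output as a KPI might look like. The scoring function, threshold, and metric names are hypothetical stand-ins; a real canary would use a trained classifier, such as a Grover-style misinformation detector.

```python
from dataclasses import dataclass


@dataclass
class CanaryMetrics:
    """KPI counters reviewed alongside ordinary business metrics."""
    items_seen: int = 0
    items_flagged: int = 0

    @property
    def flag_rate(self) -> float:
        return self.items_flagged / self.items_seen if self.items_seen else 0.0


def misinformation_score(text: str) -> float:
    # Hypothetical stand-in: a real system would return a trained
    # classifier's probability that the content is misinformation.
    return 0.9 if "miracle cure" in text.lower() else 0.1


FLAG_THRESHOLD = 0.8  # hypothetical; would be tuned on labeled examples


def publish(text: str, metrics: CanaryMetrics) -> bool:
    """Gate publication on the canary and record the KPI."""
    metrics.items_seen += 1
    if misinformation_score(text) >= FLAG_THRESHOLD:
        metrics.items_flagged += 1
        return False  # hold the item for human review
    return True


metrics = CanaryMetrics()
for post in ["Local team wins championship", "Miracle cure doctors hate!"]:
    print(post, "->", "published" if publish(post, metrics) else "held")
print(f"Canary flag rate (KPI): {metrics.flag_rate:.0%}")
```

Treating the flag rate as a first-class KPI is the point of the design: the canary’s output lands on the same dashboard as growth metrics, rather than surfacing only in a postmortem.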
Articulating Types of Unintended Consequences
While the example of fake news as an unintended consequence of media platforms seems obvious today, the challenge founders face when developing algorithmic canaries is deciding what to train them to catch. We need these algorithms to dynamically anticipate unintended consequences that might emerge from actions undertaken by firms themselves, such as the threats to consumer privacy that arise when a business model relies on monetizing user data through advertising. They also need to identify consequences of events that occur outside any given firm’s control but can be mitigated if anticipated, such as segments of an entire generation losing access to education during a pandemic. Ultimately, while the types of unintended consequences will vary company by company, we must begin to develop a typology to guide our thinking collectively.
The ESG framework many impact investors now advocate is a useful starting point, as it encourages us to think of unintended consequences that span the environmental, social, and governance spectrum. However, given the detail needed to develop algorithmic canaries, this typology will require more specificity to be actionable. Examples of the specific consequences we should look out for include:
- Propagation of misinformation
- Concentration of information and market power
- Breaches of privacy and personal information
- Increasing inequality in the workforce
- Reduced access to essential goods and services
- Alienation or social isolation
- Damage to the environment
This list is by no means comprehensive, but it outlines the types of unintended consequences we should watch out for.
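As a sketch of how such a typology might be made operational, each category could map to its own canary check, with results rolled up into a single report the team reviews. The category names below mirror the list above; the checks and telemetry fields are hypothetical placeholders for trained models or periodic audits.

```python
from enum import Enum
from typing import Callable, Dict


class Consequence(Enum):
    MISINFORMATION = "propagation of misinformation"
    POWER_CONCENTRATION = "concentration of information and market power"
    PRIVACY_BREACH = "breaches of privacy and personal information"
    WORKFORCE_INEQUALITY = "increasing inequality in the workforce"
    REDUCED_ACCESS = "reduced access to essential goods and services"
    SOCIAL_ISOLATION = "alienation or social isolation"
    ENVIRONMENTAL_DAMAGE = "damage to the environment"


# Each canary maps a category to a check over product telemetry.
# These lambdas are placeholders for trained models or audits.
canaries: Dict[Consequence, Callable[[dict], bool]] = {
    Consequence.MISINFORMATION: lambda t: t.get("misinfo_flag_rate", 0.0) > 0.05,
    Consequence.PRIVACY_BREACH: lambda t: t.get("pii_exposures", 0) > 0,
}


def run_canaries(telemetry: dict) -> None:
    """Print an alert for every category whose canary trips."""
    for category, check in canaries.items():
        status = "ALERT" if check(telemetry) else "ok"
        print(f"{category.value}: {status}")


run_canaries({"misinfo_flag_rate": 0.08, "pii_exposures": 0})
```

A shared vocabulary like this is what lets the typology travel: the same category names can appear in pitch materials, diligence checklists, and board reports, even as each company swaps in its own checks.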
Managing Unintended Consequences
A framework for categorizing unintended consequences is only useful if it is backed by disciplined practice. Algorithms can do a lot of the work humans can’t; however, it is up to an organization’s leaders to push this effort beyond an intellectual exercise. We offer some suggestions below on how founders, investors, and regulators can systematically work together to reduce unintended consequences in practice:
- Elevate consideration of unintended consequences from the outset. Entrepreneurs and investors should insist on an in-depth analysis of unintended consequences when founding companies. Founders should include these considerations in their pitch materials, and investors should dig deep into them during diligence. Anticipating unintended consequences should assume as much significance as any other business metric when entrepreneurs and investors contemplate a partnership.
- Orient corporate governance around mitigating unintended consequences. The mainstay of corporate governance is the board of directors, which helps a business make important decisions and fulfill its fiduciary responsibilities. Increasingly, many firms also have independent advisory boards to help guide specific questions around technology development. In a similar vein, companies should consider creating sub-committees of their existing boards, or perhaps even independent bodies, as Facebook is now exploring, to govern how well they are managing unintended consequences. Doing so ensures unintended consequences are given as much importance as other factors that good governance demands.
- Partner with regulators to create accountability. To manage unintended consequences, we must be open to sensible regulation that protects our collective interests. We would benefit from innovators getting together to propose frameworks for self-regulation, though regulatory agencies may also play a useful role. The FDA is a good example of an agency that carefully considers the unintended medical harms a new drug or device may cause before approving it for distribution. One can imagine other agencies playing a similar role in the launch of new technologies, though we would want the process to be less cumbersome and time-consuming.
Currently, the ethos that guides those at the intersection of technology, policy, and capital is to build businesses that can leverage new technologies, scale them as rapidly as possible, and keep regulation at bay. We have celebrated disruptive companies, but we have not held them to account for the unintended disruption they can cause. The result has been the formation of companies that have become ubiquitous in our lives but have also unleashed a wide range of harmful unintended consequences. We advocate a new ethos of innovation, one in which unintended consequences are rigorously considered at the outset and monitored over time so they can be meaningfully mitigated. We believe we can accomplish this if technology innovators build software algorithms that serve as canaries for emerging harms, capital providers insist on assessing and governing unintended consequences, and policymakers evaluate unintended consequences to assure compliance. It’s a very different ethos, but it is essential to embrace if we want to avoid living in a dystopian world.
This article originally appeared on Harvard Business Review.