
Paradox of the Black Box: Inverse Relationship Between AI Accuracy and Transparency

Nitant Narang

In 2019, Apple partnered with Goldman Sachs to make its widely anticipated foray into the payments ecosystem. Launched to much fanfare from iOS loyalists and early adopters, the Apple Card sought to “reinvent the credit card.” Technology reporters raved about its attendant privacy and security benefits and predicted that it would catalyze the industry’s transition to a digital-first and cardless future.

But a few days into the launch, the product’s reception in the market took a different turn after a series of tweets from a California-based entrepreneur. In the tweetstorm, he alleged that even though his wife’s credit score was higher than his own, the Apple Card cleared her for only one-twentieth of the credit limit that he was approved for. He further alleged that the “blackbox algorithm” being used to determine the creditworthiness of card applicants was biased against women. According to his tweets, the company’s reps insisted that there was no discrimination; at the same time, they could not explain the reason behind the gap between the couple’s credit limits.

The focus of attention soon turned to Goldman Sachs, the underwriter of the Apple Card, and in subsequent weeks the New York State Department of Financial Services (NYSDFS) opened an investigation. The ensuing report exonerated Goldman Sachs, finding that the algorithms and models used to make credit decisions did not consider “prohibited characteristics of applicants” such as gender or marital status. The news cycle moved on after the report concluded that the bank, in each of the cases concerned, had made its credit determinations based on lawful considerations such as income, credit utilization, and missed payments.

But what if neither regulators nor highly qualified AI engineers or data scientists could identify the reasons or trace the decision points that led to the AI model’s output? What if they looked inside the model and struggled to untangle the sprawling network of variables and complex web of non-linear relationships between them?     

The Invisible Hand of Deep Learning 

Today, some of the most sophisticated and accurate AI models, powering everything from self-driving cars and customer segmentation to early disease detection, are inscrutable. Take Deep Patient, an AI model trained on the health records of roughly 700,000 patients at New York’s Mount Sinai Hospital. When it was deployed on the records of new patients, Deep Patient was able to accurately predict their susceptibility to a wide range of diseases and health conditions, from liver cancer to psychiatric disorders like schizophrenia. And yet, notwithstanding its accuracy, doctors at the hospital couldn’t divine the reasons behind the model’s predictions; they could only chalk it up to the AI’s superior ability to find hidden patterns in patient data that they couldn’t see.

Notably, the model behind Deep Patient did not require human help or intervention. Like many of the best-performing AI models out there, it was built using deep learning, a paradigm in machine learning that emerged alongside the Big Data explosion of the last decade. What’s distinctive about deep learning is its resemblance to the human brain: deep learning models, also referred to as artificial neural networks, take inspiration from the brain’s biological neural circuits.
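To make that concrete, here is a minimal sketch of an artificial neural network in Python (using NumPy). The layer sizes, weights, and input data are invented for illustration and bear no relation to Deep Patient’s actual architecture:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def layer(x, weights, bias):
    # One layer of artificial "neurons": a weighted sum passed through
    # a non-linearity, loosely analogous to neurons firing.
    return np.tanh(x @ weights + bias)

# Invented sizes: 10 input features, two hidden layers, one risk score.
w1, b1 = rng.normal(size=(10, 32)), np.zeros(32)
w2, b2 = rng.normal(size=(32, 32)), np.zeros(32)
w3, b3 = rng.normal(size=(32, 1)), np.zeros(1)

record = rng.normal(size=(1, 10))      # stand-in for one patient record
hidden1 = layer(record, w1, b1)
hidden2 = layer(hidden1, w2, b2)
score = (hidden2 @ w3 + b3).item()     # the model's prediction

# Even this toy network holds 1,441 learned parameters; tracing how any
# single one of them shaped the final score is already non-trivial.
total = sum(w.size + b.size for w, b in [(w1, b1), (w2, b2), (w3, b3)])
print(f"parameters: {total}, risk score: {score:.3f}")
```

Production models stack many more layers and millions (or billions) of such parameters, which is precisely where the opacity comes from.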

But, like the enduring mysteries of the human brain, the internal dynamics of deep learning models cannot be fully explained. Their computational sophistication and ability to see the complex relationships between variables are unmatched; as an example, consider AlphaFold, the deep learning program from Google’s DeepMind, which in 2020 cracked the decades-old problem of predicting protein structures and shifted the paradigm for drug discovery and biology.

Transparency or Accuracy: What Do We Value More?

On the one hand, we have an AI model that renders an output based on a limited set of variables, a decision that can be readily deconstructed and analyzed in detail. We can, in other words, look inside the model and understand the reasoning behind its predictions.

On the other hand, we have a deep learning model whose inner workings cannot be readily examined or questioned, but which can discover invisible relationships between thousands of variables and return predictions with unmatched accuracy.
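The trade-off can be sketched in a few lines of Python (using scikit-learn; the synthetic dataset and model sizes are assumptions chosen purely for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Transparent model: one weight per variable, directly readable.
linear = LogisticRegression().fit(X, y)
print("logistic regression coefficients:", linear.coef_.round(2))
# Each coefficient says how strongly, and in which direction, a single
# feature pushes the prediction: the reasoning can be read off.

# Deep model: thousands of interacting weights, no per-variable story.
deep = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                     random_state=0).fit(X, y)
n_weights = (sum(w.size for w in deep.coefs_)
             + sum(b.size for b in deep.intercepts_))
print("MLP parameter count:", n_weights)
# The deep model may score higher, but its "explanation" is a web of
# non-linear interactions that no single weight summarizes.
```

The linear model answers “why?” for free; the deep model makes you choose between its accuracy and that answer.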

Which one would we prefer? Should we sacrifice accuracy at the altar of transparency or vice versa?

I think the answer depends on the risk factors and the ethical nuances of the specific situation in which AI is applied. For example, any AI model used to make decisions on credit or loan applications ought to be transparent and explainable.

Conversely, when it comes to cancer detection, a deep learning model that catches possible signs of a tumor would arguably be preferable to a more transparent model that misses them.

If we map this dichotomy to the justice system, an AI model used in forensic DNA profiling should arguably over-index on transparency and lend itself to examination; algorithmic transparency helps ensure that a false-positive DNA match does not lead to a wrongful conviction.

On the other hand, organizations involved in a lawsuit can use a deep learning model to identify privileged documents and trade secrets during discovery; doing so helps ensure that a sensitive document missed by human review does not jeopardize their legal position.


Nitant Narang is a member of the marketing team at Relativity.
