
Machine Learning Can Save Your Life – Here's How

Kristy Esparza

Netflix is perhaps the most commonly cited example of “machine learning in the real world.” But has the subscription service ever saved a life? I haven’t put my M.D. from the Grey’s Anatomy school of medicine to good use yet, so I personally can’t say that it has.

But machine learning in general? Oh yeah. It saves lives.

Being in e-discovery, you probably know machine learning for its impacts on review speed and cost. However, there are—dare we say—more exciting and groundbreaking ways it’s being used outside our industry. Here are the top three.

1. Improving the accuracy of breast cancer screening.

The American Cancer Society estimates that one in 38 women will die from breast cancer. And though death rates dropped 39 percent from 1989 to 2015, thanks in part to early detection, screening is far from perfect.

When a “high-risk” lesion is detected on a mammogram, doctors often advise patients to undergo surgery or painful biopsies to determine whether the lesion is benign or malignant. Ninety percent of the time, the lesion turns out to be benign. But when a doctor tells you it could be cancer, would you want to take the chance?

Researchers at MIT are tapping into machine learning to help women and their doctors make more informed decisions and avoid unnecessary, painful procedures. The model uses scans from more than 600 high-risk lesions and data points such as demographics, family history, past biopsies, and pathology reports to find patterns among the scans that resulted in cancer. So far, the model has led to fewer unnecessary surgeries, as well as more accurate diagnoses of cancerous lesions than traditional methods.
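
To make the idea a little more concrete, here is a minimal sketch of the general approach: train a classifier on tabular patient features and use its predicted risk to flag which lesions likely need surgery. Everything below (the feature names, the data, and the choice of a random forest) is a hypothetical stand-in, not MIT's actual model or data.

```python
# Minimal sketch of a lesion classifier on tabular patient features.
# The features, data, and model choice are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600  # roughly the number of high-risk lesions cited above

# Hypothetical features: age, family history flag, prior biopsy count,
# and a numeric score extracted from the pathology report.
X = np.column_stack([
    rng.normal(55, 10, n),        # age
    rng.integers(0, 2, n),        # family history (0/1)
    rng.poisson(1.0, n),          # prior biopsies
    rng.normal(0.0, 1.0, n),      # pathology report score
])
y = rng.integers(0, 2, n)         # 1 = lesion upgraded to cancer at surgery

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Rank patients by predicted risk; low-risk patients might avoid surgery.
risk = model.predict_proba(X_test)[:, 1]
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

In practice, the value comes from carefully calibrating that risk score so that only patients below a very conservative threshold are steered away from surgery.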

MIT isn’t the only research body applying machine learning to breast cancer; Google is also working to improve screening accuracy and has so far seen 89 percent accuracy from its model—significantly higher than the 73 percent average for a human pathologist.

2. Pinpointing exactly when an earthquake will hit.

Unlike hurricanes and tornadoes, scientists cannot predict when an earthquake will come rumbling into town—which leaves zero time for residents to evacuate. A team of UK- and US-based researchers is using machine learning to change that.

They’ve recreated earthquakes in a lab setting—what they call “labquakes”—using steel blocks to mimic the physical forces at work in a fault, along with the seismic signals and sounds it emits during real earthquakes. Their machine learning model then determines the relationship between the sounds coming from the fault and how close the fault is to failing. When it fails, we have an earthquake.

By analyzing the signals, the machine learning model can predict the amount of time remaining before the fault fails, as well as how severe the earthquake may be.
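
Here is a minimal sketch of that idea: summarize short windows of the acoustic signal with simple statistics, then regress those features against the time remaining until failure. The synthetic signal and the random forest regressor below are illustrative assumptions, not the researchers' actual pipeline.

```python
# Minimal sketch of the "labquake" idea: predict time remaining before a
# fault fails from statistics of the acoustic signal in a short window.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_windows = 2000
time_to_failure = rng.uniform(0, 10, n_windows)  # seconds until the block slips

# Hypothetical acoustic windows whose variance grows as failure approaches.
signals = [rng.normal(0, 1 + 5 / (t + 0.5), 1024) for t in time_to_failure]

# Summarize each window with simple statistics (variance, fourth-moment proxy, peak).
def features(w):
    return [w.var(), np.mean(np.abs(w) ** 4), np.abs(w).max()]

X = np.array([features(w) for w in signals])

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, time_to_failure)
print("Predicted seconds to failure:", model.predict(X[:3]).round(2))
```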

Of course, there are differences between the “labquakes” and real earthquakes, but the researchers are already scaling their efforts and applying their model to real systems that most closely resemble their small, repeating labquakes—like the San Andreas fault line. The hope is to one day predict earthquakes like we can predict other natural disasters. If all goes well, they may even be able to use the model on avalanches and landslides, too.

3. Keeping the planet green to save us all.

Sustainability is a cornerstone of Google’s culture, so it’s not surprising they use machine learning for more than their consumer products. Most recently, they’ve turned their minds toward making their data centers more efficient.

The tech giant reports that data centers account for roughly two percent of the world’s electricity use. Though Google has optimized their own facilities to run as efficiently as possible—50 percent more efficient than the industry average—dozens of variables, such as different combinations of cooling towers, water pumps, control systems, and air quality, stalled further refinement … at least for the human mind.

But, after 18 months of work, they’ve built a machine learning model that detects patterns and delivers recommendations for achieving maximum energy conservation. The model has been piloted in multiple Google data centers and has so far produced a 40 percent reduction in energy used for cooling and a 15 percent reduction in overall energy overhead.
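
Conceptually, it is a two-step loop: learn how the plant’s settings relate to energy use, then search those settings for a combination the model predicts will use less. Here is a minimal sketch of that loop; the variables, data, and gradient-boosted model are hypothetical, not Google’s actual system.

```python
# Minimal sketch: model cooling energy as a function of plant settings,
# then search the controllable settings for a lower-energy combination.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
n = 5000

# Hypothetical operating variables: chiller setpoint, pump speed,
# and outside air temperature.
X = np.column_stack([
    rng.uniform(18, 28, n),    # chiller setpoint (degrees C)
    rng.uniform(0.3, 1.0, n),  # pump speed (fraction of max)
    rng.uniform(5, 35, n),     # outside air temperature (degrees C)
])
# Hypothetical cooling energy driven by those variables, plus noise.
y = 100 - 2 * X[:, 0] + 30 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 2, n)

model = GradientBoostingRegressor().fit(X, y)

# "Recommendation": hold the weather fixed and pick the controllable
# settings the model predicts will use the least energy.
candidates = np.column_stack([
    rng.uniform(18, 28, 500),
    rng.uniform(0.3, 1.0, 500),
    np.full(500, 20.0),  # today's outside temperature
])
best = candidates[np.argmin(model.predict(candidates))]
print("Suggested setpoint and pump speed:", best[:2].round(2))
```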

And Google isn’t keeping the secret to their success under wraps. Rather, they plan to make their algorithm open source to help other data centers—as well as power plants and factories—lower their energy usage.

Kristy Esparza is a member of the marketing team at Relativity, specializing in content creation and copywriting.
