We’ve Been Diagnosing Disease the Same Way for 100 Years. Here’s Why That Might Finally Be Changing


Here’s a quick thought experiment. Imagine walking outside your office right now. It’s just a regular Wednesday, nothing special. You’d see some cars, that coffee shop with the sour espresso, pedestrians strolling by. You know, something like this:

[Image: a busy city street]

Is there anything in that scene you can’t immediately identify? Is there a single object you couldn’t name with barely a second’s thought? Without knowing who’s reading this blog post, the answer is simple: you know everything in that photo. It’s all familiar. Every single bit of it.

For a machine, however, this is tough. Nigh on impossible, in fact. Pedestrians occluded by street signs, light posts blending into stop lights, flags half out of frame, signage crawling in unfamiliar directions: any of that could confuse a computer vision algorithm, even bleeding-edge models with millions of dollars and thousands of hours invested in them. If that weren't the case, we'd all be trundling to work in brand-new, fool-proof self-driving cars.

The reason this is tough for machines is simple: streets are visually noisy. They're cluttered. Every scene is different, fluid, complex. Machines need examples to learn, and it's difficult to provide a machine with every example it needs to make smart predictions about a busy space.
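To make that concrete, here's what asking a machine to "see" a street actually looks like. The sketch below is purely illustrative and isn't tied to any system mentioned in this post: it runs an off-the-shelf pretrained object detector (torchvision's Faster R-CNN) over a hypothetical photo, street.jpg, and prints only the objects the model is reasonably confident about. Occluded pedestrians and half-visible signs are exactly the things that tend to fall below that confidence bar.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

# Load an off-the-shelf detector pretrained on COCO's everyday categories.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("street.jpg")  # hypothetical photo; uint8 tensor, C x H x W
with torch.no_grad():
    prediction = model([preprocess(img)])[0]

# Occluded pedestrians and half-visible signs tend to show up here as
# low-confidence detections, or not at all.
categories = weights.meta["categories"]
for label, score in zip(prediction["labels"], prediction["scores"]):
    if score > 0.5:
        print(f"{categories[label]}: {score:.2f}")
```

Everything the model misses or scores poorly is an example it hasn't seen enough of, which is the crux of the problem above.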

Now, let’s look at a different scene, one you’re probably a bit less familiar with. How much can you identify in this image?

[Image: microscopic germs/disease]

Unless you helped your kid brush up for her recent AP Bio exam, chances are your verdict is something like: “here are some cells. And then, here are some other cells. And behold: various other cells.”

Now, interestingly, this is a visual problem machines actually excel at. Instead of a busy street full of half-visible objects and stray dross, a microscopy image is ordered and relatively predictable–nobody’s worried about a stray pedestrian showing up in a biopsy image. Simply put: when there’s more order and fewer possibilities, machines learn and see better.
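For the curious, here's a rough sketch of how that kind of learning is typically set up for microscopy. This is not the study's model, just the generic transfer-learning recipe: take a network pretrained on everyday photos and fine-tune it on labeled image patches. The folder names (patches/normal, patches/tumor) are made-up placeholders.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models

# Start from a pretrained network, then retrain its final layer to
# separate two (hypothetical) classes of tissue patches.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g., normal vs. tumor

# "patches/" is a made-up directory with one subfolder per class,
# e.g. patches/normal and patches/tumor.
dataset = datasets.ImageFolder("patches/", transform=weights.transforms())
loader = DataLoader(dataset, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single pass, for illustration
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Because biopsy patches are so much more uniform than street scenes, even a simple recipe like this goes a surprisingly long way.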


This is all shorthand for why computer vision performs far better in some domains than in others. In those domains, the question becomes: can machines actually see as well as humans?

The answer might surprise you. Because not only can machines judge microscopy images nearly as well as humans, sometimes they actually see better. And, most importantly, if you combine a system trained on quality data with expert pathologists, what you get is something that’s staggeringly accurate. To the tune of 99.4%.

That’s what Dr. Humayun Irshad uncovered with his colleagues at Harvard Medical School and MIT.

His team’s most recent research starts with a simple observation: we’ve been diagnosing diseases the same way for the past century. Basically, a trained pathologist hovers over a microscope and makes a diagnosis. And while our understanding of diseases has obviously blossomed in the past hundred years, fundamentally, the medical community is doing the same thing it’s always done. Namely, relying on experts.

And there’s nothing wrong with that. After all, it takes a ton of training to pick out the sometimes minute discrepancies that distinguish cancerous cells from non-cancerous ones. But the process can be tedious. It can involve hours of sitting and millions of cells viewed under a microscope. It’s the exact sort of tedious–but incredibly important–work at which machines typically excel.

In fact, Dr. Irshad and his colleagues found that their deep learning model predicted a correct diagnosis about 92% of the time. That’s impressive in its own right, though expert pathologists still did better, achieving about a 96% accuracy rate on their own.

So the deep learning method here was really solid to begin with. But as we’ve seen time and time again, when you combine human intelligence with machine intelligence, you end up with something that’s better than either alone. In fact, let’s hear it from one of the study’s authors:

“The truly exciting thing was when we combined the pathologist’s analysis with our automated computational diagnostic method, the result improved to 99.4 percent accuracy. Combining these two methods yielded a major reduction in errors.”
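The paper's exact fusion method isn't spelled out in this post, but one simple way to picture "combining the two" is a weighted average of the model's predicted probability and the pathologist's call, thresholded into a final diagnosis. The function and numbers below are hypothetical illustrations, not the study's actual rule:

```python
def combined_diagnosis(model_prob: float, pathologist_positive: bool,
                       weight: float = 0.5, threshold: float = 0.5) -> bool:
    """Return True if the combined evidence indicates disease."""
    # Treat the pathologist's call as a 0/1 probability and blend it
    # with the model's score.
    human_prob = 1.0 if pathologist_positive else 0.0
    fused = weight * model_prob + (1.0 - weight) * human_prob
    return fused >= threshold

# A borderline model score that a confident pathologist tips over:
print(combined_diagnosis(model_prob=0.42, pathologist_positive=True))   # True
print(combined_diagnosis(model_prob=0.42, pathologist_positive=False))  # False
```

The intuition is that the two judges fail in different ways, so blending them cancels out errors neither would catch alone.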

99.4% is a staggeringly accurate diagnosis rate. Put differently, pairing pathologists with the model cut the error rate from 4% (100 − 96) to 0.6% (100 − 99.4), a reduction of roughly 85%. And again, it’s superior to either machines or experts by themselves. And though Dr. Irshad’s study is notable for its accuracy, it’s not an outlier. Industry publications believe radiologists who embrace AI will replace those who do not. Stephen Chan, an associate clinical professor of radiology at Columbia University Medical Center’s Harlem Hospital, recently talked about how human-AI teams performed better than machines or experts alone. He even borrowed the “centaur” analogy from Garry Kasparov, a big believer in AI-human partnerships who’s written about these collaborations for years and will be speaking at Train AI this year (speaking of, tickets are available, and you should totally come).

But the list goes on. Because medical imagery is more uniform than the noisy, busy world of streets, personal photographs, or, really, most other computer vision applications, progress here is coming faster than almost anywhere else. Which is, of course, a great thing. Because as lovely as self-driving cars will be, they’re a nice-to-have. Better disease diagnosis is much more than that. The combination of human and machine intelligence has the chance to change how we diagnose disease forever.

Schedule a demo with Figure Eight Federal to take your AI from Pilot to Production and increase the accuracy of cellular research.


Fully customizable AI solutions will help your organization work faster and with more accuracy.