Why We Shouldn’t Fear AI in Healthcare

Vijay Pande Posted July 7, 2020

***

Alongside the excitement and hype about our growing reliance on artificial intelligence, there’s intense fear about the way the technology works. A 2017 MIT Technology Review article titled “The Dark Secret at the Heart of AI” warned, “No one really knows how the most advanced algorithms do what they do. That could be a problem.” Thanks to this uncertainty and lack of accountability, a report by the AI Now Institute at NYU recommended that public agencies responsible for criminal justice, health care, welfare, and education shouldn’t use such technology.

Given these types of concerns, the unseeable space between where data goes in and answers come out is often referred to as a “black box” — seemingly a reference to the hardy (and in fact orange, not black) data recorders mandated on aircraft and often examined after accidents. In the context of AI, the term more broadly suggests an image of being in the “dark” about how the technology works: We provide the data, models, and architectures, and then computers give us answers while continuing to learn on their own, in a way that’s seemingly impossible, and certainly too complicated, for us to understand.

There’s particular concern about this in health care, where AI is used to classify which skin lesions are cancerous, to identify very early stage cancer from blood, to predict heart disease, to determine what compounds in people and animals could extend healthy life spans, and more. But these fears about the implications of the black box are misplaced. AI is no less transparent than the way in which doctors have always worked — and in many cases it represents an improvement, augmenting what hospitals can do for patients and for the entire health care system. After all, the black box in AI isn’t a new problem created by new technology: Human intelligence itself is — and always has been — a black box.

Let’s take the example of a human doctor making a diagnosis. Afterward, a patient might ask that doctor how she made that diagnosis, and she would probably share some of the data she used to draw her conclusion. But could she really explain how and why she made that decision, what specific data from what studies she drew on, what observations from her training or mentors influenced her, what tacit knowledge she gleaned from her own and her colleagues’ shared experiences, and how all of this combined into that precise insight? Sure, she’d probably give us a few indicators about what pointed her in a certain direction — but there would also be an element of guessing, of following hunches. And even if there weren’t, we still wouldn’t know that there weren’t other factors involved, of which she wasn’t even consciously aware.

If the same diagnosis were made with AI, it could draw on all available information about that particular patient, as well as data anonymously aggregated across time from countless other relevant patients everywhere, to make the strongest evidence-based decision possible. It would be a diagnosis with a direct connection to the data, rather than human intuition based on limited data and derivative summaries of anecdotal experiences with a relatively small number of local patients.

But we make decisions in areas that we don’t fully understand every day — often very successfully — from weather forecasts and the predicted economic impacts of policies to how we conduct much of science in the first place. We either oversimplify things or accept that they’re too complex for us to break down linearly, let alone explain fully. It’s just like the black box of AI: Human intelligence can reason and make arguments for a given conclusion, but it can’t explain the complex, underlying basis for how we arrived at that conclusion. Think of what happens when a couple gets divorced because of one stated cause — “infidelity” — when in reality there’s an entire unseen universe of intertwined causes, forces, and events that contributed to that outcome. Why did they choose to split up when another couple in a similar situation didn’t? Even those inside the marriage can’t fully explain it. It’s a black box.

The irony is that compared to human intelligence, AI is actually the more transparent of intelligences! Unlike the human mind, AI can — and should — be interrogated and interpreted. From the ability to audit and refine models and expose knowledge gaps in deep neural nets to the debugging tools that will inevitably be built and the potential ability to augment human intelligence via brain-computer interfaces, there are many technologies that could help interpret artificial intelligence in ways we simply can’t apply to the human brain. In the process, we may even learn more about how human intelligence itself works.
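As a toy illustration of what “interrogating” a model can mean, here is a minimal permutation-style importance probe. Everything in it is hypothetical — the model, its weights, the feature names, and the synthetic patient records are stand-ins for a real trained diagnostic system — but it shows the basic idea: we can directly measure how much a model relies on each input, something we cannot do with a human diagnostician.

```python
# Sketch only: a toy "diagnostic model" plus a permutation-style importance
# probe. Model, weights, features, and data are all hypothetical stand-ins.

def model_predict(record):
    """Toy model: flags disease risk from age and a biomarker; ignores 'noise'."""
    age, biomarker, noise = record
    return 1 if (0.02 * age + 0.8 * biomarker) > 1.0 else 0

def accuracy(records, labels):
    return sum(model_predict(r) == y for r, y in zip(records, labels)) / len(labels)

def feature_importance(records, labels, idx):
    """Accuracy drop when one feature's values are rotated across patients
    (a deterministic stand-in for the usual random shuffle)."""
    column = [r[idx] for r in records]
    rotated = column[-1:] + column[:-1]  # each patient gets another patient's value
    perturbed = [list(r) for r in records]
    for row, value in zip(perturbed, rotated):
        row[idx] = value
    return accuracy(records, labels) - accuracy(perturbed, labels)

# Synthetic patient records: (age, biomarker level, irrelevant noise feature).
records = [(40 + i % 40, 1.5 * (i % 2), i % 7) for i in range(100)]
labels = [model_predict(r) for r in records]

for name, idx in [("age", 0), ("biomarker", 1), ("noise", 2)]:
    print(name, round(feature_importance(records, labels, idx), 2))
```

Scrambling the biomarker collapses the model’s accuracy, scrambling the noise feature changes nothing, and age sits in between — the model’s reliance on each input is laid bare in a way no doctor’s intuition could be.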

Perhaps the real source of critics’ concerns isn’t that we can’t “see” AI’s reasoning — it’s that as AI gets more powerful, the human mind becomes the limiting factor. It’s that, in the future, we’ll basically need AI to understand AI. In health care as well as in other fields, this means we will soon see the creation of a new category of human professionals who don’t have to make the moment-to-moment decisions themselves, but instead manage a team of AI workers — just like commercial airplane pilots who engage autopilots to land in poor weather conditions. Doctors will no longer “drive” the primary diagnosis; instead, they’ll ensure that the diagnosis is relevant and meaningful for a patient, and oversee when and how to offer more clarification and more narrative explanations. The doctor’s office of the future will very likely include computer assistants, on both the doctor’s side and the patient’s side, as well as data inputs that come from far beyond the office walls.

When this happens, it will become clear that the so-called “black box” of AI is more of a feature than a bug — because it’s more possible to capture and explain what’s going on there than it is in the human mind. None of this dismisses or ignores the need for AI oversight. It’s just that instead of worrying about the black box, we should focus on the opportunity, and thus better prepare for a future, in which AI not only augments human intelligence and intuition, but perhaps even sheds light on and redefines what it means to be human in the first place.

This op-ed originally appeared in The New York Times.

