Q&A


Abridge GC Talks AI's Future In Healthcare Industry

By Yeji Jesse Lee · October 8, 2024

Tim Hwang
As healthcare organizations continue to incorporate artificial intelligence into their systems, interfacing with AI at your doctor's office is eventually going to feel as natural and expected as turning on a tap to get running water in your home, or so says Tim Hwang, general counsel at generative AI healthcare company Abridge.

Hwang told Law360 that in a few years' time, he thinks we'll begin to see AI as just another part of our healthcare experience, a technology so integrated into the infrastructure that it would seem strange not to use it.

Founded in 2018, Abridge is one of the biggest names in the healthcare AI space today. In February, it closed a $150 million funding round to accelerate research and development. The financing included a strategic investment from AI giant Nvidia Corp. The company uses generative AI to turn conversations between physicians and patients into clinical notes, a task that Abridge says is typically long and time-consuming for medical professionals.

The company's backers include Union Square Ventures, Lightspeed Venture Partners, Kaiser Permanente Ventures and CVS Health Ventures.

Over the past few years, Abridge has signed deals with dozens of health systems, including Yale New Haven Health System and UChicago Medicine, effectively rolling out its technology to more than 10,000 medical professionals in the U.S.

Hwang, 37, a software engineer turned attorney, joined Abridge in August as the company's legal head. Hwang previously worked at Inflection AI, Substack and Google, where he was the global public policy lead on AI.

AI has steadily become a major point of focus for the healthcare industry, which has slowly begun to consider new ways to invest in and use the technology.

Law360 caught up with Hwang this week to chat about the future of AI in healthcare and the legal issues that he grapples with as general counsel of Abridge.

The interview has been edited for length and clarity.

What are the biggest legal questions around AI that you deal with on a day-to-day basis?

AI has gone through many waves of hype over the decades, and not surprisingly, we're in the middle of one right now. I think in every era there are these amazing capabilities that become available, and there's a lot of excitement about them. Then reality will set in, and the reality is that it's really hard to implement these technologies well and have them be beneficial for the people who use them.

And I think one of the reasons I'm so excited about Abridge and about the current wave of technology is that we're just now entering that "hard work phase" with the technology. And one of the hardest places for this to work is healthcare, because the data is sensitive, the decisions are very high stakes, and the people you're working with are professionals who have very strong ideas about how their practice should work.

And so I think the thing that keeps me up at night is: Can we craft a product that is a major value-add to the people who use the technology? And can we understand the actual day-to-day practice of medicine well enough to make a technical intervention that is going to be helpful?

I think there are lots of ways of just naively deploying technology, and those often go wrong. The challenge for any company, Abridge and others alike, is really making sure that we understand and work with the user enough to build tools that they can really rely on day to day.

We're beginning to see a patchwork of regulations come up at the state level. Is that something that has been difficult to navigate as a larger organization working across states?

We are seeing a lot more state legislation. In general, we're seeing a lot more regulatory interest at every level, federal and state. And as a GC, I think that's an important part of my work here: keeping track of everything that's going on, but also playing a role in helping to inform some of the discourse.

I think a big part of our work is not just making sure that the company's aware of new regulations that are coming out, but also having it be a dialogue with policymakers — to basically say, let's walk you through the technology, let's demo the technology for you. Because once there's a much stronger sense of what it is that's actually happening in the tech, I'm convinced that the regulations will be a lot better.

I hear a lot of talk about the need for balance when it comes to AI regulations, to ensure that they're not so strict that they hinder innovation but also not so loose that protections fall away. What would that balance look like for you?

I think there are two things.

We are learning a lot about how these systems work on the ground. And it's not just us; all the other ambient companies are putting these systems into hospital systems, and they're getting a lot of feedback from doctors: this is working, this is not working, I wish you had this, I wish you didn't have that. So there's almost a whole body of knowledge that's emerging as we roll these systems out.

I think the first thing is just to make sure that there's a good enough conversation between policymakers, companies and hospital systems to be able to share these findings, because it is very hard to create good regulation if you're just speculating, and suddenly we have a lot of information about how things are playing out on the ground. So one thing that would be really important to me is to make sure that that sort of discussion is actively happening.

The second one is that regulation has the ability not just to shape how the technology evolves, but also to shape how doctors are going to use the technology.

I think the fact that Abridge is run by a guy who literally still sees patients every week means that our fundamental vision of the technology keeps doctors at the center of it. All this technology is basically just tools to help them move faster and better. We really take a provider-centered view of this technology.

So I would hope that any regulation that emerges affirms that. I think keeping human judgment at the center of this still gets the safest and best outcomes.

On the flip side, is there anything you hope to not see in regulations as they come online?

I think there are rules you might imagine that would be categorical in nature. And a lot of those often come from the idea of AI as a thing that doesn't change.

So I think one of the things I don't want to see are laws that enshrine a certain way that AI works today. We desperately need flexibility, because the state of the practice is going to be changing so much.

The danger is always that you're going to write some law that just kind of freezes the technology in place. And I don't think that does anyone any good.

Abridge has collaborated and partnered with a lot of organizations over the past few years. What are some of the legal questions or issues that have come up across these various partnerships?

One thing I've been very struck by is the degree to which different hospital systems, for instance, have radically different views of how they measure success around technology.

We will literally contract over what the success metrics are. How you articulate that, how you craft that in the context of an agreement, is a big part of what we do.

And it's just incredible. We'll do a pilot, and it'll be like: What are the success metrics for whether this pilot has succeeded or not? It ends up being a very active discussion, both on the business side and on the legal side.

What that suggests to me is: How do we create good fora for both technology producers and hospital systems to get a better sense of the shared norms and best practices around measuring success in the space?

I think we're interested in it from the point of view of developing the industry. It also points to just how much of this has been developed in silos, and the need for people to be talking with one another.

What are some of the biggest legal questions you think startups entering AI in healthcare now should be thinking about?

I think particularly for startups that are coming into this space, one of the things we think is important for everybody to be thinking about is how they de-identify data.

I think that particularly in the ambient space, the data you're handling is very valuable, very important and very private data, and I think a lot of startups that come from more of a launch-and-iterate, "we just hack it out" approach are not quite ready for how important a responsibility that is.

And so I think one best practice, and a bit of advice, is that even small startups should think very carefully about how they manage their data and make sure they're doing so in a responsible way.

How do you expect AI and healthcare to change over, let's say, the next five years?

This may be a funny response, but in five years I don't think we're going to be talking about AI in healthcare at all, though not for the reason you might expect.

There's an old adage in AI: once you've solved it, it isn't AI anymore. And I really do believe that.

No one really thinks about Google search as an AI product, but Google search is the biggest AI deployment in the world. When you use search, you are literally using neural networks, but we don't really think about it because it is just part of how we live now.

I think the same thing really might be the case in the healthcare space as well. It'll just be part of your experience, and we won't really talk about it as AI anymore; it's just part of healthcare. And so I do think that's where it's going. My strongest aspiration for it is that it becomes like infrastructure.

In the future, I hope people are just like, oh yeah, of course. Of course we use these tools, because to not do so would be unthinkable. And I think that is where we're headed.

--Editing by Haylee Pearl.

For a reprint of this article, please contact reprints@law360.com.