Digital health, artificial intelligence (AI), machine learning and more — these concepts continue to generate buzz in the medtech world.
Last month, the FDA published final guidance on clinical decision support (CDS) software, helping to clarify what constitutes a medical device and what doesn’t. Early last year, the agency introduced the predetermined change control plan (PCCP) concept to help build a regulatory structure for such technology.
These topics and more spurred intriguing commentary on a panel at AdvaMed’s MedTech Conference in Boston today.
The panel featured viewpoints across all angles of the space. Dr. Yuri Maricich, CMO and head of development at Pear Therapeutics, offered thoughts from the developer of digital therapeutics. Brendan O’Leary, acting director of the Digital Health Center of Excellence at the FDA, provided the regulatory vantage point.
Cybil Roehrenbeck, a partner at Hogan Lovells, offered thoughts from the reimbursement side. Cassie Scherer, senior director of digital health policy and regulatory strategy at Medtronic, provided an industry perspective. Diane Johnson, senior director, strategic regulatory, MD&D at Johnson & Johnson, moderated the panel.
How the medical device industry is approaching AI
Scherer explained that the PCCP allows a manufacturer to go to the FDA with a premarket submission for its AI-based offering. The manufacturer can provide the FDA with its submission, its product and the specific changes it intends to make down the line, all at the same time.
According to Scherer, this process allows the FDA to put “guardrails” in place to ensure safety. It protects the agency while allowing industry to put out a product, with next-generation versions following quickly to benefit patients.
Medtronic went through the PCCP process, which Scherer said was difficult. It’s not for everyone, she said, but it “provides a great opportunity.” She called it a “forward-thinking” effort from the FDA.
Despite progress on the regulatory side from her view, Scherer still sees “unique issues” with AI. That includes ensuring that AI remains representative of intended populations and geographies.
Another potential issue she sees is transparency. Doctors aren’t necessarily trained in reading labeling for AI-based products and digital therapeutics. The labeling framework remains robust, too, she said, and she isn’t sure whether labeling can be standardized, given the desire to reflect different users’ different needs.
“I think that some uniqueness around AI is really building the patient trust,” said Scherer. “How do we make sure that we give the information in a way that builds that trust and that also is meaningful and actionable?”
Gaining a patient-centric view
Maricich said the FDA’s progressive decisions may expand even further, allowing for more patient-centered opportunities.
The COVID-19 pandemic hammered home that point, he said. It showed patients’ ability to do everything digitally, because “that’s how they do just about everything else in their life” right now.
“The challenge for us as the healthcare delivery sector is for manufacturers, for regulators, for payers to catch up,” Maricich said. “Do what patients are already expecting and integrate this very much into their part of treatment. Part of the engagement areas with patients is making sure that we are thinking about patient populations very broadly.”
A trend Maricich has noticed centers on the need for evidence. Different types of organizations and individuals are entering the digital health space, which he thinks is a good thing, so there’s more diversity in the people trying to solve problems. However, challenges persist because people who haven’t been in healthcare suddenly have to solve problems they don’t understand.
Maricich said it’s not enough to say something in healthcare — you must show something, too.
He said developers can’t expect others to simply take their assertions on faith. Real-world evidence from connected devices can provide important insights. Maricich believes that, even with some difficulties recruiting underserved and minority populations into clinical trials, these different types of evidence are “most important to make sure we are able to answer the questions most appropriately.”
“I’m excited to see one of the key trends in combining these different types of datasets to answer those appropriate questions,” Maricich said. “But, I’m also excited to answer them much faster.”
Regulating medtech AI: ‘I think international harmonization is key.’
Another intriguing part of the ongoing digital health adoption is that it extends around the world. Johnson posed the question to O’Leary: What’s going on globally related to digital health?
O’Leary said he’s seen comments on the clinical decision support final guidance. That feedback provides an opportunity to improve the agency’s approach to AI and digital health and make it a little bit easier.
The FDA has engaged in multilateral efforts through guiding principles on good machine learning practice, developed jointly with Health Canada and the UK’s Medicines and Healthcare products Regulatory Agency (MHRA). He called it an important step forward and something that offers momentum.
“I think international harmonization is key,” O’Leary said. “You can’t have disjointed approaches to these technologies across jurisdictions.”
Implementation challenges remain from a regulatory perspective, O’Leary said. That never became clearer than when COVID-19 arrived.
O’Leary spent five months directly supporting the FDA’s COVID-19 response in 2020. He said many software developers came to the agency claiming they had the solution to the testing shortage: software that could diagnose SARS-CoV-2. Better yet, the developers believed their product wouldn’t need regulation because of the CDS draft guidance.
“That was a really scary thing to see, frankly,” O’Leary said. “Because software for diagnosing SARS-CoV-2 is a diagnostic device that FDA regulates.”
The agency wants to make sure it takes a careful approach, O’Leary said. It must be one that doesn’t lead to public health outcomes that the FDA doesn’t want to see. That continues to apply to AI and digital health outside of COVID-19, too.
‘Not all AI is great.’
According to Roehrenbeck, reimbursement for AI services comes much closer to what medical service reimbursement would be. She said that represents a real consideration for businesses pursuing certain codes. Meanwhile, the HHS Office for Civil Rights has put out a proposed rule to expand on non-discrimination provisions in the Affordable Care Act.
The rule would include a new section on clinical algorithms. Roehrenbeck says questions remain over where the liability rests for AI.
“It potentially puts the onus on providers, folks who are going to adopt these tools and use them for their patients, to ensure that there is no discriminatory effect based on the use of a clinical algorithm,” said Roehrenbeck. “There are not a lot of guardrails around how this is being proposed.”
As HHS continues its development, Roehrenbeck said she thinks the FDA could provide clear pathways so people can understand these tools.
She put it bluntly: “not all AI is great.” Roehrenbeck said one colleague calls a certain form of AI, one that doesn’t really do anything, “glamour AI.” It offers information that doesn’t provide any help.
Meanwhile, other AI can completely transform a patient’s experience and save years of their life.
“As we try to understand what has value and what doesn’t, more standardization and contextualization guardrails would be helpful in that regard,” Roehrenbeck said. “Especially, I think, for clinicians who ultimately are, again, tasked with making sure that these are appropriate for their patients.”