Forum Examines Promises and Limits of AI in Clinical Medicine

May 14, 2021

The confluence of medicine and artificial intelligence stands to create high-performance, specialized care for patients, with more precise diagnosis and personalized disease management. But supercharging these systems requires massive amounts of personal health data, coupled with a delicate balance of privacy, transparency, and trust.

To navigate these technical and ethical challenges, scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), MIT’s Institute for Medical Engineering and Science (IMES), the French National Academy of Medicine, and the Health Data Hub came together to explore case studies using tools from both nations, with the goal of advancing clinical AI solutions. A recurring theme: for these technologies to be adequately integrated into healthcare workflows, the right tools must be matched to the right tasks.

AI and Clinical Practice

MIT professor and CSAIL director Daniela Rus began the discussion by acknowledging the seismic shift in resources brought on by the coronavirus pandemic, from disinfecting robots and 3-D printed personal protective gear to powerful predictive models.

Rus also discussed broader efforts in the field related to the role of robots in surgical settings, stressing the importance of surveillance in medicine, where AI functions as something of a “guardian angel.”

One particular use case that some might find hard to swallow is orally ingestible medical devices. Specifically, Rus’ team created an origami robot that unfolds from an ingestible capsule and, steered by external magnetic fields, can crawl across a simulated stomach to remove swallowed button batteries and potentially deliver drugs.

“Implementing robots in a surgical or medical setting has to be conservative, as the environment can be high stakes, with no margin for error,” says Rus. “To tackle both the promises and perils of AI in medicine related to ethics, we need scalable technology and adequate regulation, and to identify the right tools for the right tasks.”

Dr. Ozanan Meireles, director of the Surgical Artificial Intelligence and Innovation Laboratory at Massachusetts General Hospital, explained during his panel the benefits of surgeon-controlled robots for precision medicine and for small tasks in laparoscopic or endoscopic surgery, such as suturing or stapling minor wounds. Even with sufficient situational awareness and hardware, Meireles cautioned about technical limitations: the need for large datasets and for supervised learning with proper annotations.

Later in the symposium, Dr. Ninon Burgos, a CNRS researcher at the Paris Brain Institute (ICM), discussed using AI in the boundless quest to better understand the brain, including a disease that’s equal parts complex and harrowing: dementia.

For computer-aided diagnosis of cognitive decline, Burgos explained that scientists have made significant improvements over previous standard-of-care practices such as clinical and cognitive tests and structural MRIs. Deep learning techniques, she said, have enabled better frameworks for individual analysis of PET data to identify patterns of abnormality. However, for wider adoption, Burgos stressed the need for validation across diverse clinical environments to avoid bias and produce consistent predictions.
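
To make individual-level abnormality detection concrete, below is a minimal sketch of a common baseline: comparing one patient’s PET volume, voxel by voxel, against a normative cohort of control scans. It is purely illustrative, not the deep learning pipeline Burgos described; the function and toy data are invented for the example, and it assumes all scans are already co-registered and intensity-normalized.

```python
# Minimal sketch: voxel-wise abnormality map for a single patient's PET
# scan, computed as z-scores against a normative cohort of control scans.
# Illustrative only; real pipelines use registered, normalized PET volumes.
import numpy as np

def abnormality_map(patient_scan, control_scans, eps=1e-6):
    """Return a z-score map: how far each voxel deviates from controls.

    patient_scan  : np.ndarray of shape (X, Y, Z)
    control_scans : np.ndarray of shape (N, X, Y, Z), N control subjects
    """
    mu = control_scans.mean(axis=0)          # voxel-wise control mean
    sigma = control_scans.std(axis=0) + eps  # voxel-wise control std
    return (patient_scan - mu) / sigma       # large |z| = abnormal

# Toy usage with synthetic data.
rng = np.random.default_rng(0)
controls = rng.normal(1.0, 0.1, size=(40, 8, 8, 8))  # 40 control "scans"
patient = rng.normal(1.0, 0.1, size=(8, 8, 8))
patient[2:4, 2:4, 2:4] -= 0.5                         # simulated hypometabolism
z = abnormality_map(patient, controls)
print("voxels flagged at |z| > 3:", int((np.abs(z) > 3).sum()))
```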

Underlying these advances in image-guided clinical practice was talk of an asset that has arguably surpassed oil in value: data, and the attendant technical challenges surrounding data-sharing agreements, anonymization, and the explainability and trustworthiness of AI models.

To power these models, many questions remain: Who will have a global view of a patient’s health data, and who will be in charge of the AI processes applied to it? How can the US and France learn from their very different approaches to protecting sensitive medical data, while also advancing medicine to become “superhuman”?

Ethical and Regulatory Issues

While the US and France have historically developed different solutions for best medical practices, the crux of the symposium was that the two countries must not keep their frameworks siloed, but should instead facilitate collaboration to better understand the approaches on both sides of the Atlantic.

Dr. Cédric Villani, a Fields Medalist, member of the Academy of Sciences, and member of the French Parliament, opened the second discussion by encouraging fruitful dialogue among practitioners, researchers, academics, and engineers across medicine, computer science, and mathematics.

Dr. Daniel Weitzner, founding director of the MIT Internet Policy Research Initiative and a principal investigator at CSAIL, moderated a discussion on data regulation and policy, and on the likely trajectory of access to and use of personal health data for innovative research and clinical care.

“Behavioral economists have pointed out the limits of rational choice, particularly while living in a consumer, data-driven environment,” said Weitzner. “The reality is that we underestimate the profiling and data collection that goes on, as it can be difficult to grapple with.”

Since, he noted, “data flows like mud,” incentives for interoperability (the ability of computer systems or software to exchange and make use of information) are relatively low in the US.

Professor Nicholson Price of the University of Michigan Law School elaborated on this point, explaining the lack of a universal Electronic Health Record (EHR) and what he calls the “underprotective and overprotective” nature of HIPAA, the main US federal law governing patient health data.

Price said the “underprotective” side stems from the fact that large tech companies, which hold big data but aren’t governed by HIPAA, can use AI to easily re-identify information that was previously de-identified. The “overprotective” side is that HIPAA offers no broad research exception, so only large health systems with sufficient resources can conduct research in-house. That is coupled with strict guidelines around patient consent, which can slow the research process or narrow its scope when information is completely de-identified.
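
To illustrate why de-identification alone can be underprotective, below is a minimal sketch of a classic linkage attack: joining a dataset stripped of names with a public auxiliary dataset on quasi-identifiers such as ZIP code, birth date, and sex. It uses a simple table join rather than the AI-driven re-identification Price referred to, and all data here is invented for the example.

```python
# Minimal sketch of a "linkage attack": records with names removed can
# often be re-linked to identities by joining on quasi-identifiers.
import pandas as pd

# "De-identified" health records: names removed, quasi-identifiers kept.
health = pd.DataFrame({
    "zip":       ["02139", "02139", "48104"],
    "birthdate": ["1970-03-02", "1985-11-20", "1970-03-02"],
    "sex":       ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# Public auxiliary data (e.g., a voter roll) that includes names.
voters = pd.DataFrame({
    "name":      ["A. Smith", "B. Jones"],
    "zip":       ["02139", "48104"],
    "birthdate": ["1970-03-02", "1970-03-02"],
    "sex":       ["F", "F"],
})

# Joining on the quasi-identifiers re-attaches names to diagnoses.
reidentified = health.merge(voters, on=["zip", "birthdate", "sex"])
print(reidentified[["name", "diagnosis"]])
```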

France’s approach, while also focused on protecting people’s rights and the confidentiality of health data, is guided by timely regulation that took effect in 2018: the General Data Protection Regulation (GDPR), a framework that sets guidelines for the collection and processing of personal information from individuals who live in the European Union (EU).

The GDPR aims to simplify policies for data processing in big data strategies and the use of AI. On that point, Jeanne Bossi Malafosse of the law firm Delsol Avocats noted that while the GDPR is very rich and protective of the individual, researchers and clinicians can follow pre-established standards that spare them case-by-case approval from a regulatory body, while still transparently defining for the individual the overall conditions surrounding their data.

“Health management has become a geopolitical issue,” said Professor Bernard Nordlinger of the National Academy of Medicine in Paris. “We need to ascertain the balance between data regulations facilitating cooperation and the visible obstacles. Is AI compatible with ethics? Should automatic decision-making always be validated by a human? I believe trust is the key to AI in medicine.”

Source: https://www.technology.org/2021/05/14/forum-examines-promises-and-limits-of-ai-in-clinical-medicine/