The placenta, the temporary organ grown to support fetal development, plays a vital role during pregnancy and contains crucial information about the health of both the parent and the baby. Yet, it is often not thoroughly examined at birth, especially in areas with limited medical resources. This can lead to missed opportunities for early detection of, and intervention for, critical conditions such as neonatal sepsis, a life-threatening infection that affects millions of newborns globally.
A multi-national, multi-institutional team led by Penn State researchers developed a new tool that enables doctors to examine placentas right at the bedside using just a phone. The tool harnesses computer vision and artificial intelligence to make placenta examination more accessible for low-resource and more-advanced health care institutions alike. The work was published in the Dec. 13 print edition of Patterns and featured on the journal’s cover.
“This research could save lives and improve health outcomes. It could make placental examination more accessible, benefitting research and care for future pregnancies, especially for mothers and babies at higher risk of complications.”
Yimu Pan, doctoral candidate in the informatics program in the College of Information Sciences and Technology and lead author on the study
Most placentas are quickly discarded without thorough analysis, according to the researchers. This means potentially vital health information that clinicians could use to identify concerns earlier is often missed.
The researchers’ goal was to create an accurate, robust tool based on data-driven learning that could be used to reduce complications and improve outcomes across a range of medical demographics, according to James Z. Wang, distinguished professor in the College of IST and one of the principal investigators on the study.
“We developed PlacentaCLIP+, a robust machine learning model that can analyze photos of placentas to detect abnormalities and risks such as neonatal sepsis and other critical conditions,” Wang said. “This early identification might enable clinicians to take prompt actions, such as administering antibiotics to the parent or baby and closely monitoring the newborn for signs of infection.”
The researchers used cross-modal contrastive learning, an artificial intelligence method for aligning and understanding the relationship between different types of data (in this case, visual images and textual pathology reports) to teach a computer program how to analyze pictures of placentas. They developed a large dataset of more than 31,700 anonymized placental images and accompanying pathological reports spanning a 12-year period from the United States and Uganda and studied how the images relate to health outcomes. With this understanding, they built the PlacentaCLIP+ model to make predictions based on new images.
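To make the idea of cross-modal contrastive learning concrete, the sketch below shows the general CLIP-style symmetric contrastive objective on which such models are typically based: embeddings of matched image–report pairs are pulled together while mismatched pairs are pushed apart. This is a minimal illustration of the technique, not the authors' PlacentaCLIP+ implementation; the function name and toy embeddings are hypothetical.

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    Row i of image_emb is assumed to match row i of text_emb
    (e.g., a placenta photo and its pathology report). Hypothetical sketch.
    """
    # L2-normalize so the dot product is cosine similarity
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    logits = img @ txt.T / temperature  # pairwise similarity matrix
    n = logits.shape[0]
    labels = np.arange(n)               # matched pairs lie on the diagonal

    def cross_entropy(l):
        # numerically stable softmax cross-entropy against the diagonal
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), labels].mean()

    # average the image-to-text and text-to-image directions
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2
```

With perfectly aligned embeddings the loss is near zero; shuffling the text rows so reports no longer match their images drives the loss up, which is exactly the signal used to train the shared image–text representation.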
“In low-resource areas, places where hospitals don’t have pathology labs or specialists, this tool could help doctors quickly spot issues like infections from a placenta,” Pan said. “In well-equipped hospitals, the tool can help doctors determine which placentas need further, detailed examination, making the process more efficient and prioritizing the most important cases.”
According to the researchers, the PlacentaCLIP+ program is designed to be easy to use and could potentially work through a smartphone app or be integrated into medical record software so doctors can get quick answers after delivery. The team tested the program under different conditions to see how it handled real-world challenges, like blurry or poorly lit photos, and validated it cross-nationally, confirming consistent performance across populations.
“Our next steps include embedding this model into a larger program that we created, called PlacentaVision, to offer medical professionals in clinics or hospitals with limited resources, where neonatal health outcomes are poor, a user-friendly mobile app,” Pan said. “The app would require minimal training and allow doctors and nurses to photograph placentas and get immediate feedback and improve care.”
The researchers said they plan to make the tool even smarter by including more types of placental features and adding clinical data to improve predictions while also contributing to research on long-term health. They’ll also test the tool in a variety of settings across different hospitals.
“This tool has the potential to transform how placentas are examined after birth, especially in parts of the world where these exams are rarely done,” said Alison D. Gernand, associate professor in the Penn State College of Health and Human Development (HHD) Department of Nutritional Sciences and the corresponding author on the project. “This innovation promises greater accessibility in both low- and high-resource settings. With further refinement, it has the potential to transform neonatal and maternal care by enabling early, personalized interventions that prevent severe health outcomes and improve the lives of mothers and infants worldwide.”
According to Jeffery A. Goldstein, director of perinatal pathology at Northwestern University Feinberg School of Medicine and a principal investigator on the study, the placenta is one of the most common specimens seen in his lab.
“When the neonatal intensive care unit is treating a sick kid, even a few minutes can make a difference in medical decision making,” he said. “With a diagnosis from these photographs, we can have an answer days earlier than we would in our normal process.”
In addition to Gernand, Pan and Wang, Penn State contributors included Kelly Gallagher, assistant research professor in the Ross and Carol Neese College of Nursing; Manas Mehta, a doctoral student in the College of IST; and Rachel Walker, a postdoctoral scholar in the College of HHD Department of Nutritional Sciences.
Researchers from Boston Children’s Hospital, Harvard Medical School, Massachusetts General Hospital and Mbarara University of Science and Technology contributed to this work.
The National Institutes of Health National Institute of Biomedical Imaging and Bioengineering supported this research. The researchers used supercomputing resources provided through the Advanced Cyberinfrastructure Coordination Ecosystem: Services and Support (ACCESS) program, funded by the U.S. National Science Foundation.
Journal reference:
Pan, Y., et al. (2024). Cross-modal contrastive learning for unified placenta analysis using photographs. Patterns. https://doi.org/10.1016/j.patter.2024.101097