2024 | Original Paper | Book Chapter
Towards Unified Multi-modal Dataset Creation for Deep Learning Utilizing Structured Reports
Authors: Malte Tölle, Lukas Burger, Halvar Kelm, Sandy Engelhardt
Published in: Bildverarbeitung für die Medizin 2024
Publisher: Springer Fachmedien Wiesbaden
The unification of electronic health records promises interoperability of medical data. Divergent data storage options, inconsistent naming schemes, varied annotation procedures, and disparities in label quality, among other factors, pose significant challenges to the integration of expansive datasets, especially across institutions. This is particularly evident in the emerging multi-modal learning paradigms, where dataset harmonization is of paramount importance. Leveraging the DICOM standard, we designed a data integration and filter tool that streamlines the creation of multi-modal datasets. This ensures that datasets from various locations consistently maintain a uniform structure. We enable the concurrent filtering of DICOM data (i.e. images and waveforms) and corresponding annotations (i.e. segmentations and structured reports) in a graphical user interface. The graphical interface as well as example structured report templates are openly available at https://github.com/Cardio-AI/fl-multi-modal-dataset-creation.
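To illustrate the kind of concurrent filtering the abstract describes, the following is a minimal sketch in plain Python. The record schema (`patient_id`, `modality`, `annotations`) and the helper `filter_entries` are hypothetical and not the tool's actual API; DICOM modality and annotation identifiers such as "MR", "SEG" (segmentation), and "SR" (structured report) follow standard DICOM terminology.

```python
from dataclasses import dataclass, field

@dataclass
class DicomEntry:
    """Hypothetical record in a harmonized dataset index.

    Field names are illustrative, not the tool's actual schema.
    """
    patient_id: str
    modality: str                     # DICOM modality, e.g. "MR", "US", "ECG"
    annotations: set = field(default_factory=set)  # e.g. {"SEG", "SR"}

def filter_entries(entries, modalities, required_annotations):
    """Keep entries whose modality is among the requested ones and which
    carry all required annotation types (e.g. segmentations "SEG" and
    structured reports "SR")."""
    return [
        e for e in entries
        if e.modality in modalities and required_annotations <= e.annotations
    ]

# A toy dataset index: filtering for MR images that have both a
# segmentation and a structured report keeps only patient P001.
index = [
    DicomEntry("P001", "MR", {"SEG", "SR"}),
    DicomEntry("P002", "US", {"SR"}),
    DicomEntry("P003", "MR", set()),
]
selected = filter_entries(index, {"MR"}, {"SEG", "SR"})
```

In the actual tool these criteria are composed interactively in the graphical user interface rather than in code, but the underlying idea, selecting on image metadata and annotation availability simultaneously, is the same.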