Deep generative models, and data augmentation, labelling, and imperfections : first Workshop, DGM4MICCAI 2021, and first Workshop, DALI 2021, held in conjunction with MICCAI 2021, Strasbourg, France, October 1, 2021, Proceedings / Sandy Engelhardt, Ilkay Oksuz, Dajiang Zhu, Yixuan Yuan, Anirban Mukhopadhyay, Nicholas Heller, Sharon Xiaolei Huang, Hien Nguyen, Raphael Sznitman, Yuan Xue (eds.).

This book constitutes the refereed proceedings of the First MICCAI Workshop on Deep Generative Models, DGM4MICCAI 2021, and the First MICCAI Workshop on Data Augmentation, Labelling, and Imperfections, DALI 2021, held in conjunction with MICCAI 2021 in October 2021. The workshops were planned to take place in Strasbourg, France, but were held online.


Bibliographic Details
Corporate Authors: DGM4MICCAI (Workshop) (Online), DALI (Workshop) (Online), International Conference on Medical Image Computing and Computer-Assisted Intervention
Other Authors: Engelhardt, Sandy, Oksuz, Ilkay, Zhu, Dajiang, Yuan, Yixuan, Mukhopadhyay, Anirban, Heller, Nicholas (Doctoral student), Huang, Sharon Xiaolei, Nguyen, Hien, Sznitman, Raphael, Xue, Yuan
Format: eBook
Language: English
Published: Cham : Springer, 2021.
Series: Lecture notes in computer science ; 13003.
LNCS sublibrary. Image processing, computer vision, pattern recognition, and graphics.
Table of Contents:
  • Intro
  • DGM4MICCAI 2021 Preface
  • DGM4MICCAI 2021 Organization
  • DALI 2021 Preface
  • DALI 2021 Organization
  • Contents
  • Image-to-Image Translation, Synthesis
  • Frequency-Supervised MR-to-CT Image Synthesis
  • 1 Introduction
  • 2 Method
  • 2.1 Frequency-Supervised Synthesis Network
  • 2.2 High-Frequency Adversarial Learning
  • 3 Experiments and Results
  • 3.1 Experimental Setup
  • 3.2 Results
  • 4 Conclusion
  • References
  • Ultrasound Variational Style Transfer to Generate Images Beyond the Observed Domain
  • 1 Introduction
  • 2 Methods
  • 2.1 Style Encoder
  • 2.2 Content Encoder
  • 2.3 Decoder
  • 2.4 Loss Functions
  • 2.5 Implementation Details
  • 3 Experiments
  • 3.1 Qualitative Results
  • 3.2 Quantitative Results
  • 4 Conclusion
  • References
  • 3D-StyleGAN: A Style-Based Generative Adversarial Network for Generative Modeling of Three-Dimensional Medical Images
  • 1 Introduction
  • 2 Methods
  • 2.1 3D-StyleGAN
  • 3 Results
  • 4 Discussion
  • References
  • Bridging the Gap Between Paired and Unpaired Medical Image Translation
  • 1 Introduction
  • 2 Methods
  • 3 Experiments
  • 3.1 Comparison with Baselines
  • 3.2 Ablation Studies
  • 4 Conclusion
  • References
  • Conditional Generation of Medical Images via Disentangled Adversarial Inference
  • 1 Introduction
  • 2 Method
  • 2.1 Overview
  • 2.2 Dual Adversarial Inference (DAI)
  • 2.3 Disentanglement Constraints
  • 3 Experiments
  • 3.1 Generation Evaluation
  • 3.2 Style-Content Disentanglement
  • 3.3 Ablation Studies
  • 4 Conclusion
  • A Disentanglement Constraints
  • A.1 Content-Style Information Minimization
  • A.2 Self-supervised Regularization
  • B Implementation Details
  • B.1 Implementation Details
  • B.2 Generating Hybrid Images
  • C Datasets
  • C.1 HAM10000
  • C.2 LIDC
  • D Baselines
  • D.1 Conditional InfoGAN
  • D.2 cAVAE
  • D.3 Evaluation Metrics
  • E Related Work
  • E.1 Connection to Other Conditional GANs in Medical Imaging
  • E.2 Disentangled Representation Learning
  • References
  • CT-SGAN: Computed Tomography Synthesis GAN
  • 1 Introduction
  • 2 Methods
  • 3 Datasets and Experimental Design
  • 3.1 Dataset Preparation
  • 4 Results and Discussion
  • 4.1 Qualitative Evaluation
  • 4.2 Quantitative Evaluation
  • 5 Conclusions
  • A Sample Synthetic CT-scans from CT-SGAN
  • B Nodule Injector and Eraser
  • References
  • Applications and Evaluation
  • Hierarchical Probabilistic Ultrasound Image Inpainting via Variational Inference
  • 1 Introduction
  • 2 Methods
  • 2.1 Learning
  • 2.2 Inference
  • 2.3 Objectives
  • 2.4 Implementation
  • 3 Experiments
  • 3.1 Inpainting on Live-Pig Images
  • 3.2 Filling in Artifact Regions After Segmentation
  • 3.3 Needle Tracking
  • 4 Conclusion
  • References
  • CaCL: Class-Aware Codebook Learning for Weakly Supervised Segmentation on Diffuse Image Patterns
  • 1 Introduction
  • 2 Methods
  • 2.1 Class-Aware Codebook Based Feature Encoding
  • 2.2 Loss Definition
  • 2.3 Training Strategy