Computational imaging, which balances image formation and information extraction between the physical and computational domains, brings distinctive new insights into biomedical optical imaging, such as digital holographic microscopy. However, imaging depth is still limited, and imaging through scattering media with adequate spatial and temporal resolution remains a major challenge. We are devoted to developing physics-informed deep learning reconstruction methods that restore target visual information in digital holographic microscopy, thereby overcoming the physical limits of the optics. We will also explore the frontiers of computational imaging technology in multi-dimensional and multi-scale imaging for 3D pathology and live-cell applications. Computational imaging will extend our observational reach and reduce the cost of high-performance imaging setups, contributing significantly to the biomedical optical imaging field.
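A physics-informed reconstruction method builds the known optical forward model into the network. In digital holography that forward model is usually free-space propagation via the angular spectrum method; the minimal numpy sketch below shows that propagator (the function name and parameters are illustrative, not from any specific codebase).

```python
# Hedged sketch: free-space propagation via the angular spectrum method,
# the standard forward model in digital holography. Illustrative only.
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a square complex field a distance z (metres) in free space."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                  # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    # Transfer function H = exp(i z sqrt(k^2 - (2*pi*fx)^2 - (2*pi*fy)^2))
    arg = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    H = np.exp(1j * z * np.sqrt(np.maximum(arg, 0)))  # evanescent terms dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Sanity check: a plane wave keeps unit amplitude after propagation.
u0 = np.ones((64, 64), dtype=complex)
u1 = angular_spectrum_propagate(u0, wavelength=633e-9, dx=5e-6, z=1e-3)
```

Because this propagator is built from FFTs, it is differentiable, which is what lets it be embedded as a fixed physics layer inside a learned reconstruction pipeline.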
Modern radiotherapy is becoming increasingly conformal with the development of advanced beam-delivery techniques such as VMAT, IMRT, and SABR, as well as image-guided radiotherapy (IGRT). However, routine radiotherapy treatment planning still depends heavily on manual target delineation and planning simulation. The goal of this research project is to develop AI-based algorithms for target delineation and dose prediction, enabling automatic radiotherapy treatment planning and its application in the clinical practice of radiotherapy. Our research focuses mainly on AI-based approaches that provide better target definition, AI-based methods for multi-modality image segmentation, and target localization from 2D and 3D imaging, so as to rapidly deliver high-quality, personalized treatment for cancer patients.
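Automatic target delineation is typically trained and evaluated with overlap metrics; the soft Dice score is a common choice (and, negated, a common loss). A minimal sketch, with made-up masks:

```python
# Hedged sketch of the soft Dice score, a standard overlap metric for
# evaluating (and training) target-delineation models. Illustrative values.
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Soft Dice between a predicted probability map and a binary mask."""
    pred = pred.ravel().astype(float)
    target = target.ravel().astype(float)
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

mask = np.zeros((8, 8)); mask[2:6, 2:6] = 1      # hypothetical 16-pixel target
perfect = mask.copy()                            # exact delineation -> Dice ~ 1
half = mask.copy(); half[2:6, 2:4] = 0           # misses half -> Dice ~ 2/3
```

The `eps` term keeps the ratio defined when both masks are empty, which matters for slices that contain no target.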
Pathology specimens depict cell morphology, tissue architecture, and tumour–immune interactions at the microscopic scale. With the increasing prevalence of digital scanning technology (whole-slide imaging, WSI) and of artificial intelligence (AI), AI-based computational pathology for interpreting digitized slides has generated an explosion of interest in the detection, diagnosis, and prognosis of several cancer subtypes. However, WSIs are extremely large, in the range of 1–10 GB per image, imposing a heavy computational burden, and the complexity of cancer characteristics hampers model performance in routine clinical application. In this project, we aim to develop AI-based methods for the segmentation and classification of pathology images to build a clinical decision support tool for precision medicine. Tasks we focus on include: standardization and normalization, image segmentation (e.g., delineation of tumor areas, cells, and cell nuclei), spatial-pattern feature extraction to identify disease phenotypes, and deep learning models that predict therapy response and prognosis.
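Because a 1–10 GB WSI cannot be fed to a network whole, pipelines cut each slide into fixed-size tiles and process those. The sketch below shows the tiling step on a random array standing in for one slide level; real pipelines typically read regions with a library such as OpenSlide, and the tile size here is an arbitrary illustrative choice.

```python
# Hedged sketch of patch extraction for whole-slide images.
# The 256-pixel tile size and the stand-in array are illustrative.
import numpy as np

def tile_image(img, tile=256, stride=256):
    """Yield (row, col, patch) for every full tile that fits in img."""
    h, w = img.shape[:2]
    for r in range(0, h - tile + 1, stride):
        for c in range(0, w - tile + 1, stride):
            yield r, c, img[r:r + tile, c:c + tile]

slide = np.random.rand(1024, 768, 3)   # stand-in for one downsampled WSI level
patches = list(tile_image(slide))      # 4 x 3 = 12 non-overlapping tiles
```

Setting `stride` smaller than `tile` gives overlapping patches, a common choice when tile-boundary artifacts would otherwise cut through nuclei.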
With the rapid development of digital medical imaging technologies such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound (US), and digital pathology, advanced imaging has played an essential role in helping clinicians meet the challenges of tumor diagnosis and treatment. However, malignant tumors are complex and heterogeneous; they vary significantly from patient to patient and between tumor sites. A single imaging method has inherent limitations in resolution, sensitivity, and contrast stemming from the physical, chemical, and biological characteristics of its imaging principle. This project aims to develop novel multi-modal fusion learning methods for segmentation and classification, leveraging cross-scale and cross-modality information to build repeatable, reproducible, and interpretable models of tumor characteristics for clinical diagnosis and treatment.
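One simple form of multi-modal fusion is decision-level (late) fusion, where each modality's classifier output is combined by weighted averaging. This toy sketch uses invented CT and MRI probability vectors purely for illustration; it is one baseline strategy, not the project's specific method.

```python
# Toy sketch of decision-level (late) fusion across imaging modalities.
# Inputs and equal weights are made up for illustration.
import numpy as np

def late_fusion(prob_list, weights=None):
    """Weighted average of per-modality class-probability vectors."""
    probs = np.stack(prob_list)
    if weights is None:
        weights = np.ones(len(prob_list)) / len(prob_list)
    fused = np.average(probs, axis=0, weights=weights)
    return fused / fused.sum()          # renormalize to a distribution

p_ct  = np.array([0.7, 0.3])   # hypothetical CT classifier output
p_mri = np.array([0.4, 0.6])   # hypothetical MRI classifier output
fused = late_fusion([p_ct, p_mri])
```

Feature-level fusion (concatenating learned representations before the classifier) follows the same pattern one stage earlier in the network.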
Mixed reality (MR) techniques enable physicians to interact with 3D virtual medical holograms and the real-world environment simultaneously, providing an intuitive understanding of the positional relationships between organs, blood vessels, and lesions within a patient's body. MR technology with a head-mounted display provides a revolutionary tool for surgical planning, intraoperative reference, and navigation in minimally invasive surgery. However, reconstructing patient 3D models under real-time motion and evaluating performance objectively remain great challenges for MR-based simulation and navigation. We are developing learning-based methods for 3D modeling of the patient's anatomy and for quantitative training evaluation. Tasks we focus on include: image segmentation of various anatomical structures and the tumor target, surgical instrument tracking, and multi-modal quantitative assessment strategies for objective evaluation in training.
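A standard quantitative measure of navigation accuracy is the target registration error (TRE): the mean distance between corresponding landmarks in the virtual hologram and on the physical patient. A minimal sketch with invented landmark coordinates:

```python
# Hedged sketch of target registration error (TRE) for evaluating MR
# navigation alignment. Landmark coordinates are invented (units: mm).
import numpy as np

def target_registration_error(virtual_pts, physical_pts):
    """Mean Euclidean distance between corresponding 3D landmarks."""
    diffs = np.asarray(virtual_pts) - np.asarray(physical_pts)
    return np.linalg.norm(diffs, axis=1).mean()

virtual  = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
physical = np.array([[0.0, 0.0, 1.0], [10.0, 0.0, 1.0]])  # uniform 1 mm offset
tre = target_registration_error(virtual, physical)
```

Reporting TRE at the surgical target, rather than at the fiducials used for alignment, is what makes the evaluation objective for navigation use.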