I am a Ph.D. student at The Chinese University of Hong Kong, advised by Prof. Yixuan Yuan. I received my M.Eng from Xiamen University under Prof. Xinghao Ding and Prof. Yue Huang, where I also earned my B.Eng.
Recently, I have been working on creative and efficient vision algorithms for perceiving and manipulating the physical world.
These endeavours unveil valuable insights and shed light on the prospect that, with U-KAN, you can build a strong backbone for medical image segmentation and generation.
An initial exploration into embedding customizable, imperceptible, and recoverable information within the renders produced by off-the-shelf 3D generative models, while ensuring minimal impact on the quality of the rendered content.
A pioneering foray into the intriguing realm of embedding, relating, and perceiving heterogeneous patterns from various biomedical modalities holistically via graph theory.
Recent advances in Neural Radiance Field (NeRF) imply a future of widespread visual data distributions through sharing NeRF model weights.
In StegaNeRF,
we present an initial exploration of the novel problem of instilling customizable, imperceptible, and recoverable information into NeRF renderings, with minimal impact on the rendered images.
We sincerely hope this work raises awareness of intellectual-property concerns around INR/NeRF.
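A minimal sketch of the underlying idea, under simplifying assumptions: the renderings are jointly optimized for fidelity to the ground-truth views and for recoverability of a hidden message by a lightweight decoder. The module names, decoder design, and loss weight below are illustrative, not the released StegaNeRF implementation.

```python
import torch
import torch.nn as nn

class HiddenInfoDecoder(nn.Module):
    """Hypothetical CNN that recovers an embedded bit string from a rendering."""
    def __init__(self, num_bits=48):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_bits)

    def forward(self, img):                       # img: (B, 3, H, W) in [0, 1]
        return self.head(self.features(img).flatten(1))  # one logit per bit

def stega_loss(rendered, gt, decoder, message_bits, w_stega=0.1):
    """Fidelity term keeps renderings close to ground truth; the steganographic
    term asks the decoder to recover the hidden bits from the rendering."""
    recon = nn.functional.mse_loss(rendered, gt)
    logits = decoder(rendered)
    recover = nn.functional.binary_cross_entropy_with_logits(logits, message_bits)
    return recon + w_stega * recover
```

The weight `w_stega` trades off rendering quality against recoverability; keeping it small reflects the goal of minimal visual impact.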
Knowledge distillation (KD) plays a key role in developing lightweight deep networks by transferring the dark knowledge from a high-capacity teacher network to strengthen a smaller student one.
In
KCD (ECCV'22),
we explore an efficient knowledge distillation framework by co-designing model distillation and knowledge condensation,
which dynamically identifies and summarizes the informative knowledge points into a compact knowledge set throughout the knowledge transfer process.
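A rough sketch of the condensation idea, assuming a simple scoring rule and keep-ratio (both are illustrative, not the exact KCD procedure): cached teacher outputs ("knowledge points") are periodically ranked by how informative they still are to the student, and only a compact subset is kept for subsequent distillation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def condense_knowledge(student, images, teacher_logits, keep_ratio=0.5, T=4.0):
    """Return indices of the most informative knowledge points.

    Informativeness is proxied here by the KL divergence between the student's
    and teacher's softened predictions: points the student already matches well
    contribute little and are dropped.
    """
    s_logits = student(images)
    kl = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="none",
    ).sum(dim=1)                                   # per-sample divergence
    num_keep = max(1, int(keep_ratio * len(kl)))
    return kl.topk(num_keep).indices               # compact knowledge set

def distill_step(student, images, teacher_logits, labels, optimizer, T=4.0, alpha=0.7):
    """Standard KD step, run only on the condensed subset."""
    s_logits = student(images)
    kd = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    ce = F.cross_entropy(s_logits, labels)
    loss = alpha * kd + (1 - alpha) * ce
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```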
In HKD,
we investigate the diverse guidance effects of the teacher model's knowledge across different instances and learning stages.
The existing literature keeps a fixed learning scheme to handle these knowledge hints.
In comparison, we propose to leverage the merits of meta-learning to customize a specific distillation scheme for each instance adaptively and dynamically.
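An illustrative sketch only (the module names, input statistics, and weighting rule are assumptions, not the exact HKD algorithm): a small meta-network looks at per-instance statistics of the teacher's hint and outputs an instance-specific weight that decides how strongly each sample follows the teacher at the current stage.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceWeightNet(nn.Module):
    """Maps simple per-instance statistics to a distillation weight in (0, 1)."""
    def __init__(self, in_dim=2, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, stats):                      # stats: (B, in_dim)
        return self.net(stats).squeeze(1)          # (B,) instance weights

def adaptive_kd_loss(s_logits, t_logits, weight_net, T=4.0):
    """Per-instance KD loss re-weighted by the meta-network."""
    kd = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits / T, dim=1),
        reduction="none",
    ).sum(dim=1) * (T * T)                         # (B,) per-sample KD
    t_entropy = -(F.softmax(t_logits, dim=1)
                  * F.log_softmax(t_logits, dim=1)).sum(dim=1)
    stats = torch.stack([kd.detach(), t_entropy.detach()], dim=1)
    weights = weight_net(stats)                    # instance-adaptive weights
    return (weights * kd).mean()
```

For brevity the meta-update of the weighting network is omitted; in a meta-learning setup it would be optimized with a bi-level objective on held-out data rather than jointly with the student.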
Data-Efficient Learning for Medical Imaging Analysis
Pseudo-Healthy Synthesis: As a variant of the style-transfer task, synthesizing the healthy counterpart of lesion regions is an important problem in clinical practice.
In GVS (MICCAI'21), we achieve more accurate lesion attribution by constructing an adversarial learning framework between the pseudo-healthy generator and the lesion segmentor.
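A minimal sketch of this adversarial coupling, with simplified losses and update order (the function names and loss forms are assumptions, not the exact GVS recipe): the generator synthesizes a pseudo-healthy image and is penalized whenever the lesion segmentor still finds lesions in it, while the segmentor keeps being trained on real lesion annotations so it remains a reliable adversary.

```python
import torch
import torch.nn.functional as F

def train_step(generator, segmentor, opt_g, opt_s, image, lesion_mask):
    # 1) Update the segmentor on the real image / annotation pair.
    opt_s.zero_grad()
    seg_logits = segmentor(image)
    seg_loss = F.binary_cross_entropy_with_logits(seg_logits, lesion_mask)
    seg_loss.backward()
    opt_s.step()

    # 2) Update the generator: preserve healthy regions and fool the segmentor
    #    into predicting "no lesion" on the synthesized pseudo-healthy image.
    opt_g.zero_grad()
    pseudo_healthy = generator(image)
    healthy_region = 1.0 - lesion_mask
    identity_loss = (F.l1_loss(pseudo_healthy, image, reduction="none")
                     * healthy_region).mean()
    adv_logits = segmentor(pseudo_healthy)
    adv_loss = F.binary_cross_entropy_with_logits(
        adv_logits, torch.zeros_like(adv_logits))
    gen_loss = identity_loss + adv_loss
    gen_loss.backward()
    opt_g.step()
    return seg_loss.item(), gen_loss.item()
```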
Domain Adaptation/Generalization:
Generalizing deep models trained on one data source to other datasets is an essential issue in practical medical imaging analysis.
We present a domain-adaptive approach leveraging a self-supervised strategy called Vessel-Mixing (ICIP'21),
which is driven by the geometric characteristics of retinal vessels (a rough sketch of the mixing idea is given after this paragraph).
We also attempt to address the domain generalization problem in medical imaging via Task-Aug (CBM'21). We investigate the neglected issue of task over-fitting, that is, the meta-learning framework over-fits to the simulated meta-tasks, and present a task augmentation strategy to alleviate it.
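A rough sketch of a Vessel-Mixing-style augmentation, under the simplifying assumption of block-wise mixing (the actual strategy is driven by the geometry of retinal vessels and differs in detail): image regions are mixed across the source and target domains so the model learns vessel structures that are robust to domain-specific appearance.

```python
import torch

def vessel_mix(src_img, tgt_img, grid=4, mix_prob=0.5):
    """Randomly swap grid cells between a source and a target fundus image.

    Args:
        src_img, tgt_img: tensors of shape (C, H, W) from the two domains.
        grid: number of cells per spatial dimension.
        mix_prob: probability that a given cell is taken from the target image.
    """
    c, h, w = src_img.shape
    mixed = src_img.clone()
    cell_h, cell_w = h // grid, w // grid
    for i in range(grid):
        for j in range(grid):
            if torch.rand(1).item() < mix_prob:
                ys, xs = i * cell_h, j * cell_w
                mixed[:, ys:ys + cell_h, xs:xs + cell_w] = \
                    tgt_img[:, ys:ys + cell_h, xs:xs + cell_w]
    return mixed
```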
Semi-Supervised Learning:
The existing semi-supervised methods mainly exploit the unlabeled data via a self-labeling strategy.
In UAST (NCA'21),
we propose to decouple the unreliable coupling between decision-boundary learning and pseudo-label evaluation.
We instead leverage an uncertainty-aware self-training paradigm that models the reliability of pseudo-labels via uncertainty estimation.
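A sketch of one uncertainty-aware self-training step, assuming a Monte-Carlo dropout estimate and a fixed threshold (both are illustrative choices, not the exact UAST recipe): the reliability of each pseudo-label is estimated from the variance of several stochastic forward passes, and only confident predictions contribute to the unlabeled loss.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def pseudo_label_with_uncertainty(model, unlabeled, n_passes=8):
    """Return pseudo-labels and a per-sample uncertainty estimate."""
    model.train()                          # keep dropout active for MC sampling
    probs = torch.stack([F.softmax(model(unlabeled), dim=1)
                         for _ in range(n_passes)])
    mean_probs = probs.mean(dim=0)
    uncertainty = probs.var(dim=0).sum(dim=1)      # predictive variance
    pseudo = mean_probs.argmax(dim=1)
    return pseudo, uncertainty

def unlabeled_loss(model, unlabeled, pseudo, uncertainty, threshold=0.05):
    """Cross-entropy on pseudo-labels, masked where the model is too uncertain."""
    logits = model(unlabeled)
    mask = (uncertainty < threshold).float()
    loss = F.cross_entropy(logits, pseudo, reduction="none")
    return (mask * loss).sum() / mask.sum().clamp(min=1.0)
```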
Few-shot Learning:
Existing few-shot segmentation methods tend to fail when the foreground regions of the support and query images are incongruous.
We present a few-shot learning method called GCN-DE (CBM'21), which leverages global correlation capture and discriminative embedding to address this issue.
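A toy sketch of the global correlation idea (shapes and the attention readout are simplifying assumptions, not the exact GCN-DE module): every query location attends to every support foreground location, so foreground cues can be propagated even when the support and query foregrounds are not spatially aligned.

```python
import torch
import torch.nn.functional as F

def global_correlation(query_feat, support_feat, support_mask):
    """Aggregate support foreground features for each query location.

    Args:
        query_feat:   (B, C, Hq, Wq) query feature map.
        support_feat: (B, C, Hs, Ws) support feature map.
        support_mask: (B, 1, Hs, Ws) binary foreground mask of the support.
    Returns:
        (B, C, Hq, Wq) correlation-weighted support context for the query.
    """
    b, c, hq, wq = query_feat.shape
    q = F.normalize(query_feat.flatten(2), dim=1)            # (B, C, Nq)
    s = F.normalize(support_feat.flatten(2), dim=1)          # (B, C, Ns)
    corr = torch.bmm(q.transpose(1, 2), s)                   # (B, Nq, Ns)
    corr = corr.masked_fill(support_mask.flatten(2) <= 0, -1e4)
    attn = corr.softmax(dim=-1)                               # attend to support FG
    context = torch.bmm(attn, support_feat.flatten(2).transpose(1, 2))
    return context.transpose(1, 2).reshape(b, c, hq, wq)
```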