Hai Shu

Assistant Professor of Biostatistics

Professional overview

Dr. Hai Shu is an Assistant Professor in the Department of Biostatistics at New York University. He earned a Ph.D. in Biostatistics from the University of Michigan and a B.S. in Information and Computational Science from Harbin Institute of Technology in China.

His research interests include high-dimensional data analysis (especially data integration), machine and deep learning, and medical image analysis (e.g., PET, MRI, and mammography), with applications to Alzheimer's disease, brain tumors, and breast cancer. He has published in top-tier journals and conferences such as The Annals of Statistics, the Journal of the American Statistical Association, Biometrics, and the AAAI Conference on Artificial Intelligence. He has also served as a reviewer on related topics for the Journal of the American Statistical Association, Statistica Sinica, and the International Joint Conference on Artificial Intelligence, among others.

Prior to joining NYU, Dr. Hai Shu was a Postdoctoral Fellow in the Department of Biostatistics at The University of Texas MD Anderson Cancer Center. 

View Dr. Hai Shu's website at https://wp.nyu.edu/haishu

Education

Postdoctoral Fellow, Department of Biostatistics, The University of Texas MD Anderson Cancer Center, USA
Ph.D. in Biostatistics, Department of Biostatistics, University of Michigan, Ann Arbor, USA
M.S. in Biostatistics, Department of Biostatistics, University of Michigan, Ann Arbor, USA
B.S. in Information and Computational Science, Department of Mathematics, Harbin Institute of Technology (哈尔滨工业大学), China

Areas of research and study

Alzheimer’s disease
Brain tumors
Breast cancer
Deep learning
High-dimensional data analysis/integration
Machine learning
Medical image analysis
Spatial/temporal data analysis

Publications

Multi-Scale Tokens-Aware Transformer Network for Multi-Region and Multi-Sequence MR-to-CT Synthesis in a Single Model

Zhong, L., Chen, Z., Shu, H., Zheng, K., Li, Y., Chen, W., Wu, Y., Ma, J., Feng, Q., & Yang, W. (n.d.).

Publication year

2024

Journal title

IEEE Transactions on Medical Imaging

Volume

43

Issue

2

Page(s)

794-806
Abstract
The superiority of magnetic resonance (MR)-only radiotherapy treatment planning (RTP) has been well demonstrated, benefiting from the synthesis of computed tomography (CT) images, which supplements electron density and eliminates the errors of multi-modal image registration. An increasing number of methods have been proposed for MR-to-CT synthesis. However, synthesizing CT images of different anatomical regions from MR images with different sequences using a single model is challenging due to the large differences between these regions and the limitations of convolutional neural networks in capturing global context information. In this paper, we propose a multi-scale tokens-aware Transformer network (MTT-Net) for multi-region and multi-sequence MR-to-CT synthesis in a single model. Specifically, we develop a multi-scale image tokens Transformer to capture multi-scale global spatial information between different anatomical structures in different regions. Besides, to address the limited attention areas of tokens in the Transformer, we introduce a multi-shape window self-attention to enlarge the receptive fields for learning multi-directional spatial representations. Moreover, we adopt a domain classifier in the generator to introduce domain knowledge for distinguishing the MR images of different regions and sequences. The proposed MTT-Net is evaluated on a multi-center dataset and an unseen region, achieving remarkable performance with an MAE of 69.33 ± 10.39 HU, an SSIM of 0.778 ± 0.028, and a PSNR of 29.04 ± 1.32 dB in the head & neck region, and an MAE of 62.80 ± 7.65 HU, an SSIM of 0.617 ± 0.058, and a PSNR of 25.94 ± 1.02 dB in the abdomen region. MTT-Net outperforms state-of-the-art methods in both accuracy and visual quality.
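
As a rough illustration of the multi-scale image tokens described above, the PyTorch sketch below embeds one image at several patch sizes and concatenates the resulting token sequences for a Transformer to attend over. All class and parameter names are hypothetical, not from the paper's released code.

```python
import torch
import torch.nn as nn

class MultiScaleTokenizer(nn.Module):
    """Hypothetical illustration of multi-scale image tokens: embed the same
    image at several patch sizes so a Transformer can attend over both
    coarse (global) and fine (local) spatial context."""

    def __init__(self, in_ch=1, dim=96, patch_sizes=(4, 8, 16)):
        super().__init__()
        # One strided conv per scale: each patch_size x patch_size patch -> one token.
        self.embeds = nn.ModuleList(
            [nn.Conv2d(in_ch, dim, kernel_size=p, stride=p) for p in patch_sizes]
        )

    def forward(self, x):                                # x: (B, 1, H, W)
        tokens = []
        for embed in self.embeds:
            t = embed(x)                                 # (B, dim, H/p, W/p)
            tokens.append(t.flatten(2).transpose(1, 2))  # (B, N_p, dim)
        return torch.cat(tokens, dim=1)                  # tokens from all scales

mr_slice = torch.randn(2, 1, 256, 256)                   # toy MR batch
print(MultiScaleTokenizer()(mr_slice).shape)             # torch.Size([2, 5376, 96])
```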

A generic fundus image enhancement network boosted by frequency self-supervised representation learning

Li, H., Liu, H., Fu, H., Xu, Y., Shu, H., Niu, K., Hu, Y., & Liu, J. (n.d.).

Publication year

2023

Journal title

Medical Image Analysis

Volume

90
Abstract
Fundus photography is prone to image quality degradation that impacts clinical examination performed by ophthalmologists or intelligent systems. Though enhancement algorithms have been developed to promote fundus observation on degraded images, high data demands and limited applicability hinder their clinical deployment. To circumvent this bottleneck, a generic fundus image enhancement network (GFE-Net) is developed in this study to robustly correct unknown fundus images without supervised or extra data. Leveraging image frequency information, self-supervised representation learning is conducted to learn robust structure-aware representations from degraded images. Then, with a seamless architecture that couples representation learning and image enhancement, GFE-Net can accurately correct fundus images while preserving retinal structures. Comprehensive experiments demonstrate the effectiveness and advantages of GFE-Net. Compared with state-of-the-art algorithms, GFE-Net achieves superior performance in data dependency, enhancement performance, deployment efficiency, and scale generalizability. Follow-up fundus image analysis is also facilitated by GFE-Net, whose modules are respectively verified to be effective for image enhancement.
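
To make the frequency-based self-supervision concrete, here is a minimal NumPy sketch of one plausible way to derive a frequency learning signal from an image; it is an assumption-laden illustration, not GFE-Net's actual formulation.

```python
import numpy as np

def frequency_split(img, radius=0.1):
    """Split an image into low- and high-frequency parts with a circular
    FFT mask -- the kind of signal a frequency-based pretext task can be
    built on (illustrative only; not GFE-Net's exact formulation)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    low_pass = np.hypot(yy - h / 2, xx - w / 2) <= radius * min(h, w)
    low = np.fft.ifft2(np.fft.ifftshift(f * low_pass)).real
    return low, img - low                       # low frequencies, high residual

img = np.random.rand(128, 128)                  # stand-in for a fundus image
low, high = frequency_split(img)
# One possible pretext task: feed `low` (coarse structure) to an encoder and
# train it to reconstruct `high` (fine retinal detail), encouraging
# structure-aware representations without any labels.
print(np.allclose(low + high, img))             # True by construction
```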

Cross-Task Feedback Fusion GAN for Joint MR-CT Synthesis and Segmentation of Target and Organs-at-Risk

Zhang, Y., Zhong, L., Shu, H., Dai, Z., Zheng, K., Chen, Z., Feng, Q., Wang, X., & Yang, W. (n.d.).

Publication year

2023

Journal title

IEEE Transactions on Artificial Intelligence

Volume

4

Issue

5

Page(s)

1246-1257
Abstract
The synthesis of computed tomography (CT) images from magnetic resonance (MR) images and the segmentation of the target and organs-at-risk (OARs) are two important tasks in MR-only radiotherapy treatment planning (RTP). Some methods have been proposed to utilize paired MR and CT images for MR-CT synthesis or target and OARs segmentation. However, these methods usually handle synthesis and segmentation as two separate tasks and ignore the inevitable registration errors in paired images after standard registration. In this article, we propose a cross-task feedback fusion generative adversarial network (CTFF-GAN) for joint MR-CT synthesis and segmentation of the target and OARs to enhance each task's performance. Specifically, we propose a cross-task feedback fusion (CTFF) module to feed back semantic information from the segmentation task to the synthesis task for anatomical structure correction in synthetic CT images. Besides, we use CT images synthesized from MR images for multimodal segmentation to eliminate the registration errors. Moreover, we develop a multitask discriminator to urge the generator to devote more attention to the organ boundaries. Experiments on our nasopharyngeal carcinoma dataset show that CTFF-GAN achieves impressive performance with an MAE of 70.69 ± 10.50 HU, an SSIM of 0.755 ± 0.03, and a PSNR of 27.44 ± 1.20 dB in synthetic CT, and a mean Dice of 0.783 ± 0.075 in target and OARs segmentation. Our CTFF-GAN outperforms state-of-the-art methods in both the synthesis and segmentation tasks.
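
A minimal sketch of the cross-task feedback idea, assuming PyTorch and hypothetical layer shapes: the segmentation logits are fused back into the features that produce the synthetic CT, so semantic cues can correct anatomy.

```python
import torch
import torch.nn as nn

class CrossTaskFeedback(nn.Module):
    """Toy version of the feedback idea: concatenate segmentation logits back
    into the shared features before predicting the synthetic CT. Shapes and
    layer choices are hypothetical, not the paper's architecture."""

    def __init__(self, ch=32, n_classes=4):
        super().__init__()
        self.seg_head = nn.Conv2d(ch, n_classes, 1)       # target/OAR logits
        self.fuse = nn.Conv2d(ch + n_classes, ch, 3, padding=1)
        self.syn_head = nn.Conv2d(ch, 1, 1)               # synthetic CT

    def forward(self, feat):                              # feat: (B, ch, H, W)
        seg = self.seg_head(feat)                         # segmentation branch
        fused = self.fuse(torch.cat([feat, seg], dim=1))  # feed seg back
        return self.syn_head(fused), seg                  # CT, segmentation

ct, seg = CrossTaskFeedback()(torch.randn(2, 32, 64, 64))
print(ct.shape, seg.shape)   # torch.Size([2, 1, 64, 64]) torch.Size([2, 4, 64, 64])
```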

QACL: Quartet attention aware closed-loop learning for abdominal MR-to-CT synthesis via simultaneous registration

Zhong, L., Chen, Z., Shu, H., Zheng, Y., Zhang, Y., Wu, Y., Feng, Q., Li, Y., & Yang, W. (n.d.).

Publication year

2023

Journal title

Medical Image Analysis

Volume

83
Abstract
Synthesis of computed tomography (CT) images from magnetic resonance (MR) images is an important task to overcome the lack of electron density information in MR-only radiotherapy treatment planning (RTP). Some innovative methods have been proposed for abdominal MR-to-CT synthesis. However, it is still challenging due to the large misalignment between preprocessed abdominal MR and CT images and the insufficient feature information learned by models. Although several studies have used the MR-to-CT synthesis to alleviate the difficulty of multi-modal registration, this misalignment remains unsolved when training the MR-to-CT synthesis model. In this paper, we propose an end-to-end quartet attention aware closed-loop learning (QACL) framework for MR-to-CT synthesis via simultaneous registration. Specifically, the proposed quartet attention generator and mono-modal registration network form a closed-loop to improve the performance of MR-to-CT synthesis via simultaneous registration. In particular, a quartet-attention mechanism is developed to enlarge the receptive fields in networks to extract the long-range and cross-dimension spatial dependencies. Experimental results on two independent abdominal datasets demonstrate that our QACL achieves impressive results with MAE of 55.30±10.59 HU, PSNR of 22.85±1.43 dB, and SSIM of 0.83±0.04 for synthesis, and with Dice of 0.799±0.129 for registration. The proposed QACL outperforms the state-of-the-art MR-to-CT synthesis and multi-modal registration methods.

United multi-task learning for abdominal contrast-enhanced CT synthesis through joint deformable registration

Zhong, L., Huang, P., Shu, H., Li, Y., Zhang, Y., Feng, Q., Wu, Y., & Yang, W. (n.d.).

Publication year

2023

Journal title

Computer Methods and Programs in Biomedicine

Volume

231
Abstract
Synthesizing abdominal contrast-enhanced computed tomography (CECT) images from non-enhanced CT (NECT) images is of great importance in the delineation of radiotherapy target volumes, as it reduces the risk of iodinated contrast agents and the registration error between NECT and CECT when transferring delineations. NECT images contain structural information that can reflect the contrast difference between lesions and surrounding tissues. However, existing methods treat synthesis and registration as two separate tasks, which neglects their collaboration and fails to address the misalignment that remains between images after standard pre-processing when training a CECT synthesis model. Thus, we propose united multi-task learning (UMTL) for joint synthesis and deformable registration of abdominal CECT. Specifically, our UMTL is an end-to-end multi-task framework, which integrates a deformation field learning network for reducing the misalignment errors and a 3D generator for synthesizing CECT images. Furthermore, the learning of enhanced component images and a multi-loss function are adopted to enhance the performance of synthetic CECT images. The proposed method is evaluated on two different-resolution datasets and a separate test dataset from another center. The synthetic venous-phase CECT images of the separate test dataset yield a mean absolute error (MAE) of 32.78±7.27 HU, a mean MAE of 24.15±5.12 HU on the liver region, a mean peak signal-to-noise ratio (PSNR) of 27.59±2.45 dB, and a mean structural similarity (SSIM) of 0.96±0.01. The Dice similarity coefficients of the liver region between the true and synthetic venous-phase CECT images are 0.96±0.05 (high-resolution) and 0.95±0.07 (low-resolution), respectively. The proposed method has great potential in aiding the delineation of radiotherapy target volumes.

A Comparative Study of non-deep Learning, Deep Learning, and Ensemble Learning Methods for Sunspot Number Prediction

Dang, Y., Chen, Z., Li, H., & Shu, H. (n.d.).

Publication year

2022

Journal title

Applied Artificial Intelligence

Volume

36

Issue

1
Abstract
Solar activity has significant impacts on human activities and health. One of the most commonly used measures of solar activity is the sunspot number. This paper compares three important non-deep learning models, four popular deep learning models, and their five ensemble models in forecasting sunspot numbers. In particular, we propose an ensemble model called XGBoost-DL, which uses XGBoost as a two-level nonlinear ensemble method to combine the deep learning models. Our XGBoost-DL achieves the best forecasting performance in the comparison, outperforming the best non-deep learning model SARIMA, the best deep learning model Informer, and NASA's forecast. Our XGBoost-DL forecasts a peak sunspot number of 133.47 in May 2025 for Solar Cycle 25 and 164.62 in November 2035 for Solar Cycle 26, similar to but later than NASA's forecasts of 137.7 in October 2024 and 161.2 in December 2034. An open-source Python package of our XGBoost-DL for sunspot number prediction is available at https://github.com/yd1008/ts_ensemble_sunspot.
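
The two-level stacking idea can be illustrated in a few lines. Below is a hedged sketch with toy base forecasters standing in for the trained SARIMA/Informer-style models; the real pipeline is the package linked above.

```python
import numpy as np
from xgboost import XGBRegressor

# Two-level stacking in the spirit of XGBoost-DL: base forecasters produce
# sunspot predictions, and XGBoost learns a nonlinear combination of them.
# The two "base models" here are toy stand-ins for trained forecasters.
rng = np.random.default_rng(0)
t = np.arange(1000)                                  # months
y = 80 + 60 * np.sin(2 * np.pi * t / 132) + rng.normal(0, 10, t.size)  # ~11-yr cycle

base_forecasts = np.column_stack([
    np.roll(y, 1),                                   # naive lag-1 forecaster
    np.convolve(y, np.ones(12) / 12, mode="same"),   # moving-average forecaster
])

train, test = slice(12, 800), slice(800, 1000)
meta = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
meta.fit(base_forecasts[train], y[train])            # level-two ensemble
pred = meta.predict(base_forecasts[test])
print("ensemble RMSE:", np.sqrt(np.mean((pred - y[test]) ** 2)))
```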

Big Data and Machine Learning in Oncology

Wei, P., & Shu, H. (n.d.). In The MD Anderson Manual of Medical Oncology, 4th Edition.

Publication year

2022

CDPA: Common and distinctive pattern analysis between high-dimensional datasets

Shu, H., & Qu, Z. (n.d.).

Publication year

2022

Journal title

Electronic Journal of Statistics

Volume

16

Issue

1

Page(s)

2475-2517
Abstract
A representative model in integrative analysis of two high-dimensional correlated datasets is to decompose each data matrix into a low-rank common matrix generated by latent factors shared across datasets, a low-rank distinctive matrix corresponding to each dataset, and an additive noise matrix. Existing decomposition methods claim that their common matrices capture the common pattern of the two datasets. However, their so-called common pattern only denotes the common latent factors but ignores the common pattern between the two coefficient matrices of these common latent factors. We propose a new unsupervised learning method, called the common and distinctive pattern analysis (CDPA), which appropriately defines the two types of data patterns by further incorporating the common and distinctive patterns of the coefficient matrices. A consistent estimation approach is developed for high-dimensional settings, and shows reasonably good finite-sample performance in simulations. Our simulation studies and real data analysis corroborate that the proposed CDPA can provide better characterization of common and distinctive patterns and thereby benefit data mining.
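
In generic notation (the symbols below are chosen for illustration and may differ from the paper's), the decomposition model that CDPA starts from can be written as

```latex
Y_k \;=\; \underbrace{Z B_k^{\top}}_{\text{common matrix}}
      \;+\; \underbrace{D_k}_{\text{distinctive matrix}}
      \;+\; \underbrace{E_k}_{\text{noise}},
\qquad k = 1, 2,
```

where Z collects the latent factors shared across the two datasets and B_k is the coefficient matrix of those factors for dataset k. The abstract's point is that the common pattern should also reflect what B_1 and B_2 have in common, not only the shared factors Z.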

D-GCCA: Decomposition-based Generalized Canonical Correlation Analysis for Multi-view High-dimensional Data.

Shu, H., Qu, Z., & Zhu, H. (n.d.).

Publication year

2022

Journal title

Journal of Machine Learning Research

Volume

23
Abstract
Modern biomedical studies often collect multi-view data, that is, multiple types of data measured on the same set of objects. A popular model in high-dimensional multi-view data analysis is to decompose each view's data matrix into a low-rank common-source matrix generated by latent factors common across all data views, a low-rank distinctive-source matrix corresponding to each view, and an additive noise matrix. We propose a novel decomposition method for this model, called decomposition-based generalized canonical correlation analysis (D-GCCA). The D-GCCA rigorously defines the decomposition on the L2 space of random variables in contrast to the Euclidean dot product space used by most existing methods, thereby being able to provide the estimation consistency for the low-rank matrix recovery. Moreover, to well calibrate common latent factors, we impose a desirable orthogonality constraint on distinctive latent factors. Existing methods, however, inadequately consider such orthogonality and may thus suffer from substantial loss of undetected common-source variation. Our D-GCCA takes one step further than generalized canonical correlation analysis by separating common and distinctive components among canonical variables, while enjoying an appealing interpretation from the perspective of principal component analysis. Furthermore, we propose to use the variable-level proportion of signal variance explained by common or distinctive latent factors for selecting the variables most influenced. Consistent estimators of our D-GCCA method are established with good finite-sample numerical performance, and have closed-form expressions leading to efficient computation especially for large-scale data. The superiority of D-GCCA over state-of-the-art methods is also corroborated in simulations and real-world data examples.
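
As a small illustration of the decomposition model (not the D-GCCA estimator), the NumPy sketch below generates two views from shared and view-specific latent factors and checks that the shared factors surface as large leading canonical correlations between the views.

```python
import numpy as np

# Toy data generated from the multi-view model described in the abstract:
# each view = common-source matrix (shared latent factors) + distinctive-source
# matrix (view-specific factors) + noise. This sketches the model D-GCCA
# decomposes; it is NOT the D-GCCA estimator itself.
rng = np.random.default_rng(1)
n, p1, p2 = 200, 50, 40
z = rng.normal(size=(n, 2))                          # factors common to both views
z1, z2 = rng.normal(size=(n, 3)), rng.normal(size=(n, 3))  # distinctive factors

Y1 = z @ rng.normal(size=(2, p1)) + z1 @ rng.normal(size=(3, p1)) + 0.5 * rng.normal(size=(n, p1))
Y2 = z @ rng.normal(size=(2, p2)) + z2 @ rng.normal(size=(3, p2)) + 0.5 * rng.normal(size=(n, p2))

def leading_cancor(A, B, k=4, r=10):
    """Canonical correlations between the top-r principal subspaces of two
    views -- a crude diagnostic for shared signal, not a consistent estimator."""
    Ua = np.linalg.svd(A - A.mean(0), full_matrices=False)[0][:, :r]
    Ub = np.linalg.svd(B - B.mean(0), full_matrices=False)[0][:, :r]
    return np.linalg.svd(Ua.T @ Ub)[1][:k]           # cosines of principal angles

print(leading_cancor(Y1, Y2))    # roughly: two values near 1 (common), rest smaller
```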

A deep learning approach to re-create raw full-field digital mammograms for breast density and texture analysis

Shu, H., Chiang, T., Wei, P., Do, K. A., Lesslie, M. D., Cohen, E. O., Srinivasan, A., Moseley, T. W., Chang Sen, L. Q., Leung, J. W., Dennison, J. B., Hanash, S. M., & Weaver, O. O. (n.d.).

Publication year

2021

Journal title

Radiology: Artificial Intelligence

Volume

3

Issue

4
Abstract
Purpose: To develop a computational approach to re-create rarely stored for-processing (raw) digital mammograms from routinely stored for-presentation (processed) mammograms. Materials and Methods: In this retrospective study, pairs of raw and processed mammograms collected in 884 women (mean age, 57 years ± 10 [standard deviation]; 3713 mammograms) from October 5, 2017, to August 1, 2018, were examined. Mammograms were split into 3088 for training and 625 for testing. A deep learning approach based on a U-Net convolutional network and kernel regression was developed to estimate the raw images. The estimated raw images were compared with the originals by four image error and similarity metrics, breast density calculations, and 29 widely used texture features. Results: In the testing dataset, the estimated raw images had a small normalized mean absolute error (0.022 ± 0.015), scaled mean absolute error (0.134 ± 0.078), and mean absolute percentage error (0.115 ± 0.059), and a high structural similarity index (0.986 ± 0.007) for the breast portion compared with the original raw images. The estimated and original raw images had a strong correlation in breast density percentage (Pearson r = 0.946) and strong agreement in breast density grade (Cohen κ = 0.875). The estimated images had satisfactory correlations with the originals in 23 texture features (Pearson r ≥ 0.503 or Spearman r ≥ 0.705) and were well complemented by processed images for the other six features. Conclusion: This deep learning approach performed well in re-creating raw mammograms, with strong agreement in four image evaluation metrics, breast density, and the majority of the 29 widely used texture features.

(TS)2WM: Tumor Segmentation and Tract Statistics for Assessing White Matter Integrity with Applications to Glioblastoma Patients

Zhong, L., Li, T., Shu, H., Huang, C., Michael Johnson, J., Schomer, D. F., Liu, H. L., Feng, Q., Yang, W., & Zhu, H. (n.d.).

Publication year

2020

Journal title

NeuroImage

Volume

223
Abstract
Glioblastoma (GBM) is the most aggressive white matter (WM)-invasive cerebral primary neoplasm. Due to its inherently heterogeneous appearance and shape, previous studies pursued either the segmentation precision of the tumors or qualitative analysis of the impact of brain tumors on WM integrity with manual delineation of tumors. This paper aims to develop a comprehensive analytical pipeline, called (TS)2WM, to integrate both superior brain tumor segmentation and assessment of the impact of GBM tumors on WM integrity via tumor segmentation and tract statistics using the diffusion tensor imaging (DTI) technique. (TS)2WM consists of three components: (i) a dilated densely connected convolutional network (D2C2N) for automatically segmenting GBM tumors; (ii) a modified structural connectome processing pipeline to characterize the connectivity pattern of WM bundles; and (iii) a multivariate analysis to delineate the local and global associations between different DTI-related measurements and clinical variables on both brain tumors and language-related regions of interest. Among these, the proposed D2C2N model achieves competitive tumor segmentation accuracy compared with many state-of-the-art tumor segmentation methods. Significant differences in various DTI-related measurements at the streamline, weighted network, and binary network levels (e.g., diffusion properties along major fiber bundles) were found in tumor-related, language-related, and hand motor-related brain regions in 62 GBM patients as compared to healthy subjects from the Human Connectome Project.

D-CCA: A Decomposition-Based Canonical Correlation Analysis for High-Dimensional Datasets

Shu, H., Wang, X., & Zhu, H. (n.d.).

Publication year

2020

Journal title

Journal of the American Statistical Association

Volume

115

Issue

529

Page(s)

292-306
Abstract
A typical approach to the joint analysis of two high-dimensional datasets is to decompose each data matrix into three parts: a low-rank common matrix that captures the shared information across datasets, a low-rank distinctive matrix that characterizes the individual information within a single dataset, and an additive noise matrix. Existing decomposition methods often focus on the orthogonality between the common and distinctive matrices, but inadequately consider the more necessary orthogonal relationship between the two distinctive matrices. The latter guarantees that no more shared information is extractable from the distinctive matrices. We propose decomposition-based canonical correlation analysis (D-CCA), a novel decomposition method that defines the common and distinctive matrices from the L2 space of random variables rather than the conventionally used Euclidean space, with a careful construction of the orthogonal relationship between distinctive matrices. D-CCA represents a natural generalization of the traditional canonical correlation analysis. The proposed estimators of common and distinctive matrices are shown to be consistent and have reasonably better performance than some state-of-the-art methods in both simulated data and the real data analysis of breast cancer data obtained from The Cancer Genome Atlas. Supplementary materials for this article are available online.
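
In the same generic notation as the CDPA entry above (again illustrative, not the paper's exact symbols), the orthogonality the abstract emphasizes can be stated as follows: beyond decomposing each dataset as Y_k = C_k + D_k + E_k, the two distinctive matrices must satisfy

```latex
\bigl\langle d^{(1)}_i,\; d^{(2)}_j \bigr\rangle_{L^2} \;=\; 0
\qquad \text{for every row } i \text{ of } D_1 \text{ and every row } j \text{ of } D_2,
```

with the inner product taken in the L2 space of random variables rather than the Euclidean dot product space, so that no residual shared signal can be extracted from the distinctive parts.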

Assessment of network module identification across complex diseases

Choobdar, S., et al. (n.d.).

Publication year

2019

Journal title

Nature Methods

Volume

16

Issue

9

Page(s)

843-852
Abstract
Many bioinformatics methods have been proposed for reducing the complexity of large gene or protein networks into relevant subnetworks or modules. Yet, how such methods compare to each other in terms of their ability to identify disease-relevant modules in different types of network remains poorly understood. We launched the ‘Disease Module Identification DREAM Challenge’, an open competition to comprehensively assess module identification methods across diverse protein–protein interaction, signaling, gene co-expression, homology and cancer-gene networks. Predicted network modules were tested for association with complex traits and diseases using a unique collection of 180 genome-wide association studies. Our robust assessment of 75 module identification methods reveals top-performing algorithms, which recover complementary trait-associated modules. We find that most of these modules correspond to core disease-relevant pathways, which often comprise therapeutic targets. This community challenge establishes biologically interpretable benchmarks, tools and guidelines for molecular network analysis to study human disease biology.

Estimation of large covariance and precision matrices from temporally dependent observations

Shu, H., & Nan, B. (n.d.).

Publication year

2019

Journal title

Annals of Statistics

Volume

47

Issue

3

Page(s)

1321-1350
Abstract
We consider the estimation of large covariance and precision matrices from high-dimensional sub-Gaussian or heavier-tailed observations with slowly decaying temporal dependence. The temporal dependence is allowed to be long-range, with longer memory than considered in the current literature. We show that several commonly used methods for independent observations can be applied to temporally dependent data. In particular, rates of convergence are obtained for the generalized thresholding estimation of covariance and correlation matrices, and for the constrained ℓ1 minimization and the ℓ1-penalized likelihood estimation of the precision matrix. Properties of sparsistency and sign-consistency are also established. A gap-block cross-validation method is proposed for tuning parameter selection, which performs well in simulations. As a motivating example, we study brain functional connectivity using resting-state fMRI time series data with long-range temporal dependence.
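
A minimal NumPy sketch of the generalized thresholding estimator, using the soft-thresholding rule as one member of that family; the gap-block cross-validation loop for choosing the tuning parameter is only described in comments, not implemented.

```python
import numpy as np

def soft_threshold_cov(X, lam):
    """Generalized thresholding of the sample covariance (soft-thresholding
    variant): shrink off-diagonal entries toward zero, keeping the diagonal.
    The paper obtains convergence rates for this family of estimators under
    slowly decaying, possibly long-range temporal dependence."""
    S = np.cov(X, rowvar=False)                      # X: (T, p) time series
    out = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    np.fill_diagonal(out, np.diag(S))                # diagonal left unthresholded
    return out

# The tuning parameter lam would be chosen by the paper's gap-block
# cross-validation: validation blocks are separated from training blocks
# by a gap so the temporal dependence between them is weak (loop omitted).
X = np.random.default_rng(2).normal(size=(500, 20))
Sigma_hat = soft_threshold_cov(X, lam=0.1)
print("fraction of nonzero entries:", np.mean(Sigma_hat != 0))
```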

Multiple testing for neuroimaging via hidden Markov random field

Shu, H., Nan, B., & Koeppe, R. (n.d.).

Publication year

2015

Journal title

Biometrics

Volume

71

Issue

3

Page(s)

741-750
Abstract
Traditional voxel-level multiple testing procedures in neuroimaging, mostly p-value based, often ignore the spatial correlations among neighboring voxels and thus suffer from substantial loss of power. We extend the local-significance-index based procedure originally developed for the hidden Markov chain models, which aims to minimize the false nondiscovery rate subject to a constraint on the false discovery rate, to three-dimensional neuroimaging data using a hidden Markov random field model. A generalized expectation-maximization algorithm for maximizing the penalized likelihood is proposed for estimating the model parameters. Extensive simulations show that the proposed approach is more powerful than conventional false discovery rate procedures. We apply the method to the comparison between mild cognitive impairment, a disease status with increased risk of developing Alzheimer's or another dementia, and normal controls in the FDG-PET imaging study of the Alzheimer's Disease Neuroimaging Initiative.
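
The local-significance-index decision rule at the heart of the procedure is short enough to sketch. Below is a hedged NumPy illustration, assuming the LIS values have already been computed from the fitted hidden Markov random field (that computation is not shown).

```python
import numpy as np

def lis_threshold(lis, alpha=0.05):
    """Local-significance-index (LIS) thresholding of the kind the paper
    extends to hidden Markov random fields: reject the hypotheses with the
    smallest LIS values while their running average stays below alpha. In
    practice `lis` comes from the fitted HMRF via the generalized EM
    algorithm (not shown here)."""
    order = np.argsort(lis)                    # ascending = most significant first
    running_avg = np.cumsum(lis[order]) / np.arange(1, lis.size + 1)
    k = np.searchsorted(running_avg, alpha, side="right")  # largest k with avg <= alpha
    reject = np.zeros(lis.size, dtype=bool)
    reject[order[:k]] = True                   # voxels declared non-null
    return reject

lis = np.random.default_rng(3).uniform(size=10000)   # toy LIS scores
print(lis_threshold(lis).sum(), "of 10000 voxels rejected at FDR level 0.05")
```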

Contact

hai.shu@nyu.edu
708 Broadway
New York, NY 10003