Hai Shu

Assistant Professor

Professional overview

Dr. Hai Shu is an Assistant Professor in the Department of Biostatistics at New York University. He earned a Ph.D. in Biostatistics from the University of Michigan and a B.S. in Information and Computational Science from Harbin Institute of Technology in China.

His research interests include high-dimensional data analysis (especially data integration), machine and deep learning, medical image analysis (e.g., PET, MRI, and mammography), and their applications to Alzheimer's disease, brain tumors, and breast cancer. He has published in top-tier journals and conferences, including The Annals of Statistics, the Journal of the American Statistical Association, Biometrics, and the AAAI Conference on Artificial Intelligence. He has also served as a reviewer on related topics for the Journal of the American Statistical Association, Statistica Sinica, and the International Joint Conference on Artificial Intelligence, among others.

Prior to joining NYU, Dr. Hai Shu was a Postdoctoral Fellow in the Department of Biostatistics at The University of Texas MD Anderson Cancer Center. 

View Dr. Hai Shu's website at https://wp.nyu.edu/haishu

Education

Postdoctoral Fellow, Department of Biostatistics, The University of Texas MD Anderson Cancer Center, USA
Ph.D. in Biostatistics, Department of Biostatistics, University of Michigan, Ann Arbor, USA
M.S. in Biostatistics, Department of Biostatistics, University of Michigan, Ann Arbor, USA
B.S. in Information and Computational Science, Department of Mathematics, Harbin Institute of Technology (哈尔滨工业大学), China

Areas of research and study

Alzheimer’s disease
Brain tumors
Breast cancer
Deep learning
High-dimensional data analysis/integration
Machine learning
Medical image analysis
Spatial/temporal data analysis

Publications

A deep learning approach to re-create raw full-field digital mammograms for breast density and texture analysis

Shu, H., Chiang, T., Wei, P., Do, K. A., Lesslie, M. D., Cohen, E. O., Srinivasan, A., Moseley, T. W., Chang Sen, L. Q., Leung, J. W., Dennison, J. B., Hanash, S. M., & Weaver, O. O.

Publication year

2021

Journal title

Radiology: Artificial Intelligence

Volume

3

Issue

4
Abstract

Purpose: To develop a computational approach to re-create rarely stored for-processing (raw) digital mammograms from routinely stored for-presentation (processed) mammograms. Materials and Methods: In this retrospective study, pairs of raw and processed mammograms collected in 884 women (mean age, 57 years ± 10 [standard deviation]; 3713 mammograms) from October 5, 2017, to August 1, 2018, were examined. Mammograms were split into 3088 for training and 625 for testing. A deep learning approach based on a U-Net convolutional network and kernel regression was developed to estimate the raw images. The estimated raw images were compared with the originals by four image error and similarity metrics, breast density calculations, and 29 widely used texture features. Results: In the testing dataset, the estimated raw images had small normalized mean absolute error (0.022 ± 0.015), scaled mean absolute error (0.134 ± 0.078), and mean absolute percentage error (0.115 ± 0.059), and a high structural similarity index (0.986 ± 0.007) for the breast portion compared with the original raw images. The estimated and original raw images had a strong correlation in breast density percentage (Pearson r = 0.946) and a strong agreement in breast density grade (Cohen κ = 0.875). The estimated images had satisfactory correlations with the originals in 23 texture features (Pearson r ≥ 0.503 or Spearman r ≥ 0.705) and were well complemented by processed images for the other six features. Conclusion: This deep learning approach performed well in re-creating raw mammograms with strong agreement in four image evaluation metrics, breast density, and the majority of 29 widely used texture features.
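
The image-comparison metrics above (normalized mean absolute error and the structural similarity index) are standard and easy to reproduce. Below is a minimal Python sketch of the two, assuming NumPy image arrays and a boolean breast mask; it is illustrative only and not the authors' code, and the function names are my own.

# Minimal sketch (not the paper's code) of two of the evaluation metrics
# described above, assuming 2D float arrays and a boolean breast mask.
import numpy as np
from skimage.metrics import structural_similarity

def normalized_mae(original, estimate, mask):
    """Mean absolute error over the masked region, scaled by the
    original image's intensity range within that region (one common
    normalization; the paper may define it differently)."""
    orig = original[mask]
    return np.abs(orig - estimate[mask]).mean() / (orig.max() - orig.min())

def ssim_index(original, estimate):
    """Structural similarity index between the two images."""
    data_range = float(original.max() - original.min())
    return structural_similarity(original, estimate, data_range=data_range)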

(TS)²WM: Tumor Segmentation and Tract Statistics for Assessing White Matter Integrity with Applications to Glioblastoma Patients

Zhong, L., Li, T., Shu, H., Huang, C., Michael Johnson, J., Schomer, D. F., Liu, H. L., Feng, Q., Yang, W., & Zhu, H.

Publication year

2020

Journal title

NeuroImage

Volume

223
Abstract

Glioblastoma (GBM) is the most aggressive white matter (WM)-invasive primary cerebral neoplasm. Due to its inherently heterogeneous appearance and shape, previous studies have pursued either precise segmentation of the tumors or qualitative analysis of their impact on WM integrity using manually delineated tumors. This paper develops a comprehensive analytical pipeline, called (TS)²WM, that integrates high-performance brain tumor segmentation with assessment of the impact of GBM tumors on WM integrity via tumor segmentation and tract statistics using diffusion tensor imaging (DTI). (TS)²WM consists of three components: (i) a dilated densely connected convolutional network (D²C²N) for automatically segmenting GBM tumors; (ii) a modified structural connectome processing pipeline to characterize the connectivity pattern of WM bundles; and (iii) a multivariate analysis to delineate the local and global associations between different DTI-related measurements and clinical variables in both brain tumors and language-related regions of interest. The proposed D²C²N model achieves competitive tumor segmentation accuracy compared with many state-of-the-art tumor segmentation methods. Significant differences in various DTI-related measurements at the streamline, weighted-network, and binary-network levels (e.g., diffusion properties along major fiber bundles) were found in tumor-related, language-related, and hand-motor-related brain regions in 62 GBM patients as compared with healthy subjects from the Human Connectome Project.
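
For readers unfamiliar with dilated, densely connected convolutions, the PyTorch sketch below illustrates the general idea behind such a block: each layer sees the concatenation of all earlier feature maps, and increasing dilation rates widen the receptive field without pooling. The layer widths and dilation rates here are my own choices and do not reproduce the paper's D²C²N architecture.

# Illustrative sketch of a dilated, densely connected convolution block;
# channel counts and dilation rates are assumptions, not the paper's.
import torch
import torch.nn as nn

class DilatedDenseBlock(nn.Module):
    def __init__(self, in_channels=32, growth=16, dilations=(1, 2, 4)):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for d in dilations:
            self.layers.append(nn.Sequential(
                # padding == dilation keeps the spatial size unchanged
                nn.Conv2d(channels, growth, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True),
            ))
            channels += growth  # dense connectivity: inputs accumulate

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # each layer consumes the concatenation of all earlier outputs
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)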

D-CCA: A Decomposition-Based Canonical Correlation Analysis for High-Dimensional Datasets

Shu, H., Wang, X., & Zhu, H.

Publication year

2020

Journal title

Journal of the American Statistical Association

Volume

115

Issue

529

Page(s)

292-306
Abstract

A typical approach to the joint analysis of two high-dimensional datasets is to decompose each data matrix into three parts: a low-rank common matrix that captures the shared information across datasets, a low-rank distinctive matrix that characterizes the individual information within a single dataset, and an additive noise matrix. Existing decomposition methods often focus on the orthogonality between the common and distinctive matrices, but inadequately consider the more necessary orthogonal relationship between the two distinctive matrices. The latter guarantees that no more shared information is extractable from the distinctive matrices. We propose decomposition-based canonical correlation analysis (D-CCA), a novel decomposition method that defines the common and distinctive matrices from the L² space of random variables rather than the conventionally used Euclidean space, with a careful construction of the orthogonal relationship between the distinctive matrices. D-CCA represents a natural generalization of traditional canonical correlation analysis. The proposed estimators of the common and distinctive matrices are shown to be consistent and to perform better than some state-of-the-art methods on both simulated data and breast cancer data obtained from The Cancer Genome Atlas.
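
Since D-CCA is described as a natural generalization of classical canonical correlation analysis, a compact NumPy sketch of classical CCA may help as a point of reference: whiten each block's covariance, then read the canonical correlations off an SVD of the whitened cross-covariance. This is illustrative only and is not the D-CCA algorithm itself.

# Classical CCA via SVD of the whitened cross-covariance (illustrative
# reference point; not the D-CCA method described above).
import numpy as np

def cca(X, Y, reg=1e-8):
    """Canonical correlations between two sample matrices (rows = samples)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])  # small ridge for stability
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    # Whitening via Cholesky factors: Wx Sxx Wx.T = I, similarly for Y.
    Wx = np.linalg.inv(np.linalg.cholesky(Sxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Syy))
    # Singular values of the whitened cross-covariance are the correlations.
    _, corrs, _ = np.linalg.svd(Wx @ Sxy @ Wy.T)
    return np.clip(corrs, 0.0, 1.0)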

Assessment of network module identification across complex diseases

Publication year

2019

Journal title

Nature Methods

Volume

16

Issue

9

Page(s)

843-852
Abstract

Many bioinformatics methods have been proposed for reducing the complexity of large gene or protein networks into relevant subnetworks or modules. Yet, how such methods compare to each other in terms of their ability to identify disease-relevant modules in different types of network remains poorly understood. We launched the ‘Disease Module Identification DREAM Challenge’, an open competition to comprehensively assess module identification methods across diverse protein–protein interaction, signaling, gene co-expression, homology and cancer-gene networks. Predicted network modules were tested for association with complex traits and diseases using a unique collection of 180 genome-wide association studies. Our robust assessment of 75 module identification methods reveals top-performing algorithms, which recover complementary trait-associated modules. We find that most of these modules correspond to core disease-relevant pathways, which often comprise therapeutic targets. This community challenge establishes biologically interpretable benchmarks, tools and guidelines for molecular network analysis to study human disease biology.
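
As a toy illustration of what a module identification method does (this is not the challenge's benchmarking pipeline, and the graph here is a small stand-in for a gene or protein network), the snippet below runs modularity-based community detection with networkx.

# One simple example of the kind of module identification method assessed
# in the challenge: modularity-based community detection (illustrative only).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()  # stand-in for a molecular interaction network
modules = greedy_modularity_communities(G)
for i, module in enumerate(modules):
    print(f"module {i}: {sorted(module)}")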

Estimation of large covariance and precision matrices from temporally dependent observations

Shu, H., & Nan, B.

Publication year

2019

Journal title

The Annals of Statistics

Volume

47

Issue

3

Page(s)

1321-1350
Abstract

We consider the estimation of large covariance and precision matrices from high-dimensional sub-Gaussian or heavier-tailed observations with slowly decaying temporal dependence. The temporal dependence is allowed to be long-range, with longer memory than typically considered in the existing literature. We show that several commonly used methods for independent observations can be applied to the temporally dependent data. In particular, rates of convergence are obtained for the generalized thresholding estimation of covariance and correlation matrices, and for the constrained ℓ1-minimization and ℓ1-penalized likelihood estimation of the precision matrix. Properties of sparsistency and sign-consistency are also established. A gap-block cross-validation method is proposed for tuning parameter selection and performs well in simulations. As a motivating example, we study brain functional connectivity using resting-state fMRI time series data with long-range temporal dependence.
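
The generalized thresholding estimator mentioned above is simple to sketch. The NumPy snippet below applies one member of that family, soft thresholding, to the off-diagonal entries of a sample covariance matrix; choosing the threshold by the paper's gap-block cross-validation is not shown.

# Minimal sketch of soft thresholding of a sample covariance matrix,
# one instance of the generalized thresholding estimators studied above.
# Tuning the threshold (e.g., by gap-block cross-validation) is omitted.
import numpy as np

def soft_threshold_cov(X, lam):
    """Soft-threshold the off-diagonal entries of the sample covariance."""
    S = np.cov(X, rowvar=False)
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    np.fill_diagonal(T, np.diag(S))  # diagonal entries are left untouched
    return T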

Multiple testing for neuroimaging via hidden Markov random field

Shu, H., Nan, B., & Koeppe, R.

Publication year

2015

Journal title

Biometrics

Volume

71

Issue

3

Page(s)

741-750
Abstract

Traditional voxel-level multiple testing procedures in neuroimaging, mostly p-value based, often ignore the spatial correlations among neighboring voxels and thus suffer from substantial loss of power. We extend the local-significance-index based procedure originally developed for hidden Markov chain models, which aims to minimize the false nondiscovery rate subject to a constraint on the false discovery rate, to three-dimensional neuroimaging data using a hidden Markov random field model. A generalized expectation-maximization algorithm for maximizing the penalized likelihood is proposed for estimating the model parameters. Extensive simulations show that the proposed approach is more powerful than conventional false discovery rate procedures. We apply the method to the comparison between mild cognitive impairment, a disease status with increased risk of developing Alzheimer's or another dementia, and normal controls in the FDG-PET imaging study of the Alzheimer's Disease Neuroimaging Initiative.
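
The rejection step of a local-significance-index (LIS) procedure is easy to illustrate: sort the LIS values (posterior probabilities of being null) and reject hypotheses while the running average of the sorted values stays at or below the target false discovery rate level. The NumPy sketch below shows only this step, assuming the LIS values are given; estimating them from the hidden Markov random field model is the substantive part of the paper and is not shown.

# Thresholding step of an LIS-based procedure, assuming LIS values
# (posterior null probabilities) have already been estimated elsewhere.
import numpy as np

def lis_rejections(lis, alpha=0.05):
    """Reject the k hypotheses with smallest LIS, where k is the largest
    count whose running average of sorted LIS is at most alpha."""
    order = np.argsort(lis)
    running_avg = np.cumsum(lis[order]) / np.arange(1, lis.size + 1)
    k = np.searchsorted(running_avg, alpha, side="right")
    reject = np.zeros(lis.size, dtype=bool)
    reject[order[:k]] = True
    return reject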

Contact

hs120@nyu.edu
708 Broadway, 7th Floor
New York, NY 10003