2nd Workshop on Big Biomedical Data in Deep Learning Models (B2D2LM)

In conjunction with the 14th IEEE/ACM International Conference on Utility and Cloud Computing (UCC) and the 8th IEEE/ACM International Conference on Big Data Computing, Applications and Technologies (BDCAT)


Due to the proliferation of biomedical imaging modalities such as Photoacoustic Tomography, Computed Tomography (CT), Optical Microscopy and Tomography, Single Photon Emission Computed Tomography (SPECT), Magnetic Resonance (MR) Imaging, Ultrasound, Positron Emission Tomography (PET), Magnetic Particle Imaging, EEG/MEG, Electron Tomography, and Atomic Force Microscopy, massive amounts of biomedical data are generated every day. How can we use such big data to build better health profiles and better predictive models, so that we can diagnose and treat diseases more effectively and improve people's lives? In recent years, many successful learning methods, such as deep learning, have been proposed to answer this crucial question, which has social, economic, and legal implications.


Several significant problems plague the processing of big biomedical data, including data heterogeneity, data incompleteness, data imbalance, and high dimensionality. Worse still, many data sets exhibit several of these problems at once. Most existing learning methods can handle only homogeneous, complete, class-balanced, moderate-dimensional data. Therefore, data preprocessing techniques, including data representation learning, dimensionality reduction, and missing-value imputation, should be developed to enhance the applicability of deep learning methods in real-world biomedical applications.


This workshop aims to provide a forum for a diverse but complementary set of contributions that demonstrate new developments and applications addressing the issues outlined above in the processing of big biomedical data. We also welcome successful applications of new methods, including but not limited to the processing, analysis, and knowledge discovery of big biomedical data.


Link for the main conference:


Link to submission:


Topics of interest include, but are not limited to:
- Feature extraction by deep learning or sparse codes for biomedical data
- Data representation of biomedical data
- Dimensionality reduction techniques (subspace learning, feature selection, sparse screening, feature screening, feature merging, etc.) for biomedical data
- Information retrieval for biomedical data
- Kernel-based learning for multi-source biomedical data
- Incremental learning or online learning for biomedical data
- Data fusion for multi-source biomedical data
- Missing data imputation for multi-source biomedical data
- Data management and mining in biomedical data
- Web search and meta-search for biomedical data
- Biomedical data quality assessment
- Transfer learning of biomedical data


Proceedings will be published by ACM, and papers must follow the ACM format this year.


Prof. Yu-Dong Zhang, University of Leicester, yudongzhang@ieee.org
