Shadi Albarqouni is a Palestinian-German Professor of Computational Medical Imaging Research. He received his B.Sc. and M.Sc. in Electrical Engineering from the Islamic University of Gaza, Palestine, in 2005 and 2010, respectively. In 2012, he received a DAAD research grant to pursue his Ph.D. at the Chair for Computer Aided Medical Procedures (CAMP), Technical University of Munich (TUM), Germany. During his Ph.D., Albarqouni worked with Prof. Nassir Navab on developing machine learning algorithms that handle noisy labels, e.g., from crowdsourcing, in medical imaging. His AggNet paper, published in the Special Issue on Deep Learning of the IEEE Transactions on Medical Imaging (IF: 10.048), was among the first on deep learning for medical imaging and was featured as a top-downloaded article on IEEE Xplore for a couple of years.
Shortly before receiving his Ph.D. in Computer Science (summa cum laude) in 2017, Albarqouni worked as a Senior Research Scientist & Team Lead at CAMP, heading the Medical Image Analysis (MedIA) team. Together with his team, he addressed common challenges arising from the nature of medical data, namely heterogeneity, severe class imbalance, scarce annotated data, inter-/intra-scanner variability (domain shift), and inter-/intra-observer disagreement (noisy annotations). In 2019, Albarqouni received the prestigious P.R.I.M.E. fellowship for one year of international mobility, during which he worked as a Visiting Scientist at the Department of Information Technology and Electrical Engineering (D-ITET) at ETH Zürich, Switzerland. There he worked with Prof. Ender Konukoglu on Modeling Uncertainty in Medical Imaging, in particular the uncertainty associated with inter-/intra-rater variability. Afterwards, Albarqouni worked as a Visiting Scientist with Prof. Daniel Rueckert at the Department of Computing at Imperial College London, United Kingdom.
In Nov. 2020, Albarqouni was appointed AI Young Investigator Group Leader at Helmholtz AI. The aim of Albarqouni's lab is to develop innovative deep Federated Learning algorithms that can distill and share knowledge among AI agents in a robust and privacy-preserving fashion. Since Jan. 2022, Albarqouni has been a W2 Professor of Computational Medical Imaging Research at the Faculty of Medicine, University of Bonn.
Albarqouni has more than 100 peer-reviewed publications in both Medical Image Computing and Computer Vision, published in high-impact journals and top-tier conferences. He serves as a reviewer for many journals, e.g., IEEE TPAMI, MedIA, IEEE TMI, IEEE JBHI, IJCARS, and Pattern Recognition, and top-tier conferences, e.g., ECCV, MICCAI, MIDL, BMVC, IPCAI, and ISBI, among others. He is also an active member of the MICCAI, BMVA, IEEE EMBS, IEEE CS, and ESR societies. Recently, Albarqouni was elected a member of the European Laboratory for Learning and Intelligent Systems (ELLIS), the Arab German Young Academy (AGYA), and the Higher Council for Innovation and Excellence in Diaspora (HCIE). Since 2015, he has served as a PC member for several MICCAI workshops, e.g., COMPAY, DART, DCL, and FAIR, among others. Since 2019, Albarqouni has served as an Area Chair for Advanced Machine Learning Theory at MICCAI. Recently, he has served as a Program Co-Chair at MIDL'22 in Switzerland, and as an Organizing Committee Member at ISBI'22 in India and MICCAI'24 in Morocco.
His current research interests include Interpretable ML, Robustness, Uncertainty, and Federated Learning. He is also interested in Entrepreneurship and Startups for Innovative Medical Solutions with limited resources.
Postdoc, 2020
Imperial College London, United Kingdom
Postdoc, 2019
ETH Zürich, Switzerland
Ph.D. in Computer Science, 2017
Technical University of Munich, Germany
M.Sc. in Electrical Engineering, 2010
Islamic University of Gaza, Palestine
B.Sc. in Electrical Engineering, 2005
Islamic University of Gaza, Palestine
Academic and Professional Experience
I have just been appointed W2 Professor of Computational Medical Imaging Research at the University Hospital Bonn, University of Bonn. The lab will be located at the Department of Radiology, University Hospital Bonn.
Research:
Computational Medical Imaging: We will continue our research lines to develop fully automated, highly accurate solutions that save expert labor and effort and mitigate the key challenges in medical imaging, i.e., i) the scarcity of annotated data, ii) low inter-/intra-observer agreement, iii) high class imbalance, iv) inter-/intra-scanner variability, and v) domain shift.
Federated Learning in Healthcare: We will focus our research on developing innovative deep Federated Learning algorithms that can distill and share knowledge among AI agents in a robust and privacy-preserving fashion. Research topics include, but are not limited to, i) handling distributed DL models under data heterogeneity, including non-i.i.d. data and domain shifts, ii) developing explainability and quality-control tools, and iii) robustness to model poisoning.
Affordable AI and Healthcare: In addition, we are interested in developing affordable AI solutions suitable for the poor-quality data generated in low-infrastructure settings and point-of-care diagnosis.
I will be leading the Albarqouni Lab, focusing our research on developing innovative deep Federated Learning algorithms that can distill and share knowledge among AI agents in a robust and privacy-preserving fashion. The lab will be hosted at Helmholtz AI and the Department of Computational Health at Helmholtz Center Munich, giving us access to huge databases of genetics, microscopy data, and medical imaging, such as the Cooperative Health Research in the Augsburg Region (KORA) and the German National Cohort (NAKO).
Roles:
Research:
Funded Projects:
Community Contribution:
I am affiliated with the Faculty of Informatics and the TUM School of Medicine through the Chair for Artificial Intelligence in Healthcare and Medicine (Prof. Rueckert) and the Chair for Computer Aided Medical Procedures (Prof. Navab).
Teaching:
I worked with Prof. Ender Konukoglu on Modeling Uncertainty in Medical Imaging, in particular the uncertainty associated with inter-/intra-rater variability.
Supervision:
I led the Medical Image Analysis team and worked together with several Ph.D. students on Deep Learning for Medical Applications.
Research: We focused our research on developing fully automated, highly accurate solutions that save expert labor and effort and mitigate the key challenges in medical imaging, i.e., i) the scarcity of annotated data, ii) low inter-/intra-observer agreement, iii) high class imbalance, iv) inter-/intra-scanner variability, and v) domain shift. Our research portfolio can be categorized into Learn to Recognize, Adapt, Learn, Reason and Explain, incorporate prior knowledge, and collaborate with other AI agents.
Teaching:
Fundraising:
Supervision:
My tasks:
I worked as a Research Scientist with Prof. Nassir Navab, Dr. Stefanie Demirci, and Dr. Tobias Lasser, on developing machine learning methods for biomedical imaging. My duties were:
Research:
Medical Image Analysis: Iterative reconstruction methods, Laplacian Graph Regularization, and sparse coding.
Machine Learning: Dictionary Learning, SVM, Random Forests, and Convolutional Neural Networks
Teaching:
Supervision:
I worked as a Network Engineer for two years before being promoted to Head of the IT Department at the hospital. My tasks were:
Funded and Active Projects. Thanks to our great collaborators!
Professional Services and Invited Talks in the last two years
Deep unsupervised representation learning has recently led to new approaches in the field of Unsupervised Anomaly Detection (UAD) in brain MRI. The main principle behind these works is to learn a model of normal anatomy by learning to compress and recover healthy data. This makes it possible to spot abnormal structures from erroneous recoveries of compressed, potentially anomalous samples. The concept is of great interest to the medical image analysis community as it i) removes the need for vast amounts of manually segmented training data—a necessity for and pitfall of current supervised Deep Learning—and ii) theoretically allows the detection of arbitrary, even rare, pathologies which supervised approaches might fail to find. To date, the experimental design of most works hinders a valid comparison, because i) they are evaluated against different datasets and different pathologies, ii) they use different image resolutions, and iii) they use different model architectures with varying complexity. The intent of this work is to establish comparability among recent methods by utilizing a single architecture, a single resolution, and the same dataset(s). Besides providing a ranking of the methods, we also try to answer questions such as i) how many healthy training subjects are needed to model normality and ii) whether the reviewed approaches are also sensitive to domain shift. Further, we identify open challenges and provide suggestions for future community efforts and research directions.
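As a rough illustration of the compress-and-recover principle described above, the sketch below (assuming PyTorch and 2D brain MR slices as single-channel tensors; the architecture, loss, and data are illustrative stand-ins, not the models compared in the paper) trains a small convolutional autoencoder on healthy slices only and scores anomalies by the pixel-wise reconstruction residual.

```python
import torch
import torch.nn as nn

class SmallAE(nn.Module):
    """Toy convolutional autoencoder; real UAD models are larger (VAEs, GAN-based, ...)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_on_healthy(model, healthy_loader, epochs=10, lr=1e-3):
    """Learn to reconstruct healthy anatomy only."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x in healthy_loader:          # x: (B, 1, 128, 128), intensities in [0, 1]
            loss = nn.functional.mse_loss(model(x), x)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

@torch.no_grad()
def anomaly_map(model, x):
    """Pixel-wise residual: large errors hint at structures the model never saw (lesions)."""
    return (x - model(x)).abs()

if __name__ == "__main__":
    model = SmallAE()
    healthy = [torch.rand(8, 1, 128, 128) for _ in range(4)]   # stand-in for healthy slices
    train_on_healthy(model, healthy, epochs=1)
    test_slice = torch.rand(1, 1, 128, 128)                    # potentially anomalous slice
    print(anomaly_map(model, test_slice).shape)                # -> torch.Size([1, 1, 128, 128])
```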
Recent advances in Deep Learning (DL) and the increased use of brain MRI have provided a great opportunity and interest in automated anomaly segmentation to support human interpretation and improve clinical workflow. However, medical imaging data must be curated by trained clinicians, which is time-consuming and expensive. Further, data is often scattered across multiple institutions, with privacy regulations limiting its access. Here, we present FedDis (Federated Disentangled representation learning for unsupervised brain pathology segmentation) to collaboratively train an unsupervised deep convolutional neural network on 1532 healthy MR scans from four different institutions, and evaluate its performance in identifying abnormal brain MRIs including multiple sclerosis (MS), vascular lesions, low-grade tumors (LGG), and high-grade tumors/glioblastoma (HGG/GB) on a total of ~538 scans from 6 different institutions and datasets. To mitigate the statistical heterogeneity between the different institutes, we disentangle the parameter space into global (shape) and local (appearance) parameters. We train the shape parameters jointly across the four institutes to learn a global model of healthy anatomical brain structure. The appearance parameters are trained locally at every institute and allow personalization of the global domain-invariant features with client-specific information, such as scanner or acquisition parameters. We have shown that our collaborative approach, FedDis, improves anomaly segmentation results by 99.74% for MS, 83.33% for vascular lesions, and 40.45% for tumors over locally trained models, without the need for annotations or sharing private local data. We found that FedDis is especially beneficial for clients that have both healthy and anomalous data coming from the same institute, improving their local anomaly detection performance by up to 227% for MS lesions and 77% for brain tumors.
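A minimal sketch of the global/local split described above, assuming a model whose parameter names are tagged as either "shape" (shared and averaged across clients) or "appearance" (kept local); the naming convention, toy architecture, and plain FedAvg-style aggregation are illustrative, not the paper's exact implementation.

```python
import copy
import torch
import torch.nn as nn

class DisentangledAE(nn.Module):
    """Toy model: 'shape_*' modules are shared across clients, 'app_*' modules stay local."""
    def __init__(self):
        super().__init__()
        self.shape_encoder = nn.Linear(256, 64)   # global, domain-invariant structure
        self.app_encoder = nn.Linear(256, 64)     # local, scanner/appearance specific
        self.decoder = nn.Linear(128, 256)

    def forward(self, x):
        z = torch.cat([self.shape_encoder(x), self.app_encoder(x)], dim=-1)
        return self.decoder(z)

def is_shared(name: str) -> bool:
    return name.startswith("shape_")              # only shape parameters leave the client

def federated_round(clients, local_steps=1):
    """One communication round: local training, then averaging over the shared subset only."""
    for model, loader in clients:
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(local_steps):
            for x in loader:
                loss = nn.functional.mse_loss(model(x), x)
                opt.zero_grad()
                loss.backward()
                opt.step()

    # average only the shared (shape) parameters; appearance stays personalized
    avg = {n: torch.zeros_like(p) for n, p in clients[0][0].state_dict().items() if is_shared(n)}
    for model, _ in clients:
        for n, p in model.state_dict().items():
            if is_shared(n):
                avg[n] += p / len(clients)
    for model, _ in clients:
        model.load_state_dict({**model.state_dict(), **copy.deepcopy(avg)})

if __name__ == "__main__":
    clients = [(DisentangledAE(), [torch.rand(8, 256)]) for _ in range(4)]  # 4 institutions
    for _ in range(3):
        federated_round(clients)
```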
Skin cancer is one of the deadliest cancers worldwide, yet its burden can be reduced by early detection. Recent deep learning methods have shown dermatologist-level performance in skin cancer classification. However, this success demands a large amount of centralized data, which is often not available. Federated learning has recently been introduced to train machine learning models in a privacy-preserving distributed fashion, but it demands annotated data at the clients, which is usually expensive and not available, especially in the medical field. To this end, we propose FedPerl, a semi-supervised federated learning method that utilizes peer learning from social sciences and ensemble averaging from committee machines to build communities and encourage their members to learn from each other such that they produce more accurate pseudo labels. We also propose the peer anonymization (PA) technique as a core component of FedPerl. PA preserves privacy and reduces the communication cost while maintaining the performance without additional complexity. We validated our method on 38,000 skin lesion images collected from 4 publicly available datasets. FedPerl achieves superior performance over the baselines and state-of-the-art SSFL, by 15.8% and 1.8%, respectively.
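A rough sketch of the peer pseudo-labelling idea, assuming a set of peer classifiers: the community building and the exact peer anonymization of FedPerl are simplified here to averaging the peers' softmax predictions to produce confident pseudo labels for a client's unlabelled images. Names, thresholds, and the toy classifier are illustrative.

```python
import torch
import torch.nn as nn

def peer_pseudo_labels(peers, unlabeled_x, threshold=0.9):
    """Average peers' softmax outputs ('committee'); keep only confident pseudo labels.

    Averaging over several peers also acts as a simple anonymization step:
    no single peer's prediction is exposed directly.
    """
    with torch.no_grad():
        probs = torch.stack([torch.softmax(p(unlabeled_x), dim=1) for p in peers]).mean(0)
    conf, pseudo = probs.max(dim=1)
    mask = conf >= threshold                       # confidence filtering
    return unlabeled_x[mask], pseudo[mask]

def semi_supervised_step(model, labeled, unlabeled_x, peers, lam=0.5):
    """One local SSL step: supervised loss + pseudo-label loss from the peer committee."""
    x, y = labeled
    px, py = peer_pseudo_labels(peers, unlabeled_x)
    loss = nn.functional.cross_entropy(model(x), y)
    if len(px) > 0:
        loss = loss + lam * nn.functional.cross_entropy(model(px), py)
    return loss

if __name__ == "__main__":
    num_classes = 8                                # e.g., skin lesion categories
    make_clf = lambda: nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes))
    client, peers = make_clf(), [make_clf() for _ in range(3)]
    labeled = (torch.rand(16, 3, 32, 32), torch.randint(0, num_classes, (16,)))
    unlabeled = torch.rand(32, 3, 32, 32)
    opt = torch.optim.Adam(client.parameters(), lr=1e-3)
    loss = semi_supervised_step(client, labeled, unlabeled, peers)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(float(loss))
```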
oking stained images preserving the inter-cellular structures, crucial for the medical experts to perform classification. We achieve better structure preservation by adding auxiliary tasks of segmentation and direct reconstruction. Segmentation enforces that the network learns to generate correct nucleus and cytoplasm shape, while direct reconstruction enforces reliable translation between the matching images across domains. Besides, we build a robust domain agnostic latent space by injecting the target domain label directly to the generator, i.e., bypassing the encoder. It allows the encoder to extract features independently of the target domain and enables an automated domain invariant classification of the white blood cells. We validated our method on a large dataset composed of leukocytes of 24 patients, achieving state-of-the-art performance on both digital staining and classification tasks.
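A minimal sketch of the conditioning scheme mentioned above, assuming an encoder-decoder generator: the one-hot target-stain label is concatenated to the latent code (bypassing the encoder), so the encoder is pushed to produce stain-agnostic features that can also feed a cell classifier. The layer sizes, names, and class counts are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class StainTransferGenerator(nn.Module):
    """Encoder never sees the target domain; the label is injected only into the decoder."""
    def __init__(self, feat_dim=128, num_domains=2, num_classes=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim + num_domains, 3 * 64 * 64), nn.Sigmoid()
        )
        self.classifier = nn.Linear(feat_dim, num_classes)  # white-blood-cell classes

    def forward(self, x, target_domain):
        z = self.encoder(x)                                 # domain-agnostic latent code
        z_cond = torch.cat([z, target_domain], dim=1)       # inject target stain label
        fake = self.decoder(z_cond).view(-1, 3, 64, 64)
        logits = self.classifier(z)                         # classify from the latent only
        return fake, logits

if __name__ == "__main__":
    gen = StainTransferGenerator()
    x = torch.rand(4, 3, 64, 64)                            # source-stain cell images
    target = nn.functional.one_hot(torch.tensor([1, 1, 0, 0]), num_classes=2).float()
    fake, logits = gen(x, target)
    print(fake.shape, logits.shape)                         # (4, 3, 64, 64), (4, 5)
```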
Organ segmentation in CT volumes is an important pre-processing step in many computer-assisted intervention and diagnosis methods. In recent years, convolutional neural networks have dominated the state of the art in this task. However, since this problem presents a challenging environment due to high variability in the organ's shape and similarity between tissues, the generation of false negative and false positive regions in the output segmentation is a common issue. Recent works have shown that uncertainty analysis of the model can provide useful information about potential errors in the segmentation. In this context, we propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks. We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem that is solved by training a graph convolutional network. To test our method, we refine the initial output of a 2D U-Net. We validate our framework with the NIH pancreas dataset and the spleen dataset of the Medical Segmentation Decathlon. We show that our method outperforms the state-of-the-art CRF refinement method by improving the Dice score by 1% for the pancreas and 2% for the spleen with respect to the original U-Net's prediction. Finally, we perform a sensitivity analysis on the parameters of our proposal and discuss the applicability to other CNN architectures, the results, and current limitations of the model for future work in this research direction. For reproducibility purposes, we make our code publicly available at https://github.com/rodsom22/gcn_refinement
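A compact sketch of the refinement idea, assuming per-voxel foreground probabilities and an uncertainty map from the CNN: confident voxels act as labelled graph nodes, uncertain voxels as unlabelled ones, and a small dense GCN (implemented directly in PyTorch rather than with a graph library) propagates labels over a neighbourhood graph. The graph construction, thresholds, and layer sizes are illustrative, not the released implementation.

```python
import torch
import torch.nn as nn

class DenseGCNLayer(nn.Module):
    """Graph convolution X' = ReLU(A_hat X W) with a symmetrically normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        return torch.relu(a_hat @ self.lin(x))

def normalize_adjacency(adj):
    adj = adj + torch.eye(adj.size(0))                 # add self-loops
    d_inv_sqrt = adj.sum(1).pow(-0.5)
    return d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

def refine(features, coords, probs, uncertainty, tau=0.2, steps=100):
    """Semi-supervised refinement: low-uncertainty voxels serve as labelled seeds."""
    adj = (torch.cdist(coords, coords) < 2.0).float()  # edges between spatially close voxels
    a_hat = normalize_adjacency(adj)

    labeled = uncertainty < tau                        # trusted CNN predictions
    seeds = (probs > 0.5).long()

    l1, l2 = DenseGCNLayer(features.size(1), 16), nn.Linear(16, 2)
    opt = torch.optim.Adam(list(l1.parameters()) + list(l2.parameters()), lr=1e-2)
    for _ in range(steps):
        logits = l2(l1(features, a_hat))
        loss = nn.functional.cross_entropy(logits[labeled], seeds[labeled])
        opt.zero_grad()
        loss.backward()
        opt.step()

    return l2(l1(features, a_hat)).argmax(1)           # refined labels for all voxels

if __name__ == "__main__":
    n = 200                                            # sampled voxels around the organ
    feats, coords = torch.rand(n, 8), torch.rand(n, 3) * 10
    probs, unc = torch.rand(n), torch.rand(n)
    print(refine(feats, coords, probs, unc).shape)     # torch.Size([200])
```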
Data-driven Machine Learning has emerged as a promising approach for building accurate and robust statistical models from medical data, which is collected in huge volumes by modern healthcare systems. Existing medical data is not fully exploited by ML primarily because it sits in data silos and privacy concerns restrict access to this data. However, without access to sufficient data, ML will be prevented from reaching its full potential and, ultimately, from making the transition from research to clinical practice. This paper considers key factors contributing to this issue, explores how Federated Learning (FL) may provide a solution for the future of digital health and highlights the challenges and considerations that need to be addressed.
We present a multimodal camera relocalization framework that captures ambiguities and uncertainties with continuous mixture models defined on the manifold of camera poses. In highly ambiguous environments, which can easily arise due to symmetries and repetitive structures in the scene, computing one plausible solution (what most state-of-the-art methods currently regress) may not be sufficient. Instead, we predict multiple camera pose hypotheses as well as the respective uncertainty for each prediction. Towards this aim, we use Bingham distributions to model the orientation of the camera pose, and a multivariate Gaussian to model the position, within an end-to-end deep neural network. By incorporating a Winner-Takes-All training scheme, we finally obtain a mixture model that is well suited for explaining ambiguities in the scene, yet does not suffer from mode collapse, a common problem with mixture density networks. We introduce a new dataset specifically designed to foster camera localization research in ambiguous environments and exhaustively evaluate our method on synthetic as well as real data, on both ambiguous scenes and non-ambiguous benchmark datasets.
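The following sketch illustrates only the Winner-Takes-All training scheme for multiple pose hypotheses, assuming a network head that outputs K (translation, quaternion) pairs; for readability the per-hypothesis loss here is a simple position/orientation error rather than the Bingham-Gaussian mixture likelihood used in the paper, and the feature dimension and K are illustrative.

```python
import torch
import torch.nn as nn

class MultiHypothesisPoseNet(nn.Module):
    """Predicts K camera-pose hypotheses: K x (3 translation + 4 quaternion)."""
    def __init__(self, feat_dim=512, k=5):
        super().__init__()
        self.k = k
        self.head = nn.Linear(feat_dim, k * 7)

    def forward(self, feats):
        out = self.head(feats).view(-1, self.k, 7)
        t, q = out[..., :3], out[..., 3:]
        q = q / q.norm(dim=-1, keepdim=True)             # unit quaternions
        return t, q

def pose_error(t, q, t_gt, q_gt):
    """Per-hypothesis error: L2 translation + sign-invariant quaternion distance."""
    trans = (t - t_gt[:, None]).norm(dim=-1)
    rot = 1.0 - (q * q_gt[:, None]).sum(-1).abs()
    return trans + rot                                    # (B, K)

def wta_loss(t, q, t_gt, q_gt, eps=0.05):
    """Winner-Takes-All: full weight on the best hypothesis, small weight on the rest,
    which keeps all hypotheses alive and discourages mode collapse."""
    err = pose_error(t, q, t_gt, q_gt)
    best = err.min(dim=1).values
    return ((1.0 - eps) * best + eps * err.mean(dim=1)).mean()

if __name__ == "__main__":
    net, feats = MultiHypothesisPoseNet(), torch.rand(8, 512)
    t_gt = torch.rand(8, 3)
    q_gt = torch.randn(8, 4)
    q_gt = q_gt / q_gt.norm(dim=-1, keepdim=True)
    t, q = net(feats)
    loss = wta_loss(t, q, t_gt, q_gt)
    loss.backward()
    print(float(loss))
```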
Learning discriminative and powerful representations is a crucial step for machine learning systems. Introducing invariance against arbitrary nuisance or sensitive attributes while performing well on specific tasks is an important problem in representation learning. This is mostly approached by purging the sensitive information from learned representations. In this paper, we propose a novel disentanglement approach to the invariant representation problem. We disentangle the meaningful and sensitive representations by enforcing orthogonality constraints as a proxy for independence. We explicitly enforce the meaningful representation to be agnostic to sensitive information by entropy maximization. The proposed approach is evaluated on five publicly available datasets and compared with state-of-the-art methods for learning fairness and invariance, achieving state-of-the-art performance on three datasets and comparable performance on the rest. Further, we perform an ablation study to evaluate the effect of each component.
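A minimal sketch of the two ingredients named above, assuming two encoders that split the input into a task ("meaningful") code and a sensitive code: an orthogonality penalty on the cross-correlation of the two codes as a proxy for independence, and an entropy-maximization term that makes the sensitive classifier maximally uncertain on the task code. Layer sizes, heads, and weights are illustrative, not the paper's formulation.

```python
import torch
import torch.nn as nn

def orthogonality_penalty(z_task, z_sens):
    """Penalize correlation between the two codes as a proxy for independence."""
    zt = z_task - z_task.mean(0)
    zs = z_sens - z_sens.mean(0)
    corr = (zt.T @ zs) / zt.size(0)
    return (corr ** 2).sum()

def entropy(probs, eps=1e-8):
    return -(probs * (probs + eps).log()).sum(1).mean()

class Disentangler(nn.Module):
    def __init__(self, in_dim=64, code_dim=16, n_task=10, n_sens=2):
        super().__init__()
        self.enc_task = nn.Linear(in_dim, code_dim)
        self.enc_sens = nn.Linear(in_dim, code_dim)
        self.task_head = nn.Linear(code_dim, n_task)
        self.sens_head = nn.Linear(code_dim, n_sens)

    def losses(self, x, y_task, y_sens, lam_orth=1.0, lam_ent=0.1):
        zt, zs = self.enc_task(x), self.enc_sens(x)
        task_loss = nn.functional.cross_entropy(self.task_head(zt), y_task)
        sens_loss = nn.functional.cross_entropy(self.sens_head(zs), y_sens)
        orth = orthogonality_penalty(zt, zs)
        # push the sensitive classifier to be maximally uncertain about the task code
        ent = entropy(torch.softmax(self.sens_head(zt), dim=1))
        return task_loss + sens_loss + lam_orth * orth - lam_ent * ent

if __name__ == "__main__":
    model = Disentangler()
    x = torch.rand(32, 64)
    y_task = torch.randint(0, 10, (32,))
    y_sens = torch.randint(0, 2, (32,))
    loss = model.losses(x, y_task, y_sens)
    loss.backward()
    print(float(loss))
```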
Stain virtualization is an application of growing interest in digital pathology, allowing the simulation of stained tissue images and thus saving lab and tissue resources. Thanks to the success of Generative Adversarial Networks (GANs) and the progress of unsupervised learning, unsupervised style transfer GANs have been successfully used to generate realistic, clinically meaningful and interpretable images. The large size of high-resolution Whole Slide Images (WSIs) presents an additional computational challenge, making tile-wise processing necessary during training and inference of deep learning networks. Instance normalization has a substantial positive effect in style transfer GAN applications, but with tile-wise inference it tends to cause a tiling artifact in reconstructed WSIs. In this paper, we propose a novel perceptual embedding consistency (PEC) loss that forces the network to learn color-, contrast- and brightness-invariant features in the latent space and hence substantially reduces the aforementioned tiling artifact. Our approach results in a more seamless reconstruction of the virtual WSIs. We validate our method quantitatively by comparing the virtually generated images to their corresponding consecutive real stained images. We compare our results to state-of-the-art unsupervised style transfer methods and to the measures obtained from consecutive real stained tissue slide images. We demonstrate our hypothesis about the effect of the PEC loss by comparing model robustness to color, contrast and brightness perturbations and by visualizing bottleneck embeddings. We validate the robustness of the bottleneck feature maps by measuring their sensitivity to the different perturbations and by using them in a tumor segmentation task. Additionally, we propose a preliminary validation of the virtual staining application by comparing the interpretation of 2 pathologists on real and virtual tiles and the inter-pathologist agreement.
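The core of the PEC idea can be sketched as follows, assuming access to the generator's bottleneck encoder: the penalty pulls the embedding of a color/contrast/brightness-perturbed tile towards the embedding of the original tile, so the latent code becomes insensitive to such appearance changes. The perturbation functions, toy encoder, and weighting are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn

def random_appearance_perturbation(x):
    """Simple brightness/contrast jitter standing in for stain appearance changes."""
    brightness = 0.8 + 0.4 * torch.rand(x.size(0), 1, 1, 1)
    contrast = 0.8 + 0.4 * torch.rand(x.size(0), 1, 1, 1)
    mean = x.mean(dim=(2, 3), keepdim=True)
    return ((x - mean) * contrast + mean * brightness).clamp(0, 1)

def pec_loss(encoder, tiles):
    """Perceptual embedding consistency: bottleneck features should be invariant to
    color/contrast/brightness perturbations of the same tile."""
    z_clean = encoder(tiles)
    z_pert = encoder(random_appearance_perturbation(tiles))
    return nn.functional.mse_loss(z_pert, z_clean)

if __name__ == "__main__":
    # toy encoder standing in for the style-transfer generator's encoder
    encoder = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                            nn.Conv2d(16, 32, 3, stride=2, padding=1))
    tiles = torch.rand(4, 3, 64, 64)                  # WSI tiles
    loss = pec_loss(encoder, tiles)                   # added to the usual GAN losses
    loss.backward()
    print(float(loss))
```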
Digitized histological diagnosis is in increasing demand. However, color variations due to various factors impose obstacles on the diagnosis process. The problem of stain color variation is well defined, with many proposed solutions. Most of these solutions are highly dependent on a reference template slide. We propose a deep learning solution inspired by cycle consistency that is trained end-to-end, eliminating the need for an expert to pick a representative reference slide. Our approach showed superior results, quantitatively and qualitatively, against state-of-the-art methods. We further validated our method on a clinical use-case, namely breast cancer tumor classification, showing a 16% increase in AUC.
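A minimal sketch of the cycle-consistency constraint that removes the need for a reference slide, assuming two generators mapping between a source stain appearance and a target appearance; the adversarial terms and the actual architectures are omitted, and the toy generators below are illustrative stand-ins.

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(g_ab, g_ba, x_a, x_b):
    """Translate A -> B -> A (and B -> A -> B) and require recovery of the input,
    so no expert-picked reference template slide is needed."""
    rec_a = g_ba(g_ab(x_a))
    rec_b = g_ab(g_ba(x_b))
    return nn.functional.l1_loss(rec_a, x_a) + nn.functional.l1_loss(rec_b, x_b)

if __name__ == "__main__":
    # toy generators standing in for the stain-normalization networks
    make_gen = lambda: nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
    g_ab, g_ba = make_gen(), make_gen()
    x_a, x_b = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)  # tiles from two stains
    loss = cycle_consistency_loss(g_ab, g_ba, x_a, x_b)
    loss.backward()
    print(float(loss))
```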
Segmentation of the left atrium and deriving its size can help to predict and detect various cardiovascular conditions. Automation of this process in 3D ultrasound image data is desirable, since manual delineations are time-consuming, challenging, and observer-dependent. Convolutional neural networks have driven improvements in computer vision and in medical image analysis. They have been successfully applied to segmentation tasks and have been extended to work on volumetric data. In this paper, we introduce a combined deep learning-based approach for volumetric segmentation in ultrasound acquisitions that incorporates prior knowledge about left atrial shape and the imaging device. The results show that including a shape prior helps domain adaptation, and that the accuracy of segmentation is further increased with adversarial learning.
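The combination described above can be sketched loosely as a multi-term objective, assuming a soft Dice term, a shape-prior term based on an (here untrained, stand-in) autoencoder over plausible left-atrium masks, and an adversarial term from a mask discriminator; the components, weights, and tiny volumes are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice on predicted probabilities and binary ground-truth masks."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def shape_prior_loss(shape_ae, pred):
    """Penalize predictions whose projection onto a learned shape space differs from
    the prediction itself, i.e., masks that do not look like plausible left atria."""
    with torch.no_grad():
        projected = shape_ae(pred)
    return nn.functional.mse_loss(pred, projected)

def adversarial_loss(discriminator, pred):
    """GAN-style term: the segmenter tries to make its masks look 'real' to the critic."""
    return nn.functional.binary_cross_entropy_with_logits(
        discriminator(pred), torch.ones(pred.size(0), 1))

if __name__ == "__main__":
    B, D = 2, 16                                            # tiny 3D volumes for illustration
    pred = torch.rand(B, 1, D, D, D, requires_grad=True)    # stand-in for segmenter output
    target = (torch.rand(B, 1, D, D, D) > 0.5).float()
    shape_ae = nn.Sequential(nn.Conv3d(1, 4, 3, padding=1), nn.ReLU(),
                             nn.Conv3d(4, 1, 3, padding=1), nn.Sigmoid())  # stand-in prior
    disc = nn.Sequential(nn.Flatten(), nn.Linear(D ** 3, 1))
    loss = dice_loss(pred, target) \
           + 0.1 * shape_prior_loss(shape_ae, pred) \
           + 0.01 * adversarial_loss(disc, pred)
    loss.backward()
    print(float(loss))
```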