Shadi Albarqouni is a Palestinian-German computer scientist. He received his B.Sc. and M.Sc. in Electrical Engineering from the Islamic University of Gaza, Palestine, in 2005 and 2010, respectively. In 2012, he received a prestigious DAAD research grant to pursue his Ph.D. at the Chair for Computer Aided Medical Procedures (CAMP), Technical University of Munich (TUM), Germany. During his Ph.D., Albarqouni worked with Prof. Nassir Navab on developing machine learning algorithms that can handle noisy labels, such as those arising from crowdsourcing, in medical imaging. Albarqouni received his Ph.D. in Computer Science summa cum laude in 2017.
Since then, Albarqouni has been working as a Senior Research Scientist and Team Lead at CAMP, leading the Medical Image Analysis (MedIA) team with an emphasis on developing deep learning methods for medical applications. In 2019, he received the P.R.I.M.E. fellowship for one year of international mobility. From Nov. 2019 to Jul. 2020, he worked as a Visiting Scientist at the Department of Information Technology and Electrical Engineering (D-ITET) at ETH Zürich, Switzerland, where he worked with Prof. Ender Konukoglu on modeling uncertainty in medical imaging, in particular the uncertainty associated with inter-/intra-rater variability. From Aug. to Oct. 2020, Albarqouni worked as a Visiting Scientist at the Department of Computing at Imperial College London, United Kingdom, where he worked with Prof. Daniel Rueckert on Federated Learning.
Since Nov. 2020, Albarqouni has held an AI Young Investigator Group Leader position at Helmholtz AI. The aim of the Albarqouni Lab is to develop innovative deep Federated Learning algorithms that can distill and share knowledge among AI agents in a robust and privacy-preserving fashion.
Albarqouni has around 100 peer-reviewed publications in Medical Image Computing and Computer Vision, published in high-impact journals and top-tier conferences. He serves as a reviewer for many journals, e.g., IEEE TPAMI, MedIA, IEEE TMI, IEEE JBHI, IJCARS, and Pattern Recognition, and for top-tier conferences, e.g., ECCV, MICCAI, MIDL, BMVC, IPCAI, and ISBI, among others. Albarqouni serves as an expert and evaluator for the German Research Foundation (DFG), the Federal Ministry of Education and Research (BMBF), and the European Commission. He has also been elected as a member of the European Lab for Learning & Intelligent Systems (ELLIS) and the Arab German Young Academy (AGYA), in addition to his memberships in the MICCAI, BMVA, IEEE EMBS, IEEE CS, and ESR societies. Since 2015, he has served as a PC member for several MICCAI workshops, e.g., COMPAY and DART, among others. Since 2019, Albarqouni has served as an Area Chair in Advanced Machine Learning Theory at MICCAI.
His current research interests include Interpretable ML, Robustness, Uncertainty, and recently Federated Learning. He is also interested in Entrepreneurship and Startups for Innovative Medical Solutions.
Ph.D. in Computer Science, 2017
Technical University of Munich, Germany
M.Sc. in Electrical Engineering, 2010
Islamic University of Gaza, Palestine
B.Sc. in Electrical Engineering, 2005
Islamic University of Gaza, Palestine
Academic and Professional Experience
I will be leading the Albarqouni Lab, focusing our research on developing innovative deep Federated Learning algorithms that can distill and share knowledge among AI agents in a robust and privacy-preserving fashion. The lab will be hosted at Helmholtz AI and the Department of Computational Health at Helmholtz Center Munich, giving us access to huge databases of genetics, microscopy data, and medical imaging, such as the Cooperative Health Research in the Augsburg Region (KORA) and the German National Cohort (NAKO).
I led the Medical Image Analysis team and worked together with several Ph.D. students on Deep Learning for Medical Applications.
Research: We have focused our research on developing fully automated, highly accurate solutions that save expert labor and effort, and that mitigate the key challenges in medical imaging, i.e., i) the scarcity of annotated data, ii) low inter-/intra-observer agreement, iii) high class imbalance, iv) inter-/intra-scanner variability, and v) domain shift. Our research portfolio can be categorized into Learn to Recognize; Adapt; Learn, Reason and Explain; Incorporate Prior Knowledge; and Collaborate with other AI agents.
I worked as a Network Engineer for two years before being promoted to Head of the IT Department at the hospital. My tasks were:
Funded and Active Projects. Thanks to our great collaborators!
Professional Services and Invited Talks in the last two years
Deep unsupervised representation learning has recently led to new approaches in the field of Unsupervised Anomaly Detection (UAD) in brain MRI. The main principle behind these works is to learn a model of normal anatomy by learning to compress and recover healthy data. This makes it possible to spot abnormal structures from erroneous recoveries of compressed, potentially anomalous samples. The concept is of great interest to the medical image analysis community as it i) removes the need for vast amounts of manually segmented training data—a necessity for, and pitfall of, current supervised Deep Learning—and ii) theoretically allows the detection of arbitrary, even rare, pathologies which supervised approaches might fail to find. To date, the experimental design of most works hinders a valid comparison, because the methods i) are evaluated on different datasets and different pathologies, ii) use different image resolutions, and iii) use different model architectures with varying complexity. The intent of this work is to establish comparability among recent methods by utilizing a single architecture, a single resolution, and the same dataset(s). Besides providing a ranking of the methods, we also try to answer questions like i) how many healthy training subjects are needed to model normality and ii) whether the reviewed approaches are also sensitive to domain shift. Further, we identify open challenges and provide suggestions for future community efforts and research directions.
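The compress-and-recover principle behind these UAD methods can be illustrated with a minimal sketch. Here plain PCA stands in for the deep autoencoder, and the data is synthetic rather than brain MRI; this is an illustration of the idea, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Healthy" data lies near a low-dimensional subspace (a synthetic
# stand-in for normal anatomy); anomalies do not.
basis = rng.normal(size=(2, 32))
healthy = rng.normal(size=(500, 2)) @ basis

# Learn to compress and recover normal data (PCA as a linear autoencoder).
mean = healthy.mean(axis=0)
_, _, vt = np.linalg.svd(healthy - mean, full_matrices=False)
components = vt[:2]  # learned "code" directions

def reconstruction_error(x):
    code = (x - mean) @ components.T   # compress
    recon = code @ components + mean   # recover
    return np.linalg.norm(x - recon, axis=-1)

# Unseen healthy samples are recovered well; anomalous samples are not,
# so the reconstruction error acts as an anomaly score.
healthy_test = rng.normal(size=(100, 2)) @ basis
anomalies = rng.normal(size=(100, 32))
```

In the deep counterparts compared in the paper, the PCA projection is replaced by a (variational) autoencoder trained on healthy scans, and the error is computed per voxel to localize abnormal structures.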
oking stained images preserving the inter-cellular structures, crucial for the medical experts to perform classification. We achieve better structure preservation by adding auxiliary tasks of segmentation and direct reconstruction. Segmentation enforces that the network learns to generate correct nucleus and cytoplasm shape, while direct reconstruction enforces reliable translation between the matching images across domains. Besides, we build a robust domain agnostic latent space by injecting the target domain label directly to the generator, i.e., bypassing the encoder. It allows the encoder to extract features independently of the target domain and enables an automated domain invariant classification of the white blood cells. We validated our method on a large dataset composed of leukocytes of 24 patients, achieving state-of-the-art performance on both digital staining and classification tasks.
Organ segmentation in CT volumes is an important pre-processing step in many computer-assisted intervention and diagnosis methods. In recent years, convolutional neural networks have dominated the state of the art in this task. However, since this problem presents a challenging environment due to high variability in the organ's shape and similarity between tissues, the generation of false negative and false positive regions in the output segmentation is a common issue. Recent works have shown that uncertainty analysis of the model can provide useful information about potential errors in the segmentation. In this context, we propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks. We employ the uncertainty levels of the convolutional network on a particular input volume to formulate a semi-supervised graph learning problem that is solved by training a graph convolutional network. To test our method, we refine the initial output of a 2D U-Net. We validate our framework with the NIH pancreas dataset and the spleen dataset of the Medical Segmentation Decathlon. We show that our method outperforms the state-of-the-art CRF refinement method by improving the Dice score by 1% for the pancreas and 2% for the spleen, with respect to the original U-Net's prediction. Finally, we perform a sensitivity analysis on the parameters of our proposal and discuss the applicability to other CNN architectures, the results, and current limitations of the model for future work in this research direction. For reproducibility purposes, we make our code publicly available at https://github.com/rodsom22/gcn_refinement
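The first step of the pipeline above — turning model uncertainty into a semi-supervised labeling problem — can be sketched as follows. The stochastic forward passes are simulated with synthetic probabilities (a hypothetical stand-in for Monte Carlo samples from a U-Net), and the graph convolutional refinement itself is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# T simulated stochastic forward passes over an 8x8 probability map:
# top rows confidently foreground, bottom rows confidently background,
# a two-row ambiguous band in between.
T, H, W = 20, 8, 8
probs = rng.uniform(size=(T, H, W)) * 0.2
probs[:, :4, :] += 0.8                            # confident foreground
probs[:, 4:6, :] = rng.uniform(size=(T, 2, W))    # ambiguous band

mean_prob = probs.mean(axis=0)

# Predictive entropy as a per-pixel uncertainty measure.
eps = 1e-8
entropy = -(mean_prob * np.log(mean_prob + eps)
            + (1 - mean_prob) * np.log(1 - mean_prob + eps))

# Confident pixels seed the semi-supervised graph problem as "labeled"
# nodes; uncertain pixels stay unlabeled and are resolved by the GCN.
threshold = 0.45
seeds = entropy < threshold
to_refine = ~seeds
```

The key design choice is that the refinement network is trusted only where the original segmenter is unsure, so confident predictions are preserved.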
Data-driven Machine Learning has emerged as a promising approach for building accurate and robust statistical models from medical data, which is collected in huge volumes by modern healthcare systems. Existing medical data is not fully exploited by ML primarily because it sits in data silos and privacy concerns restrict access to this data. However, without access to sufficient data, ML will be prevented from reaching its full potential and, ultimately, from making the transition from research to clinical practice. This paper considers key factors contributing to this issue, explores how Federated Learning (FL) may provide a solution for the future of digital health and highlights the challenges and considerations that need to be addressed.
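The core mechanism that lets FL work across data silos is that only model updates, never patient data, leave each site. A minimal sketch of the server-side aggregation step (federated averaging), using synthetic weight vectors and hypothetical client sizes:

```python
import numpy as np

# Federated averaging: each site trains locally and shares only its model
# weights; the server aggregates them, weighted by local dataset size.
def federated_average(client_weights, client_sizes):
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)            # (n_clients, n_params)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Three hypothetical hospitals with different amounts of local data.
w_a = np.array([1.0, 0.0])
w_b = np.array([0.0, 1.0])
w_c = np.array([1.0, 1.0])
global_w = federated_average([w_a, w_b, w_c], client_sizes=[100, 300, 100])
# global_w = 0.2*w_a + 0.6*w_b + 0.2*w_c = [0.4, 0.8]
```

In practice this round is repeated: the averaged model is broadcast back to the sites, each trains further on its private data, and the updated weights are aggregated again.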
We present a multimodal camera relocalization framework that captures ambiguities and uncertainties with continuous mixture models defined on the manifold of camera poses. In highly ambiguous environments, which can easily arise due to symmetries and repetitive structures in the scene, computing one plausible solution (what most state-of-the-art methods currently regress) may not be sufficient. Instead we predict multiple camera pose hypotheses as well as the respective uncertainty for each prediction. Towards this aim, we use Bingham distributions, to model the orientation of the camera pose, and a multivariate Gaussian to model the position, with an end-to-end deep neural network. By incorporating a Winner-Takes-All training scheme, we finally obtain a mixture model that is well suited for explaining ambiguities in the scene, yet does not suffer from mode collapse, a common problem with mixture density networks. We introduce a new dataset specifically designed to foster camera localization research in ambiguous environments and exhaustively evaluate our method on synthetic as well as real data on both ambiguous scenes and on non-ambiguous benchmark datasets.
Learning powerful discriminative representations is a crucial step for machine learning systems. Introducing invariance against arbitrary nuisance or sensitive attributes while performing well on specific tasks is an important problem in representation learning. This is mostly approached by purging the sensitive information from learned representations. In this paper, we propose a novel disentanglement approach to the invariant representation problem. We disentangle the meaningful and sensitive representations by enforcing orthogonality constraints as a proxy for independence. We explicitly enforce the meaningful representation to be agnostic to sensitive information by entropy maximization. The proposed approach is evaluated on five publicly available datasets and compared with state-of-the-art methods for learning fairness and invariance, achieving state-of-the-art performance on three datasets and comparable performance on the rest. Further, we perform an ablation study to evaluate the effect of each component.
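The orthogonality-as-independence idea can be made concrete with a small sketch: a penalty that is zero when each meaningful embedding is orthogonal to its sensitive counterpart and grows as they align. This is an illustrative loss term on synthetic embeddings, not the paper's training code:

```python
import numpy as np

# Orthogonality penalty used as a proxy for independence between a
# "meaningful" and a "sensitive" representation (batches of embeddings).
def orthogonality_loss(z_meaningful, z_sensitive, eps=1e-8):
    zm = z_meaningful / (np.linalg.norm(z_meaningful, axis=1, keepdims=True) + eps)
    zs = z_sensitive / (np.linalg.norm(z_sensitive, axis=1, keepdims=True) + eps)
    # Mean squared cosine similarity: zero iff each pair is orthogonal.
    return float(np.mean(np.sum(zm * zs, axis=1) ** 2))

aligned = orthogonality_loss(np.ones((4, 8)), np.ones((4, 8)))
orthogonal = orthogonality_loss(np.eye(4, 8), np.fliplr(np.eye(4, 8)))
```

Added to the task loss during training, such a term pushes the two representation branches apart so that the task head cannot exploit the sensitive attribute through the meaningful branch.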
Stain virtualization is an application with growing interest in digital pathology, allowing simulation of stained tissue images and thus saving lab and tissue resources. Thanks to the success of Generative Adversarial Networks (GANs) and the progress of unsupervised learning, unsupervised style transfer GANs have been successfully used to generate realistic, clinically meaningful and interpretable images. The large size of high-resolution Whole Slide Images (WSIs) presents an additional computational challenge. This makes tilewise processing necessary during training and inference of deep learning networks. Instance normalization has a substantial positive effect in style transfer GAN applications, but with tilewise inference it tends to cause a tiling artifact in reconstructed WSIs. In this paper we propose a novel perceptual embedding consistency (PEC) loss forcing the network to learn color, contrast and brightness invariant features in the latent space, hence substantially reducing the aforementioned tiling artifact. Our approach results in more seamless reconstruction of the virtual WSIs. We validate our method quantitatively by comparing the virtually generated images to their corresponding consecutive real stained images. We compare our results to state-of-the-art unsupervised style transfer methods and to the measures obtained from consecutive real stained tissue slide images. We demonstrate our hypothesis about the effect of the PEC loss by comparing model robustness to color, contrast and brightness perturbations and by visualizing bottleneck embeddings. We validate the robustness of the bottleneck feature maps by measuring their sensitivity to the different perturbations and using them in a tumor segmentation task. Additionally, we propose a preliminary validation of the virtual staining application by comparing the interpretations of two pathologists on real and virtual tiles and the inter-pathologist agreement.
Digitized histological diagnosis is in increasing demand. However, color variations due to various factors impose obstacles to the diagnosis process. The problem of stain color variation is a well-defined problem with many proposed solutions. Most of these solutions are highly dependent on a reference template slide. We propose a deep-learning solution inspired by cycle consistency that is trained end-to-end, eliminating the need for an expert to pick a representative reference slide. Our approach showed superior results quantitatively and qualitatively against the state-of-the-art methods. We further validated our method on a clinical use case, namely breast cancer tumor classification, showing a 16% increase in AUC.
Segmentation of the left atrium and deriving its size can help to predict and detect various cardiovascular conditions. Automation of this process in 3D Ultrasound image data is desirable, since manual delineations are time-consuming, challenging and observer-dependent. Convolutional neural networks have made improvements in computer vision and in medical image analysis. They have successfully been applied to segmentation tasks and were extended to work on volumetric data. In this paper we introduce a combined deep-learning based approach on volumetric segmentation in Ultrasound acquisitions with incorporation of prior knowledge about left atrial shape and imaging device. The results show, that including a shape prior helps the domain adaptation and the accuracy of segmentation is further increased with adversarial learning.
Albarqouni Lab. @Helmholtz AI
Machine Learning in Medical Imaging, Semi-Supervised Learning, Federated Learning
Deep Learning for Medical Image Analysis, Anomaly Detection, Federated Learning, Image Understanding
Federated Learning, Machine Learning for healthcare, Computer Vision