Geometric deep learning provides a principled and versatile way to integrate imaging and non-imaging modalities in the medical domain. Graph Convolutional Networks (GCNs) in particular have been explored on a wide variety of problems …
Semantic segmentation is an important task in the medical field for identifying the exact extent and orientation of significant structures such as organs and pathology. Deep neural networks can perform this task well by leveraging the information from a large …
Learning interpretable representations in medical applications is becoming essential for adopting data-driven models into clinical practice. It has recently been shown that learning a disentangled feature representation is important for a more compact …
Neural networks have proven remarkably successful for classification and diagnosis in medical applications. However, the ambiguity of the decision-making process and the interpretability of the learned features remain a matter of concern. In this …
Deep learning techniques have recently been applied to fundus image analysis and diabetic retinopathy detection. Microaneurysms are an important indicator of diabetic retinopathy progression. We introduce a two-stage deep learning approach for …
Multi-modal data comprising imaging (MRI, fMRI, PET, etc.) and non-imaging (clinical tests, demographics, etc.) data can be collected together and used for disease prediction. Such diverse data give complementary information about the patient's …
Recent progress has shown that few-shot learning can be improved with access to unlabelled data, known as semi-supervised few-shot learning (SS-FSL). We introduce an SS-FSL approach, dubbed Prototypical Random Walk Networks (PRWN), built on top of …
We demonstrate the feasibility of a fully automatic computer-aided diagnosis (CAD) tool, based on deep learning, that localizes and classifies proximal femur fractures on X-ray images according to the AO classification. The proposed framework aims to …
One of the major challenges facing researchers applying deep learning (DL) models to medical image analysis is the limited amount of annotated data. Collecting such ground-truth annotations requires domain expertise and is costly and time-consuming, making it infeasible for large-scale databases. We present a novel concept for training DL models on noisy annotations collected through crowdsourcing platforms, e.g., Amazon Mechanical Turk and Crowdflower, by introducing a robust aggregation layer into the convolutional neural network. Our proposed method was validated on a publicly available database of breast cancer histology images, showing favorable results for our robust aggregation method compared to baseline methods such as majority voting. In follow-up work, we introduced a novel concept of image-to-game-object translation in biomedical imaging, allowing medical images to be represented as star-shaped objects that can be easily embedded into a readily available game canvas. The proposed method reduces the domain knowledge required for annotation. Promising results were reported compared to conventional crowdsourcing platforms.
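To make the label-aggregation problem concrete, here is a minimal sketch contrasting plain majority voting with a reliability-weighted vote. This is an illustrative baseline only, not the paper's learned aggregation layer; the function names and the fixed per-annotator reliability weights are assumptions for the example.

```python
import numpy as np

def majority_vote(annotations):
    """Aggregate crowd labels per sample by majority vote.

    annotations: (n_samples, n_annotators) int array of class labels.
    Returns one aggregated label per sample."""
    n_classes = annotations.max() + 1
    # Count votes per class for each sample, then take the most frequent.
    votes = np.apply_along_axis(
        lambda row: np.bincount(row, minlength=n_classes), 1, annotations)
    return votes.argmax(axis=1)

def weighted_vote(annotations, reliability):
    """Weight each annotator's vote by an estimated reliability score --
    a crude, fixed-weight stand-in for a learned aggregation layer.

    reliability: (n_annotators,) non-negative weights (assumed known here;
    in the paper's setting such weights would be learned jointly with the CNN)."""
    n_classes = annotations.max() + 1
    scores = np.zeros((annotations.shape[0], n_classes))
    for j, w in enumerate(reliability):
        # Each annotator j adds weight w to the class they voted for.
        scores[np.arange(len(annotations)), annotations[:, j]] += w
    return scores.argmax(axis=1)
```

With equal weights, `weighted_vote` reduces to majority voting; a single highly reliable annotator can overrule two unreliable ones, which is the behavior a learned aggregation layer generalizes.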
A key component to the success of deep learning is the availability of massive amounts of training data. Building and annotating large datasets for solving medical image classification problems is today a bottleneck for many applications. Recently, …