One of the major challenges currently facing researchers applying deep learning (DL) models to medical image analysis is the limited amount of annotated data. Collecting ground-truth annotations requires domain expertise and is costly and time-consuming, making it infeasible for large-scale databases. We presented a novel concept for training DL models from noisy annotations collected through crowdsourcing platforms, e.g., Amazon Mechanical Turk and Crowdflower, by introducing a robust aggregation layer into convolutional neural networks. Our proposed method was validated on a publicly available breast cancer histology image database, where the robust aggregation layer compared favorably with baseline methods such as majority voting. In follow-up work, we introduced a novel concept of image-to-game-object translation in biomedical imaging, which allows medical images to be represented as star-shaped objects that can be easily embedded into readily available game canvases. The proposed method reduces the need for domain knowledge when collecting annotations. Promising results were reported compared with conventional crowdsourcing platforms.
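To make the idea of training directly on noisy crowd labels concrete, the following is a minimal sketch, not the exact architecture described above: a small CNN backbone whose class posteriors are passed through a per-annotator aggregation layer, modeled here as learnable confusion matrices, so the network can be optimized against each annotator's (possibly missing) label. All class, module, and function names, as well as the backbone layout, are illustrative assumptions.

```python
# Hedged sketch of a CNN with a per-annotator aggregation layer for noisy
# crowdsourced labels. Names and shapes are illustrative, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrowdAggregationCNN(nn.Module):
    def __init__(self, num_classes: int, num_annotators: int):
        super().__init__()
        # Small CNN backbone for image patches (e.g., histology tiles).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, num_classes),
        )
        # One learnable confusion matrix per annotator, initialized near the
        # identity so every annotator starts out assumed mostly reliable.
        eye = torch.eye(num_classes).unsqueeze(0).repeat(num_annotators, 1, 1)
        self.annotator_confusion = nn.Parameter(eye + 0.01 * torch.randn_like(eye))

    def forward(self, x):
        # p(true class | image), shape (B, C)
        true_probs = F.softmax(self.backbone(x), dim=-1)
        # p(annotator label | image) = sum_c p(true=c) * p(label | true=c)
        conf = F.softmax(self.annotator_confusion, dim=-1)            # (A, C, C)
        noisy_probs = torch.einsum("bc,acd->bad", true_probs, conf)   # (B, A, C)
        return true_probs, noisy_probs


def noisy_label_loss(noisy_probs, crowd_labels, mask):
    """Cross-entropy against each annotator's (possibly missing) label.

    crowd_labels: (B, A) integer labels, arbitrary values where missing.
    mask:         (B, A) 1.0 where the annotator labeled the image, else 0.0.
    """
    log_p = torch.log(noisy_probs.clamp_min(1e-8))                    # (B, A, C)
    picked = log_p.gather(-1, crowd_labels.unsqueeze(-1)).squeeze(-1)  # (B, A)
    return -(picked * mask).sum() / mask.sum().clamp_min(1.0)
```

At inference time only the backbone's posterior (`true_probs`) is used; the confusion matrices serve purely as a training-time aggregation mechanism over the noisy annotators, which is the role the robust aggregation layer plays in the approach summarized above.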