With the advent of deep learning and the increasing use of brain MRIs, a great amount of interest has arisen in automated anomaly segmentation to improve clinical workflows; however, curating medical imaging data is time-consuming and expensive. Moreover, data are often scattered across many institutions, with privacy regulations hampering their use. Here we present FedDis to collaboratively train an unsupervised deep convolutional autoencoder on 1,532 healthy magnetic resonance scans from four different institutions, and evaluate its performance in identifying pathologies such as multiple sclerosis, vascular lesions, and low- and high-grade tumours/glioblastoma on a total of 538 volumes from six different institutions. To mitigate the statistical heterogeneity among institutions, we disentangle the parameter space into global (shape) and local (appearance) components. The four institutes jointly train the shape parameters to model healthy brain anatomical structure, while each institute trains its appearance parameters locally, allowing client-specific personalization of the global, domain-invariant features. We show that our collaborative approach, FedDis, improves anomaly segmentation results by 99.74% for multiple sclerosis, 83.33% for vascular lesions and 40.45% for tumours over locally trained models, without the need for annotations or the sharing of private local data. We find that FedDis is especially beneficial for institutes that contribute both healthy and anomalous data, improving their local model performance by up to 227% for multiple sclerosis lesions and 77% for brain tumours.
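The shape/appearance split described above can be sketched as a federated-averaging round that touches only the shared parameters. This is a minimal illustration, not the paper's implementation: the parameter names, shapes, and the plain unweighted average are assumptions for the sketch; in practice the shared parameters would be the shape branch of each client's autoencoder.

```python
import numpy as np

# Hypothetical parameter store for each institution (client):
# 'shape' parameters are globally shared, 'appearance' parameters stay local.
rng = np.random.default_rng(0)
clients = [
    {"shape": rng.normal(size=(16, 16)), "appearance": rng.normal(size=(16, 16))}
    for _ in range(4)  # four institutions, as in the paper's training setup
]

def federated_average_shape(clients):
    """FedAvg-style aggregation restricted to the shared 'shape' parameters:
    the server averages them across clients and broadcasts the result back.
    'appearance' parameters never leave their institution, so each client
    keeps its local, domain-specific personalization."""
    avg = np.mean([c["shape"] for c in clients], axis=0)
    for c in clients:
        c["shape"] = avg.copy()

federated_average_shape(clients)
# After a round, all clients agree on 'shape'; 'appearance' remains distinct.
```

In a real training loop this aggregation step would run after each round of local updates, so only the domain-invariant shape weights are ever communicated, which is what lets the collaboration proceed without sharing raw patient data.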