We present an Unsupervised Domain Adaptation strategy to compensate for domain shifts in Electron Microscopy volumes. Our method aggregates visual correspondences (motifs that are visually similar across different acquisitions) to infer changes to the parameters of pretrained models, enabling them to operate on new data. In particular, we examine the annotations of an existing acquisition to determine pivot locations that characterize the reference segmentation, and use a patch-matching algorithm to find their candidate visual correspondences in a new volume. We aggregate all candidate correspondences through a voting scheme and use them to construct a consensus heatmap: a map of how frequently locations in the new volume are matched to relevant locations in the original acquisition. This information allows us to adapt models in two ways: either a) by optimizing model parameters under a Multiple Instance Learning formulation, so that predictions at reference locations agree with those over their sets of correspondences, or b) by using high-scoring regions of the heatmap as soft labels to be incorporated into other domain adaptation pipelines, including deep-learning-based ones. We show that these unsupervised techniques yield high-quality segmentations on unannotated volumes, qualitatively consistent with results obtained under full supervision, for both mitochondria and synapses, with no additional annotation effort.
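The voting step described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name `consensus_heatmap` and the representation of candidate matches as per-pivot lists of voxel coordinates are hypothetical.

```python
import numpy as np

def consensus_heatmap(new_shape, candidate_matches):
    """Accumulate votes into a heatmap over the new volume.

    new_shape: shape of the new (unannotated) volume, e.g. (z, y, x).
    candidate_matches: one list of candidate voxel coordinates per
        pivot location in the reference acquisition (assumed format).
    Returns a heatmap normalized to [0, 1], where high values mark
    locations frequently matched to reference pivots.
    """
    heatmap = np.zeros(new_shape, dtype=np.float32)
    for matches in candidate_matches:      # candidates for one pivot
        for loc in matches:                # each candidate casts a vote
            heatmap[loc] += 1.0
    if heatmap.max() > 0:
        heatmap /= heatmap.max()           # normalize vote counts
    return heatmap

# Toy usage: two pivots, with one location matched by both.
matches = [[(0, 0, 0), (0, 1, 1)], [(0, 0, 0)]]
hm = consensus_heatmap((1, 2, 2), matches)
```

High-scoring regions of such a map could then be thresholded to serve as the soft labels mentioned in the second adaptation strategy.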