Abstract

Context. The standard cosmological model is based on the fundamental assumptions of a spatially homogeneous and isotropic universe on large scales. An observational detection of a violation of these assumptions at any redshift would immediately indicate the presence of new physics.

Aims. We quantify the ability of the Euclid mission, together with contemporary surveys, to improve the current sensitivity of null tests of the canonical cosmological constant Λ and cold dark matter (ΛCDM) model in the redshift range 0 < z < …

Methods. We considered both currently available data and simulated Euclid and external data products based on a ΛCDM fiducial model, an evolving dark energy model assuming the Chevallier-Polarski-Linder parameterization, or an inhomogeneous Lemaître-Tolman-Bondi model with a cosmological constant Λ. We carried out two separate but complementary analyses: a machine learning reconstruction of the null tests based on genetic algorithms, and a theory-agnostic parametric approach based on Taylor expansions and binning of the data, in order to avoid assumptions about any particular model.

Results. We find that, in combination with external probes, Euclid can improve current constraints on null tests of the ΛCDM model by approximately a factor of three when using the machine learning approach, and by a further factor of two in the case of the parametric approach. However, we also find that in certain cases the parametric approach may be biased against, or miss, some features of models far from ΛCDM.

Conclusions. Our analysis highlights the importance of synergies between Euclid and other surveys. These synergies are crucial for providing tighter constraints over an extended redshift range for a plethora of consistency tests of some of the main assumptions of the current cosmological paradigm.
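To illustrate what a null test of ΛCDM means in practice, the sketch below evaluates the standard Om(z) diagnostic, Om(z) = (E²(z) − 1) / ((1 + z)³ − 1) with E(z) = H(z)/H0, which is constant and equal to Ωm if flat ΛCDM holds. The abstract does not specify which null tests, data sets, or fiducial parameters are used in the analysis, so the quantities below (a flat ΛCDM fiducial with Ωm = 0.3 and an assumed redshift grid) are purely illustrative assumptions, not the survey specifications of the paper.

```python
import numpy as np

def om_statistic(z, E):
    """Om(z) null test of flat LCDM.

    Om(z) = (E^2(z) - 1) / ((1 + z)^3 - 1), with E(z) = H(z)/H0.
    If flat LCDM is the correct background model, Om(z) is constant
    and equal to Omega_m; any statistically significant redshift
    dependence signals a departure from the model.
    """
    return (E**2 - 1.0) / ((1.0 + z)**3 - 1.0)

# Illustrative flat LCDM fiducial (hypothetical values, not the paper's).
Omega_m = 0.3
z = np.linspace(0.1, 1.8, 20)          # avoid z = 0, where the statistic is 0/0
E_lcdm = np.sqrt(Omega_m * (1.0 + z)**3 + 1.0 - Omega_m)

print(om_statistic(z, E_lcdm))         # ~0.3 at every redshift for this fiducial
```

In a real analysis E(z) would instead be reconstructed from data (for example via the genetic-algorithm or binned parametric approaches mentioned above), and the test asks whether the reconstructed Om(z) is consistent with a constant.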
