Abstract
Iterative testing is essential to exploring complex phenomena, especially in computationally intensive fields where no analytical solutions or reliable observations can guide model development. The success of such testing cannot be justified by reference to a correct solution, but only by its capacity to self-correct despite imperfect initial ingredients. How, then, can scientists safeguard this autonomy while ensuring a productive interplay with observation, so that models remain informed by and evaluated against new empirical findings? We analyse how insights from astrochemistry, an emerging field whose models are developed iteratively against a steady influx of new observations, apply to other fields.