Researchers in several scientific disciplines are struggling to cope with the masses of data produced by increasingly precise instruments and by simulation runs on ever more powerful supercomputers. Efficiently managing this deluge of data has become key to understanding the phenomena under study. Scientists in the simulation sciences, for example, build models as big and detailed as the hardware allows, but they lack efficient technology to update and analyze them. In this paper we discuss how the innovative data management techniques we have developed enable scientists to build and analyze bigger and more detailed spatial models, and how these techniques ultimately accelerate discovery in the simulation sciences. The techniques include spatial join methods (in memory and on disk), methods for efficiently navigating detailed meshes, an index for executing range queries on complex and detailed spatial data, and in-memory mesh indexes. An evaluation of these techniques on real neuroscience datasets shows a considerable performance improvement over the state of the art and demonstrates that the indexes we propose scale substantially better for the analysis of bigger and denser spatial models.
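To make the notion of an in-memory spatial join concrete, the sketch below shows one generic implementation strategy: hashing one dataset's points into a uniform grid with cell side `eps`, so each point of the other dataset is compared only against points in neighboring cells. This is a hypothetical illustration of the general technique under assumed names (`grid_spatial_join`, `eps`), not the specific method developed in the paper.

```python
from collections import defaultdict
from itertools import product
import math

def grid_spatial_join(a, b, eps):
    """Return all pairs (p, q), p in a, q in b, with dist(p, q) <= eps.

    Illustrative uniform-grid join: not the paper's actual algorithm.
    """
    # Hash every point of b into the grid cell containing it.
    cells = defaultdict(list)
    for q in b:
        cells[tuple(int(math.floor(c / eps)) for c in q)].append(q)
    pairs = []
    for p in a:
        base = tuple(int(math.floor(c / eps)) for c in p)
        # Only the 3^d neighboring cells can hold points within eps.
        for off in product((-1, 0, 1), repeat=len(p)):
            cell = tuple(base[i] + off[i] for i in range(len(p)))
            for q in cells.get(cell, ()):
                if math.dist(p, q) <= eps:
                    pairs.append((p, q))
    return pairs
```

The grid restricts distance computations to nearby points, turning a quadratic all-pairs comparison into roughly linear work when points are evenly distributed.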