The work of this thesis centers on distributed robotic search. Within the field of distributed robotic systems, a task of particular interest is locating one or more targets in a possibly unknown environment. While numerous studies have proposed and analyzed methods to accomplish this, the field still lacks a solid foundation of tools and techniques to facilitate the development and evaluation of different approaches to distributed robotic search. In this work, we aim to provide such a foundation through tools, methods, models, and experimental analysis.

An often overlooked aspect of the distributed robotics research process is the development and analysis of tools and modules to be used with robotic systems. These may include plug-ins for realistic robotic simulators, software/hardware systems to track multiple mobile robots in real time, extension boards for robotic platforms, and the robots themselves. Along with other tools developed and used for this work, we focus particularly on the development, characterization, and validation of a fast, accurate on-board system for relative positioning and communication between robots, a capability critical for effective distributed robotic search.

Designing individual robot controllers to generate a specific group behavior is a difficult and often counter-intuitive process. A possible alternative to hand-crafting distributed search controllers is automatic synthesis using machine-learning techniques. We explore the effectiveness of a noise-resistant version of the Particle Swarm Optimization algorithm for optimizing the weights of an embedded Artificial Neural Network, allowing a robot to learn obstacle avoidance behavior (a common benchmark for robotic learning techniques); we find that this technique appears to offer superior performance compared to the canonical approach of using Genetic Algorithms for this type of learning.
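The core of the noise-resistant variant can be illustrated with a minimal sketch: a flat weight vector parameterizes a small feedforward network, fitness evaluations are corrupted by zero-mean noise (standing in for noisy on-robot evaluation), and each particle's personal best is periodically re-evaluated and averaged with its stored estimate so that a single lucky noisy sample cannot persist. All names (`noise_resistant_pso`, `noisy_fitness`), the XOR-like stand-in task, and the specific PSO coefficients are illustrative assumptions, not details taken from the thesis.

```python
import math
import random

def nn_forward(w, x, n_in=2, n_hid=3, n_out=1):
    """Feedforward pass; w is a flat weight vector sliced into layer
    weights plus one bias per neuron (dim = n_hid*(n_in+1) + n_out*(n_hid+1))."""
    idx, h = 0, []
    for _ in range(n_hid):
        s = w[idx + n_in]  # bias
        s += sum(w[idx + i] * x[i] for i in range(n_in))
        h.append(math.tanh(s))
        idx += n_in + 1
    out = []
    for _ in range(n_out):
        s = w[idx + n_hid]  # bias
        s += sum(w[idx + j] * h[j] for j in range(n_hid))
        out.append(math.tanh(s))
        idx += n_hid + 1
    return out

def noisy_fitness(w, rng):
    """Stand-in for a noisy robot evaluation: reward matching XOR-like
    targets, corrupted by zero-mean Gaussian noise."""
    cases = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), -1)]
    err = sum((nn_forward(w, x)[0] - t) ** 2 for x, t in cases)
    return -err + rng.gauss(0, 0.1)

def noise_resistant_pso(dim, n_particles=10, iters=40, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [noisy_fitness(p, rng) for p in pos]
    for _ in range(iters):
        # Noise resistance: re-sample each personal best and blend the new
        # sample into the stored estimate, so lucky noise is averaged away.
        for i in range(n_particles):
            pbest_f[i] = 0.5 * (pbest_f[i] + noisy_fitness(pbest[i], rng))
        g = max(range(n_particles), key=lambda i: pbest_f[i])
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.6 * vel[i][d]
                             + 2.0 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 2.0 * rng.random() * (pbest[g][d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = noisy_fitness(pos[i], rng)
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    return pbest[g], pbest_f[g]
```

The re-evaluation step is what distinguishes this variant from canonical PSO: under noisy fitness, an unmodified algorithm tends to latch onto overestimated personal bests, and the blended estimate corrects for that at the cost of extra evaluations per iteration.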
A method for faster learning using distributed evaluation in a robot team is tested and found to offer comparable performance in only a fraction of the original learning time. This technique can be used for fast, effective learning and adaptation in a distributed robotic system performing search.

The process of designing and analyzing algorithms for distributed robotic systems can be greatly facilitated if models are available that describe the dynamics of the algorithm at an abstract level. Inspired by previous examples in the distributed robotics field, we design a model of robotic search that captures the system at different levels of abstraction, ranging from accurately recreating the details of individual robots to describing the entire system as an indivisible whole. To capture the entire search process, we model both the exploration phase, in which robots cover an environment in an effort to detect traces of targets, and the localization phase, in which robots use target emission sensing to navigate towards the target. The utility of our models is demonstrated by using them to develop an effective technique for the declaration phase of search, in which robots decide that a target has been accurately localized and announce its position.

In distributed robotics research, it is important that techniques developed with abstracted simulations and models be ultimately validated on real robotic platforms in order to verify their correctness. In that spirit, we run systematic sound search experiments using teams of up to ten real robots. These experiments utilize the tools developed throughout the research process, demonstrate the utility of our learning technique for fast search adaptation, and serve to validate our models of distributed robotic localization. In addition, they allow us to analyze and better understand the subtle dynamics of the search process, providing information that should be useful for future work on distributed robotic search.
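The speed-up from distributed evaluation can be made concrete with a minimal sketch: if the candidate controllers of one learning iteration are partitioned among the robots and evaluated concurrently, the wall-clock cost of an iteration drops from one evaluation slot per candidate to roughly one per candidate share. The function names (`partition`, `wall_clock`) and the example numbers are illustrative assumptions, not figures from the thesis.

```python
import math

def partition(candidates, n_robots):
    """Assign candidate controllers round-robin to robots so that each
    robot evaluates a disjoint share concurrently with the others."""
    return [candidates[r::n_robots] for r in range(n_robots)]

def wall_clock(pop_size, n_robots, eval_time):
    """Wall-clock time of one learning iteration: evaluations on different
    robots overlap, so the cost is ceil(pop/robots) sequential slots."""
    return math.ceil(pop_size / n_robots) * eval_time

# Illustrative comparison: a population of 20 candidates at 30 s per
# evaluation takes 600 s on one robot but 60 s spread over ten robots.
serial = wall_clock(20, 1, 30.0)
team = wall_clock(20, 10, 30.0)
```

Note that this sketch only models evaluation time; in practice the achievable fraction also depends on communication overhead and on how well fitness measured on different robots transfers between them.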