Size uniformity is one of the prominent features of superpixels. However, size uniformity rarely conforms to the varying content of an image. The chosen superpixel size therefore represents a compromise: obtaining the fewest superpixels without losing too much important detail. We present an image segmentation technique that generates compact clusters of pixels, grown sequentially, which automatically adapt to the local texture and scale of an image. Our algorithm liberates the user from the need to choose the right superpixel size or number. The algorithm is simple and requires just one input parameter. In addition, it is computationally very efficient, approaching real-time performance, and is easily extensible to three-dimensional image stacks and video volumes. We demonstrate that our superpixels are superior to the respective state-of-the-art algorithms on quantitative benchmarks.