The appearance of real-world objects results from the combination of their own texture and the light rays striking them before reaching our eyes, or our camera. This appearance therefore carries very rich information about the objects' shapes, materials, and motions, and also about the environment that surrounds them. However, while the human visual cortex is extremely good at disentangling these different sources, intricately fused in an object's appearance, it is very challenging to reproduce this capability on a computer. As a result, Computer Vision algorithms tend to make strong simplifying assumptions. For example, one may consider only objects made of Lambertian material, untextured or static objects, or lighting environments limited to a single point source. In this thesis, we aim to overcome these limitations and to consider the most general case: textured objects exhibiting both Lambertian and specular reflectance, moving in a general lighting environment. We first discuss the limits of a purely Lambertian approach, and show that specularities carry strong information, provided one is able to exploit them properly. We then consider two different problems, for which we show how to use specularities even in the presence of textured surfaces and uncalibrated lighting. The first problem is the recovery of the photometric parameters of object surfaces as well as of the lighting environment. We show how this can be done even in the case of extended specular reflections. The second problem we tackle is 3D object tracking. This is a well-explored problem, but one for which specularities have never been used, having instead been treated as noise. We show that they can actually improve accuracy to a degree that is not attainable with commonly used cues such as texture alone.