Several computer vision problems, such as segmentation, tracking, and shape modeling, are increasingly being solved using level set methods. However, the critical issues of stability and convergence are neglected in most level set implementations. This often leads to either a complete breakdown or premature/delayed termination of the curve evolution, producing unsatisfactory results. We present a generic convergence criterion, together with a means of determining the optimal time step for the numerical solution of the level set equation. The significant improvement in the performance of level set algorithms resulting from the proposed changes is demonstrated using object tracking and shape-contour extraction results.
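The setting can be illustrated with a minimal sketch of level set evolution under the equation ∂φ/∂t + F|∇φ| = 0, using a standard Godunov upwind scheme, a CFL-limited time step, and a simple stopping rule based on the size of the update. The function name, tolerance, and stopping rule here are illustrative assumptions, not the specific criterion proposed in the paper.

```python
import numpy as np

def evolve_level_set(phi, F, dx=1.0, cfl=0.5, tol=1e-3, max_iters=500):
    """Evolve phi under d(phi)/dt + F |grad phi| = 0 (upwind scheme).

    The time step obeys the CFL condition dt <= cfl * dx / max|F|, and
    iteration stops once the largest per-step update falls below `tol`
    (a naive convergence test, for illustration only).
    """
    dt = cfl * dx / max(np.max(np.abs(F)), 1e-12)  # CFL-limited time step
    for it in range(max_iters):
        # One-sided differences (periodic boundaries via np.roll)
        Dxm = (phi - np.roll(phi, 1, axis=1)) / dx   # backward x
        Dxp = (np.roll(phi, -1, axis=1) - phi) / dx  # forward x
        Dym = (phi - np.roll(phi, 1, axis=0)) / dx   # backward y
        Dyp = (np.roll(phi, -1, axis=0) - phi) / dx  # forward y
        # Godunov upwind gradient magnitudes for F > 0 and F < 0
        grad_plus = np.sqrt(np.maximum(Dxm, 0)**2 + np.minimum(Dxp, 0)**2 +
                            np.maximum(Dym, 0)**2 + np.minimum(Dyp, 0)**2)
        grad_minus = np.sqrt(np.minimum(Dxm, 0)**2 + np.maximum(Dxp, 0)**2 +
                             np.minimum(Dym, 0)**2 + np.maximum(Dyp, 0)**2)
        update = dt * (np.maximum(F, 0) * grad_plus +
                       np.minimum(F, 0) * grad_minus)
        phi = phi - update
        if np.max(np.abs(update)) < tol:  # front has effectively stopped
            break
    return phi, it + 1
```

Without a CFL-respecting `dt`, the update can overshoot a grid cell per step and the evolution becomes unstable; without a convergence test, the loop either stops at an arbitrary fixed iteration count or never terminates, which is the failure mode the abstract describes.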