An Integrated Framework for Improving the Quality and Reliability of Software Upgrades

Despite major advances in the engineering of maintainable and robust software, upgrading software remains a primitive and error-prone activity. In this dissertation, we argue that many problems with software upgrades stem from poor integration among upgrade deployment, testing, and problem reporting. To support this argument, we present a characterization of software upgrades based on a survey we conducted of 50 system administrators. Motivated by the survey results, we present Mirage, a distributed framework that integrates upgrade deployment, testing, and problem reporting into the overall upgrade development process.

Mirage's deployment subsystem allows the vendor to deploy upgrades in stages over clusters of users sharing similar environments. Staged deployment incorporates testing of the upgrade on the users' machines; it allows the vendor to detect problems early and limits the dissemination of buggy upgrades. Oasis, the testing subsystem of Mirage, improves on state-of-the-art concolic and symbolic execution engines with a new heuristic that prioritizes the exploration of new or affected code in the upgrade. Furthermore, interactive symbolic execution, a new approach that exposes the path-exploration problem to the tester through a graphical user interface, can be used to develop new search heuristics or to manually guide testing toward important areas of the source code.

Despite these efforts, some bugs inevitably remain in the software when it is deployed, and are discovered and reported only later by users. With the last component of Mirage, we consider the problem of instrumenting programs to reproduce bugs effectively while keeping user data private. In particular, we develop static and dynamic analysis techniques that minimize the amount of instrumentation, and therefore the overhead incurred by users, while considerably speeding up debugging.
By combining up-front testing, staged deployment, testing on user machines, and efficient problem reporting, Mirage reduces the number of upgrade problems, minimizes the number of users affected, and shortens the time needed to fix the problems that remain.

Zwaenepoel, Willy
Lausanne, EPFL
Other identifiers:
urn: urn:nbn:ch:bel-epfl-thesis5087-3

Record created 2011-05-26, last modified 2018-01-28
