Abstract

The role of evaluation in software development is well established, and it is of prime importance in usability engineering. In recent years, many interactive systems have benefited from user evaluation, including e-learning applications. As with other interactive systems, designers of Web-based training environments can benefit from the results of an evaluation to detect the strengths and weaknesses of their application and to prepare the next version. Evaluating educational software, however, is not straightforward. It requires transposing the usability concepts of effectiveness, efficiency and satisfaction to a domain at the crossroads of several sub-domains: interactive systems, collaborative work and learning theories. In this article, we present a use-case of the evaluation of a flexible learning environment for hands-on experiments.

The first part of the article presents the flexible environment proposed to engineering students at the Swiss Federal Institute of Technology, and defines the objectives of a user evaluation in light of some of the design choices we made. This evaluation has two types of objectives. First, we want to check whether the flexibility permitted by a Web-based training environment is really put into practice by the students, and whether it affects the place and time at which they choose to work. Second, we want to check whether a new component we introduced for sharing and commenting on experimental results, the eJournal, really supports the flexible learning scenario and enhances collaborative learning.

The second part of the article examines some of the most common evaluation methods used in interactive system design and in Computer Supported Cooperative Work. For each of these methods, such as checklists, questionnaires, log analysis and interviews, we describe how it could contribute to meeting the evaluation objectives defined in the first part.

Finally, the third part of the article reports on our user evaluation of the flexible environment described in the first part. It first describes the evaluation methodology we selected: the evaluation tools chosen from among those introduced in the second part, and the adaptations we made to fit our purposes. In particular, it describes some of the constraints imposed by experimenting with a panel of students who are pursuing their academic year at the same time. The report then gives some of the results of the evaluation we conducted with 30 students in automatic control who used our prototype during the winter semester 2002/2003.

The contribution of the article is to expose, through a use-case, the challenges of evaluating a Web-based training environment for sharing laboratory experiments. We see this as a necessary step towards the development of standard practices for evaluating e-learning applications.
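To give a concrete sense of the log analysis mentioned in the abstract, the sketch below shows one way access timestamps could be bucketed by hour of day and by on-/off-campus origin to study when and where students work. It is a minimal illustration, not the authors' actual tooling; the Common Log Format, the campus IP prefix 128.178., and the file name access.log are all assumptions made for the example.

```python
import re
from collections import Counter
from datetime import datetime

# Assumed Common Log Format, e.g.:
# 128.178.1.2 - - [12/Jan/2003:14:32:07 +0100] "GET /ejournal HTTP/1.1" 200 512
LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\]')

CAMPUS_PREFIX = "128.178."  # hypothetical on-campus address range


def analyse(path="access.log"):
    """Count accesses by hour of day and by campus/remote origin."""
    by_hour = Counter()
    by_origin = Counter()
    with open(path) as log:
        for line in log:
            m = LINE_RE.match(line)
            if not m:
                continue  # skip lines that do not match the assumed format
            ip, stamp = m.groups()
            when = datetime.strptime(stamp, "%d/%b/%Y:%H:%M:%S %z")
            by_hour[when.hour] += 1
            origin = "campus" if ip.startswith(CAMPUS_PREFIX) else "remote"
            by_origin[origin] += 1
    return by_hour, by_origin


if __name__ == "__main__":
    hours, origins = analyse()
    for hour in sorted(hours):
        print(f"{hour:02d}:00  {hours[hour]} requests")
    print(origins)
```

A distribution skewed towards evenings and remote addresses, for instance, would suggest that students do exploit the flexibility of place and time that the environment permits.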
