Opportunistic Search with Semantic Fisheye Views
Search goals are often too complex or poorly defined to be satisfied by a single query. While refining their search goals, users are likely to apply a variety of strategies, such as searching for more general or more specific concepts in reaction to the information and structures they encounter in the results. This is called opportunistic search. In this paper we describe how semantic fisheye views (SFEVs) can be designed to effectively support this search process by enabling rapid, interactive exploration of the multiple contexts that are useful for different opportunistic search strategies. Like other focus + context techniques, SFEVs visually emphasize and increase the detail of information related to the focus, and de-emphasize or filter less important information. The contribution of the SFEV approach is its flexible definition of context as a combination of interest metrics, which can be reconfigured and combined to support a wide range of information visualizations and lead to the discovery of diverse new search goals. To further characterize the effectiveness of this technique for opportunistic search, we have developed a visual information retrieval interface for a large collection of annotated images that implements two distinctly different SFEVs: the first uses similarity metrics to guide exploration over the images and keywords in the collection, and the second uses metrics that derive conceptual distance from an external general semantic model, WordNet. The results of a formal user experiment suggest that semantically guided search is significantly more effective than similarity-based search for complex opportunistic search and sensemaking tasks.
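The core mechanism described above, combining reconfigurable interest metrics into a single degree-of-interest score, can be illustrated with a minimal sketch. All names, data, and weights here are illustrative assumptions, not the paper's actual implementation: a keyword-overlap function stands in for the similarity metrics, and a small distance table stands in for conceptual distances derived from WordNet.

```python
def keyword_overlap(item, focus):
    """Similarity metric (assumed): fraction of the focus item's keywords shared."""
    shared = item["keywords"] & focus["keywords"]
    return len(shared) / max(len(focus["keywords"]), 1)

def semantic_proximity(item, focus, distances):
    """Semantic metric (assumed): proximity from a conceptual-distance table,
    standing in for distances computed over a model such as WordNet."""
    d = distances.get(frozenset((item["id"], focus["id"])))
    return 0.0 if d is None else 1.0 / (1.0 + d)

def degree_of_interest(item, focus, distances, weights):
    """Weighted combination of interest metrics; reconfiguring `weights`
    switches the SFEV between different notions of context."""
    return (weights["similarity"] * keyword_overlap(item, focus)
            + weights["semantic"] * semantic_proximity(item, focus, distances))

# Tiny annotated-image collection (hypothetical data).
focus = {"id": "img1", "keywords": {"beach", "sunset", "ocean"}}
items = [
    {"id": "img2", "keywords": {"beach", "ocean", "surf"}},
    {"id": "img3", "keywords": {"mountain", "snow"}},
]
distances = {frozenset(("img1", "img2")): 2.0,
             frozenset(("img1", "img3")): 1.0}

# A similarity-guided configuration emphasizes shared keywords,
# while a semantically guided one emphasizes conceptual proximity.
sim_view = {"similarity": 1.0, "semantic": 0.0}
sem_view = {"similarity": 0.0, "semantic": 1.0}

ranked_sim = sorted(items, key=lambda it: -degree_of_interest(it, focus, distances, sim_view))
ranked_sem = sorted(items, key=lambda it: -degree_of_interest(it, focus, distances, sem_view))
```

With these assumed weights, the two configurations rank the same items differently, which is the sense in which a single SFEV framework supports distinctly different views of the collection.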