Abstract

Due to conservative design models and safe construction practices, infrastructure often has significant, yet unknown, reserve capacity that greatly exceeds code requirements. Reserve-capacity assessments lead to better asset-management decisions by avoiding unnecessary replacement or lowering maintenance expenses. Field measurements have the potential to improve the accuracy of model predictions. To fulfil this potential, measurements must be combined with an adequate structural-identification methodology. Error-domain model falsification is an intuitive model-based methodology that explicitly represents the systematic uncertainties typically associated with structural models. Additionally, model-updating outcomes depend on the design of the measurement system. Engineers usually select sensor types and place sensors based on experience and signal-to-noise estimations. The development of more rational strategies for measurement-system design has recently received research attention. Quantitative sensor-placement strategies differ either in the objective function used for sensor placement or in the optimization algorithm employed. This study compares greedy-search (hierarchical) and global-search (such as genetic algorithms or probabilistic global search Lausanne) methodologies in terms of joint-entropy evaluations, recommended sensor configurations and qualitative characteristics using a full-scale test study, the Rockingham Bridge (Australia). Results show that, for small numbers of sensors, global-search algorithms only slightly outperform the greedy-search algorithm in terms of information gain, and this comes at the expense of longer computational time. Nevertheless, global-search strategies provide other advantages, such as finding multiple near-optimal sensor configurations. These advantages are illustrated using the full-scale bridge case.
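To make the comparison concrete, the following is a minimal sketch of the greedy (hierarchical) placement idea described above, under simplifying assumptions: predictions from candidate model instances at candidate sensor locations are given as a NumPy array, the joint-entropy objective is computed from equal-width prediction intervals only (the full methodology also accounts for measurement and model uncertainties), and the names joint_entropy and greedy_placement are illustrative, not taken from the paper.

```python
import numpy as np

def joint_entropy(predictions, sensor_ids, n_bins=20):
    """Shannon joint entropy (bits) of binned model predictions at the
    selected sensor locations.

    predictions : (n_model_instances, n_locations) array of predicted
                  responses, one row per candidate model instance
    sensor_ids  : list of column indices of the selected sensors
    """
    cols = predictions[:, sensor_ids]
    binned = np.empty_like(cols, dtype=int)
    for j in range(cols.shape[1]):
        # Discretize each location's predictions into equal-width intervals.
        edges = np.linspace(cols[:, j].min(), cols[:, j].max(), n_bins + 1)
        binned[:, j] = np.clip(np.digitize(cols[:, j], edges) - 1, 0, n_bins - 1)
    # Probability of each joint interval combination over the model population.
    _, counts = np.unique(binned, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def greedy_placement(predictions, n_sensors, n_bins=20):
    """Greedy (hierarchical) selection: at each step, add the location that
    most increases the joint entropy of the already-selected sensor set."""
    selected = []
    remaining = list(range(predictions.shape[1]))
    for _ in range(n_sensors):
        best = max(remaining,
                   key=lambda j: joint_entropy(predictions, selected + [j], n_bins))
        selected.append(best)
        remaining.remove(best)
    return selected
```

For example, greedy_placement(predictions, 5) would return five candidate locations whose combined predictions best discriminate between model instances under this simplified objective; a global-search strategy (such as a genetic algorithm) would instead search over entire sensor configurations at once, which is costlier but can return several near-optimal configurations.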
