Abstract

Most infrastructure in the Western world was built in the second half of the 20th century. Transportation structures, water distribution networks and energy production systems are now aging, and this leads to safety and serviceability issues. In situations where conservative routine assessments fail to justify adequate safety and serviceability, advanced structural evaluation methodologies are attractive. These advanced methodologies employ measurements to help understand structural behavior more accurately. A better understanding typically results in more accurate reserve-capacity evaluations, along with other advantages. Many of the available approaches originate from the fields of statistics, signal processing and control engineering, where it is common to assume that modeling errors can be treated as Gaussian noise. Such an assumption is not generally applicable to civil infrastructure because, in these systems, systematic biases in models can be significant and their effects often vary with location. Most importantly, little is known about the dependencies between these errors. This thesis proposes a model-based data-interpretation methodology that builds on the concept of probabilistic falsification. This approach can identify the properties of structures in the presence of aleatory and systematic errors, without requiring definitions of dependencies between uncertainties. Prior knowledge is used to bound the ranges of the parameters to be identified and to build an initial set of possible model instances. Predictions from each model instance are then compared with measurements gathered on-site, and inadequate model instances and model classes are falsified using threshold bounds. These bounds are defined using measurement and modeling uncertainties. The probability of discarding a valid model instance is regulated using the Šidák correction to account for multiple measurements. A new metric called “expected identifiability” quantifies, probabilistically, the utility of monitoring interventions. It captures the effect of hypotheses and choices such as the uncertainty level, model-class refinement, measurement locations, measurement types and sensor accuracy. Results show that using too many measurements may decrease data-interpretation performance. Probabilistic model falsification, expected identifiability and measurement-system design methodologies are applied to several full-scale case studies. The work shows that data interpretation is limited by factors such as its robustness to inaccurate uncertainty definitions and the exponential complexity of exploring high-dimensional solution spaces. Paths for tackling these issues are proposed as guidance for future research.
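
The sketch below illustrates the falsification step described above: candidate model instances are retained only if their residuals stay within threshold bounds at every measurement location, with the per-measurement confidence level adjusted by the Šidák correction. It is a minimal illustration, not the thesis implementation; the function name, array shapes and the Gaussian form assumed for the combined measurement-and-modeling uncertainty are assumptions made for this example only.

```python
import numpy as np
from scipy.stats import norm

def falsify_models(predictions, measurements, sigma_combined, target_reliability=0.95):
    """Minimal sketch of probabilistic model falsification.

    predictions        : (n_models, n_sensors) predictions from each model instance
    measurements       : (n_sensors,) measurements gathered on-site
    sigma_combined     : (n_sensors,) std. dev. of combined measurement and modeling
                         uncertainty at each location (Gaussian assumed here only
                         for illustration)
    target_reliability : target probability of retaining a valid model instance
    """
    n_models, n_sensors = predictions.shape

    # Sidak correction: per-measurement confidence level so that the joint
    # probability of keeping a valid instance remains at target_reliability
    per_sensor_level = target_reliability ** (1.0 / n_sensors)

    # Two-sided threshold bounds on the residual at each measurement location
    z = norm.ppf(0.5 + per_sensor_level / 2.0)
    lower = -z * sigma_combined
    upper = z * sigma_combined

    # A model instance is falsified if its residual violates the bounds
    # at any single measurement location
    residuals = measurements - predictions              # (n_models, n_sensors)
    inside = (residuals >= lower) & (residuals <= upper)
    return inside.all(axis=1)                            # True for candidate instances
```

With hypothetical numbers, usage could look like `candidates = falsify_models(preds, meas, sigma)`, where the surviving instances form the candidate model set; note how increasing `n_sensors` widens the per-measurement bounds through the Šidák correction, which is one reason adding measurements does not always improve identification.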
