Abstract

The measurement performance of the baseline system design for the ITER high-frequency magnetic diagnostic has been analyzed using an algorithm based on the sparse representation of signals. This algorithm, derived from the SparSpec code [S. Bourguignon et al., Astronomy and Astrophysics 462 (2007), 379], has previously been extensively benchmarked on real and simulated JET data. To optimize the system design of the ITER high-frequency magnetic diagnostic, we attempt to reduce false detections of the modes and to minimize the sensitivity of the measurement to noise in the data, the loss of faulty sensors, and the displacement of the sensors. Using this approach, the original layout design for the ITER high-frequency magnetic diagnostic system, which uses 168 sensors, is found to be inadequate to meet the ITER measurement requirements. Based on this analysis, and taking into account the guidelines for the risk-mitigation strategies given in the ITER management plan, various optimizations of this diagnostic system have been attempted. A revised proposal for its implementation has been developed, which now meets the ITER requirements for measurement performance and risk management. For toroidal mode number detection, this implementation includes: on the low-field side, 2 arrays of 50-55 sensors and 2 arrays of 25-35 unevenly spaced sensors each; on the high-field side, 2 arrays of 25-35 unevenly spaced sensors each. For poloidal mode number detection, we propose 6 arrays of 25-40 sensors each, located in non-equidistant machine sectors, not covering the divertor region and, possibly, poloidal angles in the range 75 < |θ| (deg) < 105, as this region is the most sensitive to the details of the magnetic equilibrium. In Part 1 of this contribution we present a general summary of the results of this work; more details and an overview of our test calculations are reported in Part 2, an accompanying paper in this same journal issue.
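The SparSpec-based analysis itself is not reproduced here, but the following minimal Python sketch illustrates the sparse-representation idea the abstract refers to: toroidal mode numbers are recovered from a set of unevenly spaced sensors by an L1-regularized least-squares fit over a dictionary of candidate modes. The sensor layout, mode content, noise level, regularization weight, and the simple ISTA solver are all illustrative assumptions, not the ITER design values and not the authors' code.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, unevenly spaced toroidal sensor angles (radians); not the ITER layout.
n_sensors = 30
phi = np.sort(rng.uniform(0.0, 2.0 * np.pi, n_sensors))

# Synthetic signal containing two toroidal modes (n, complex amplitude) plus noise.
true_modes = {3: 1.0 * np.exp(1j * 0.4), 10: 0.6 * np.exp(-1j * 1.1)}
b = sum(a * np.exp(1j * n * phi) for n, a in true_modes.items())
b = b + 0.05 * (rng.standard_normal(n_sensors) + 1j * rng.standard_normal(n_sensors))

# Dictionary whose columns are exp(i*n*phi) for candidate mode numbers n.
n_max = 30
candidates = np.arange(-n_max, n_max + 1)
A = np.exp(1j * np.outer(phi, candidates))

def ista(A, b, lam, n_iter=2000):
    """Solve min_x 0.5*||A x - b||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        z = x - A.conj().T @ (A @ x - b) / L   # gradient step on the data-fit term
        mag = np.abs(z)
        x = np.where(mag > lam / L, (1.0 - lam / (L * mag + 1e-30)) * z, 0.0)
    return x

x = ista(A, b, lam=0.5)
detected = {int(n): round(abs(a), 2) for n, a in zip(candidates, x) if abs(a) > 0.1}
print("detected toroidal mode numbers:", detected)   # expect peaks at n = 3 and n = 10

In this sketch the uneven spacing means the dictionary columns are not exactly aliased copies of one another, which is one generic reason sparse fits over non-equidistant arrays can separate mode numbers that an equally spaced array of the same size could not; this is a general property of the approach and not a statement about the specific ITER layouts proposed above.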
