An Algorithm to Compute Bounds for the Star Discrepancy

We propose an algorithm to compute upper and lower bounds for the star discrepancy of an arbitrary sequence of points in the s-dimensional unit cube. The method is based on a particular partition of the unit cube into subintervals and on a specialized procedure for orthogonal range counting. The cardinality of the partition depends on the dimension and on a user-specified accuracy parameter. We have implemented the method and present results of computational experiments obtained with this implementation.
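The paper's method relies on a carefully constructed partition and a specialized range-counting procedure; the details are not reproduced in this record. As a generic illustration of the quantity being bounded, the following sketch (function names are hypothetical, not from the paper) evaluates the local discrepancy of a point set over a k^s grid of anchored boxes [0, x). Since the star discrepancy is a supremum over all such boxes, the maximum over any finite subset of boxes is a valid lower bound:

```python
import itertools

def local_discrepancy(points, corner):
    """|A(corner)/N - vol([0, corner))|: counting error of the box [0, corner)."""
    n = len(points)
    count = sum(all(p[j] < corner[j] for j in range(len(corner))) for p in points)
    vol = 1.0
    for c in corner:
        vol *= c
    return abs(count / n - vol)

def star_discrepancy_lower_bound(points, k):
    """Maximize the local discrepancy over a k^s grid of box corners.

    Restricting the supremum defining the star discrepancy to a finite
    family of boxes always yields a lower bound on the true value.
    """
    s = len(points[0])
    grid = [i / k for i in range(1, k + 1)]
    return max(local_discrepancy(points, c)
               for c in itertools.product(grid, repeat=s))
```

For example, the one-point set {0.5} in dimension 1 has star discrepancy exactly 0.5, and the grid with k = 2 already attains it. Obtaining a guaranteed upper bound is the harder direction, and it is there that the paper's partition construction is needed.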


Published in:
Journal of Complexity 17(4), 850–880
Year:
2001
Note:
PRO 2001.16




 Record created 2006-02-13, last modified 2018-01-27
