RatioBounds

$$\operatorname{RatioBounds}(\mathbf{x}, \mathbf{y}, \mathrm{misrate}) = \exp(\operatorname{ShiftBounds}(\log \mathbf{x}, \log \mathbf{y}, \mathrm{misrate}))$$

Robust bounds on $\operatorname{Ratio}(\mathbf{x}, \mathbf{y})$ with specified coverage — the multiplicative dual of $\operatorname{ShiftBounds}$.
Also known as — distribution-free confidence interval for the Hodges–Lehmann ratio
Interpretation — $\mathrm{misrate}$ is the probability that the true ratio falls outside the bounds
Domain — $x_i > 0$, $y_j > 0$, $\mathrm{misrate} \geq \frac{2}{\binom{n+m}{n}}$
Assumptions — positivity(x), positivity(y)
Unit — dimensionless
Note — assumes weak continuity (ties from measurement resolution are tolerated but may yield conservative bounds)
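The domain constraints above translate directly into an input check. A minimal sketch in Python; the helper name is illustrative and not part of the spec:

```python
import math

def validate_ratio_bounds_inputs(x, y, misrate):
    """Check the RatioBounds domain: positive samples and an achievable misrate."""
    if not x or not y:
        raise ValueError("both samples must be non-empty")
    if any(v <= 0 for v in x) or any(v <= 0 for v in y):
        raise ValueError("all values must be strictly positive (log is applied)")
    n, m = len(x), len(y)
    # Smallest achievable misrate for the distribution-free bounds: 2 / C(n+m, n)
    min_misrate = 2.0 / math.comb(n + m, n)
    if misrate < min_misrate:
        raise ValueError(f"misrate must be >= {min_misrate:.3g} for n={n}, m={m}")
```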
Properties
Scale invariance: $\operatorname{RatioBounds}(k \cdot \mathbf{x}, k \cdot \mathbf{y}, \mathrm{misrate}) = \operatorname{RatioBounds}(\mathbf{x}, \mathbf{y}, \mathrm{misrate})$
Scale equivariance: $\operatorname{RatioBounds}(k_x \cdot \mathbf{x}, k_y \cdot \mathbf{y}, \mathrm{misrate}) = \frac{k_x}{k_y} \cdot \operatorname{RatioBounds}(\mathbf{x}, \mathbf{y}, \mathrm{misrate})$
Multiplicative antisymmetry: $\operatorname{RatioBounds}(\mathbf{x}, \mathbf{y}, \mathrm{misrate}) = 1 / \operatorname{RatioBounds}(\mathbf{y}, \mathbf{x}, \mathrm{misrate})$ (bounds reversed)
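These properties lend themselves to mechanical spot-checks against any candidate implementation. A sketch, assuming a ratio_bounds(x, y, misrate) callable that returns a (lower, upper) pair; the callable, the chosen constants, and the tolerance are assumptions for illustration:

```python
import math

def check_ratio_bounds_properties(ratio_bounds, x, y, misrate=1e-3):
    """Spot-check the three properties for a candidate ratio_bounds implementation."""
    lo, hi = ratio_bounds(x, y, misrate)

    # Scale invariance: scaling both samples by the same k changes nothing.
    k = 3.0
    lo_k, hi_k = ratio_bounds([k * v for v in x], [k * v for v in y], misrate)
    assert math.isclose(lo_k, lo) and math.isclose(hi_k, hi)

    # Scale equivariance: scaling x by kx and y by ky scales both bounds by kx / ky.
    kx, ky = 2.0, 5.0
    lo_s, hi_s = ratio_bounds([kx * v for v in x], [ky * v for v in y], misrate)
    assert math.isclose(lo_s, (kx / ky) * lo) and math.isclose(hi_s, (kx / ky) * hi)

    # Multiplicative antisymmetry: swapping samples reciprocates and reverses the bounds.
    lo_r, hi_r = ratio_bounds(y, x, misrate)
    assert math.isclose(lo_r, 1.0 / hi) and math.isclose(hi_r, 1.0 / lo)
```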
Example
RatioBounds([1..30], [10..40], 1e-4) where Ratio ≈ 0.5 yields bounds containing $0.5$
Bounds fail to cover the true ratio with probability $\approx \mathrm{misrate}$
Relationship to ShiftBounds
$\operatorname{RatioBounds}$ is computed via log-transformation:

$$\operatorname{RatioBounds}(\mathbf{x}, \mathbf{y}, \mathrm{misrate}) = \exp(\operatorname{ShiftBounds}(\log \mathbf{x}, \log \mathbf{y}, \mathrm{misrate}))$$

This means that if $\operatorname{ShiftBounds}$ returns $[a, b]$ for the log-transformed samples, $\operatorname{RatioBounds}$ returns $[e^a, e^b]$.
$\operatorname{RatioBounds}$ provides not just the estimated ratio but also the uncertainty of that estimate. The function returns an interval of plausible ratio values given the data. Set $\mathrm{misrate}$ to control how often the bounds may fail to contain the true ratio: use $10^{-3}$ for everyday analysis or $10^{-6}$ for critical decisions where errors are costly. These bounds require no assumptions about your data distribution, so they remain valid for any continuous positive measurements. If the bounds exclude $1$, that suggests a reliable multiplicative difference between the two groups.
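As a small illustration of the last point, a helper that flags a reliable multiplicative difference only when the interval excludes 1 (the function name is illustrative):

```python
def differs_multiplicatively(lower, upper):
    """True when the RatioBounds interval excludes a ratio of 1,
    i.e. the data suggest a reliable multiplicative difference."""
    return lower > 1.0 or upper < 1.0
```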
Algorithm
The $\operatorname{RatioBounds}$ estimator uses the same log-exp transformation as Ratio, delegating to ShiftBounds in log-space:

$$\operatorname{RatioBounds}(\mathbf{x}, \mathbf{y}, \mathrm{misrate}) = \exp(\operatorname{ShiftBounds}(\log \mathbf{x}, \log \mathbf{y}, \mathrm{misrate}))$$
The algorithm operates in three steps:
Log-transform: apply $\log$ to each element of both samples. Positivity is required so that the logarithm is defined.
Delegate to ShiftBounds: compute $[a, b] = \operatorname{ShiftBounds}(\log \mathbf{x}, \log \mathbf{y}, \mathrm{misrate})$. This provides distribution-free bounds on the shift in log-space.
Exp-transform: return $[e^a, e^b]$, converting the additive bounds back to multiplicative bounds.
Because $\log$ and $\exp$ are monotone, the coverage guarantee of $\operatorname{ShiftBounds}$ transfers directly: the probability that the true ratio falls outside $[e^a, e^b]$ equals the probability that the true log-shift falls outside $[a, b]$, which is at most $\mathrm{misrate}$.
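A minimal Python sketch of the three steps. Note that shift_bounds_stub is only a stand-in: it returns the extreme pairwise differences, which is deliberately conservative and uses misrate only for the feasibility check; a real ShiftBounds selects interior order statistics via PairwiseMargin.

```python
import math

def shift_bounds_stub(xs, ys, misrate):
    """Stand-in for ShiftBounds: extreme pairwise differences (maximally conservative).

    A real implementation picks interior order statistics of the pairwise
    differences at positions determined by PairwiseMargin.
    """
    n, m = len(xs), len(ys)
    if misrate < 2.0 / math.comb(n + m, n):
        raise ValueError("misrate below the achievable minimum for these sample sizes")
    diffs = sorted(a - b for a in xs for b in ys)
    return diffs[0], diffs[-1]

def ratio_bounds(x, y, misrate, shift_bounds=shift_bounds_stub):
    """RatioBounds sketch: log-transform, delegate, exp-transform."""
    if any(v <= 0 for v in x) or any(v <= 0 for v in y):
        raise ValueError("RatioBounds requires strictly positive samples")
    log_x = [math.log(v) for v in x]            # step 1: log-transform
    log_y = [math.log(v) for v in y]
    a, b = shift_bounds(log_x, log_y, misrate)  # step 2: delegate to ShiftBounds
    return math.exp(a), math.exp(b)             # step 3: back to the ratio scale

# Identity case: identical samples yield bounds containing 1.
print(ratio_bounds([2, 3, 4, 5, 6], [2, 3, 4, 5, 6], 0.05))
```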
Tests
RatioBounds ( x , y , m i s r a t e ) = exp ( ShiftBounds ( log x , log y , m i s r a t e ) ) \operatorname{RatioBounds}(\mathbf{x}, \mathbf{y}, \mathrm{misrate}) = \exp(\operatorname{ShiftBounds}(\log \mathbf{x}, \log \mathbf{y}, \mathrm{misrate})) RatioBounds ( x , y , misrate ) = exp ( ShiftBounds ( log x , log y , misrate ))
The $\operatorname{RatioBounds}$ test suite contains 61 correctness test cases (3 demo + 9 natural + 6 property + 10 edge + 9 multiplic + 4 uniform + 5 misrate + 15 unsorted). Since $\operatorname{RatioBounds}$ returns bounds rather than a point estimate, tests validate that the bounds contain $\operatorname{Ratio}(\mathbf{x}, \mathbf{y})$ and satisfy the equivariance properties. Each test case output is a JSON object with lower and upper fields representing the interval bounds. All samples must contain strictly positive values. The domain constraint $\mathrm{misrate} \geq \frac{2}{\binom{n+m}{n}}$ is enforced; inputs violating it return a domain error.
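For reference, a sketch of the per-case validation this implies; the helper name and exact assertions are illustrative:

```python
def check_test_output(output, ratio_estimate):
    """Validate one test-case output: a JSON-decoded dict with "lower" and "upper"
    that are ordered, strictly positive, and contain the Ratio point estimate."""
    lower, upper = output["lower"], output["upper"]
    assert 0.0 < lower <= upper
    assert lower <= ratio_estimate <= upper
```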
Demo examples ($n = m = 5$, positive samples) — 3 tests:
demo-1: $\mathbf{x} = (1, 2, 3, 4, 5)$, $\mathbf{y} = (2, 3, 4, 5, 6)$, $\mathrm{misrate} = 0.05$
demo-2: $\mathbf{x} = (1, 2, 3, 4, 5)$, $\mathbf{y} = (2, 3, 4, 5, 6)$, $\mathrm{misrate} = 0.01$, expected: wider bounds than demo-1
demo-3: $\mathbf{x} = (2, 3, 4, 5, 6)$, $\mathbf{y} = (2, 3, 4, 5, 6)$, $\mathrm{misrate} = 0.05$, expected: bounds containing $1$ (identity case)
These cases illustrate how tighter misrates produce wider bounds and validate the identity property where identical samples yield bounds containing one.
Natural sequences ($[n, m] \in \{5, 8, 10\} \times \{5, 8, 10\}$, $\mathrm{misrate} = 10^{-2}$) — 9 combinations:
natural-5-5: $\mathbf{x} = (1, \ldots, 5)$, $\mathbf{y} = (1, \ldots, 5)$, expected bounds containing $1$
natural-5-8: $\mathbf{x} = (1, \ldots, 5)$, $\mathbf{y} = (1, \ldots, 8)$
natural-5-10: $\mathbf{x} = (1, \ldots, 5)$, $\mathbf{y} = (1, \ldots, 10)$
natural-8-5: $\mathbf{x} = (1, \ldots, 8)$, $\mathbf{y} = (1, \ldots, 5)$
natural-8-8: $\mathbf{x} = (1, \ldots, 8)$, $\mathbf{y} = (1, \ldots, 8)$, expected bounds containing $1$
natural-8-10: $\mathbf{x} = (1, \ldots, 8)$, $\mathbf{y} = (1, \ldots, 10)$
natural-10-5: $\mathbf{x} = (1, \ldots, 10)$, $\mathbf{y} = (1, \ldots, 5)$
natural-10-8: $\mathbf{x} = (1, \ldots, 10)$, $\mathbf{y} = (1, \ldots, 8)$
natural-10-10: $\mathbf{x} = (1, \ldots, 10)$, $\mathbf{y} = (1, \ldots, 10)$, expected bounds containing $1$
These sizes are chosen to satisfy $\mathrm{misrate} \geq \frac{2}{\binom{n+m}{n}}$ for all combinations.
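This is easy to confirm directly for the grid above:

```python
import math
from itertools import product

# Verify misrate >= 2 / C(n+m, n) for every natural-sequence combination.
misrate = 1e-2
for n, m in product([5, 8, 10], repeat=2):
    assert misrate >= 2.0 / math.comb(n + m, n), (n, m)
# Tightest case: n = m = 5 gives 2 / C(10, 5) = 2 / 252 ≈ 0.0079 <= 0.01.
```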
Property validation ($n = m = 10$, $\mathrm{misrate} = 10^{-3}$) — 6 tests:
property-identity: $\mathbf{x} = (1, 2, \ldots, 10)$, $\mathbf{y} = (1, 2, \ldots, 10)$, bounds must contain $1$
property-scale-2x: $\mathbf{x} = (2, 4, \ldots, 20)$, $\mathbf{y} = (1, 2, \ldots, 10)$, bounds must contain $2$
property-reciprocal: $\mathbf{x} = (1, 2, \ldots, 10)$, $\mathbf{y} = (2, 4, \ldots, 20)$, bounds must contain $0.5$ (reciprocal of scale-2x)
property-common-scale: $\mathbf{x} = (10, 20, \ldots, 100)$, $\mathbf{y} = (20, 40, \ldots, 200)$
Same ratio as property-reciprocal (common scale invariance)
property-small-values: $\mathbf{x} = (0.1, 0.2, \ldots, 1.0)$, $\mathbf{y} = (0.2, 0.4, \ldots, 2.0)$
Same ratio as property-reciprocal (small-value handling)
property-mixed-scales: $\mathbf{x} = (0.01, 0.1, 1, 10, 100, 1000, 0.5, 5, 50, 500)$, $\mathbf{y} = (0.1, 1, 10, 100, 1000, 10000, 5, 50, 500, 5000)$
Wide-range validation
Edge cases — boundary conditions and extreme scenarios (10 tests):
edge-min-samples: $\mathbf{x} = (2, 3, 4, 5, 6)$, $\mathbf{y} = (3, 4, 5, 6, 7)$, $\mathrm{misrate} = 0.05$
edge-permissive-misrate: $\mathbf{x} = (1, 2, 3, 4, 5)$, $\mathbf{y} = (2, 3, 4, 5, 6)$, $\mathrm{misrate} = 0.5$ (very narrow bounds)
edge-strict-misrate: $n = m = 20$, $\mathrm{misrate} = 10^{-6}$ (very wide bounds)
edge-unity-ratio: $n = m = 10$, all values $= 5$, $\mathrm{misrate} = 10^{-3}$ (bounds around 1)
edge-asymmetric-3-100: $n = 3$, $m = 100$, $\mathrm{misrate} = 10^{-2}$ (extreme size difference)
edge-asymmetric-5-50: $n = 5$, $m = 50$, $\mathrm{misrate} = 10^{-3}$ (highly unbalanced)
edge-duplicates: $\mathbf{x} = (3, 3, 3, 3, 3)$, $\mathbf{y} = (5, 5, 5, 5, 5)$, $\mathrm{misrate} = 10^{-2}$ (all duplicates, bounds around 0.6)
edge-wide-range: $n = m = 10$, values spanning $10^{-3}$ to $10^{8}$, $\mathrm{misrate} = 10^{-3}$ (extreme value range)
edge-tiny-values: $n = m = 10$, values $\approx 10^{-6}$, $\mathrm{misrate} = 10^{-3}$ (numerical precision)
edge-large-values: $n = m = 10$, values $\approx 10^{8}$, $\mathrm{misrate} = 10^{-3}$ (large magnitude)
These edge cases stress-test boundary conditions, numerical stability, and the margin calculation with extreme parameters.
Multiplic distribution ($[n, m] \in \{10, 30, 50\} \times \{10, 30, 50\}$, $\mathrm{misrate} = 10^{-3}$) — 9 combinations with $\operatorname{Multiplic}(1, 0.5)$:
multiplic-10-10, multiplic-10-30, multiplic-10-50
multiplic-30-10, multiplic-30-30, multiplic-30-50
multiplic-50-10, multiplic-50-30, multiplic-50-50
Random generation: $\mathbf{x}$ uses seed 0, $\mathbf{y}$ uses seed 1
These fuzzy tests validate that bounds properly encompass the ratio estimate for realistic log-normally-distributed data at various sample sizes.
Uniform distribution ($[n, m] \in \{10, 100\} \times \{10, 100\}$, $\mathrm{misrate} = 10^{-4}$) — 4 combinations with $\operatorname{Uniform}(1, 10)$:
uniform-10-10, uniform-10-100, uniform-100-10, uniform-100-100
Random generation: $\mathbf{x}$ uses seed 2, $\mathbf{y}$ uses seed 3
Note: positive range $[1, 10)$ used for ratio compatibility
The asymmetric size combinations are particularly important for testing margin calculation with unbalanced samples.
Misrate variation ($n = m = 20$, $\mathbf{x} = (1, 2, \ldots, 20)$, $\mathbf{y} = (2, 4, \ldots, 40)$) — 5 tests with varying misrates:
misrate-1e-2: $\mathrm{misrate} = 10^{-2}$
misrate-1e-3: $\mathrm{misrate} = 10^{-3}$
misrate-1e-4: $\mathrm{misrate} = 10^{-4}$
misrate-1e-5: $\mathrm{misrate} = 10^{-5}$
misrate-1e-6: $\mathrm{misrate} = 10^{-6}$
These tests use identical samples with varying misrates to validate the monotonicity property: smaller misrates (higher confidence) produce wider bounds. The sequence demonstrates how bound width increases as misrate decreases, helping implementations verify correct margin calculation.
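A sketch of the nesting check these tests imply, again assuming a ratio_bounds(x, y, misrate) callable returning (lower, upper):

```python
def check_misrate_monotonicity(ratio_bounds, x, y,
                               misrates=(1e-2, 1e-3, 1e-4, 1e-5, 1e-6)):
    """Decreasing misrate must never shrink the interval: each interval
    contains the one computed with the previous (larger) misrate."""
    intervals = [ratio_bounds(x, y, r) for r in misrates]
    for (lo_prev, hi_prev), (lo_next, hi_next) in zip(intervals, intervals[1:]):
        assert lo_next <= lo_prev and hi_next >= hi_prev
```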
Unsorted tests — verify independent sorting of $\mathbf{x}$ and $\mathbf{y}$ (15 tests):
unsorted-x-natural-5-5: $\mathbf{x} = (5, 3, 1, 4, 2)$, $\mathbf{y} = (1, 2, 3, 4, 5)$, $\mathrm{misrate} = 10^{-2}$ (X unsorted, Y sorted)
unsorted-y-natural-5-5: $\mathbf{x} = (1, 2, 3, 4, 5)$, $\mathbf{y} = (5, 3, 1, 4, 2)$, $\mathrm{misrate} = 10^{-2}$ (X sorted, Y unsorted)
unsorted-both-natural-5-5: $\mathbf{x} = (5, 3, 1, 4, 2)$, $\mathbf{y} = (5, 3, 1, 4, 2)$, $\mathrm{misrate} = 10^{-2}$ (both unsorted)
unsorted-x-shuffle-5-5: $\mathbf{x} = (3, 1, 5, 4, 2)$, $\mathbf{y} = (1, 2, 3, 4, 5)$, $\mathrm{misrate} = 10^{-2}$ (X shuffled)
unsorted-y-shuffle-5-5: $\mathbf{x} = (1, 2, 3, 4, 5)$, $\mathbf{y} = (4, 2, 5, 1, 3)$, $\mathrm{misrate} = 10^{-2}$ (Y shuffled)
unsorted-both-shuffle-5-5: $\mathbf{x} = (3, 1, 5, 4, 2)$, $\mathbf{y} = (2, 4, 1, 5, 3)$, $\mathrm{misrate} = 10^{-2}$ (both shuffled)
unsorted-demo-unsorted-x: $\mathbf{x} = (5, 1, 4, 2, 3)$, $\mathbf{y} = (2, 3, 4, 5, 6)$, $\mathrm{misrate} = 0.05$ (demo-1 X unsorted)
unsorted-demo-unsorted-y: $\mathbf{x} = (1, 2, 3, 4, 5)$, $\mathbf{y} = (6, 2, 5, 3, 4)$, $\mathrm{misrate} = 0.05$ (demo-1 Y unsorted)
unsorted-demo-both-unsorted: $\mathbf{x} = (4, 1, 5, 2, 3)$, $\mathbf{y} = (5, 2, 6, 3, 4)$, $\mathrm{misrate} = 0.05$ (demo-1 both unsorted)
unsorted-identity-unsorted: $\mathbf{x} = (4, 1, 5, 2, 3)$, $\mathbf{y} = (5, 1, 4, 3, 2)$, $\mathrm{misrate} = 10^{-2}$ (identity property, both unsorted)
unsorted-scale-unsorted: $\mathbf{x} = (10, 30, 20)$, $\mathbf{y} = (15, 5, 10)$, $\mathrm{misrate} = 0.5$ (scale relationship, both unsorted)
unsorted-asymmetric-5-10: $\mathbf{x} = (2, 5, 1, 3, 4)$, $\mathbf{y} = (10, 5, 2, 8, 4, 1, 9, 3, 7, 6)$, $\mathrm{misrate} = 10^{-2}$ (asymmetric sizes, both unsorted)
unsorted-duplicates: $\mathbf{x} = (3, 3, 3, 3, 3)$, $\mathbf{y} = (5, 5, 5, 5, 5)$, $\mathrm{misrate} = 10^{-2}$ (all duplicates, any order)
unsorted-mixed-duplicates-x: $\mathbf{x} = (2, 1, 3, 2, 1)$, $\mathbf{y} = (1, 1, 2, 2, 3)$, $\mathrm{misrate} = 10^{-2}$ (X has unsorted duplicates)
unsorted-mixed-duplicates-y: $\mathbf{x} = (1, 1, 2, 2, 3)$, $\mathbf{y} = (3, 2, 1, 3, 2)$, $\mathrm{misrate} = 10^{-2}$ (Y has unsorted duplicates)
These unsorted tests are critical because $\operatorname{RatioBounds}$ computes bounds from pairwise ratios, requiring both samples to be sorted independently. The variety ensures implementations don't incorrectly assume pre-sorted input or sort the samples together. Each test must produce identical output to its sorted counterpart, validating that the implementation correctly handles the sorting step.
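A sketch of the order-invariance check, assuming the same hypothetical ratio_bounds callable; it shuffles each sample independently and expects output identical to the sorted baseline:

```python
import random

def check_order_invariance(ratio_bounds, x, y, misrate=1e-2, trials=10, seed=0):
    """Shuffling x and y independently must not change the bounds."""
    rng = random.Random(seed)
    reference = ratio_bounds(sorted(x), sorted(y), misrate)
    for _ in range(trials):
        xs, ys = list(x), list(y)
        rng.shuffle(xs)  # x and y are permuted independently
        rng.shuffle(ys)
        assert ratio_bounds(xs, ys, misrate) == reference
```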
No performance test — $\operatorname{RatioBounds}$ uses the FastRatio algorithm internally, which delegates to FastShift in log-space. Since the bounds computation involves only two quantile calculations from the pairwise differences (at positions determined by $\operatorname{PairwiseMargin}$), the performance characteristics are equivalent to computing two $\operatorname{Ratio}$ estimates, which completes efficiently for large samples.