# Homogeneity Test of Many-to-One Relative Risk Ratios in Unilateral and Bilateral Data with Multiple Groups

College of Mathematics and System Science, Xinjiang University, Urumqi 830046, China
Author to whom correspondence should be addressed.
Axioms 2023, 12(4), 333; https://doi.org/10.3390/axioms12040333
Received: 21 February 2023 / Revised: 23 March 2023 / Accepted: 26 March 2023 / Published: 29 March 2023
(This article belongs to the Special Issue Fuzzy Logic and Application in Multi-Criteria Decision-Making (MCDM))

## Abstract

In medical clinical studies, we often encounter unilateral or bilateral data from paired organs. For bilateral data, there exists an intraclass correlation between paired organs. Under an intraclass correlation model, this paper proposes asymptotic statistics for testing the equality of many-to-one relative risk ratios in combined unilateral and bilateral data, and derives explicit expressions for these statistics. These procedures also apply to purely unilateral or purely bilateral data. Simulation results show that the score test has a robust empirical type-I error rate and sufficient power. We provide a clinical trial of acute otitis media to illustrate the proposed methods.
MSC:
62F25; 91G70; 62P10

## 1. Introduction

Binary outcome data are widespread in medical applications. We often encounter observations on paired organs (e.g., eyes, ears, and arms) in medical clinical studies. If only one of a patient’s paired organs is diseased, the response outcome of that organ is either cured or not cured; we call this unilateral data. Previous researchers have made outstanding achievements in statistical inference for unilateral data [1,2,3,4]. If both of a patient’s paired organs are diseased, the treatment responses of the paired organs are none, one, or both cured; this is called bilateral data. Generally, the paired organs of a patient may be correlated. Unlike the unilateral case, this correlation should be accounted for in bilateral data to avoid biased results. For such correlated paired data, researchers have proposed many probabilistic models, including Rosner’s model [5], Dallal’s model [6], and Donner’s model [7]. Under these models, various statistical methods have been proposed for testing the equality of proportions in bilateral data [8,9,10,11]. Among them, Rosner’s model is fundamental and has been widely studied [12,13,14].
In practice, we often obtain both data types. Due to various factors, some patients have only one eye studied, while others provide information on both eyes. For example, Mandel et al. [15] conducted a double-blind randomized clinical trial for treating acute otitis media (OME). In this trial, each child underwent unilateral or bilateral tympanocentesis and was randomly assigned to receive Amoxicillin or Cefaclor. In the Amoxicillin arm, 97 children were classified by age into three groups: less than 2 years old, 2 to 5 years old, and 6 years old or older. Table 1 shows, for each age group (<2, 2–5, and ≥6 years), whether each child had no, unilateral, or bilateral OME after 14 days of treatment. Pei et al. [16] observed that previous methods for unilateral or bilateral data are unsuitable for their combination and proposed new asymptotic methods to test the equality of two proportions for such data. This is an emerging problem, and related research is scarce [17,18,19,20]. Therefore, studying combined unilateral and bilateral data under Rosner’s model is imperative.
To compare the Amoxicillin treatment groups in Table 1, the common measures are the risk difference (RD), relative risk ratio (RR), and odds ratio (OR). The RD is a simple and essential measure that reflects the underlying risk without treatment and the risk reduction associated with treatment. The relative risk of a treatment is the ratio of risks between the treatment and control groups; relative effect measures are generally more meaningful than absolute measures for summarizing evidence. The odds ratio targets associations rather than differences. Note that RR and OR are related; this paper compares relative risk ratios under Rosner’s model.
Multiple groups often arise from different treatments or from measurements over time in medical trials [21]. It is natural to compare several treatment groups with a control group. Mou and Li [22] considered a homogeneity test of risk differences for many-to-one comparisons of bilateral data. Schaarschmidt et al. [23] studied asymptotic simultaneous confidence intervals (SCIs) for many-to-one comparisons of binomial proportions. Yang et al. [24] proposed asymptotic SCI constructions for many-to-one comparisons of proportion differences, adjusting for multiplicity and correlation. To date, relatively few papers have studied combined unilateral and bilateral data. Most research on unilateral or bilateral data focuses on homogeneity tests of risk differences in stratified data and on the equality of proportions across multiple groups. When the cure rates are low and the differences between groups are slight, the relative risk ratio is a more appropriate measure. Statistical inference for many-to-one procedures on combined unilateral and bilateral data remains insufficiently studied. Under Rosner’s model, this paper studies homogeneity tests of many-to-one relative risk ratios in combined unilateral and bilateral data. The hypothesis considered by Ma et al. [17] is a special case of this type. Moreover, the proposed method also applies to purely unilateral or purely bilateral data, and the corresponding test statistics are given. The rest of the paper is organized as follows. Section 2 introduces the data structure and Rosner’s model and estimates the unknown parameters algorithmically. Section 3 constructs the likelihood ratio, score, and Wald-type statistics. Section 4 compares the performance of these statistics through Monte Carlo simulations in terms of empirical type-I error rates and power. Section 5 provides a real example to illustrate the proposed methods, and Section 6 concludes.

## 2. Data Structure and Probability Distribution

In an ophthalmologic study, let $m l i ( l = 0 , 1 , 2 )$ be the number of patients with l response(s) in the ith group for bilateral data, and let $n l i ( l = 0 , 1 )$ be the number of patients with l response in the ith group for unilateral data, where $i = 1 , ⋯ , g$. Table 2 shows the combined unilateral and bilateral data.
Let $m + i = ∑ l = 0 2 m l i ( i = 1 , 2 , … , g )$ be the total number of patients with bilateral data in the ith group, and $m l + = ∑ i = 1 g m l i$ be the total number of patients with exactly $l ( l = 0 , 1 , 2 )$ response(s). For the unilateral case, $n + i = ∑ l = 0 1 n l i$ is the total number of patients with binary data in the ith group, and $n l + = ∑ i = 1 g n l i$ is the total number of patients with $l ( l = 0 , 1 )$ response. We have
$M 1 = ∑ i = 1 g m + i = ∑ l = 0 2 m l + , M 2 = ∑ i = 1 g n + i = ∑ l = 0 1 n l + .$
For the bilateral data, let $Y l i$ be a random variable representing the number of patients with l response(s) in the ith group, which has a trinomial distribution, and let $p Y l i$ be the corresponding probability. Define $m i = ( m 0 i , m 1 i , m 2 i ) T$, $Y i = ( Y 0 i , Y 1 i , Y 2 i ) T$, $p Y i = ( p Y 0 i , p Y 1 i , p Y 2 i ) T$ and $p Y 0 i + p Y 1 i + p Y 2 i = 1$. Then, the probability density of $Y i$ is
$f Y i ( m i ) = m + i ! m 0 i ! m 1 i ! m 2 i ! p Y 0 i m 0 i p Y 1 i m 1 i p Y 2 i m 2 i$
for $i = 1 , 2 , ⋯ , g$. For the unilateral data, let $X l i$ be a random variable that represents the number of patients with l response in the ith group and follows a binomial distribution, and $p X l i$ be the corresponding probability. We denote $n i = ( n 0 i , n 1 i ) T$, $X i = ( X 0 i , X 1 i ) T$, $p X i = ( p X 0 i , p X 1 i ) T$ and $p X 0 i + p X 1 i = 1$. Then, the probability density of $X i$ is
$f X i ( n i ) = n + i ! n 0 i ! n 1 i ! p X 0 i n 0 i p X 1 i n 1 i$
for $i = 1 , 2 , ⋯ , g$.
Suppose $Z i j k ( 1 ) = 1$ in bilateral data if the kth $( k = 1 , 2 )$ eye of the jth individual $( j = 1 , ⋯ , m + i )$ in the ith group is cured, and 0 otherwise. For unilateral patients, define $Z i j ( 2 ) = 1$ if the eye of the jth patient $( j = 1 , ⋯ , n + i )$ in the ith group has a response, and 0 otherwise. Under Rosner’s model, we propose a probability model for combined unilateral and bilateral data as
$P r ( Z i j k ( 1 ) = 1 ) = P r ( Z i j ( 2 ) = 1 ) = π i , P r ( Z i j k ( 1 ) = 1 | Z i j ( 3 − k ) ( 1 ) = 1 ) = R π i ,$
where R is a constant. If $R = 1$, the two organs of a patient are completely independent; they are completely dependent when $R π i = 1$.
From the probabilities of Equation (1), it is easy to derive $p X 0 i = 1 − π i$, $p X 1 i = π i$, $p Y 0 i = R π i 2 − 2 π i + 1$, $p Y 1 i = 2 π i ( 1 − R π i )$ and $p Y 2 i = R π i 2$ for $i = 1 , 2 , ⋯ , g$. Since $X i$ and $Y i$ are independent random variables, the likelihood function is expressed by
$L ( π , R ) = ∏ i = 1 g f Y i ( m i ) f X i ( n i ) = ∏ i = 1 g n + i ! n 0 i ! n 1 i ! m + i ! m 0 i ! m 1 i ! m 2 i ! p X 0 i n 0 i p X 1 i n 1 i p Y 0 i m 0 i p Y 1 i m 1 i p Y 2 i m 2 i ,$
where $π = ( π 1 , π 2 , ⋯ , π g )$. Thus, we have the log-likelihood as
$l ( π , R ) = ∑ i = 1 g l i ( π i , R ) = ∑ i = 1 g [ m 0 i ln ( R π i 2 − 2 π i + 1 ) + m 1 i ln ( 2 π i ( 1 − R π i ) ) + m 2 i ln ( R π i 2 ) + n 0 i ln ( 1 − π i ) + n 1 i ln ( π i ) ] + ln C ,$
where $C = ∏ i = 1 g n + i ! n 0 i ! n 1 i ! m + i ! m 0 i ! m 1 i ! m 2 i !$ is a constant.
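To make the model concrete, the log-likelihood above can be evaluated numerically. The following sketch (the function name `log_likelihood` is ours, not from the paper) computes $l ( π , R )$ up to the additive constant $ln C$, which cancels in likelihood ratios:

```python
import math

def log_likelihood(pi, R, m, n):
    """Log-likelihood of combined unilateral/bilateral data under Rosner's model.

    pi : per-group cure probabilities pi_i
    m  : per-group bilateral counts (m0i, m1i, m2i)
    n  : per-group unilateral counts (n0i, n1i)
    The multinomial constant ln C is omitted (it cancels in likelihood ratios).
    """
    ll = 0.0
    for pi_i, (m0, m1, m2), (n0, n1) in zip(pi, m, n):
        # Cell probabilities implied by Rosner's model (requires R * pi_i <= 1).
        pY0 = R * pi_i**2 - 2 * pi_i + 1
        pY1 = 2 * pi_i * (1 - R * pi_i)
        pY2 = R * pi_i**2
        ll += (m0 * math.log(pY0) + m1 * math.log(pY1) + m2 * math.log(pY2)
               + n0 * math.log(1 - pi_i) + n1 * math.log(pi_i))
    return ll
```

With $R = 1$ the bilateral cells reduce to independent binomial products, which gives a quick sanity check on the implementation.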
Denote $δ = ( δ 2 , ⋯ , δ g )$ and $δ i = π i / π 1 , ( i = 2 , 3 , ⋯ , g )$. We aim to test whether the many-to-one relative risk ratios are identical and give the hypotheses as
$H 0 : δ 2 = δ 3 = ⋯ = δ g ≜ δ$ vs. $H 1 :$ the $δ i$ are not all equal.
Let $π ^ i$ and $R ^$ be the global maximum likelihood estimators (MLEs) of unknown parameters $π i ( i = 1 , 2 , ⋯ , g )$ and R, respectively. Global MLEs are the solutions to the equations
$∂ l ∂ π i = 0 , ∂ l ∂ R = 0 , i = 1 , 2 , ⋯ , g ,$
where
$∂ l ∂ π i = n 0 i π i − 1 + n 1 i π i + 2 m 0 i ( R π i − 1 ) R π i 2 − 2 π i + 1 + m 1 i ( 2 R π i − 1 ) π i ( R π i − 1 ) + 2 m 2 i π i , ∂ l ∂ R = m 2 + R + ∑ i = 1 g m 0 i π i 2 R π i 2 − 2 π i + 1 + m 1 i π i R π i − 1 .$
Denote $β = ( π 1 , ⋯ , π g , R )$ and $β ( t ) = ( π 1 ( t ) , ⋯ , π g ( t ) , R ( t ) )$. We simplify the first equation of (2) as the following 4th-order polynomial equations
$a π i 4 + b π i 3 + c π i 2 + d π i + e = 0 ,$
where
$a = ( 2 m + i + n + i ) R 2 , b = − R ( 4 m 0 i + 5 m 1 i + 6 m 2 i + 3 n + i ) − R 2 ( 2 m + i + n 1 i ) , c = ( 4 R + 2 ) ( m 0 i + 2 m 2 i + n 1 i ) + ( 7 R + 2 ) m 1 i + ( R + 2 ) n 0 i , d = − 2 m 0 i − ( 3 + 2 R ) m 1 i − ( 6 + 2 R ) m 2 i − n 0 i − ( R + 3 ) n 1 i , e = m 1 i + 2 m 2 i + n 1 i .$
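The quartic above can be solved numerically for each group at a fixed R. A minimal sketch (function names are ours): among the real roots in the admissible interval $( 0 , min ( 1 , 1 / R ) )$, where all cell probabilities are positive, we keep the one with the largest group log-likelihood.

```python
import numpy as np

def group_loglik(pi, m0, m1, m2, n0, n1, R):
    """Log-likelihood contribution of one group (constant omitted)."""
    return (m0 * np.log(R * pi**2 - 2 * pi + 1)
            + m1 * np.log(2 * pi * (1 - R * pi))
            + m2 * np.log(R * pi**2)
            + n0 * np.log(1 - pi) + n1 * np.log(pi))

def update_pi(m0, m1, m2, n0, n1, R):
    """Solve the 4th-order score equation for pi_i at a fixed R.

    Coefficients follow the display above; spurious roots are screened
    out by restricting to (0, min(1, 1/R)) and maximizing the likelihood.
    """
    m_tot, n_tot = m0 + m1 + m2, n0 + n1          # m_{+i}, n_{+i}
    coeffs = [
        (2 * m_tot + n_tot) * R**2,
        -R * (4 * m0 + 5 * m1 + 6 * m2 + 3 * n_tot) - R**2 * (2 * m_tot + n1),
        (4 * R + 2) * (m0 + 2 * m2 + n1) + (7 * R + 2) * m1 + (R + 2) * n0,
        -2 * m0 - (3 + 2 * R) * m1 - (6 + 2 * R) * m2 - n0 - (R + 3) * n1,
        m1 + 2 * m2 + n1,
    ]
    upper = min(1.0, 1.0 / R)
    cand = [r.real for r in np.roots(coeffs)
            if abs(r.imag) < 1e-8 and 1e-10 < r.real < upper - 1e-10]
    return max(cand, key=lambda p: group_loglik(p, m0, m1, m2, n0, n1, R))
```

Any returned root should zero the score $∂ l / ∂ π i$ displayed earlier, since the quartic is the score equation cleared of denominators.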
The $( t + 1 )$th update of R is obtained by the Newton–Raphson algorithm
$R ( t + 1 ) = R ( t ) − ∂ 2 l ( β ( t ) ) ∂ R 2 − 1 ∂ l ( β ( t ) ) ∂ R ,$
where
$∂ 2 l ( β ) ∂ R 2 = − m 2 + R 2 − ∑ i = 1 g m 0 i π i 4 ( R π i 2 − 2 π i + 1 ) 2 + m 1 i π i 2 ( R π i − 1 ) 2 .$
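One Newton–Raphson step for R at fixed $π$ can be sketched as follows, using the first and second derivatives displayed above (the function name `update_R` is ours). Alternating this step with the quartic update of each $π i$ yields the global MLEs.

```python
def update_R(pi, R, m):
    """One Newton-Raphson step for R at fixed pi (derivatives as displayed).

    pi : per-group pi_i; m : per-group bilateral counts (m0i, m1i, m2i).
    """
    m2_plus = sum(mi[2] for mi in m)
    d1 = m2_plus / R            # dl/dR
    d2 = -m2_plus / R**2        # d2l/dR2
    for pi_i, (m0, m1, _) in zip(pi, m):
        P = R * pi_i**2 - 2 * pi_i + 1   # p_{Y0i}
        Q = R * pi_i - 1                 # note Q < 0 when R*pi_i < 1
        d1 += m0 * pi_i**2 / P + m1 * pi_i / Q
        d2 -= m0 * pi_i**4 / P**2 + m1 * pi_i**2 / Q**2
    return R - d1 / d2
```

If the counts are exactly proportional to the model cell probabilities at $( π , R )$, the score in R vanishes and the step leaves R unchanged, which gives a simple check.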
Remark 1.
In the above algorithm, if $n l i = 0 ( l = 0 , 1 , i = 1 , ⋯ , g )$, the global MLEs of $π i$ and R correspond to the results of bilateral data in Ma et al. [14]. If $m l i = 0 ( l = 0 , 1 , 2 , i = 1 , ⋯ , g )$, it is easy to obtain the global MLEs $π ^ i = n 1 i / n + i ( i = 1 , … , g )$ in unilateral data.
Suppose that $π 2 = ⋯ = π g = π 1 δ$ under $H 0$; then the log-likelihood can be rewritten as
$l 0 ( δ , π 1 , R ) = l 01 ( δ , π 1 , R ) + ∑ i = 2 g l 0 i ( δ , π 1 , R ) ,$
where
$l 01 = m 01 ln ( R π 1 2 − 2 π 1 + 1 ) + m 11 ln ( 2 π 1 ( 1 − R π 1 ) ) + m 21 ln ( R π 1 2 ) + n 01 ln ( 1 − π 1 ) + n 11 ln ( π 1 ) , l 0 i = m 0 i ln ( R π 1 2 δ 2 − 2 π 1 δ + 1 ) + m 2 i ln ( R π 1 2 δ 2 ) + m 1 i ln ( 2 π 1 δ ( 1 − R π 1 δ ) ) + n 0 i ln ( 1 − π 1 δ ) + n 1 i ln ( π 1 δ ) .$
Let $π ˜$, $δ ˜$ and $R ˜$ be the constrained MLEs of $π$, $δ$ and R under the null hypothesis $H 0 : δ 2 = ⋯ = δ g$. We calculate the constrained MLEs of $π$ and R from
$∂ l 0 ∂ π 1 = 0 , ∂ l 0 ∂ R = 0 , ∂ l 0 ∂ δ = 0 .$
However, Equation (3) has no closed-form solution when $n l i , m l i ≠ 0$ (see Appendix A.1). Given initial values $π 1 ( 0 )$, $R ( 0 )$ and $δ ( 0 ) = 1$, the Fisher scoring algorithm is used to obtain the $( t + 1 )$th updates of $δ$, $π 1$ and R as follows
$δ ( t + 1 ) π 1 ( t + 1 ) R ( t + 1 ) = δ ( t ) π 1 ( t ) R ( t ) + I 1 − 1 ∂ l 0 ∂ δ ( δ ( t ) , π 1 ( t ) , R ( t ) ) ∂ l 0 ∂ π 1 ( δ ( t ) , π 1 ( t ) , R ( t ) ) ∂ l 0 ∂ R ( δ ( t ) , π 1 ( t ) , R ( t ) ) ,$
where $I 1$ is a $3 × 3$ Fisher information matrix (see Appendix A.1), and
$I 1 ( δ ( t ) , π 1 ( t ) , R ( t ) ) = − E ( ∂ 2 l 0 ∂ δ 2 ) E ( ∂ 2 l 0 ∂ δ ∂ π 1 ) E ( ∂ 2 l 0 ∂ δ ∂ R ) E ( ∂ 2 l 0 ∂ π 1 ∂ δ ) E ( ∂ 2 l 0 ∂ π 1 2 ) E ( ∂ 2 l 0 ∂ π 1 ∂ R ) E ( ∂ 2 l 0 ∂ R ∂ δ ) E ( ∂ 2 l 0 ∂ R ∂ π 1 ) E ( ∂ 2 l 0 ∂ R 2 ) .$
Remark 2.
Under $H 0 : δ i = δ , ( i = 2 , ⋯ , g )$, if $n l i = 0$, we can obtain the constrained MLEs of Rosner’s model through Equation (4) in bilateral data. If $m l i = 0$, the constrained MLEs of $π i$ and δ are directly given by
$π ˜ 1 = n 11 / n + 1 , δ ˜ = ( n + 1 ∑ i = 2 g n 1 i ) / ( n 11 ∑ i = 2 g n + i ) .$
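For purely unilateral data, the closed form above follows because δ is a free parameter under $H 0$, so $π 1$ and $μ = π 1 δ$ can be maximized separately: $π 1$ from group 1 alone and μ from the pooled groups $2 , ⋯ , g$. A minimal sketch (function name ours):

```python
def constrained_mle_unilateral(n):
    """Constrained MLEs under H0 for purely unilateral data (m_li = 0).

    n : list of (n0i, n1i) per group; group 1 is the control.
    Since delta is free, pi_1 and mu = pi_1 * delta separate: pi_1 is
    estimated from group 1 and mu from the pooled treatment groups.
    """
    n01, n11 = n[0]
    pi1 = n11 / (n01 + n11)
    cured = sum(n1 for _, n1 in n[1:])
    total = sum(n0 + n1 for n0, n1 in n[1:])
    delta = (cured / total) / pi1
    return pi1, delta
```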

## 3. Asymptotic Methods

This section derives the likelihood ratio, score, and Wald-type tests for combined unilateral and bilateral data. For purely bilateral or purely unilateral data, the three tests are obtained by setting $n l i = 0$ or $m l i = 0$, respectively.

#### 3.1. Likelihood Ratio Test

The likelihood ratio (LR) test can be given by
$T L = 2 ( l ( π ^ 1 , ⋯ , π ^ g , R ^ ) − l 0 ( π ˜ 1 , δ ˜ , R ˜ ) ) = 2 ( T L 1 + T L 2 ) ,$
where
$T L 1 = m 01 ln R ^ π ^ 1 2 − 2 π ^ 1 + 1 R ˜ π ˜ 1 2 − 2 π ˜ 1 + 1 + m 11 ln 2 π ^ 1 ( 1 − R ^ π ^ 1 ) 2 π ˜ 1 ( 1 − R ˜ π ˜ 1 ) + m 21 ln R ^ π ^ 1 2 R ˜ π ˜ 1 2 + n 01 ln 1 − π ^ 1 1 − π ˜ 1 + n 11 ln π ^ 1 π ˜ 1 , T L 2 = ∑ i = 2 g [ m 0 i ln R ^ π ^ i 2 − 2 π ^ i + 1 R ˜ π ˜ 1 2 δ ˜ 2 − 2 π ˜ 1 δ ˜ + 1 + m 2 i ln R ^ π ^ i 2 R ˜ π ˜ 1 2 δ ˜ 2 + m 1 i ln 2 π ^ i ( 1 − R ^ π ^ i ) 2 π ˜ 1 δ ˜ ( 1 − R ˜ π ˜ 1 δ ˜ ) + n 0 i ln 1 − π ^ i 1 − π ˜ 1 δ ˜ + n 1 i ln π ^ i π ˜ 1 δ ˜ ] .$
Under $H 0 : δ 2 = ⋯ = δ g = δ$, the likelihood ratio statistic is asymptotically chi-square distributed with $g − 2$ degrees of freedom.
Remark 3.
If $n l i = 0$, the likelihood ratio test in the bilateral data can be calculated by
$T L = ∑ i = 2 g m 0 i ln R ^ π ^ i 2 − 2 π ^ i + 1 R ˜ π ˜ 1 2 δ ˜ 2 − 2 π ˜ 1 δ ˜ + 1 + m 2 i ln R ^ π ^ i 2 R ˜ π ˜ 1 2 δ ˜ 2 + m 1 i ln 2 π ^ i ( 1 − R ^ π ^ i ) 2 π ˜ 1 δ ˜ ( 1 − R ˜ π ˜ 1 δ ˜ ) + m 01 ln R ^ π ^ 1 2 − 2 π ^ 1 + 1 R ˜ π ˜ 1 2 − 2 π ˜ 1 + 1 + m 11 ln 2 π ^ 1 ( 1 − R ^ π ^ 1 ) 2 π ˜ 1 ( 1 − R ˜ π ˜ 1 ) + m 21 ln R ^ π ^ 1 2 R ˜ π ˜ 1 2 .$
If $m l i = 0$, we simplify the likelihood ratio test of the unilateral data as
$T L = ∑ i = 2 g n 0 i ln 1 − π ^ i 1 − π ˜ 1 δ ˜ + n 1 i ln π ^ i π ˜ 1 δ ˜ .$
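Once the global and constrained log-likelihoods have been maximized, the decision rule is the same in all three data settings: compare $T L$ with the chi-square distribution with $g − 2$ degrees of freedom. A small sketch (the function name and inputs are ours):

```python
from scipy.stats import chi2

def lr_test(loglik_full, loglik_null, g):
    """Likelihood ratio statistic and asymptotic p-value.

    loglik_full : maximized log-likelihood under the full model
    loglik_null : maximized log-likelihood under H0 (delta_2 = ... = delta_g)
    The reference distribution is chi-square with g - 2 degrees of freedom.
    """
    TL = 2.0 * (loglik_full - loglik_null)
    return TL, chi2.sf(TL, df=g - 2)
```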

#### 3.2. Score Test

Note that $H 0 : δ 2 = ⋯ = δ g = δ$ is equivalent to $π 2 = ⋯ = π g$. Denote $π = ( π 1 , π 2 , ⋯ , π g ) T$, $π ˜ = ( π ˜ 1 , π ˜ 2 , ⋯ , π ˜ g ) T$, and $U = ( ∂ l 2 ∂ π 2 , ⋯ , ∂ l g ∂ π g , 0 , 0 )$. The score statistic is expressed by
$T S C = U I 2 − 1 ( π 2 , ⋯ , π g , π 1 , R ) U T | π = π ˜ , R = R ˜ ,$
where $I 2$ is a $( g + 1 ) × ( g + 1 )$ Fisher information matrix (see Appendix A.2 for more information). Let
$I 2 = A B B T D ,$
where
$A = diag E ( − ∂ 2 l 2 ∂ π 2 2 ) , ⋯ , E ( − ∂ 2 l g ∂ π g 2 ) , E ( − ∂ 2 l 1 ∂ π 1 2 ) ≜ diag ( a 2 , ⋯ , a g , a 1 ) , B = E − ∂ 2 l 2 ∂ π 2 ∂ R , ⋯ , E − ∂ 2 l g ∂ π g ∂ R , E − ∂ 2 l 1 ∂ π 1 ∂ R T ≜ ( b 2 , ⋯ , b g , b 1 ) T , D = E ( − ∂ 2 l ∂ R 2 ) .$
After calculation, the inverse matrix of $I 2$ can be obtained by
$I 2 − 1 = I 2 − 1 ( 1 , 1 ) I 2 − 1 ( 1 , 2 ) I 2 − 1 ( 2 , 1 ) I 2 − 1 ( 2 , 2 ) ,$
where
$I 2 − 1 ( 1 , 1 ) = A − 1 + A − 1 B ( D − B T A − 1 B ) − 1 B T A − 1 , I 2 − 1 ( 1 , 2 ) = − A − 1 B ( D − B T A − 1 B ) − 1 , I 2 − 1 ( 2 , 1 ) = − ( D − B T A − 1 B ) − 1 B T A − 1 , I 2 − 1 ( 2 , 2 ) = ( D − B T A − 1 B ) − 1 ,$
and the inverse matrix of $A$ is given by
$A − 1 = diag ( a 2 − 1 , ⋯ , a g − 1 , a 1 − 1 ) .$
Then
$U I 2 − 1 U T = ( u 2 , ⋯ , u g , 0 , 0 ) I 2 − 1 ( u 2 , ⋯ , u g , 0 , 0 ) T = ( u 2 , ⋯ , u g , 0 ) I 2 − 1 ( 1 , 1 ) ( u 2 , ⋯ , u g , 0 ) T = ( u 2 , ⋯ , u g , 0 ) A − 1 ( u 2 , ⋯ , u g , 0 ) T + ( u 2 , ⋯ , u g , 0 ) A − 1 B B T A − 1 ( u 2 , ⋯ , u g , 0 ) T D − B T A − 1 B ,$
where
$( u 2 , ⋯ , u g , 0 ) A − 1 ( u 2 , ⋯ , u g , 0 ) T = ∑ i = 2 g u i 2 a i , ( u 2 , ⋯ , u g , 0 ) A − 1 B = ∑ i = 2 g u i b i a i , B T A − 1 B = ∑ i = 1 g b i 2 a i .$
The score statistic can be simplified as
$T S C = ∑ i = 2 g u i 2 a i + ∑ i = 2 g u i b i a i 2 D − ∑ i = 1 g b i 2 a i − 1 .$
Under $H 0$, the score statistic is asymptotically chi-square distributed with $g − 2$ degrees of freedom.
Remark 4.
For bilateral data, the score test is obtained from Equation (5) with $n l i = 0$. Suppose that $m l i = 0$, $U = ( ∂ l 2 ∂ π 2 , ⋯ , ∂ l g ∂ π g , 0 )$, and $I 2 = A$ is a $g × g$ Fisher information matrix, where $E ( − ∂ 2 l i ∂ π i 2 ) = − n + i π i ( π i − 1 ) = a i$. The score test for unilateral data then simplifies to
$T S C = ∑ i = 2 g u i 2 a i .$
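The simplification above can be checked numerically: the closed form should agree with the full quadratic form $U I 2 − 1 U T$. A sketch with hypothetical inputs $u i$, $a i$, $b i$, D (ordering follows the text: entries $2 , ⋯ , g$ first, then entry 1, then R):

```python
import numpy as np

def score_stat_closed(u, a, b, D):
    """Closed-form T_SC from the simplification above.

    u : (u_2, ..., u_g); a, b : ordered (._2, ..., ._g, ._1); D : scalar.
    """
    u = np.asarray(u, float)
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    s1 = np.sum(u**2 / a[:-1])
    s2 = np.sum(u * b[:-1] / a[:-1])
    return s1 + s2**2 / (D - np.sum(b**2 / a))

def score_stat_direct(u, a, b, D):
    """Same statistic via the full quadratic form U I2^{-1} U^T."""
    g1 = len(a)                                 # g slots for pi, one for R
    I2 = np.zeros((g1 + 1, g1 + 1))
    I2[:g1, :g1] = np.diag(a)
    I2[:g1, g1] = b
    I2[g1, :g1] = b
    I2[g1, g1] = D
    U = np.concatenate([u, [0.0, 0.0]])
    return U @ np.linalg.inv(I2) @ U
```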

#### 3.3. Wald-Type Test

Let $β ^ = ( π ^ 1 , ⋯ , π ^ g , R ^ )$ be the global MLEs of $β = ( π 1 , ⋯ , π g , R )$. The null hypothesis $H 0 : δ 2 = ⋯ = δ g ≜ δ$ is equivalent to $C β T = 0$, where
$C = 0 1 − 1 0 ⋯ 0 0 0 0 1 − 1 0 0 ⋮ ⋱ ⋱ ⋮ ⋮ ⋮ ⋱ ⋱ ⋮ ⋮ 0 ⋯ ⋯ 0 1 − 1 0 ( g − 2 ) × ( g + 1 ) .$
Then, the Wald-type test statistic can be expressed as
$T W = ( β C T ) ( C I 3 − 1 C T ) − 1 ( C β T ) | π = π ^ , R = R ^ ,$
where $I 3$ is the information matrix (see Appendix A.2 for more information), and $I 3 − 1$ can be derived as
$I 3 − 1 = c 11 c 12 ⋯ c 1 g c 1 ( g + 1 ) c 21 c 22 ⋯ c 2 g c 2 ( g + 1 ) ⋮ ⋮ ⋱ ⋮ ⋮ c g 1 c g 2 ⋯ c g g c g ( g + 1 ) c ( g + 1 ) 1 c ( g + 1 ) 2 ⋯ c ( g + 1 ) g c ( g + 1 ) ( g + 1 ) .$
Denote a $( g − 2 ) × ( g − 2 )$ matrix $F = C I 3 − 1 C T$, and its element $F i , j = ( c ( i + 1 ) ( j + 1 ) − c ( i + 1 ) ( j + 2 ) ) − ( c ( i + 2 ) ( j + 1 ) − c ( i + 2 ) ( j + 2 ) )$. It is convenient to express elements of the inverse matrix $F − 1$ by a lower triangular matrix $H$, i.e., $F = H H T$, where $H = ( h i j )$ and
$h i j = F i , j − ∑ k = 1 j − 1 h i k h j k h j j , i > j , F i , j − ∑ k = 1 j − 1 h j k 2 1 / 2 , i = j , 0 , i < j .$
Obviously, $F − 1 = ( H H T ) − 1 = H T − 1 H − 1$, where $H − 1 = ( l i j )$ and its elements can be derived as follows
$l i j = − 1 h i i ∑ k = j i − 1 h i k l k j , i > j , 1 h i i , i = j , 0 , i < j .$
Thus, $( F − 1 ) i , j = ∑ k = 1 g − 2 l k i l k j$. The Wald-type statistic can be written as:
$T W = ∑ i = 1 g − 2 ∑ j = 1 g − 2 ( π i + 1 − π i + 2 ) ( F − 1 ) i , j ( π j + 1 − π j + 2 ) .$
The Wald-type statistic is asymptotically chi-square distributed with $g − 2$ degrees of freedom under $H 0$.
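The triangular recursions above avoid a general matrix inversion of F. A sketch (function name ours) that can be checked against a direct inverse:

```python
import numpy as np

def chol_inverse(F):
    """Invert a symmetric positive-definite F via its lower-triangular
    factor H (F = H H^T), following the recursions displayed above."""
    F = np.asarray(F, float)
    p = F.shape[0]
    H = np.zeros((p, p))
    for j in range(p):
        H[j, j] = np.sqrt(F[j, j] - np.sum(H[j, :j]**2))
        for i in range(j + 1, p):
            H[i, j] = (F[i, j] - np.sum(H[i, :j] * H[j, :j])) / H[j, j]
    L = np.zeros((p, p))                  # L = H^{-1}
    for i in range(p):
        L[i, i] = 1.0 / H[i, i]
        for j in range(i):
            L[i, j] = -np.sum(H[i, j:i] * L[j:i, j]) / H[i, i]
    return L.T @ L                        # (F^{-1})_{ij} = sum_k L_{ki} L_{kj}
```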
Remark 5.
If $n l i = 0$, the Wald-type statistic can be obtained through Equation (6) in bilateral data. Suppose that $m l i = 0$, $β ^ = ( π ^ 1 , ⋯ , π ^ g )$ and $C$ is a $( g − 2 ) × g$ matrix, where
$C = 0 1 − 1 0 ⋯ 0 0 0 1 − 1 0 ⋮ ⋱ ⋱ ⋮ ⋮ ⋱ ⋱ ⋮ 0 ⋯ ⋯ 0 1 − 1 ( g − 2 ) × g .$
Let $F − 1 = ( C I 3 − 1 C T ) − 1$. Obviously, $F − 1$ is a symmetric matrix, and its elements can be derived as
$( F − 1 ) i , j = ∑ q = 2 j + 1 a q ∑ k = i + 2 g a k ∑ k = 2 g a k , i ≥ j , i , j = 1 , ⋯ , g − 2 , ( F − 1 ) i , j = ∑ q = 2 i + 1 a q ∑ k = j + 2 g a k ∑ k = 2 g a k , i < j , i , j = 1 , ⋯ , g − 2 .$
Thus, the Wald-type statistic (6) can be obtained.

## 4. Monte Carlo Simulation Studies

In this section, we compare the performance of the statistics for the homogeneity test of relative risk ratios using two evaluation criteria: the type-I error rate (TIE) and power. All results are computed at the significance level $α = 0.05$. The TIE is the probability of rejecting the null hypothesis when it is true. For each parameter setting, 10,000 replicates are generated under the null hypothesis, and the empirical TIE is calculated as the number of rejections divided by 10,000. Following Tang et al. [12], a test is liberal if its empirical TIE is larger than 0.06, conservative if its empirical TIE is less than 0.04, and robust otherwise.
First, we investigate the performance of TIEs under different parameter settings. Take the sample sizes $m ≜ m + 1 = m + 2 = ⋯ = m + g$, $n ≜ n + 1 = n + 2 = ⋯ = n + g$. Let $m = n = 30 , 60 , 90$ for balanced designs and $( m , n ) = ( 30 , 60 ) , ( 60 , 30 ) , ( 90 , 30 )$ for unbalanced designs. Take $g = 3 , 4 , 5$, $R = 1.1 , 1.2 , 1.3$, and $π 1 = 0.3 , 0.4 , 0.5$ under the hypothesis $H 0 : δ = 1 , 1.2$. Then, we calculate the empirical TIEs of all proposed test statistics. For each scenario, 10,000 replicates are randomly generated under the null hypothesis $H 0$.
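Generating one group's data in such a replicate amounts to drawing trinomial bilateral counts and binomial unilateral counts with the cell probabilities of Section 2. A minimal sketch of this data-generation step (names ours):

```python
import numpy as np

rng = np.random.default_rng(2023)

def simulate_group(m_plus, n_plus, pi_i, R, rng):
    """Draw one group's counts under Rosner's model.

    Bilateral counts (m0i, m1i, m2i) are trinomial with the Section 2
    cell probabilities; unilateral counts are binomial with success
    probability pi_i. Requires R * pi_i <= 1.
    """
    pY = [R * pi_i**2 - 2 * pi_i + 1,
          2 * pi_i * (1 - R * pi_i),
          R * pi_i**2]
    m = rng.multinomial(m_plus, pY)       # (m0i, m1i, m2i)
    n1 = rng.binomial(n_plus, pi_i)
    return m, (n_plus - n1, n1)
```

Repeating this draw for every group and applying a test to each replicate gives the empirical rejection rate.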
Table 3, Table 4 and Table 5 show the empirical TIEs of the three statistics under all configurations for $g = 3 , 4$ and 5, respectively. The left side of each table shows the balanced design results, and the right side shows the unbalanced design results. When $δ = 1$, $H 0 : δ 2 = ⋯ = δ g$ is equivalent to $π 1 = π 2 = ⋯ = π g$, so this case reduces to the test of homogeneity of proportions considered by Ma and Wang [17]. In Table 3, the likelihood ratio and score tests are robust for the balanced design with small sample sizes, while the Wald-type statistic is liberal; the Wald-type test tends to become robust as the sample size grows. In the unbalanced design with $δ = 1$, the Wald-type statistic is liberal when $m = 60 , n = 30$ and when $m = 30 , n = 60$; with $δ = 1.2$, the Wald-type test performs better, and it becomes more robust as the total sample size increases. Table 4 shows that the Wald-type test is more liberal for the unbalanced design, while the likelihood ratio and score tests are more robust than the Wald-type test for the balanced design, especially with small sample sizes. In Table 5, the Wald-type test performs worse for small sample sizes in unbalanced scenarios, similar to the balanced ones.
In Table 3, Table 4 and Table 5, the score test $T S C$ and likelihood ratio test $T L$ are more robust than Wald-type test $T W$ in terms of the TIEs. The Wald-type statistic is liberal in small sample scenarios and becomes more liberal as the number of groups increases. There is no significant difference in the performance of the three statistics when the total sample sizes of the balanced and unbalanced groups are the same. As the values of m or n increase, we can observe that the Wald-type statistic tends to be robust. The TIEs of all three tests grow closer if the sample size increases.
The above results are obtained for selected parameter values; in practice, parameter values vary more widely. To further compare the test statistics, we randomly choose 1000 sets of parameters $( R , π 1 , ⋯ , π g )$ within the constrained parameter ranges for $g = 3 , 4 , 5$ and $m = n = 30 , 60 , 90$ under $H 0$. For each configuration, 10,000 replicates are generated to calculate the empirical TIEs. Figure 1 shows box plots of the empirical TIEs. For a fixed number of groups, the three statistics become more robust as the sample size increases. As the number of groups grows, the Wald-type test becomes more liberal while the likelihood ratio and score tests become more robust. Overall, the score test is the most robust in the sense that its TIE stays close to the significance level $α = 0.05$ regardless of the sample size and the number of groups.
Next, we compare the performance of the proposed test statistics in terms of power under different parameter settings, considering balanced and unbalanced designs. We take the same parameters $( R , π 1 )$ as for the empirical type-I error rates. Under $H 1$, the parameter settings are $δ = ( 1 , 1.3 ) , ( 1 , 1.3 , 1.4 ) , ( 1 , 1.3 , 1 , 1.4 )$ for $g = 3 , 4 , 5$, respectively. For each setting, 10,000 replicates are generated under the alternative hypothesis, and the empirical power is calculated as the number of rejections divided by 10,000. The simulated results are presented in Table 6 and Table 7. For a fixed number of groups and fixed R, the empirical powers become larger as $π 1$ increases; for fixed $π 1$, the empirical powers change little as R increases. In Table 7, the powers obtained with sample sizes $m = 60 , n = 30$ are greater than those obtained with $m = 30 , n = 60$. The powers of the three test statistics are very close under the same parameter settings, and the empirical power increases with the sample size and the number of groups.
We further study how the powers of the three statistics change with the parameters. Under $H 1$, let $m = n = 30 , 60 , 90$, $R = 1.1$ and $π 1 = 0.3$ for $g = 3 , 4 , 5$. Figure 2 shows the empirical powers of the three tests for the given parameters $m , n , R , π 1 , g$. The performances of the three statistics are close in terms of power. When $δ$ is close to 1, the resulting power is around 0.05 because the null and alternative hypotheses differ only slightly; increasing $δ$ leads to a significant power increase. For a fixed number of groups, the powers increase with the sample size, and the three statistics retain good power as the number of groups increases.
According to the simulation results, the score and likelihood ratio tests have satisfactory TIEs, and the score statistic is more robust than the other two in small-sample scenarios. The powers of the three tests are very close. Thus, we recommend the score test for the homogeneity test of many-to-one relative risk ratios.

## 5. A Real Example

In this section, we revisit the double-blind randomized clinical trial for treating acute otitis media (OME) from Mandel et al. [15] to illustrate the proposed methods. In Table 1, $m 01 = 2 , m 11 = 2 , m 21 = 11 , n 01 = 2$ and $n 11 = 10$ in the first group; $m 02 = 5 , m 12 = 1 , m 22 = 3 , n 02 = 14$ and $n 12 = 22$ in the second group; and $m 03 = 6 , m 13 = 0 , m 23 = 1 , n 03 = 11$ and $n 13 = 7$ in the third group. Under Rosner’s model, it is of interest to test whether the many-to-one relative risk ratios differ significantly. Thus, the hypotheses are $H 0 : δ 2 = δ 3$ vs. $H 1 : δ 2 ≠ δ 3$.
With $g = 3$, we use the formulas in this paper to obtain the global and constrained MLEs, which are given in Table 8. The results show a correlation between paired organs. The estimated relative risk ratio under $H 0 : δ 2 = δ 3$ is 0.6709, and the global relative risk ratios in Table 9 are $δ ^ 2 = 0.5926 / 0.7329 = 0.8086$ and $δ ^ 3 = 0.3073 / 0.7329 = 0.4193$. The values of $T L , T S C$ and $T W$ are 5.3330, 3.8502, and 7.4055, respectively. At significance level $α = 0.05$, all three exceed the 95th percentile of the chi-square distribution with one degree of freedom, and their p values are less than 0.05. Therefore, there is strong evidence to reject the null hypothesis $H 0 : δ 2 = δ 3 = δ$; the relative risk ratios differ significantly between groups. Since $δ ^ 2$ and $δ ^ 3$ are both less than 1, children less than two years old had the highest cure rate under Amoxicillin treatment. Mandel et al. [15] found that children less than 2 years old had more OME, which is consistent with our results.

## 6. Conclusions

This paper introduces three statistics for testing the homogeneity of many-to-one relative risk ratios for combined unilateral and bilateral data under Rosner’s model. The global MLEs are obtained by solving a fourth-order polynomial together with the Newton–Raphson algorithm, and the constrained MLEs under $H 0$ are obtained by the Fisher scoring method. The global MLEs, constrained MLEs, and three statistics are also given separately for purely unilateral and purely bilateral data. Monte Carlo simulations were carried out under various parameter settings.
Based on the simulation results, the score test is more robust than the likelihood ratio and Wald-type tests in terms of TIEs and has sufficient power. The powers of the three tests grow closer as the sample size increases; by comparison, the Wald-type test is slightly better than the score and likelihood ratio tests in terms of power. However, the Wald-type test has liberal type-I error rates for small sample sizes and behaves worse when the number of groups is large and the sample size is small, whereas the score test performs well regardless of the number of groups and sample size. For these reasons, the score test is recommended for combined unilateral and bilateral data.
In future work, we will focus on other statistical problems of the many-to-one relative risk ratios for unilateral and bilateral data, such as confidence intervals.

## Author Contributions

Methodology, Y.L.; software, Y.L.; writing—original draft preparation, Y.L.; writing—review and editing, Z.L. and K.M.; supervision, Z.L.; funding acquisition, Z.L. All authors have read and agreed to the published version of the manuscript.

## Funding

This research was supported by the National Natural Science Foundation of China (Grant No: 12061070) and the Science and Technology Department of Xinjiang Uygur Autonomous Region (Grant No: 2021D01E13).

## Data Availability Statement

Clinical data referred to are from Mandel et al. [15].

## Conflicts of Interest

The authors declare no conflict of interest.

## Abbreviations

The following abbreviations are used in this manuscript:
- RD: risk difference
- RR: relative risk ratio
- OR: odds ratio
- SCIs: simultaneous confidence intervals
- MLEs: maximum likelihood estimators
- TIEs: type-I error rates
- $T L$: likelihood ratio statistic
- $T S C$: score statistic
- $T W$: Wald-type statistic

## Appendix A

#### Appendix A.1. Information Matrix I1

The first-order differential equations of $δ , π 1 , R$ under null hypothesis are
$∂ l 0 ∂ δ = ∑ i = 2 g 2 m 2 i + n 1 i δ + n 0 i π 1 δ π 1 − 1 + m 1 i ( 2 R δ π 1 − 1 ) δ ( R δ π 1 − 1 ) + 2 m 0 i π 1 ( R δ π 1 − 1 ) R δ 2 π 1 2 − 2 δ π 1 + 1 , ∂ l 0 ∂ π 1 = ∑ i = 2 g δ n 0 i δ π 1 − 1 + m 1 i ( 2 R δ π 1 − 1 ) π 1 ( R δ π 1 − 1 ) + 2 δ m 0 i ( R δ π 1 − 1 ) R δ 2 π 1 2 − 2 δ π 1 + 1 + 2 m 2 + + n 1 + π 1 + n 01 π 1 − 1 + m 11 ( 2 R π 1 − 1 ) π 1 ( R π 1 − 1 ) + 2 m 01 ( R π 1 − 1 ) R π 1 2 − 2 π 1 + 1 , ∂ l 0 ∂ R = m 2 + R + m 11 π 1 R π 1 − 1 + m 01 π 1 2 R π 1 2 − 2 π 1 + 1 + ∑ i = 2 g m 1 i δ π 1 R δ π 1 − 1 + m 0 i δ 2 π 1 2 R δ 2 π 1 2 − 2 δ π 1 + 1 .$
The second-order differential equations of $δ , π 1 , R$ under null hypothesis are
$∂ 2 l 0 ∂ δ 2 = ∑ i = 2 g [ 2 m 0 i π 1 2 ( − R 2 δ 2 π 1 2 + 2 R δ π 1 + R − 2 ) ( R δ 2 π 1 2 − 2 δ π 1 + 1 ) 2 − m 1 i ( 2 R 2 δ 2 π 1 2 − 2 R δ π 1 + 1 ) δ 2 ( R δ π 1 − 1 ) 2 − 2 m 2 i + n 1 i δ 2 − n 0 i π 1 2 ( δ π 1 − 1 ) 2 ] , ∂ 2 l 0 ∂ π 1 2 = ∑ i = 2 g 2 m 0 i δ 2 ( − R 2 δ 2 π 1 2 + 2 R δ π 1 + R − 2 ) ( R δ 2 π 1 2 − 2 δ π 1 + 1 ) 2 − n 0 i δ 2 ( δ π 1 − 1 ) 2 − m 1 i ( 2 R 2 δ 2 π 1 2 − 2 R δ π 1 + 1 ) π 1 2 ( R δ π 1 − 1 ) 2 − 2 m 2 i + n 1 i π 1 2 + 2 m 01 ( − R 2 π 1 2 + 2 R π 1 + R − 2 ) ( R π 1 2 − 2 π 1 + 1 ) 2 − n 01 ( π 1 − 1 ) 2 − m 11 ( 2 R 2 π 1 2 − 2 R π 1 + 1 ) π 1 2 ( π 1 R − 1 ) 2 − 2 m 21 + n 11 π 1 2 , ∂ 2 l 0 ∂ R 2 = ∑ i = 2 g − m 2 i R 2 − m 0 i δ 4 π 1 4 ( R δ 2 π 1 2 − 2 δ π 1 + 1 ) 2 − m 1 i δ 2 π 1 2 ( R δ π 1 − 1 ) 2 − m 21 R 2 − m 01 π 1 4 ( R π 1 2 − 2 π 1 + 1 ) 2 − m 11 π 1 2 ( R π 1 − 1 ) 2 , ∂ 2 l 0 ∂ δ ∂ π 1 = ∑ i = 2 g − 2 m 0 i ( R δ 2 π 1 2 − 2 R δ π 1 + 1 ) ( R δ 2 π 1 2 − 2 δ π 1 + 1 ) 2 − m 1 i R ( R δ π 1 − 1 ) 2 − n 0 i ( δ π 1 − 1 ) 2 , ∂ 2 l 0 ∂ δ ∂ R = ∑ i = 2 g − m 1 i π 1 ( R δ π 1 − 1 ) 2 − 2 m 0 i δ π 1 2 ( δ π 1 − 1 ) ( R δ 2 π 1 2 − 2 δ π 1 + 1 ) 2 ,$
$$\frac{\partial^2 l_0}{\partial \pi_1\,\partial R} = \sum_{i=2}^{g}\left[-\frac{m_{1i}\delta}{(R\delta\pi_1-1)^2}-\frac{2m_{0i}\delta^2\pi_1(\delta\pi_1-1)}{(R\delta^2\pi_1^2-2\delta\pi_1+1)^2}\right]-\frac{m_{11}}{(R\pi_1-1)^2}-\frac{2m_{01}\pi_1(\pi_1-1)}{(R\pi_1^2-2\pi_1+1)^2}.$$
Thus, we have
$$I_1(\delta^{(t)},\pi_1^{(t)},R^{(t)}) = -\begin{pmatrix}
E\left(\frac{\partial^2 l_0}{\partial\delta^2}\right) & E\left(\frac{\partial^2 l_0}{\partial\delta\,\partial\pi_1}\right) & E\left(\frac{\partial^2 l_0}{\partial\delta\,\partial R}\right)\\
E\left(\frac{\partial^2 l_0}{\partial\pi_1\,\partial\delta}\right) & E\left(\frac{\partial^2 l_0}{\partial\pi_1^2}\right) & E\left(\frac{\partial^2 l_0}{\partial\pi_1\,\partial R}\right)\\
E\left(\frac{\partial^2 l_0}{\partial R\,\partial\delta}\right) & E\left(\frac{\partial^2 l_0}{\partial R\,\partial\pi_1}\right) & E\left(\frac{\partial^2 l_0}{\partial R^2}\right)
\end{pmatrix}
= -\begin{pmatrix} I_{11} & I_{12} & I_{13}\\ I_{12} & I_{22} & I_{23}\\ I_{13} & I_{23} & I_{33} \end{pmatrix},$$
where
$$\begin{aligned}
I_{11} &= \sum_{i=2}^{g}\left[\frac{2m_{+i}\pi_1(2R^2\delta^2\pi_1^2-R\delta^2\pi_1^2-2R\delta\pi_1+1)}{\delta(R\delta\pi_1-1)(R\delta^2\pi_1^2-2\delta\pi_1+1)}+\frac{n_{+i}\pi_1}{\delta(\delta\pi_1-1)}\right],\\
I_{12} &= \sum_{i=2}^{g}\left[\frac{n_{+i}}{\delta\pi_1-1}+\frac{m_{+i}\bigl(2R\delta\pi_1(2R\delta\pi_1-\delta\pi_1-2)+2\bigr)}{(R\delta\pi_1-1)(R\delta^2\pi_1^2-2\delta\pi_1+1)}\right],\\
I_{13} &= \sum_{i=2}^{g}\left[\frac{2m_{+i}\delta\pi_1^2}{R\delta\pi_1-1}-\frac{2m_{+i}\delta\pi_1^2(\delta\pi_1-1)}{R\delta^2\pi_1^2-2\delta\pi_1+1}\right],\\
I_{22} &= \sum_{i=2}^{g}\left[\frac{2m_{+i}\delta(2R^2\delta^2\pi_1^2-R\delta^2\pi_1^2-2R\delta\pi_1+1)}{\pi_1(R^2\delta^3\pi_1^3-3R\delta^2\pi_1^2+R\delta\pi_1+2\delta\pi_1-1)}+\frac{n_{+i}\delta}{\pi_1(\pi_1\delta-1)}\right]+\frac{n_{+1}}{\pi_1(\pi_1-1)}+\frac{2m_{+1}(2R^2\pi_1^2-R\pi_1^2-2R\pi_1+1)}{\pi_1(R^2\pi_1^3-3R\pi_1^2+R\pi_1+2\pi_1-1)},\\
I_{23} &= \frac{m_{+1}(2R\pi_1^2-2\pi_1^2)}{(R\pi_1^2-2\pi_1+1)(R\pi_1-1)}+\sum_{i=2}^{g}\frac{m_{+i}(2R\delta^3\pi_1^2-2\delta^3\pi_1^2)}{(R\delta^2\pi_1^2-2\delta\pi_1+1)(R\delta\pi_1-1)},\\
I_{33} &= \frac{m_{+1}\pi_1^2(R\pi_1-2\pi_1+1)}{R(R\pi_1-1)(R\pi_1^2-2\pi_1+1)}+\sum_{i=2}^{g}\frac{m_{+i}\delta^2\pi_1^2(R\delta\pi_1-2\delta\pi_1+1)}{R(R\delta\pi_1-1)(R\delta^2\pi_1^2-2\delta\pi_1+1)}.
\end{aligned}$$
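Each $I_{jk}$ follows by substituting the expected counts $E(m_{0i}) = m_{+i}(R\delta^2\pi_1^2-2\delta\pi_1+1)$, $E(m_{1i}) = 2m_{+i}\,\delta\pi_1(1-R\delta\pi_1)$, $E(n_{0i}) = n_{+i}(1-\delta\pi_1)$ into the second derivatives. The minimal check below does this numerically for the $I_{12}$ summand with arbitrary (hypothetical) parameter values:

```python
# Numeric check that plugging expected counts into d^2 l_0/(d delta d pi_1)
# reproduces the closed-form I_12 summand (illustrative values, not paper data).
delta, pi1, R = 0.8, 0.5, 1.2
m_plus, n_plus = 40, 25

p = delta * pi1
D = R * p**2 - 2 * p + 1            # P(neither organ responds)
Em0 = m_plus * D                    # E(m_0i)
Em1 = m_plus * 2 * p * (1 - R * p)  # E(m_1i)
En0 = n_plus * (1 - p)              # E(n_0i)

# Expectation of the mixed second derivative for one group i >= 2
expected = (-2 * Em0 * (R * p**2 - 2 * R * p + 1) / D**2
            - Em1 * R / (R * p - 1)**2
            - En0 / (p - 1)**2)

# Closed-form I_12 summand from Appendix A.1
closed = (n_plus / (p - 1)
          + m_plus * (2 * R * p * (2 * R * p - p - 2) + 2) / ((R * p - 1) * D))

assert abs(expected - closed) < 1e-9
```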

#### Appendix A.2. Information Matrix $I_2$

The first- and second-order partial derivatives of $l_i$ with respect to $\pi_i$ and $R$ are
$$\begin{aligned}
\frac{\partial l_i}{\partial \pi_i} &= \frac{2m_{2i}+n_{1i}}{\pi_i}+\frac{n_{0i}}{\pi_i-1}+\frac{2m_{0i}(R\pi_i-1)}{R\pi_i^2-2\pi_i+1}+\frac{m_{1i}(2R\pi_i-1)}{\pi_i(R\pi_i-1)},\\
\frac{\partial^2 l_i}{\partial \pi_i^2} &= -\frac{n_{1i}}{\pi_i^2}-\frac{n_{0i}}{(\pi_i-1)^2}-\frac{2m_{0i}(R^2\pi_i^2-2R\pi_i-R+2)}{(R\pi_i^2-2\pi_i+1)^2}-\frac{m_{1i}(2R^2\pi_i^2-2R\pi_i+1)}{\pi_i^2(R\pi_i-1)^2}-\frac{2m_{2i}}{\pi_i^2},\\
\frac{\partial^2 l_i}{\partial \pi_i\,\partial R} &= -\frac{m_{1i}}{(R\pi_i-1)^2}-\frac{2m_{0i}\pi_i(\pi_i-1)}{(R\pi_i^2-2\pi_i+1)^2},\\
\frac{\partial^2 l}{\partial R^2} &= -\frac{m_{2+}}{R^2}-\sum_{i=1}^{g}\left[\frac{m_{0i}\pi_i^4}{(R\pi_i^2-2\pi_i+1)^2}+\frac{m_{1i}\pi_i^2}{(R\pi_i-1)^2}\right].
\end{aligned}$$
Therefore, we have
$$\begin{aligned}
E\left(-\frac{\partial^2 l_i}{\partial \pi_i^2}\right) &= -\frac{2m_{+i}(2R^2\pi_i^2-R\pi_i^2-2R\pi_i+1)}{\pi_i(R^2\pi_i^3-3R\pi_i^2+R\pi_i+2\pi_i-1)}-\frac{n_{+i}}{\pi_i(\pi_i-1)},\\
E\left(-\frac{\partial^2 l_i}{\partial \pi_i\,\partial R}\right) &= -\frac{m_{+i}(2R\pi_i^2-2\pi_i^2)}{(R\pi_i^2-2\pi_i+1)(R\pi_i-1)},\\
E\left(-\frac{\partial^2 l}{\partial R^2}\right) &= -\sum_{i=1}^{g}\frac{m_{+i}\pi_i^2(R\pi_i-2\pi_i+1)}{R(R\pi_i-1)(R\pi_i^2-2\pi_i+1)},
\end{aligned}$$
for $i = 1 , ⋯ , g$.
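These expectations can be verified directly: replacing $m_{1i}$ and $m_{0i}$ in $-\partial^2 l_i/\partial\pi_i\,\partial R$ by their expected values $m_{+i}\,2\pi_i(1-R\pi_i)$ and $m_{+i}(R\pi_i^2-2\pi_i+1)$ reproduces the closed form above. A short check with arbitrary (hypothetical) values:

```python
# Check E(-d^2 l_i/(d pi_i d R)) by plugging expected counts into the
# second derivative (illustrative values, not data from the paper).
m_plus, pi, R = 50, 0.45, 1.3

p2 = R * pi**2               # P(both organs respond)
p1 = 2 * pi * (1 - R * pi)   # P(exactly one responds)
p0 = 1 - 2 * pi + R * pi**2  # P(neither responds)
assert abs(p0 + p1 + p2 - 1) < 1e-12

# -d^2 l_i/(d pi_i d R) with m_1i, m_0i replaced by their expectations
expected = (m_plus * p1 / (R * pi - 1)**2
            + 2 * m_plus * p0 * pi * (pi - 1) / (R * pi**2 - 2 * pi + 1)**2)

# Closed form from Appendix A.2
closed = -m_plus * (2 * R * pi**2 - 2 * pi**2) / ((R * pi**2 - 2 * pi + 1) * (R * pi - 1))

assert abs(expected - closed) < 1e-9
```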
Under the null hypothesis $H_0: \delta_2 = \cdots = \delta_g$, the information matrix $I_2$ can be derived as
$$I_2 = -E\begin{pmatrix}
\frac{\partial^2 l_2}{\partial \pi_2^2} & & & & \frac{\partial^2 l_2}{\partial \pi_2\,\partial R}\\
& \ddots & & & \vdots\\
& & \frac{\partial^2 l_g}{\partial \pi_g^2} & & \frac{\partial^2 l_g}{\partial \pi_g\,\partial R}\\
& & & \frac{\partial^2 l_1}{\partial \pi_1^2} & \frac{\partial^2 l_1}{\partial \pi_1\,\partial R}\\
\frac{\partial^2 l_2}{\partial \pi_2\,\partial R} & \cdots & \frac{\partial^2 l_g}{\partial \pi_g\,\partial R} & \frac{\partial^2 l_1}{\partial \pi_1\,\partial R} & \frac{\partial^2 l}{\partial R^2}
\end{pmatrix}_{(g+1)\times(g+1)}.$$

#### Appendix A.3. Information Matrix $I_3$ and $I_3^{-1}$

The information matrix $I_3$ can be expressed as
$$I_3 = -E\begin{pmatrix}
\frac{\partial^2 l_1}{\partial \pi_1^2} & & & \frac{\partial^2 l_1}{\partial \pi_1\,\partial R}\\
& \ddots & & \vdots\\
& & \frac{\partial^2 l_g}{\partial \pi_g^2} & \frac{\partial^2 l_g}{\partial \pi_g\,\partial R}\\
\frac{\partial^2 l_1}{\partial \pi_1\,\partial R} & \cdots & \frac{\partial^2 l_g}{\partial \pi_g\,\partial R} & \frac{\partial^2 l}{\partial R^2}
\end{pmatrix}_{(g+1)\times(g+1)},$$
Writing $a_i = E(-\partial^2 l_i/\partial \pi_i^2)$, $b_i = E(-\partial^2 l_i/\partial \pi_i\,\partial R)$, and $D = E(-\partial^2 l/\partial R^2)$, so that $I_3$ has an arrowhead structure, the elements $c_{kl}$ of $I_3^{-1}$ can be derived as follows:
$$\begin{aligned}
c_{(g+1)(g+1)} &= \left(D-\sum_{i=1}^{g}\frac{b_i^2}{a_i}\right)^{-1},\\
c_{ii} &= \frac{1}{a_i}+\frac{b_i^2\,c_{(g+1)(g+1)}}{a_i^2}, \qquad i=1,\cdots,g,\\
c_{ij} &= \frac{b_i b_j\,c_{(g+1)(g+1)}}{a_i a_j}, \qquad i\neq j,\\
c_{(g+1)i} &= c_{i(g+1)} = -\frac{b_i\,c_{(g+1)(g+1)}}{a_i}.
\end{aligned}$$
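These expressions are the standard closed-form inverse of an arrowhead matrix, where $a_i$, $b_i$, and $D$ denote the diagonal, border, and corner entries of $I_3$. A quick numerical check with arbitrary positive-definite values (not quantities from the paper) confirms them against `numpy.linalg.inv`:

```python
import numpy as np

rng = np.random.default_rng(0)
g = 4

# Arrowhead structure of I_3: diagonal a_i, border b_i, corner D
a = rng.uniform(1.0, 3.0, size=g)     # a_i = E(-d^2 l_i / d pi_i^2)
b = rng.uniform(-0.5, 0.5, size=g)    # b_i = E(-d^2 l_i / (d pi_i d R))
D = float(np.sum(b**2 / a)) + 2.0     # corner chosen so I_3 is positive definite

I3 = np.diag(np.append(a, D))
I3[:g, g] = b
I3[g, :g] = b

# Closed-form inverse elements from Appendix A.3
c_corner = 1.0 / (D - np.sum(b**2 / a))
C = np.zeros((g + 1, g + 1))
for i in range(g):
    C[i, i] = 1.0 / a[i] + b[i]**2 * c_corner / a[i]**2
    for j in range(g):
        if i != j:
            C[i, j] = b[i] * b[j] * c_corner / (a[i] * a[j])
    C[i, g] = C[g, i] = -b[i] * c_corner / a[i]
C[g, g] = c_corner

assert np.allclose(C, np.linalg.inv(I3))
```

The closed form avoids a full $(g+1)\times(g+1)$ inversion, which is why the score statistic has an explicit expression.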

## References

1. Gart, J.J. The comparison of proportions: A review of significance tests, confidence intervals and adjustments for stratification. Rev. Int. Stat. Inst. 1971, 39, 148–169. [Google Scholar] [CrossRef]
2. Gart, J.J. Approximate tests and interval estimation of the common relative risk in the combination of 2 × 2 tables. Biometrika 1985, 72, 673–677. [Google Scholar]
3. Casagrande, J.T.; Pike, M.C.; Smith, P.G. An improved approximate formula for calculating sample sizes for comparing two binomial distributions. Biometrics 1978, 34, 483–486. [Google Scholar] [CrossRef] [PubMed]
4. Storer, B.E.; Kim, C. Exact properties of some exact test statistics for comparing two binomial proportions. Publ. Am. Stat. Assoc. 1990, 85, 146–155. [Google Scholar] [CrossRef]
5. Rosner, B. Statistical methods in ophthalmology: An adjustment for the intraclass correlation between eyes. Biometrics 1982, 38, 105–114. [Google Scholar] [CrossRef] [PubMed]
6. Dallal, G.E. Paired Bernoulli trials. Biometrics 1988, 44, 253–257. [Google Scholar] [CrossRef]
7. Donner, A. Statistical methods in ophthalmology: An adjusted chi-square approach. Biometrics 1989, 45, 605–611. [Google Scholar] [CrossRef]
8. Pei, Y.; Tian, G.L.; Tang, M.L. Testing homogeneity of proportion ratios for stratified correlated bilateral data in two-arm randomized clinical trials. Stat. Med. 2014, 33, 4370–4386. [Google Scholar] [CrossRef]
9. Zhuang, T.; Tian, G.L.; Ma, C.X. Homogeneity test of ratio of two proportions in stratified bilateral data. Stat. Biopharm. Res. 2019, 11, 200–209. [Google Scholar] [CrossRef]
10. Sun, S.M.; Li, Z.M.; Ai, M.Y.; Jiang, H.J. Risk difference tests for stratified binary data under Dallal’s model. Stat. Methods Med. Res. 2022, 31, 1135–1156. [Google Scholar] [CrossRef]
11. Chen, Y.F.; Li, Z.M.; Ma, C.X. Further study on testing the equality of response rates under Dallal’s model. Stat. Its Interface 2022, 15, 115–126. [Google Scholar] [CrossRef]
12. Tang, M.L.; Tang, N.S.; Rosner, B. Statistical inference for correlated data in ophthalmologic studies. Stat. Med. 2006, 25, 2771–2783. [Google Scholar] [CrossRef] [PubMed]
13. Tang, N.S.; Tang, M.L.; Qiu, S.F. Testing the equality of proportions for correlated otolaryngologic data. Comput. Stat. Data Anal. 2008, 52, 3719–3729. [Google Scholar] [CrossRef]
14. Ma, C.X.; Shan, G.; Liu, S. Homogeneity test for correlated binary data. PLoS ONE 2015, 10, e0124337. [Google Scholar] [CrossRef] [PubMed]
15. Mandel, E.M.; Bluestone, C.D.; Rockette, H.E.; Blatter, M.M.; Reisinger, K.S.; Wucher, F.P.; Harper, J. Duration of effusion after antibiotic treatment for acute otitis media: Comparison of cefaclor and amoxicillin. Pediatr. Infect. Dis. 1982, 1, 310–316. [Google Scholar] [CrossRef] [PubMed]
16. Pei, Y.B.; Tang, M.L.; Guo, J.H. Testing the equality of two proportions for combined unilateral and bilateral data. Commun. Stat. Simul. Comput. 2008, 37, 1–15. [Google Scholar] [CrossRef]
17. Ma, C.X.; Wang, K.J. Testing the equality of proportions for combined unilateral and bilateral data. arXiv 2020, arXiv:2010.03501. [Google Scholar]
18. Ma, C.X.; Wang, H. Testing the equality of proportions for combined unilateral and bilateral data under equal intraclass correlation model. Stat. Biopharm. Res. 2022. [Google Scholar] [CrossRef]
19. Sun, S.M.; Li, Z.M.; Jiang, H.J. Homogeneity test and sample size of risk difference for stratified unilateral and bilateral data. Commun. Stat. Simul. Comput. 2022. [Google Scholar] [CrossRef]
20. Li, Z.M.; Ma, C.X.; Mou, K.Y. Testing the common risk difference of proportions for stratified uni- and bilateral correlated data. Stat. Neerl. 2023. [Google Scholar] [CrossRef]
21. Kropf, S.; Hothorn, L.A.; Lauter, J. Multivariate many-to-one procedures with applications to preclinical trials. Drug Inf. J. 1997, 31, 433–447. [Google Scholar] [CrossRef]
22. Mou, K.Y.; Li, Z.M. Homogeneity test of many-to-one risk differences for correlated binary data under optimal algorithms. Complexity 2021, 2021, 6685951. [Google Scholar] [CrossRef]
23. Schaarschmidt, F.; Biesheuvel, E.; Hothorn, L.A. Asymptotic simultaneous confidence intervals for many-to-one comparisons of binary proportions in randomized clinical trials. J. Biopharm. Stat. 2009, 19, 292–310. [Google Scholar] [CrossRef] [PubMed]
24. Yang, Z.; Tian, G.L.; Liu, X.; Ma, C.X. Simultaneous confidence interval construction for many-to-one comparisons of proportion differences based on correlated paired data. J. Appl. Stat. 2021, 48, 1442–1456. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Empirical TIEs of tests.
Figure 2. Empirical powers of tests.
Table 1. OME status of Amoxicillin treatment.

| Responses | <2 yrs | 2–5 yrs | ≥6 yrs |
| --- | --- | --- | --- |
| *Bilateral cases* | | | |
| 0 | 2 | 5 | 6 |
| 1 | 2 | 1 | 0 |
| 2 | 11 | 3 | 1 |
| Total | 15 | 9 | 7 |
| *Unilateral cases* | | | |
| 0 | 2 | 14 | 11 |
| 1 | 10 | 22 | 7 |
| Total | 12 | 36 | 18 |

Note: response 0—no OME, response 1—unilateral OME, and response 2—bilateral OME.
Table 2. The unilateral and bilateral data in g groups. Columns index the group $i = 1, \cdots, g$.

| Response ($l$) | 1 | 2 | ⋯ | $g$ | Total |
| --- | --- | --- | --- | --- | --- |
| 0 | $m_{01}$ | $m_{02}$ | ⋯ | $m_{0g}$ | $m_{0+}$ |
| 1 | $m_{11}$ | $m_{12}$ | ⋯ | $m_{1g}$ | $m_{1+}$ |
| 2 | $m_{21}$ | $m_{22}$ | ⋯ | $m_{2g}$ | $m_{2+}$ |
| Total | $m_{+1}$ | $m_{+2}$ | ⋯ | $m_{+g}$ | $M_1$ |
| 0 | $n_{01}$ | $n_{02}$ | ⋯ | $n_{0g}$ | $n_{0+}$ |
| 1 | $n_{11}$ | $n_{12}$ | ⋯ | $n_{1g}$ | $n_{1+}$ |
| Total | $n_{+1}$ | $n_{+2}$ | ⋯ | $n_{+g}$ | $M_2$ |
Table 3. The empirical TIEs (%) of tests under $H_0: \delta_i = \delta$ ($g = 3$, $\alpha = 0.05$). In each sub-table, the first six statistic columns are the balanced design and the last six the unbalanced design; within each design, the first three columns are for $\delta = 1$ and the last three for $\delta = 1.2$.

**Balance $m = n = 30$; unbalance $m = 30$, $n = 60$:**

| R | $\pi_1$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.1 | 0.3 | 5.47 | 5.01 | 5.62 | 5.65 | 4.97 | 5.73 | 5.58 | 5.06 | 5.53 | 5.59 | 5.12 | 5.51 |
| 1.1 | 0.4 | 5.54 | 5.16 | 5.85 | 5.50 | 5.15 | 6.04 | 5.29 | 4.96 | 5.48 | 5.25 | 4.90 | 5.62 |
| 1.1 | 0.5 | 5.54 | 5.25 | 6.04 | 5.68 | 5.06 | 6.42 | 5.17 | 4.95 | 5.52 | 5.46 | 5.17 | 5.84 |
| 1.2 | 0.3 | 5.33 | 4.81 | 5.55 | 5.86 | 5.23 | 6.04 | 5.44 | 5.02 | 5.52 | 5.50 | 4.97 | 5.56 |
| 1.2 | 0.4 | 5.80 | 5.38 | 6.10 | 5.47 | 5.02 | 5.99 | 5.65 | 5.38 | 5.91 | 5.52 | 5.11 | 5.95 |
| 1.2 | 0.5 | 5.86 | 5.41 | 6.46 | 5.75 | 5.12 | 5.90 | 5.85 | 5.54 | 6.26 | 5.40 | 5.01 | 5.66 |
| 1.3 | 0.3 | 5.28 | 4.79 | 5.49 | 5.44 | 4.97 | 5.69 | 5.37 | 4.97 | 5.49 | 5.36 | 5.00 | 5.59 |
| 1.3 | 0.4 | 5.61 | 5.20 | 6.04 | 5.78 | 5.30 | 6.14 | 5.31 | 4.99 | 5.65 | 5.01 | 4.63 | 5.32 |
| 1.3 | 0.5 | 5.46 | 5.01 | 5.91 | 5.97 | 5.41 | 5.86 | 5.62 | 5.26 | 6.00 | 5.70 | 5.13 | 5.82 |

**Balance $m = n = 60$; unbalance $m = 60$, $n = 30$:**

| R | $\pi_1$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.1 | 0.3 | 5.21 | 4.98 | 5.30 | 5.37 | 5.10 | 5.43 | 5.33 | 5.11 | 5.41 | 5.52 | 5.25 | 5.64 |
| 1.1 | 0.4 | 5.29 | 5.10 | 5.53 | 5.11 | 4.94 | 5.34 | 5.25 | 5.09 | 5.34 | 4.81 | 4.51 | 5.14 |
| 1.1 | 0.5 | 5.23 | 4.87 | 5.32 | 5.28 | 5.03 | 5.67 | 5.31 | 5.01 | 5.55 | 5.22 | 5.00 | 5.52 |
| 1.2 | 0.3 | 5.35 | 5.10 | 5.43 | 4.85 | 4.64 | 5.15 | 5.16 | 4.89 | 5.36 | 5.35 | 5.02 | 5.45 |
| 1.2 | 0.4 | 5.39 | 5.20 | 5.58 | 5.67 | 5.46 | 5.86 | 4.98 | 4.77 | 5.24 | 5.45 | 5.09 | 5.70 |
| 1.2 | 0.5 | 5.20 | 5.04 | 5.33 | 5.85 | 5.46 | 5.88 | 5.10 | 4.78 | 5.30 | 5.27 | 4.92 | 5.38 |
| 1.3 | 0.3 | 5.35 | 5.18 | 5.53 | 5.11 | 4.96 | 5.20 | 4.84 | 4.66 | 5.05 | 5.17 | 4.81 | 5.38 |
| 1.3 | 0.4 | 5.23 | 5.10 | 5.48 | 5.65 | 5.33 | 5.88 | 5.61 | 5.37 | 5.83 | 5.62 | 5.25 | 5.96 |
| 1.3 | 0.5 | 5.29 | 5.08 | 5.59 | 5.34 | 4.98 | 5.31 | 5.77 | 5.32 | 6.15 | 5.51 | 5.22 | 5.37 |

**Balance $m = n = 90$; unbalance $m = 90$, $n = 30$:**

| R | $\pi_1$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.1 | 0.3 | 5.37 | 5.23 | 5.43 | 5.13 | 4.91 | 5.24 | 5.16 | 5.03 | 5.24 | 5.24 | 5.08 | 5.33 |
| 1.1 | 0.4 | 4.98 | 4.87 | 5.09 | 5.19 | 5.05 | 5.34 | 5.32 | 5.17 | 5.46 | 5.27 | 5.03 | 5.48 |
| 1.1 | 0.5 | 4.75 | 4.66 | 4.83 | 5.11 | 4.93 | 5.35 | 5.40 | 5.16 | 5.53 | 5.34 | 5.16 | 5.59 |
| 1.2 | 0.3 | 5.15 | 5.05 | 5.20 | 5.07 | 4.93 | 5.24 | 5.55 | 5.34 | 5.72 | 5.28 | 5.04 | 5.42 |
| 1.2 | 0.4 | 4.96 | 4.88 | 5.04 | 5.13 | 4.92 | 5.23 | 5.09 | 4.95 | 5.22 | 5.26 | 5.10 | 5.41 |
| 1.2 | 0.5 | 5.33 | 5.07 | 5.48 | 5.35 | 5.15 | 5.48 | 5.25 | 5.02 | 5.50 | 5.30 | 5.08 | 5.43 |
| 1.3 | 0.3 | 5.28 | 5.15 | 5.40 | 4.96 | 4.89 | 5.10 | 5.57 | 5.39 | 5.73 | 5.22 | 4.97 | 5.41 |
| 1.3 | 0.4 | 5.09 | 4.95 | 5.25 | 5.30 | 5.14 | 5.46 | 5.21 | 4.97 | 5.29 | 5.39 | 5.15 | 5.51 |
| 1.3 | 0.5 | 5.23 | 5.03 | 5.35 | 4.87 | 4.61 | 4.87 | 5.25 | 4.96 | 5.44 | 4.93 | 4.62 | 4.85 |
Table 4. The empirical TIEs (%) of tests under $H_0: \delta_i = \delta$ ($g = 4$, $\alpha = 0.05$). Column layout as in Table 3.

**Balance $m = n = 30$; unbalance $m = 30$, $n = 60$:**

| R | $\pi_1$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.1 | 0.3 | 5.10 | 4.69 | 5.62 | 5.30 | 4.93 | 5.91 | 4.75 | 4.50 | 5.20 | 4.93 | 4.74 | 5.44 |
| 1.1 | 0.4 | 5.42 | 5.02 | 6.10 | 5.55 | 4.81 | 6.09 | 5.22 | 4.88 | 5.88 | 5.57 | 5.10 | 6.03 |
| 1.1 | 0.5 | 5.24 | 4.71 | 5.77 | 5.51 | 4.76 | 6.46 | 5.29 | 4.77 | 5.50 | 5.81 | 5.43 | 6.45 |
| 1.2 | 0.3 | 5.25 | 4.90 | 5.85 | 5.38 | 4.86 | 6.04 | 4.79 | 4.51 | 5.34 | 5.35 | 5.09 | 5.90 |
| 1.2 | 0.4 | 5.64 | 5.25 | 6.35 | 5.50 | 5.07 | 6.38 | 5.66 | 5.36 | 6.35 | 5.50 | 5.10 | 6.17 |
| 1.2 | 0.5 | 5.40 | 4.97 | 6.39 | 5.30 | 4.74 | 5.97 | 5.39 | 5.07 | 5.80 | 5.21 | 5.00 | 5.82 |
| 1.3 | 0.3 | 5.54 | 4.98 | 6.24 | 5.15 | 4.66 | 5.86 | 5.56 | 5.22 | 6.06 | 5.23 | 4.94 | 5.82 |
| 1.3 | 0.4 | 5.31 | 4.90 | 6.02 | 5.46 | 4.76 | 6.42 | 5.16 | 4.84 | 5.76 | 5.46 | 5.03 | 6.02 |
| 1.3 | 0.5 | 5.20 | 4.56 | 6.19 | 5.61 | 4.96 | 6.42 | 5.53 | 5.05 | 6.19 | 5.62 | 5.13 | 6.15 |

**Balance $m = n = 60$; unbalance $m = 60$, $n = 30$:**

| R | $\pi_1$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.1 | 0.3 | 5.02 | 4.86 | 5.23 | 5.34 | 5.07 | 5.62 | 5.50 | 5.24 | 5.79 | 5.19 | 4.98 | 5.60 |
| 1.1 | 0.4 | 5.19 | 4.96 | 5.53 | 4.80 | 4.55 | 5.21 | 4.96 | 4.72 | 5.37 | 5.70 | 5.38 | 6.08 |
| 1.1 | 0.5 | 5.33 | 5.10 | 5.75 | 5.22 | 4.99 | 5.68 | 5.49 | 5.24 | 5.80 | 5.76 | 5.36 | 6.12 |
| 1.2 | 0.3 | 5.02 | 4.82 | 5.29 | 5.15 | 4.98 | 5.52 | 4.70 | 4.45 | 5.01 | 5.14 | 4.74 | 5.59 |
| 1.2 | 0.4 | 5.43 | 5.25 | 5.78 | 5.32 | 5.06 | 5.60 | 4.85 | 4.56 | 5.23 | 5.21 | 4.75 | 5.67 |
| 1.2 | 0.5 | 5.49 | 5.17 | 6.00 | 5.29 | 5.01 | 5.56 | 5.15 | 4.90 | 5.63 | 5.56 | 5.11 | 6.11 |
| 1.3 | 0.3 | 5.60 | 5.44 | 6.04 | 5.14 | 4.99 | 5.50 | 5.40 | 5.15 | 5.82 | 5.08 | 4.77 | 5.66 |
| 1.3 | 0.4 | 5.27 | 5.07 | 5.72 | 5.34 | 5.14 | 5.78 | 5.90 | 5.51 | 6.33 | 5.41 | 4.97 | 5.93 |
| 1.3 | 0.5 | 5.70 | 5.41 | 6.07 | 5.17 | 4.75 | 5.42 | 5.47 | 5.23 | 6.04 | 4.92 | 4.58 | 5.28 |

**Balance $m = n = 90$; unbalance $m = 90$, $n = 30$:**

| R | $\pi_1$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.1 | 0.3 | 5.26 | 5.16 | 5.57 | 5.00 | 4.82 | 5.11 | 5.29 | 5.24 | 5.58 | 4.97 | 4.73 | 5.16 |
| 1.1 | 0.4 | 5.24 | 5.19 | 5.43 | 5.27 | 5.19 | 5.52 | 5.12 | 4.95 | 5.44 | 5.36 | 5.22 | 5.61 |
| 1.1 | 0.5 | 5.09 | 4.86 | 5.32 | 5.00 | 4.79 | 5.36 | 5.21 | 5.06 | 5.74 | 5.48 | 5.18 | 5.83 |
| 1.2 | 0.3 | 4.84 | 4.71 | 5.05 | 5.50 | 5.30 | 5.58 | 5.14 | 4.97 | 5.50 | 5.35 | 5.16 | 5.80 |
| 1.2 | 0.4 | 5.01 | 4.83 | 5.28 | 5.29 | 5.17 | 5.59 | 5.47 | 5.41 | 5.78 | 5.14 | 4.80 | 5.49 |
| 1.2 | 0.5 | 5.09 | 4.96 | 5.43 | 5.42 | 5.18 | 5.65 | 5.72 | 5.41 | 6.03 | 5.10 | 4.79 | 5.56 |
| 1.3 | 0.3 | 5.07 | 4.91 | 5.29 | 4.98 | 4.81 | 5.19 | 5.11 | 4.88 | 5.48 | 5.07 | 4.94 | 5.41 |
| 1.3 | 0.4 | 5.28 | 5.10 | 5.48 | 5.09 | 4.83 | 5.32 | 5.28 | 4.94 | 5.75 | 5.47 | 5.11 | 5.77 |
| 1.3 | 0.5 | 5.26 | 4.94 | 5.62 | 5.16 | 4.83 | 5.21 | 5.15 | 4.83 | 5.52 | 5.44 | 5.27 | 5.63 |
Table 5. The empirical TIEs (%) of tests under $H_0: \delta_i = \delta$ ($g = 5$, $\alpha = 0.05$). Column layout as in Table 3.

**Balance $m = n = 30$; unbalance $m = 30$, $n = 60$:**

| R | $\pi_1$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.1 | 0.3 | 4.79 | 4.28 | 5.79 | 5.48 | 4.82 | 6.26 | 5.07 | 4.75 | 5.66 | 5.00 | 4.63 | 5.71 |
| 1.1 | 0.4 | 5.88 | 5.27 | 6.57 | 5.37 | 4.85 | 6.44 | 5.46 | 5.15 | 5.97 | 5.43 | 5.02 | 6.30 |
| 1.1 | 0.5 | 5.75 | 5.20 | 6.77 | 5.79 | 5.00 | 6.93 | 5.94 | 5.60 | 6.67 | 5.67 | 5.25 | 6.51 |
| 1.2 | 0.3 | 5.27 | 4.80 | 6.31 | 5.35 | 4.92 | 6.34 | 5.12 | 4.77 | 5.72 | 5.35 | 5.10 | 6.09 |
| 1.2 | 0.4 | 5.53 | 4.84 | 6.47 | 5.60 | 5.08 | 6.89 | 5.29 | 4.97 | 5.77 | 5.61 | 5.23 | 6.50 |
| 1.2 | 0.5 | 5.16 | 4.79 | 6.43 | 5.58 | 5.02 | 6.58 | 5.63 | 5.23 | 6.42 | 5.01 | 4.55 | 5.78 |
| 1.3 | 0.3 | 5.19 | 4.68 | 6.12 | 5.69 | 5.17 | 6.84 | 5.57 | 5.18 | 6.19 | 4.79 | 4.45 | 5.62 |
| 1.3 | 0.4 | 5.44 | 4.91 | 6.30 | 5.56 | 4.89 | 6.87 | 5.39 | 5.05 | 6.30 | 5.44 | 5.26 | 6.41 |
| 1.3 | 0.5 | 5.83 | 5.32 | 7.29 | 6.01 | 5.37 | 7.51 | 5.51 | 5.03 | 6.34 | 5.65 | 4.98 | 6.64 |

**Balance $m = n = 60$; unbalance $m = 60$, $n = 30$:**

| R | $\pi_1$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.1 | 0.3 | 5.34 | 4.92 | 5.55 | 4.97 | 4.73 | 5.45 | 5.37 | 4.93 | 5.82 | 4.96 | 4.70 | 5.64 |
| 1.1 | 0.4 | 4.94 | 4.65 | 5.21 | 5.20 | 5.00 | 5.76 | 5.29 | 4.92 | 5.77 | 5.29 | 4.91 | 5.90 |
| 1.1 | 0.5 | 5.10 | 4.88 | 5.59 | 5.06 | 4.68 | 5.47 | 5.26 | 4.99 | 6.00 | 5.66 | 5.19 | 6.39 |
| 1.2 | 0.3 | 5.42 | 5.10 | 5.84 | 5.66 | 5.42 | 6.07 | 5.63 | 5.20 | 6.16 | 5.02 | 4.78 | 5.70 |
| 1.2 | 0.4 | 5.44 | 5.27 | 5.76 | 5.31 | 5.04 | 5.83 | 5.50 | 5.20 | 6.16 | 5.54 | 5.28 | 6.29 |
| 1.2 | 0.5 | 5.41 | 5.21 | 5.93 | 5.26 | 4.96 | 5.94 | 5.59 | 5.17 | 6.45 | 5.62 | 5.31 | 6.41 |
| 1.3 | 0.3 | 4.75 | 4.55 | 5.14 | 5.48 | 5.20 | 5.93 | 5.32 | 4.95 | 5.73 | 5.55 | 5.13 | 6.18 |
| 1.3 | 0.4 | 5.17 | 4.92 | 5.77 | 5.30 | 4.95 | 5.74 | 5.40 | 5.19 | 6.01 | 5.39 | 5.21 | 6.36 |
| 1.3 | 0.5 | 5.23 | 4.92 | 5.78 | 5.22 | 4.86 | 5.83 | 5.41 | 5.12 | 6.19 | 5.49 | 4.86 | 6.11 |

**Balance $m = n = 90$; unbalance $m = 90$, $n = 30$:**

| R | $\pi_1$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.1 | 0.3 | 5.32 | 5.19 | 5.57 | 5.38 | 5.20 | 5.67 | 5.11 | 4.92 | 5.56 | 5.38 | 5.15 | 5.68 |
| 1.1 | 0.4 | 5.56 | 5.37 | 5.85 | 5.19 | 5.01 | 5.54 | 4.99 | 4.81 | 5.37 | 5.17 | 4.80 | 5.67 |
| 1.1 | 0.5 | 5.03 | 4.89 | 5.31 | 5.34 | 5.14 | 5.63 | 5.25 | 4.96 | 5.65 | 5.71 | 5.36 | 6.19 |
| 1.2 | 0.3 | 5.05 | 4.90 | 5.30 | 5.40 | 5.25 | 5.59 | 5.35 | 4.99 | 5.67 | 5.31 | 5.08 | 5.80 |
| 1.2 | 0.4 | 5.18 | 5.04 | 5.58 | 5.33 | 5.16 | 5.74 | 5.09 | 4.96 | 5.44 | 5.06 | 4.84 | 5.54 |
| 1.2 | 0.5 | 5.04 | 4.84 | 5.48 | 5.03 | 4.91 | 5.54 | 4.92 | 4.70 | 5.43 | 5.20 | 4.88 | 5.78 |
| 1.3 | 0.3 | 5.54 | 5.32 | 5.93 | 5.31 | 5.05 | 5.58 | 5.57 | 5.44 | 6.02 | 5.18 | 5.04 | 5.47 |
| 1.3 | 0.4 | 5.13 | 5.05 | 5.37 | 5.13 | 5.01 | 5.60 | 5.46 | 5.10 | 5.88 | 5.63 | 5.19 | 6.24 |
| 1.3 | 0.5 | 5.61 | 5.42 | 5.89 | 5.34 | 5.14 | 5.53 | 5.39 | 5.13 | 5.90 | 5.10 | 4.74 | 5.57 |
Table 6. The power (%) of the balance group ($\alpha = 0.05$). The three statistic columns are repeated for $g = 3$, $g = 4$, and $g = 5$, from left to right.

**$m = n = 30$:**

| R | $\pi_1$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.1 | 0.3 | 24.69 | 23.58 | 25.38 | 32.27 | 30.49 | 34.77 | 38.50 | 36.88 | 40.83 |
| 1.1 | 0.4 | 35.58 | 34.36 | 37.37 | 49.09 | 47.25 | 51.56 | 61.18 | 59.74 | 63.69 |
| 1.1 | 0.5 | 52.54 | 50.87 | 54.18 | 70.79 | 68.83 | 72.48 | 82.01 | 80.89 | 83.85 |
| 1.2 | 0.3 | 24.67 | 23.42 | 25.65 | 30.68 | 29.07 | 33.09 | 39.09 | 37.28 | 41.02 |
| 1.2 | 0.4 | 34.83 | 33.57 | 36.19 | 48.01 | 45.69 | 50.29 | 58.93 | 57.28 | 61.68 |
| 1.2 | 0.5 | 53.67 | 51.77 | 54.83 | 73.66 | 71.63 | 74.49 | 84.56 | 83.45 | 85.98 |
| 1.3 | 0.3 | 24.04 | 22.76 | 25.04 | 31.16 | 28.92 | 33.64 | 38.20 | 36.64 | 40.54 |
| 1.3 | 0.4 | 36.36 | 34.64 | 37.46 | 48.89 | 46.14 | 51.21 | 60.42 | 58.76 | 62.73 |
| 1.3 | 0.5 | 58.34 | 56.17 | 58.55 | 82.04 | 80.02 | 81.16 | 91.34 | 89.92 | 92.00 |

**$m = n = 60$:**

| R | $\pi_1$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.1 | 0.3 | 43.29 | 42.72 | 43.68 | 57.73 | 56.72 | 59.44 | 69.79 | 69.12 | 70.69 |
| 1.1 | 0.4 | 61.24 | 60.50 | 61.99 | 80.00 | 79.34 | 80.94 | 90.07 | 89.64 | 90.58 |
| 1.1 | 0.5 | 80.44 | 79.95 | 81.11 | 95.68 | 95.20 | 95.81 | 98.74 | 98.66 | 98.80 |
| 1.2 | 0.3 | 42.11 | 41.44 | 42.42 | 56.85 | 55.46 | 58.69 | 69.26 | 68.66 | 70.10 |
| 1.2 | 0.4 | 60.84 | 59.94 | 61.62 | 78.96 | 78.00 | 79.97 | 89.65 | 89.23 | 90.27 |
| 1.2 | 0.5 | 81.74 | 81.25 | 82.29 | 96.44 | 96.01 | 96.43 | 99.06 | 99.07 | 99.19 |
| 1.3 | 0.3 | 40.22 | 39.62 | 40.97 | 56.12 | 54.41 | 57.87 | 68.19 | 67.31 | 69.27 |
| 1.3 | 0.4 | 60.71 | 60.06 | 61.32 | 79.56 | 78.17 | 80.40 | 89.99 | 89.50 | 90.49 |
| 1.3 | 0.5 | 86.73 | 86.10 | 86.88 | 98.56 | 98.43 | 98.46 | 99.82 | 99.80 | 99.86 |

**$m = n = 90$:**

| R | $\pi_1$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.1 | 0.3 | 58.36 | 58.17 | 58.78 | 76.63 | 76.10 | 77.58 | 87.52 | 87.39 | 87.86 |
| 1.1 | 0.4 | 78.33 | 78.02 | 78.60 | 93.11 | 92.80 | 93.46 | 98.15 | 98.11 | 98.25 |
| 1.1 | 0.5 | 92.63 | 92.46 | 92.82 | 99.51 | 99.49 | 99.54 | 99.94 | 99.93 | 99.94 |
| 1.2 | 0.3 | 58.30 | 57.72 | 58.73 | 75.29 | 74.37 | 76.28 | 86.30 | 86.07 | 86.66 |
| 1.2 | 0.4 | 77.77 | 77.33 | 78.05 | 93.07 | 92.76 | 93.40 | 98.09 | 98.03 | 98.18 |
| 1.2 | 0.5 | 94.10 | 93.92 | 94.35 | 99.78 | 99.72 | 99.78 | 99.99 | 99.99 | 99.98 |
| 1.3 | 0.3 | 56.79 | 56.30 | 57.22 | 74.93 | 73.76 | 75.90 | 86.23 | 85.95 | 86.59 |
| 1.3 | 0.4 | 78.18 | 77.83 | 78.58 | 93.63 | 93.12 | 93.91 | 98.26 | 98.19 | 98.40 |
| 1.3 | 0.5 | 96.38 | 96.21 | 96.37 | 99.94 | 99.93 | 99.96 | 99.99 | 99.98 | 99.99 |
Table 7. The power (%) of the unbalance group ($\alpha = 0.05$). The three statistic columns are repeated for $g = 3$, $g = 4$, and $g = 5$, from left to right.

**$m = 30$, $n = 60$:**

| R | $\pi_1$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.1 | 0.3 | 31.46 | 30.69 | 32.10 | 41.95 | 40.67 | 43.82 | 51.05 | 50.02 | 52.98 |
| 1.1 | 0.4 | 46.44 | 45.46 | 47.72 | 61.97 | 60.63 | 63.58 | 74.02 | 73.26 | 75.37 |
| 1.1 | 0.5 | 63.93 | 62.96 | 65.11 | 84.09 | 82.91 | 84.74 | 92.35 | 91.77 | 93.15 |
| 1.2 | 0.3 | 30.19 | 29.44 | 30.63 | 40.72 | 39.38 | 42.88 | 51.46 | 50.38 | 53.32 |
| 1.2 | 0.4 | 45.60 | 44.69 | 46.86 | 61.81 | 60.03 | 63.86 | 73.84 | 72.84 | 75.07 |
| 1.2 | 0.5 | 64.94 | 63.80 | 65.66 | 85.36 | 84.33 | 85.68 | 93.63 | 93.06 | 94.19 |
| 1.3 | 0.3 | 30.93 | 29.80 | 31.66 | 41.18 | 39.51 | 43.35 | 49.65 | 48.51 | 51.37 |
| 1.3 | 0.4 | 45.18 | 44.11 | 46.03 | 61.86 | 59.55 | 63.70 | 73.91 | 72.78 | 75.55 |
| 1.3 | 0.5 | 69.06 | 67.83 | 69.59 | 90.51 | 89.07 | 90.24 | 96.76 | 96.33 | 96.96 |

**$m = 60$, $n = 30$:**

| R | $\pi_1$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.1 | 0.3 | 36.68 | 35.98 | 37.38 | 49.78 | 48.81 | 51.51 | 59.91 | 59.06 | 61.32 |
| 1.1 | 0.4 | 53.14 | 52.27 | 54.11 | 70.48 | 69.50 | 71.78 | 82.65 | 82.15 | 83.63 |
| 1.1 | 0.5 | 72.34 | 71.46 | 73.18 | 90.30 | 89.64 | 90.64 | 96.69 | 96.48 | 96.95 |
| 1.2 | 0.3 | 35.80 | 35.08 | 36.48 | 48.73 | 47.15 | 50.58 | 59.53 | 58.51 | 60.78 |
| 1.2 | 0.4 | 52.05 | 51.02 | 52.86 | 70.66 | 69.06 | 72.17 | 82.14 | 81.38 | 82.97 |
| 1.2 | 0.5 | 74.46 | 73.68 | 74.90 | 92.75 | 92.02 | 92.86 | 97.93 | 97.75 | 98.26 |
| 1.3 | 0.3 | 35.53 | 34.59 | 36.23 | 48.40 | 46.53 | 50.54 | 58.21 | 57.19 | 59.80 |
| 1.3 | 0.4 | 53.31 | 52.20 | 54.16 | 72.10 | 70.19 | 73.22 | 83.74 | 82.97 | 84.69 |
| 1.3 | 0.5 | 79.58 | 78.43 | 79.70 | 96.95 | 96.51 | 96.80 | 99.31 | 99.23 | 99.41 |

**$m = 90$, $n = 30$:**

| R | $\pi_1$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ | $T_L$ | $T_{SC}$ | $T_W$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.1 | 0.3 | 47.80 | 47.32 | 48.51 | 64.67 | 63.61 | 65.79 | 76.23 | 75.84 | 76.92 |
| 1.1 | 0.4 | 67.41 | 66.96 | 68.16 | 84.87 | 84.42 | 85.73 | 93.97 | 93.83 | 94.28 |
| 1.1 | 0.5 | 84.98 | 84.49 | 85.52 | 97.63 | 97.37 | 97.87 | 99.59 | 99.57 | 99.65 |
| 1.2 | 0.3 | 46.13 | 45.67 | 46.70 | 63.24 | 62.17 | 64.73 | 75.06 | 74.64 | 75.45 |
| 1.2 | 0.4 | 66.10 | 65.66 | 66.72 | 85.19 | 84.44 | 85.93 | 93.15 | 92.89 | 93.55 |
| 1.2 | 0.5 | 87.51 | 87.03 | 87.86 | 98.27 | 98.07 | 98.32 | 99.71 | 99.69 | 99.72 |
| 1.3 | 0.3 | 45.94 | 45.35 | 46.64 | 62.48 | 60.96 | 64.19 | 73.71 | 73.12 | 74.51 |
| 1.3 | 0.4 | 67.32 | 66.46 | 67.79 | 85.98 | 84.76 | 86.33 | 94.27 | 94.09 | 94.62 |
| 1.3 | 0.5 | 92.19 | 91.76 | 92.14 | 99.61 | 99.54 | 99.59 | 99.99 | 99.98 | 99.99 |
Table 8. Global MLEs and constrained MLEs.

| | Global MLEs | | | | Constrained MLEs | | |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MLEs | $\hat\pi_1$ | $\hat\pi_2$ | $\hat\pi_3$ | $\hat R$ | $\tilde\pi_1$ | $\tilde\delta$ | $\tilde R$ |
| value | 0.7329 | 0.5926 | 0.3073 | 1.2723 | 0.7248 | 0.6709 | 1.2874 |
Table 9. Statistic values and p-values.

| Value | $T_L$ | $T_{SC}$ | $T_W$ |
| --- | --- | --- | --- |
| Statistic value | 5.3330 | 3.8502 | 7.4055 |
| p-value | 0.0209 | 0.0497 | 0.0065 |
 Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

## Share and Cite

MDPI and ACS Style

Li, Y.; Li, Z.; Mou, K. Homogeneity Test of Many-to-One Relative Risk Ratios in Unilateral and Bilateral Data with Multiple Groups. Axioms 2023, 12, 333. https://doi.org/10.3390/axioms12040333
