# Group Testing with a Graph Infection Spread Model


## Abstract


## 1. Introduction

## 2. Related Work

## 3. System Model

## 4. Motivating Example

- If ${F}_{m}={F}_{1}$: Suppose M samples individual 1 or 2 from the cluster ${S}_{1}^{1}=\{1,2,3\}$. A false classification occurs if $F={F}_{2}$ and the cluster $\{1,2\}$ is infected; in that case, individual 3 is falsely classified as infected. The same false classification occurs when $F={F}_{3}$ and the cluster $\{1,2\}$ is infected. Conversely, in these cases, if individual 3 is infected, then individual 3 is falsely classified as non-infected. Thus, for the cluster $\{1,2,3\}$, when either individual 1 or 2 is sampled, the expected number of false classifications is $$({p}_{F}({F}_{2})+{p}_{F}({F}_{3}))({p}_{Z}(1)+{p}_{Z}(2)+{p}_{Z}(3))=0.6\times 0.3=0.18.$$ Similarly, when individual 3 is sampled from the cluster $\{1,2,3\}$, individuals 1 and 2 are falsely classified when $F={F}_{2}$ or $F={F}_{3}$ and either the cluster $\{1,2\}$ or individual 3 is infected. In that case, the expected number of false classifications is $$2({p}_{F}({F}_{2})+{p}_{F}({F}_{3}))({p}_{Z}(1)+{p}_{Z}(2)+{p}_{Z}(3))=2\times 0.6\times 0.3=0.36.$$ Comparing these two expected values, for the cluster ${S}_{1}^{1}=\{1,2,3\}$, the optimal M should select either individual 1 or 2 for testing. As discussed above, for the cluster ${S}_{2}^{1}=\{4,5\}$, the choice of sampled individual is immaterial and results in 0 expected false classifications. Finally, for the cluster ${S}_{3}^{1}=\{6,7,8,9,10\}$, a similar analysis implies that the optimal M should select one of the individuals in $\{8,9,10\}$ for testing.
- If ${F}_{m}={F}_{2}$: Similar combinatorial arguments show that the choice of sampled individuals from the clusters ${S}_{1}^{2}=\{1,2\}$, ${S}_{2}^{2}=\left\{3\right\}$ and ${S}_{3}^{2}=\{4,5\}$ is immaterial in terms of the expected number of false classifications. A false classification can only occur in the cluster ${S}_{4}^{2}=\{6,7,8,9,10\}$, when $F={F}_{3}$ and the infected cluster is either ${S}_{4}^{3}=\{6,7\}$ or ${S}_{5}^{3}=\{8,9,10\}$. As in the case $m=1$, if the sampled individual is 6 or 7, then the expected number of false classifications is 0.6, in contrast to 0.4 when the sampled individual is one of 8, 9 and 10. Thus, the optimal M should select one of the individuals 8, 9 and 10 as the sampled individual to minimize the expected number of false classifications.
- If ${F}_{m}={F}_{3}$: No false classification is possible since, for all clusters in ${F}_{3}$, all individuals in the same cluster have the same infection status with probability 1.
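The arithmetic for the ${F}_{m}={F}_{1}$ case above can be checked directly. A minimal sketch, using $p_F(F_2)=0.2$ and $p_F(F_3)=0.4$ (the values appearing in the expected-value computations later in this section) and the cluster probability $p_Z(1)+p_Z(2)+p_Z(3)=0.3$ from the example:

```python
# Probabilities from the example: p_F(F_2) = 0.2, p_F(F_3) = 0.4 (so the cluster
# {1,2,3} splits with probability 0.6), and p_Z(1)+p_Z(2)+p_Z(3) = 0.3.
p_split = 0.2 + 0.4
p_zero_in_cluster = 0.3

# Sampling individual 1 or 2: only individual 3 can be misclassified.
e_sample_12 = 1 * p_split * p_zero_in_cluster
# Sampling individual 3: individuals 1 and 2 can both be misclassified.
e_sample_3 = 2 * p_split * p_zero_in_cluster

print(round(e_sample_12, 2), round(e_sample_3, 2))  # 0.18 0.36
```

Since $0.18 < 0.36$, the optimal choice samples individual 1 or 2, as stated above.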

- If ${F}_{m}={F}_{1}$ (i.e., $M=\{1,4,8\}$): The set of all possible infected sets is $\mathcal{P}\left({K}_{M}\right)=\{\left\{1\right\},\left\{4\right\},\left\{8\right\}\}$. By a counting argument, we need at least two tests: each of the three possible infected sets must result in a unique result vector y, and each of these sets has one element. We can achieve this lower bound with the following test matrix:

- If ${F}_{m}={F}_{2}$ (i.e., $M=\{1,3,4,8\}$): In this case, the set of all possible infected sets is $\mathcal{P}\left({K}_{M}\right)=\{\left\{1\right\},\left\{3\right\},\{1,3\},\left\{4\right\},\left\{8\right\}\}$. In the classical zero-error construction for the combinatorial group testing model, one constructs d-separable matrices; the rationale behind that construction is to enable decoding of the infected set when it can be any d-sized subset of $\left[n\right]$. However, in our model, the set of all possible infected sets, i.e., $\mathcal{P}\left({K}_{M}\right)$, is not the set of all fixed-size subsets of $\left[n\right]$; instead, it consists of varying-size subsets of $\left[n\right]$ that are structured, depending on the given $\mathcal{F}$. As illustrated in Figure 3, a given cluster formation tree $\mathcal{F}$ can be represented by a tree structure whose nodes represent possible infected sets, i.e., clusters at each level. (Throughout the paper, we use the word “node” only for the possible clusters in the cluster formation tree representations, not for the vertices in the connection graphs that represent the individuals.) The aim of constructing a zero-error test matrix is then to have a unique test result vector for each unique possible infected set, i.e., for each unique node in the cluster formation tree. In Figure 4, we present the subtree of $\mathcal{F}$ that ends at the level ${F}_{2}$, with result vectors assigned to each node. One must assign unique binary vectors to each node, except for the nodes that do not become partitioned while moving from level to level: those nodes represent the same cluster, and thus the same vector is assigned, as seen in Figure 4. Moreover, when nodes merge into an upper-level node, the binary OR of the vectors assigned to the descendant nodes must be assigned to their ancestor node.
By combinatorial arguments, one can find the minimum vector length for which such an assignment is possible. In this case, the required number of tests must be at least 3 and, by assigning result vectors as in Figure 4, we can construct the following test matrix $\mathit{X}$:

Note that, for all elements of $\mathcal{P}\left({K}_{M}\right)$, the corresponding result vector is unique and satisfies the tree structure criteria, as shown in Figure 4.
- If ${F}_{m}={F}_{3}$ (i.e., $M=\{1,3,4,6,8\}$): In this case, the set of all possible infected sets is $\mathcal{P}\left({K}_{M}\right)=\{\left\{1\right\},\left\{3\right\},\{1,3\},\left\{4\right\},\left\{6\right\},\left\{8\right\},\{6,8\}\}$. In Figure 5, we give a tree representation with assigned result vectors of length 3 that satisfies the tree structure criteria discussed above, where each unique node is assigned a unique vector, except for the nodes that do not become partitioned while moving from level to level. Note that every unique node in the tree representation corresponds to a unique element of $\mathcal{P}\left({K}_{M}\right)$. The corresponding test matrix $\mathit{X}$ is the following $3\times 5$ matrix:
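$\mathcal{F}$-separability of such an assignment can be verified numerically: the OR of the columns of each possible infected set must be distinct. The column choice below is an illustrative assumption (the actual matrix in Figure 5 may differ), chosen to satisfy the tree criteria for this example:

```python
# Hypothetical length-3 columns for the selected individuals 1, 3, 4, 6, 8
# (an illustrative choice, not necessarily the paper's Figure 5 matrix).
cols = {1: 0b100, 3: 0b010, 4: 0b001, 6: 0b101, 8: 0b011}

# The set of all possible infected sets, P(K_M), from the text.
possible_sets = [{1}, {3}, {1, 3}, {4}, {6}, {8}, {6, 8}]

def result_vector(infected):
    """Noiseless pooled tests: OR of the columns of the infected individuals."""
    r = 0
    for i in infected:
        r |= cols[i]
    return r

results = [result_vector(s) for s in possible_sets]

# F-separability: every possible infected set must yield a distinct result vector.
assert len(set(results)) == len(possible_sets)
print([format(r, '03b') for r in results])  # 7 distinct nonzero vectors
```

With 7 possible infected sets and only $2^3 - 1 = 7$ nonzero result vectors of length 3, every nonzero vector must be used, which is why $T=3$ is tight here.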

- If ${F}_{m}={F}_{1}$: $$\begin{aligned}{E}_{f}&=\sum_{\alpha}{p}_{F}({F}_{\alpha}){E}_{f,\alpha}\\&={p}_{F}({F}_{2}){E}_{f,2}+{p}_{F}({F}_{3}){E}_{f,3}\\&=0.2(0.3\times 1)+0.4(0.3\times 1+0.5\times 2)\\&=0.58\end{aligned}$$
- If ${F}_{m}={F}_{2}$: $$\begin{aligned}{E}_{f}&={p}_{F}({F}_{3}){E}_{f,3}\\&=0.4(0.5\times 2)\\&=0.4\end{aligned}$$
- If ${F}_{m}={F}_{3}$: we have ${E}_{f}=0$.
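The three cases above transcribe directly into arithmetic, using the conditional expectations $E_{f,2}=0.3\times 1$ and $E_{f,3}=0.3\times 1+0.5\times 2$ from the example:

```python
# p_F(F_2) = 0.2, p_F(F_3) = 0.4 from the example; conditional expectations:
# sampling at F_1: E_{f,2} = 0.3*1, E_{f,3} = 0.3*1 + 0.5*2
# sampling at F_2: only F = F_3 can cause errors, with E_{f,3} = 0.5*2
E_f_F1 = 0.2 * (0.3 * 1) + 0.4 * (0.3 * 1 + 0.5 * 2)
E_f_F2 = 0.4 * (0.5 * 2)
E_f_F3 = 0.0

print(round(E_f_F1, 2), round(E_f_F2, 2), E_f_F3)  # 0.58 0.4 0.0
```

This makes the trade-off concrete: sampling at a lower level of the tree reduces the expected number of false classifications (0.58, 0.4, 0) at the cost of more tests (2, 3, 3 in this example).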

## 5. Proposed Algorithm and Analysis

**Theorem 1.**

**Theorem 2.**

- Every node at a level above ${F}_{m}$ must be assigned a binary column vector equal to the OR of all vectors assigned to its descendant nodes. This is because each node in the tree corresponds to a possible set of infected individuals among the selected individuals, and each merging of nodes corresponds to the union of the possible infected sets, which results in taking the OR of the assigned vectors of the merged nodes.
- Each assigned binary vector must be unique for each unique node, i.e., for every node that represents a unique set ${S}_{i}^{j}$. For nodes that do not split between two levels, the assigned vector remains the same: when a node does not split between levels, it still represents the same set of individuals. This is because each unique node corresponds to a unique possible infected subset of the selected individuals, and these must satisfy (18).

- Let u be a node with Hamming weight ${d}_{H}\left(u\right)$. Then, for every i, the number of descendant nodes of u with Hamming weight i cannot exceed $\binom{{d}_{H}(u)}{i}$. This must hold for all nodes u. Furthermore, for every i, the number of nodes with Hamming weight i cannot exceed $\binom{T}{i}$. In addition, the Hamming weights of the nodes must strictly decrease while moving from ancestor nodes to descendant nodes.
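The criteria above can be checked mechanically for a candidate assignment. The sketch below uses a hypothetical length-3 assignment for the ${F}_{m}={F}_{3}$ running example; the node names and vectors are illustrative assumptions, not the paper's Figure 5:

```python
from math import comb

# Hypothetical vector assignment (T = 3) for the running example's subtree:
# {1,3} and {6,8} are ancestor nodes; the rest are bottom-level nodes.
T = 3
tree = {
    "{1,3}": {"vec": 0b110, "children": ["{1}", "{3}"]},
    "{1}":   {"vec": 0b100, "children": []},
    "{3}":   {"vec": 0b010, "children": []},
    "{4}":   {"vec": 0b001, "children": []},
    "{6,8}": {"vec": 0b111, "children": ["{6}", "{8}"]},
    "{6}":   {"vec": 0b101, "children": []},
    "{8}":   {"vec": 0b011, "children": []},
}

def weight(v):
    """Hamming weight of a vector stored as an int bitmask."""
    return bin(v).count("1")

def descendants(node):
    """All descendant node names of a given node."""
    out = []
    for c in tree[node]["children"]:
        out.append(c)
        out.extend(descendants(c))
    return out

# (i) every ancestor vector equals the OR of its children's vectors
for n, d in tree.items():
    if d["children"]:
        acc = 0
        for c in d["children"]:
            acc |= tree[c]["vec"]
        assert d["vec"] == acc
# (ii) vectors are unique across unique nodes
vecs = [d["vec"] for d in tree.values()]
assert len(set(vecs)) == len(vecs)
# (iii) weights strictly decrease toward descendants, and the number of
# descendants of u with weight i cannot exceed C(d_H(u), i)
for n, d in tree.items():
    for c in descendants(n):
        assert weight(tree[c]["vec"]) < weight(d["vec"])
    for i in range(T + 1):
        cnt = sum(1 for c in descendants(n) if weight(tree[c]["vec"]) == i)
        assert cnt <= comb(weight(d["vec"]), i)
print("all tree-structure criteria satisfied")
```

For instance, node {6,8} has weight 3 and two weight-2 descendants, within the bound $\binom{3}{2}=3$.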

**Theorem 3.**

**Complexity:** The time complexity of the two-step sampled group testing algorithm consists of the complexity of finding the optimal M given ${F}_{m}$ and $\mathcal{F}$, the complexity of constructing the $\mathcal{F}$-separable test matrix given M and $\mathcal{F}$, and the complexity of decoding the test results given the test matrix $\mathit{X}$ and the result vector y. In the following lemmas, we analyze the complexity of each of these steps.

**Lemma 1.**

**Proof.**

**Lemma 2.**

**Proof.**

**Lemma 3.**

**Proof.**

- We start with the assumption that the exact connections between the individuals are not known, but the probability distribution of the possible edge realizations is known.
- The given edge set probability distribution results in a random cluster formation variable, F. Each possible cluster formation is a partition of the set of all individuals.
- Out of all possible cluster formations (we denote this set by $\mathcal{F}$), one cluster formation is selected as the sampling cluster formation, which we call ${F}_{m}$.
- Exactly one individual is selected from each cluster in ${F}_{m}$. These individuals are then tested and identified.
- The selection is carried out according to the sampling function M. For the given choice of ${F}_{m}$, M selects the individuals from the clusters so as to minimize the expected number of false classifications, given in Theorem 2; this results in the expected number of false classifications given in Theorem 1.
- By using the given set of possible cluster formations, $\mathcal{F}$, an $\mathcal{F}$-separable test matrix is constructed to identify the individuals selected by M. This test matrix is guaranteed to identify the selected individuals since the construction is based on assigning a unique test result vector to every possible infected set among the selected individuals.
- In Theorem 3, we present a converse argument by giving a lower bound for the required number of tests, in terms of the system parameters.
- After the test results are obtained and the selected individuals are identified with zero error, each selected individual's infection status is assigned to the other members of its cluster in ${F}_{m}$. Note that exactly one individual is selected and identified from every cluster in ${F}_{m}$. This step introduces the possible false classifications.
- Selecting ${F}_{m}$ from lower levels of the cluster formation tree results in fewer expected false classifications while increasing the number of tests required for identification. This yields a trade-off between the number of tests and the expected number of false classifications. By using a randomized selection of ${F}_{m}$, intermediate points can also be achieved for the expected false classifications and the required number of tests.
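As a toy end-to-end illustration of the steps above (a sketch of the pipeline on the motivating example, not the paper's exact pseudocode), the code below runs the ${F}_{m}={F}_{1}$ case with $M=\{1,4,8\}$ and the two-test matrix with columns 10, 01, 11 from the example; all function names are illustrative:

```python
# F_m = F_1 clusters, keyed by the sampled representative from each cluster.
clusters_Fm = {1: [1, 2, 3], 4: [4, 5], 8: [6, 7, 8, 9, 10]}
cols = {1: 0b10, 4: 0b01, 8: 0b11}  # 2x3 test matrix, one column per representative

def run_tests(truly_infected):
    """Noiseless pooled tests: y is the OR of infected representatives' columns."""
    y = 0
    for rep in cols:
        if rep in truly_infected:
            y |= cols[rep]
    return y

def decode(y):
    """Zero-error decoding: the unique possible infected set whose OR equals y."""
    for infected in [{1}, {4}, {8}, set()]:
        if run_tests(infected) == y:
            return infected
    raise ValueError("result vector not decodable")

def classify(y):
    """Assign each representative's status to everyone in its F_m-cluster."""
    infected_reps = decode(y)
    return {i: (rep in infected_reps)
            for rep, members in clusters_Fm.items() for i in members}

# True state: F = F_1 and the cluster {1,2,3} is infected, so sample 1 tests positive.
u_hat = classify(run_tests({1}))
print([i for i in sorted(u_hat) if u_hat[i]])  # [1, 2, 3]
```

Here the decoding is exact for the representatives; any false classifications would come only from the final cluster-wide assignment step, as described above.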

## 6. Exponentially Split Cluster Formation Trees

- An exponentially split cluster formation tree that consists of f levels has ${2}^{i-1}$ nodes at level ${F}_{i}$, for each $i\in \left[f\right]$, i.e., ${\sigma}_{i}={2}^{i-1},i\in \left[f\right]$.
- At level ${F}_{i}$, every node has ${2}^{f-i}\delta $ individuals where $\delta $ is a constant positive integer, i.e., $|{S}_{j}^{i}|={2}^{f-i}\delta ,i\in \left[f\right],j\in \left[{\sigma}_{i}\right]$.
- Every node has exactly two descendant nodes one level below in the cluster formation tree, i.e., every node is partitioned into two equal-sized nodes when moving one level down.
- Random cluster formation variable F is uniformly distributed over $\mathcal{F}$, i.e., ${p}_{F}\left({F}_{i}\right)=1/f,i\in \left[f\right]$.
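These properties can be sanity-checked numerically; a minimal sketch for a 4-level tree with $\delta=4$ (the parameters of Figure 7):

```python
# Exponentially split cluster formation tree: level i has 2^(i-1) clusters,
# each of size 2^(f-i)*delta, so every level covers the same n individuals.
f, delta = 4, 4
n = 2 ** (f - 1) * delta  # total number of individuals

for i in range(1, f + 1):
    sigma_i = 2 ** (i - 1)         # number of clusters at level F_i
    size_i = 2 ** (f - i) * delta  # size of each cluster at level F_i
    assert sigma_i * size_i == n   # each level partitions all n individuals
    print(i, sigma_i, size_i)

p_F_i = 1 / f  # F uniform over the f levels
```

For $f=4$ and $\delta=4$, this gives $n=32$ individuals, with cluster counts $1,2,4,8$ and cluster sizes $32,16,8,4$ across the four levels.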

**Theorem 4.**

## 7. Numerical Results

#### 7.1. Exponentially Split Cluster Formation Tree Based System

#### 7.2. Arbitrary Random Connection Graph Based System

## 8. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## Appendix A

**Theorem A1.**

**Proof.**

**Theorem A2.**

**Proof.**

**Theorem A3.**

**Proof.**

**Theorem A4.**

**Proof.**

**Lemma A1.**

**Proof.**

**Lemma A2.**

**Proof.**

## References


**Figure 1.** Random connection graph $\mathcal{C}$ and three possible realizations and cluster formations. We show each cluster with a different color. (**a**) Probabilities of the edges; (**b**) a realization of $\mathcal{C}$ with four clusters; (**c**) a realization of $\mathcal{C}$ with six clusters; (**d**) a realization of $\mathcal{C}$ with four clusters.

**Figure 2.** Edge probabilities of $\mathcal{C}$ and elements of $\mathcal{F}$ for the example $\mathit{C}$ given in (1), with clusters shown in different colors.

**Figure 7.** Four realizations of a random connection graph $\mathcal{C}$ that fall under four different cluster formations in a 4-level exponentially split cluster formation tree with $\delta =4$.

**Figure 8.** (**a**) Expected number of false classifications vs. the choice of sampling cluster formation ${F}_{m}$; (**b**) required number of tests vs. the choice of sampling cluster formation ${F}_{m}$.

**Figure 9.** (**a**) Expected number of false classifications vs. the choice of sampling cluster formation ${F}_{m}$; (**b**) required number of tests vs. the choice of sampling cluster formation ${F}_{m}$; (**c**) random connection graph.

| Symbol | Description |
|---|---|
| **System** | |
| $n$ | number of individuals in the system |
| $U$ | infection status vector of size $n$ |
| $Z$ | patient zero random variable |
| ${p}_{Z}\left(i\right)$ | probability that individual $i$ is the patient zero |
| $\mathcal{C}$ | random connection graph |
| ${E}_{\mathcal{C}}$ | edge set of $\mathcal{C}$ |
| ${V}_{\mathcal{C}}$ | vertex set of $\mathcal{C}$, also equal to $\left[n\right]$ |
| $\mathit{C}$ | random connection matrix |
| $F$ | cluster formation random variable |
| $\mathcal{F}$ | set of all possible cluster formations, i.e., $\left\{{F}_{i}\right\}$ |
| ${p}_{F}\left({F}_{i}\right)$ | probability that the true cluster formation is ${F}_{i}$ |
| $f$ | number of possible cluster formations, i.e., $\left|\mathcal{F}\right|$ |
| ${\sigma}_{i}$ | number of clusters in the cluster formation ${F}_{i}$ |
| ${S}_{j}^{i}$ | $j$th cluster in ${F}_{i}$ |
| ${\lambda}_{j}$ | number of unique clusters in $\mathcal{F}$ at and above the level ${F}_{j}$ |
| ${\lambda}_{{S}_{i}^{j}}$ | number of unique ancestor nodes of ${S}_{i}^{j}$ in $\mathcal{F}$ |
| $\delta$ | size of the bottom level clusters in an exponentially split $\mathcal{F}$ |
| **Algorithm** | |
| ${F}_{m}$ | sampling cluster formation chosen from $\mathcal{F}$ |
| $M$ | sampling function that selects the individuals to be tested |
| ${U}^{\left(M\right)}$ | infection status vector of the individuals selected by $M$ |
| ${S}^{\alpha}\left({M}_{i}\right)$ | the cluster in ${F}_{\alpha}$ that contains the $i$th individual selected by $M$ |
| ${K}_{M}$ | set of infected individuals among those selected by $M$ |
| $\mathcal{P}\left({K}_{M}\right)$ | set of all possible infected sets that ${K}_{M}$ can be |
| $T$ | number of tests to be performed |
| $\mathit{X}$ | $T\times {\sigma}_{m}$ test matrix |
| ${\mathit{X}}^{\left(i\right)}$ | $i$th column of $\mathit{X}$ |
| $y$ | test result vector of size $T$ |
| $\widehat{U}$ | estimated infection status of the $n$ individuals after the test results |
| ${E}_{f,\alpha}$ | expected number of false classifications given $F={F}_{\alpha}$ |
| ${E}_{f}$ | expected number of false classifications |


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Arasli, B.; Ulukus, S.
Group Testing with a Graph Infection Spread Model. *Information* **2023**, *14*, 48.
https://doi.org/10.3390/info14010048
