# An Agent-Ensemble for Thresholded Multi-Target Classification


## Abstract


## 1. Introduction

## 2. Problem Description

## 3. Related Work

## 4. Proposed Agent Combiner

#### 4.1. Relationship to the Neyman-Pearson Combiner

**Definition 1.**

**Definition 2.**

**Theorem 1.**

**Proof.**

**Lemma 1.**

**Proof.**

**Lemma 2.**

**Proof.**

#### 4.2. Adapting to Target Population Drift

#### 4.3. Relationship to Meta-Classifiers

#### 4.4. Relationship to the Any-Combiner Rule and a Comparison of Normalization Functions

## 5. Experiments

#### 5.1. Simulated Data

#### 5.2. Pin-Less Verification with Yale Faces

#### 5.3. Classification of Ground Vehicles Using Acoustic Signatures

## 6. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References


**Figure 2.** Agent ROC curves for the first simulation in Section 5.1. Each line shows the ROC curve for one of the 10 random agents. The ROC curves show that the simulation mimics a scenario where some targets are much harder to distinguish from clutter than others.

**Figure 3.** Normalized magnitude spectrogram, in dB, for the first vehicular event in the ACIDS dataset.

**Table 1.** Average partial area under the curve (PAUC) for the first simulation with conditionally non-discriminative agents. Columns give the results for differing numbers of combiner training samples; the column titled 'Model Params' gives the results when the combiners are provided with the model parameters. Boldface identifies results that are the best or statistically significantly tied for the best with 95% confidence.

| | 100 Samples | 1000 Samples | 2000 Samples | 10,000 Samples | Model Params |
|---|---|---|---|---|---|
| Prpsd. Gaussian WLR | 0.813 | 0.855 | 0.856 | 0.855 | 0.856 |
| Joint Gaussian | 0.208 | 0.799 | 0.831 | 0.851 | 0.856 |
| Ind. Gaussian | 0.600 | 0.844 | 0.850 | 0.854 | 0.856 |
| AC Gaussian WLR | 0.812 | 0.854 | 0.855 | 0.854 | 0.855 |
| AC Z-norm | 0.809 | 0.829 | 0.832 | 0.831 | 0.832 |
| AC F-norm | 0.716 | 0.753 | 0.751 | 0.756 | 0.757 |
| AC EER-norm | 0.780 | 0.791 | 0.792 | 0.792 | 0.821 |
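The PAUC reported in Tables 1–4 and 7 is the area under the ROC curve restricted to a low-PFA operating region (Table 7 uses 0%–10% PFA). The following is a minimal sketch of one common way to compute it, assuming trapezoidal integration over interpolated ROC points and normalization by the width of the PFA interval; the paper's exact normalization is not shown in this excerpt.

```python
import numpy as np

def partial_auc(pfa, pd, max_pfa=0.1):
    """Partial area under an ROC curve over PFA in [0, max_pfa].

    `pfa` and `pd` are monotonically increasing ROC sample points
    (probability of false alarm, probability of detection). The result is
    normalized by max_pfa so a perfect detector scores 1.0; this
    normalization is an assumption for illustration.
    """
    grid = np.linspace(0.0, max_pfa, 201)
    pd_interp = np.interp(grid, pfa, pd)  # PD at evenly spaced PFA values
    # trapezoidal rule, written out explicitly (np.trapz was removed in NumPy 2.0)
    area = np.sum((pd_interp[1:] + pd_interp[:-1]) / 2.0 * np.diff(grid))
    return area / max_pfa
```

Under this normalization a chance-level (diagonal) ROC scores `max_pfa / 2 / max_pfa = 0.05` when `max_pfa = 0.1`, which is why the PAUC values in the tables sit well below the familiar 0.5 chance level of the full AUC.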

**Table 2.** Average PAUC for simulation two with 100 combiner training samples. Boldface identifies results that are the best or statistically significantly tied for the best with 95% confidence.

| | $\alpha=0.95$ | $\alpha=0.8$ | $\alpha=0.6$ | $\alpha=0.4$ | $\alpha=0.1$ |
|---|---|---|---|---|---|
| Prpsd. Gaussian WLR | 0.464 | 0.473 | 0.476 | 0.500 | 0.508 |
| Joint Gaussian | 0.371 | 0.383 | 0.402 | 0.432 | 0.483 |
| Ind. Gaussian | 0.418 | 0.440 | 0.463 | 0.497 | 0.538 |
| AC Gaussian WLR | 0.457 | 0.463 | 0.461 | 0.480 | 0.487 |
| AC Z-norm | 0.461 | 0.466 | 0.474 | 0.490 | 0.503 |
| AC F-norm | 0.452 | 0.460 | 0.471 | 0.483 | 0.490 |
| AC EER-norm | 0.464 | 0.468 | 0.478 | 0.491 | 0.510 |
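The 'AC Z-norm' rows above apply the any-combiner rule to per-agent scores that have been standardized against each agent's clutter (impostor) score distribution. A minimal sketch of the classic Z-norm mapping follows; the function and variable names are chosen here for illustration.

```python
import numpy as np

def z_norm(score, impostor_scores):
    """Z-norm: shift and scale a raw agent score by the mean and standard
    deviation of that agent's scores on impostor (clutter) data, so that a
    single global threshold is comparable across agents."""
    mu = float(np.mean(impostor_scores))
    sigma = float(np.std(impostor_scores))
    return (score - mu) / sigma
```

Because each agent is normalized only by its own clutter statistics, Z-norm needs no client (target) training samples, but it also ignores how separable each agent's client scores are, which is one plausible reason the Z-norm rows trail the likelihood-ratio combiners in several tables.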

**Table 3.** Average PAUC for simulation two with 1000 combiner training samples. Boldface identifies results that are the best or statistically significantly tied for the best with 95% confidence.

| | $\alpha=0.95$ | $\alpha=0.8$ | $\alpha=0.6$ | $\alpha=0.4$ | $\alpha=0.1$ |
|---|---|---|---|---|---|
| Prpsd. Gaussian WLR | 0.479 | 0.492 | 0.504 | 0.521 | 0.543 |
| Joint Gaussian | 0.469 | 0.489 | 0.506 | 0.535 | 0.584 |
| Ind. Gaussian | 0.444 | 0.467 | 0.496 | 0.534 | 0.589 |
| AC Gaussian WLR | 0.471 | 0.482 | 0.490 | 0.504 | 0.521 |
| AC Z-norm | 0.471 | 0.483 | 0.491 | 0.505 | 0.522 |
| AC F-norm | 0.469 | 0.481 | 0.490 | 0.503 | 0.522 |
| AC EER-norm | 0.465 | 0.474 | 0.481 | 0.492 | 0.510 |

**Table 4.** Average PAUC for the Yale Faces pin-less verification experiment. Column titles give the number of training examples from each client person. Boldface identifies results that are the best or statistically significantly tied for the best with 95% confidence.

| | 5 | 10 | 20 | 30 | 40 |
|---|---|---|---|---|---|
| Prpsd. Gaussian WLR | 0.453 | 0.545 | 0.688 | 0.735 | 0.754 |
| Prpsd. Platt WLR | 0.503 | 0.597 | 0.761 | 0.821 | 0.846 |
| Joint Gaussian | 0.358 | 0.534 | 0.698 | 0.775 | 0.798 |
| Ind. Gaussian | 0.435 | 0.548 | 0.691 | 0.745 | 0.756 |
| AC Gaussian WLR | 0.447 | 0.544 | 0.688 | 0.735 | 0.754 |
| AC Z-norm | 0.507 | 0.552 | 0.662 | 0.699 | 0.711 |
| AC F-norm | 0.452 | 0.610 | 0.751 | 0.821 | 0.841 |
| AC EER-norm | 0.237 | 0.338 | 0.572 | 0.732 | 0.729 |
| Meta-SVM | 0.452 | 0.568 | 0.680 | 0.771 | 0.820 |

**Table 5.** Average percent accuracy when using the maximum normalized agent output to estimate the client causing an alert, with the threshold set to give a five percent probability of false alarm (PFA). Column titles give the number of training examples from each client person.

| | 5 | 10 | 20 | 30 | 40 |
|---|---|---|---|---|---|
| Prpsd. Gaussian WLR | 95 | 97 | 98 | 98 | 99 |
| Prpsd. Platt WLR | 97 | 98 | 99 | 99 | 99 |
| AC Gaussian WLR | 95 | 97 | 98 | 98 | 99 |
| AC Z-norm | 97 | 98 | 99 | 98 | 98 |
| AC F-norm | 95 | 97 | 98 | 99 | 99 |
| AC EER-norm | 82 | 92 | 96 | 99 | 99 |
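The identification rule behind Table 5 can be sketched as follows: calibrate a single global threshold on clutter-only data so that the maximum normalized agent output exceeds it with the desired PFA, then attribute any alert to the agent with the largest normalized output. The function names and the quantile-based calibration below are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def calibrate_threshold(clutter_scores, pfa=0.05):
    """Pick the alert threshold as the (1 - pfa) empirical quantile of the
    maximum normalized agent output on clutter-only (non-client) samples.
    `clutter_scores` has shape (n_samples, n_agents)."""
    max_scores = np.max(clutter_scores, axis=1)  # per-sample max over agents
    return float(np.quantile(max_scores, 1.0 - pfa))

def alert_and_identify(scores, threshold):
    """Return (alert, client_index): alert fires if the maximum normalized
    agent output exceeds the threshold; the arg-max agent names the client."""
    idx = int(np.argmax(scores))
    return scores[idx] > threshold, idx
```

Any normalization that makes agent outputs commensurable can feed this rule, which is why Table 5 compares the same identification accuracy across the WLR and AC normalization variants.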

**Table 6.** Number of closest point of arrival (CPA) events for each type of vehicle in the ACIDS dataset.

| Vehicle | Number of Events | Number of Scans |
|---|---|---|
| 1 | 62 | 4960 |
| 2 | 37 | 2960 |
| 3 | 9 | 720 |
| 4 | 27 | 2160 |
| 5 | 39 | 3120 |
| 6 | 37 | 2960 |
| 7 | 7 | 560 |
| 8 | 35 | 2800 |
| 9 | 21 | 1680 |

**Table 7.** Average PAUC over the 0%–10% PFA operating region for the various combiner methods on scan-by-scan features and events from the ACIDS dataset. Boldface identifies results that are the best or statistically significantly tied for the best with 95% confidence.

| | Scans | Events |
|---|---|---|
| Prpsd. Gaussian WLR | 0.563 | 0.794 |
| Prpsd. Platt WLR | 0.575 | 0.847 |
| Joint Gaussian | 0.554 | 0.786 |
| Ind. Gaussian | 0.546 | 0.764 |
| AC Gaussian WLR | 0.562 | 0.797 |
| AC Z-norm | 0.556 | 0.784 |
| AC F-norm | 0.467 | 0.753 |
| AC EER-norm | 0.385 | 0.539 |
| Meta-SVM | 0.558 | 0.814 |
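Table 7 reports results both scan-by-scan and pooled over whole vehicular events. The paper's exact pooling rule is not shown in this excerpt; as one plausible sketch, per-scan combiner outputs could be averaged into a single event-level score before thresholding.

```python
import numpy as np

def event_score(scan_scores):
    """Pool per-scan combiner outputs into one event-level score.
    The mean is an assumed pooling rule for illustration; max or median
    pooling would be equally plausible choices."""
    return float(np.mean(scan_scores))
```

Pooling many noisy per-scan scores into one decision per event reduces score variance, which is consistent with the uniformly higher event-level PAUCs in Table 7.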

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Parrish, N.H.; Llorens, A.J.; Driskell, A.E.
An Agent-Ensemble for Thresholded Multi-Target Classification. *Appl. Sci.* **2020**, *10*, 1376.
https://doi.org/10.3390/app10041376
