Journal Description
Computer Sciences & Mathematics Forum
is an open access journal dedicated to publishing findings resulting from academic conferences, workshops, and similar events in the area of computer science and mathematics. Each conference proceeding can be individually indexed, is citable via a digital object identifier (DOI), and is freely available under an open access license. The conference organizers and proceedings editors are responsible for managing the peer-review process and selecting papers for conference proceedings.
Latest Articles
Secure and Efficient Code-Based Cryptography for Multi-Party Computation and Digital Signatures
Comput. Sci. Math. Forum 2023, 6(1), 1; https://doi.org/10.3390/cmsf2023006001 - 26 May 2023
Abstract
Code-based cryptography is a promising candidate for post-quantum cryptography due to its strong security guarantees and efficient implementations. In this paper, we explore the use of code-based cryptography for multi-party computation and digital signatures, two important cryptographic applications. We present several efficient and secure code-based protocols for these applications, based on the McEliece cryptosystem and its variants. Our protocols offer strong security guarantees against both classical and quantum attacks, and have competitive performance compared to other post-quantum cryptographic schemes. We also compare code-based cryptography with other post-quantum schemes, including lattice-based and hash-based cryptography, and discuss the advantages and disadvantages of each approach.
Full article
(This article belongs to the Proceedings of The 3rd International Day on Computer Science and Applied Mathematics)
Open Access Conference Report
Abstracts of the 1st International Conference on Trends and Innovations in Smart Technologies (ICTIST’22)
Comput. Sci. Math. Forum 2023, 5(1), 1; https://doi.org/10.3390/cmsf2023005001 - 20 Mar 2023
Abstract
The first edition of the International Conference on Trends and Innovations in Smart Technologies (ICTIST’22) was held on 7–8 October 2022, bringing together researchers and experts from the fields of communication networks and security, computational intelligence, and engineering. The conference provided a platform for participants to share their research and discuss the latest trends and innovations in these areas. The present report starts by providing an overview of the keynote speeches and the main axes around which the communication sessions revolved, before moving on to more detailed abstracts covering each of the topics presented during the ICTIST’22 conference.
Full article
(This article belongs to the Proceedings of International Conference on Trends and Innovation in Smart Technologies)
Open Access Editorial
Fractional Calculus in Mexico: The 5th Mexican Workshop on Fractional Calculus (MWFC)
Comput. Sci. Math. Forum 2022, 4(1), 7; https://doi.org/10.3390/cmsf2022004007 - 3 Feb 2023
Abstract
The Mexican Workshop on Fractional Calculus (MWFC) is a bi-annual international workshop and the largest Latin American technical event in the field of fractional calculus in Mexico [...]
Full article
(This article belongs to the Proceedings of The 5th Mexican Workshop on Fractional Calculus)
Open Access Proceeding Paper
Analyzing All the Instances of a Chaotic Map to Generate Random Numbers
Comput. Sci. Math. Forum 2022, 4(1), 6; https://doi.org/10.3390/cmsf2022004006 - 18 Jan 2023
Abstract
All possible configurations of a chaotic map without fixed points, called “nfp1”, in its implementation in fixed-point arithmetic are analyzed. As multiplication on the computer does not follow the associative property, we analyze the number of forms in which the multiplications can be performed in this chaotic map. As chaos enhances the small perturbations produced in the multiplications, it is possible to build different pseudorandom number generators using the same chaotic map.
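The key observation in the abstract can be illustrated with a short sketch (not the authors' implementation): in fixed-point arithmetic each product is truncated back to the working precision, so multiplication is not associative, and different groupings of the same product give slightly different results. A chaotic map then amplifies these grouping-dependent perturbations into distinct pseudorandom sequences. The Q-format and the sample values below are illustrative assumptions.

```python
FRAC = 8  # number of fractional bits (Q-format scale of 2**FRAC)

def fx_mul(x, y):
    """Fixed-point multiply: full integer product, truncated to FRAC fractional bits."""
    return (x * y) >> FRAC

# Raw fixed-point values (real value = raw / 2**FRAC)
a, b, c = 100, 201, 301

left = fx_mul(fx_mul(a, b), c)   # grouping (a * b) * c
right = fx_mul(a, fx_mul(b, c))  # grouping a * (b * c)

print(left, right)  # 91 92 -- the two groupings disagree after truncation
```

Iterating a chaotic map with each distinct multiplication order therefore yields a family of diverging sequences from the same map and seed.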
Full article
(This article belongs to the Proceedings of The 5th Mexican Workshop on Fractional Calculus)
Open Access Proceeding Paper
Further Remarks on Irrational Systems and Their Applications
Comput. Sci. Math. Forum 2022, 4(1), 5; https://doi.org/10.3390/cmsf2022004005 - 22 Dec 2022
Abstract
Irrational Systems (ISs) are transfer functions that include terms with irrational exponents. Since such systems are ubiquitous and arise when solving partial differential equations, fractional-order differential equations, or non-linear differential equations, their nature seems to be strongly linked with a low-order description of distributed parameter systems. This makes ISs an appealing option for model-reduction applications and control. In this work, we review some of the fundamental concepts behind a set of ISs that are of core importance in their stability analysis and control design. Specifically, we introduce the notion of multivalued functions, branch points, time response, and stability regions, as well as some practical applications where these systems can be encountered. The theory is accompanied by numerical examples and simulations.
Full article
(This article belongs to the Proceedings of The 5th Mexican Workshop on Fractional Calculus)
Open Access Proceeding Paper
Abelian Groups of Fractional Operators
Comput. Sci. Math. Forum 2022, 4(1), 4; https://doi.org/10.3390/cmsf2022004004 - 19 Dec 2022
Abstract
Taking into account the large number of fractional operators that have been generated over the years, and considering that their number is unlikely to stop increasing at the time of writing this paper due to the recent boom of fractional calculus, everything seems to indicate that an alternative way to fully characterize some elements of fractional calculus is through the use of sets. Therefore, this paper presents a recapitulation of some fractional derivatives, fractional integrals, and local fractional operators that may be found in the literature, as well as a summary of how to define sets of fractional operators that fully characterize some elements of fractional calculus, such as the Taylor series expansion of a scalar function in multi-index notation. In addition, a way is presented to define finite and infinite Abelian groups of fractional operators through a family of sets of fractional operators and two different internal operations. Finally, using the above results, one way to define commutative and unitary rings of fractional operators is shown.
Full article
(This article belongs to the Proceedings of The 5th Mexican Workshop on Fractional Calculus)
Open Access Proceeding Paper
Patterns in a Time-Fractional Predator–Prey System with Finite Interaction Range
Comput. Sci. Math. Forum 2022, 4(1), 3; https://doi.org/10.3390/cmsf2022004003 - 7 Dec 2022
Abstract
Diffusive predator–prey systems are well known to exhibit spatial patterns obtained by using the Turing instability mechanism. Reaction–diffusion systems were already studied by replacing the time derivative with a fractional-order derivative, finding the conditions under which spatial patterns could be formed in such systems. The recent interest in fractional operators is due to the fact that many biological, chemical, physical, engineering, and financial systems can be well described using these tools. This contribution presents a diffusive predator–prey model with a finite interaction scale between species and introduces temporal fractional derivatives associated with species behaviors. We show that the spatial scale of the species interaction affects the range of unstable modes in which patterns can appear. Additionally, the temporal fractional derivatives further modify the emergence of spatial patterns.
Full article
(This article belongs to the Proceedings of The 5th Mexican Workshop on Fractional Calculus)
Open Access Proceeding Paper
Dynamic Analysis for the Physically Correct Model of a Fractional-Order Buck-Boost Converter
Comput. Sci. Math. Forum 2022, 4(1), 2; https://doi.org/10.3390/cmsf2022004002 - 22 Nov 2022
Abstract
This work proposes a fractional-order mathematical model of a Buck-Boost converter operating in continuous conduction mode. To do so, we employ the average duty-cycle representation in state space, together with a nondimensionalization approach to avoid unit inconsistencies in the model. We also consider a Direct Current (DC) analysis through the fractional Riemann–Liouville (R-L) approach. Moreover, the fractional-order Buck-Boost converter model is implemented in the Matlab/Simulink environment, supported by the Fractional-order Modeling and Control (FOMCON) toolbox. When modifying the fractional model order, we identify significant variations in the dynamic converter response in this simulated scenario. Finally, we detail how to achieve a fast dynamic response without oscillations and an adequate overshoot by appropriately varying the fractional-order coefficient. The numerical results show that, as the fractional order decreases, the model presents minor oscillations, yielding an output voltage response six times faster with a significant overshoot reduction of 67%, on average.
Full article
(This article belongs to the Proceedings of The 5th Mexican Workshop on Fractional Calculus)
Open Access Proceeding Paper
Fractional Approach to the Study of Damped Traveling Disturbances in a Vibrating Medium
Comput. Sci. Math. Forum 2022, 4(1), 1; https://doi.org/10.3390/cmsf2022004001 - 22 Nov 2022
Abstract
The Cauchy problem of a time–space fractional partial differential equation, which has the damped wave equation as a particular case, is solved for the Dirac delta initial condition. The solution is obtained in terms of Fox H-functions and models the travel of a disturbance in a vibrating medium.
Full article
(This article belongs to the Proceedings of The 5th Mexican Workshop on Fractional Calculus)
Open Access Editorial
Statement of Peer Review
Comput. Sci. Math. Forum 2022, 3(1), 12; https://doi.org/10.3390/cmsf2022003012 - 31 May 2022
Abstract
In submitting conference proceedings to Computer Sciences & Mathematics Forum, the volume editors of the proceedings certify to the publisher that all papers published in this volume have been subjected to peer review administered by the volume editors [...]
Full article
(This article belongs to the Proceedings of AAAI Workshop on Artificial Intelligence with Biased or Scarce Data (AIBSD))
Open Access Proceeding Paper
Age Should Not Matter: Towards More Accurate Pedestrian Detection via Self-Training
Comput. Sci. Math. Forum 2022, 3(1), 11; https://doi.org/10.3390/cmsf2022003011 - 24 May 2022
Cited by 1
Abstract
Why is there disparity in the miss rates of pedestrian detection between different age attributes? In this study, we propose to (i) improve the accuracy of pedestrian detection using our pre-trained model; and (ii) explore the causes of this disparity. In order to improve detection accuracy, we extend a pedestrian detection pre-training dataset, the Weakly Supervised Pedestrian Dataset (WSPD), by means of self-training, to construct our Self-Trained Person Dataset (STPD). Moreover, we hypothesize that the miss-rate disparity stems from three biases: (1) the apparent bias towards “adults” versus “children”; (2) the bias in the quantity of training data for “children”; and (3) the scale bias of the bounding box. In addition, we constructed an evaluation dataset by manually annotating “adult” and “child” bounding boxes in the INRIA Person Dataset. As a result, we confirm that the miss rate was reduced by up to 0.4% for adults and up to 3.9% for children. In addition, we discuss the impact of the size and appearance of the bounding boxes on the disparity in miss rates and provide an outlook for future research.
Full article
(This article belongs to the Proceedings of AAAI Workshop on Artificial Intelligence with Biased or Scarce Data (AIBSD))
Open Access Proceeding Paper
Long-Tail Zero and Few-Shot Learning via Contrastive Pretraining on and for Small Data
Comput. Sci. Math. Forum 2022, 3(1), 10; https://doi.org/10.3390/cmsf2022003010 - 20 May 2022
Cited by 3
Abstract
Preserving long-tail, minority information during model compression has been linked to algorithmic fairness considerations. However, this assumes that large models capture long-tail information and smaller ones do not, which raises two questions. One, how well do large pretrained language models encode long-tail information? Two, how can small language models be made to better capture long-tail information, without requiring a compression step? First, we study the performance of pretrained Transformers on a challenging new long-tail, web text classification task. Second, to train small long-tail capture models, we propose a contrastive training objective that unifies self-supervised pretraining and supervised long-tail fine-tuning, which markedly increases tail data-efficiency and tail prediction performance. Third, we analyze the resulting long-tail learning capabilities under zero-shot, few-shot, and full supervision conditions, and study the performance impact of model size and self-supervision signal amount. We find that large pretrained language models do not guarantee long-tail retention and that much smaller, contrastively pretrained models better retain long-tail information while gaining data and compute efficiency. This demonstrates that model compression may not be the go-to method for obtaining good long-tail performance from compact models.
Full article
(This article belongs to the Proceedings of AAAI Workshop on Artificial Intelligence with Biased or Scarce Data (AIBSD))
Open Access Proceeding Paper
Extracting Salient Facts from Company Reviews with Scarce Labels
Comput. Sci. Math. Forum 2022, 3(1), 9; https://doi.org/10.3390/cmsf2022003009 - 29 Apr 2022
Abstract
In this paper, we propose the task of extracting salient facts from online company reviews. Salient facts present unique and distinctive information about a company, which helps the user in deciding whether to apply to the company. We formulate the salient fact extraction task as a text classification problem and leverage pretrained language models to tackle the problem. However, the scarcity of salient facts in company reviews causes a serious label imbalance issue, which hinders taking full advantage of pretrained language models. To address the issue, we developed two data enrichment methods: first, representation enrichment, which highlights uncommon tokens by appending special tokens, and second, label propagation, which iteratively creates pseudo-positive examples from unlabeled data. Experimental results on an online company review corpus show that our approach improves the performance of pretrained language models by up to 0.24 in F1 score. We also confirm that our approach performs competitively against the state-of-the-art data augmentation method on the SemEval 2019 benchmark, even when trained with only 20% of the training data.
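The "representation enrichment" idea described in the abstract can be sketched as a preprocessing step: tokens that are uncommon in the corpus get a special marker appended, so a classifier can attend to rare, potentially salient words. The marker name, the threshold, and the toy corpus below are illustrative assumptions, not the authors' exact implementation.

```python
from collections import Counter

RARE_MARKER = "[RARE]"  # hypothetical special token

def enrich(review_tokens, corpus_counts, threshold=2):
    """Append a marker after each token rarer than `threshold` in the corpus."""
    out = []
    for tok in review_tokens:
        out.append(tok)
        if corpus_counts[tok] < threshold:
            out.append(RARE_MARKER)
    return out

# Toy corpus: "good" and "team" are common, the rest are rare
corpus = ["good", "pay", "good", "team", "onsite", "daycare", "good", "team"]
counts = Counter(corpus)

print(enrich(["good", "onsite", "daycare"], counts))
# ['good', 'onsite', '[RARE]', 'daycare', '[RARE]']
```

The enriched token sequence would then be fed to the pretrained language model in place of the raw review text.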
Full article
(This article belongs to the Proceedings of AAAI Workshop on Artificial Intelligence with Biased or Scarce Data (AIBSD))
Open Access Proceeding Paper
Dual Complementary Prototype Learning for Few-Shot Segmentation
Comput. Sci. Math. Forum 2022, 3(1), 8; https://doi.org/10.3390/cmsf2022003008 - 29 Apr 2022
Cited by 1
Abstract
Few-shot semantic segmentation aims to transfer knowledge from base classes with sufficient data to represent novel classes with limited few-shot samples. Recent methods follow a metric learning framework with prototypes for foreground representation. However, they still face the challenge of segmentation of novel classes due to inadequate representation of the foreground and a lack of discriminability between foreground and background. To address this problem, we propose the Dual Complementary prototype Network (DCNet). Firstly, we design a training-free Complementary Prototype Generation (CPG) module to extract comprehensive information from the mask region in the support image. Secondly, we design a Background Guided Learning (BGL) branch as a complement to the foreground segmentation branch, which enlarges the difference between the foreground and its corresponding background so that the representation of a novel class in the foreground can be more discriminative. Extensive experiments on PASCAL-5i and COCO-20i demonstrate that our DCNet achieves state-of-the-art results.
Full article
(This article belongs to the Proceedings of AAAI Workshop on Artificial Intelligence with Biased or Scarce Data (AIBSD))
Open Access Proceeding Paper
Super-Resolution for Brain MR Images from a Significantly Small Amount of Training Data
Comput. Sci. Math. Forum 2022, 3(1), 7; https://doi.org/10.3390/cmsf2022003007 - 27 Apr 2022
Abstract
We propose two essential techniques to effectively train generative adversarial network-based super-resolution networks for brain magnetic resonance images, even when only a small number of training samples are available. First, stochastic patch sampling is proposed, which increases training samples by sampling many small patches from the input image. However, sampling patches and combining them causes unpleasant artifacts around patch boundaries. The second proposed method, an artifact-suppressing discriminator, suppresses the artifacts by taking a two-channel input containing an original high-resolution image and a generated image. With the introduction of the proposed techniques, the network achieved generation of natural-looking MR images from only ~40 training images and improved the area-under-the-curve score for Alzheimer’s disease classification from 76.17% to 81.57%.
Full article
(This article belongs to the Proceedings of AAAI Workshop on Artificial Intelligence with Biased or Scarce Data (AIBSD))
Open Access Proceeding Paper
Quantifying Bias in a Face Verification System
Comput. Sci. Math. Forum 2022, 3(1), 6; https://doi.org/10.3390/cmsf2022003006 - 20 Apr 2022
Abstract
Machine learning models perform face verification (FV) for a variety of highly consequential applications, such as biometric authentication, face identification, and surveillance. Many state-of-the-art FV systems suffer from unequal performance across demographic groups, which is commonly overlooked by evaluation measures that do not assess population-specific performance. Deployed systems with bias may result in serious harm against individuals or groups who experience underperformance. We explore several fairness definitions and metrics, attempting to quantify bias in Google’s FaceNet model. In addition to statistical fairness metrics, we analyze clustered face embeddings produced by the FV model. We link well-clustered embeddings (well-defined, dense clusters) for a demographic group to biased model performance against that group. We present the intuition that FV systems underperform on protected demographic groups because they are less sensitive to differences between features within those groups, as evidenced by clustered embeddings. We show how this performance discrepancy results from a combination of representation and aggregation bias.
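The abstract's point that aggregate evaluation hides group-level disparity can be made concrete with a toy sketch (hypothetical numbers, not the paper's data): two demographic groups with the same number of genuine verification trials can have very different false non-match rates even when overall accuracy looks acceptable.

```python
# Each trial: (group, is_genuine_pair, model_says_match)
trials = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", True, False),
    ("B", True, True), ("B", True, False), ("B", True, False), ("B", True, False),
]

def fnmr(group):
    """False non-match rate: fraction of genuine pairs the model rejects."""
    genuine = [t for t in trials if t[0] == group and t[1]]
    return sum(1 for t in genuine if not t[2]) / len(genuine)

overall = sum(1 for t in trials if not t[2]) / len(trials)

print(fnmr("A"), fnmr("B"), overall)
# Group A misses 25% of genuine pairs, group B misses 75%,
# yet the pooled error rate (50%) reports neither.
```

Population-specific metrics like these are what the abstract argues evaluation measures commonly overlook.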
Full article
(This article belongs to the Proceedings of AAAI Workshop on Artificial Intelligence with Biased or Scarce Data (AIBSD))
Open Access Proceeding Paper
DAP-SDD: Distribution-Aware Pseudo Labeling for Small Defect Detection
Comput. Sci. Math. Forum 2022, 3(1), 5; https://doi.org/10.3390/cmsf2022003005 - 20 Apr 2022
Abstract
Detecting defects, especially when they are small in the early manufacturing stages, is critical to achieving a high yield in industrial applications. While numerous modern deep learning models can improve detection performance, they become less effective in detecting small defects in practical applications due to the scarcity of labeled data and significant class imbalance in multiple dimensions. In this work, we propose a distribution-aware pseudo labeling method (DAP-SDD) to detect small defects accurately while using limited labeled data effectively. Specifically, we apply bootstrapping on limited labeled data and then utilize the approximated label distribution to guide pseudo label propagation. Moreover, we propose to use the t-distribution confidence interval for threshold setting to generate more pseudo labels with high confidence. DAP-SDD also incorporates data augmentation to enhance the model’s performance and robustness. We conduct extensive experiments on various datasets to validate the proposed method. Our evaluation results show that, overall, our proposed method requires less than 10% of labeled data to achieve results comparable to using a fully labeled (100%) dataset, and it outperforms the state-of-the-art methods. For a dataset of wafer images, our proposed model can achieve an AP (average precision) above 0.93 with only four labeled images (i.e., 2% of the labeled data).
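The t-distribution thresholding idea mentioned in the abstract can be sketched as follows: with very few labeled examples (here four, matching the wafer setting), a Student's-t confidence interval around the mean model confidence gives a statistically grounded pseudo-labeling threshold. The confidence scores and the exact use of the lower bound are illustrative assumptions, not the paper's implementation.

```python
import math

# Hypothetical model confidences on the 4 labeled defect examples
scores = [0.91, 0.88, 0.94, 0.90]

n = len(scores)
mean = sum(scores) / n
sd = math.sqrt(sum((s - mean) ** 2 for s in scores) / (n - 1))  # sample std dev

t_crit = 3.182  # two-sided 95% critical value of Student's t with n-1 = 3 dof
half_width = t_crit * sd / math.sqrt(n)

# Predictions scoring above the interval's lower bound could be pseudo-labeled
lower = mean - half_width
print(round(lower, 3))
```

With so few samples, the t critical value (3.182) is much larger than the normal one (1.96), so the interval is appropriately wide and the threshold conservative.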
Full article
(This article belongs to the Proceedings of AAAI Workshop on Artificial Intelligence with Biased or Scarce Data (AIBSD))
Open Access Proceeding Paper
The Details Matter: Preventing Class Collapse in Supervised Contrastive Learning
Comput. Sci. Math. Forum 2022, 3(1), 4; https://doi.org/10.3390/cmsf2022003004 - 15 Apr 2022
Abstract
Supervised contrastive learning optimizes a loss that pushes together embeddings of points from the same class while pulling apart embeddings of points from different classes. Class collapse—when every point from the same class has the same embedding—minimizes this loss but loses critical information that is not encoded in the class labels. For instance, the “cat” label does not capture unlabeled categories such as breeds, poses, or backgrounds (which we call “strata”). As a result, class collapse produces embeddings that are less useful for downstream applications such as transfer learning and achieves suboptimal generalization error when there are strata. We explore a simple modification to supervised contrastive loss that aims to prevent class collapse by uniformly pulling apart individual points from the same class. We seek to understand the effects of this loss by examining how it embeds strata of different sizes, finding that it clusters larger strata more tightly than smaller strata. As a result, our loss function produces embeddings that better distinguish strata in embedding space, which produces lift on three downstream applications: 4.4 points on coarse-to-fine transfer learning, 2.5 points on worst-group robustness, and 1.0 points on minimal coreset construction. Our loss also produces more accurate models, with up to 4.0 points of lift across 9 tasks.
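A toy sketch (not the authors' code) contrasts the two forces the abstract describes: a supervised contrastive term that pulls same-class points together, plus a same-class "spread" term that penalizes identical embeddings, so minimizing the combined loss no longer favors class collapse. The embeddings, weighting, and exact form of the spread term are illustrative assumptions.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def supcon_term(emb, labels, i):
    """-log softmax over same-class positives: the standard supervised pull."""
    pos = [j for j in range(len(emb)) if j != i and labels[j] == labels[i]]
    others = [j for j in range(len(emb)) if j != i]
    denom = sum(math.exp(dot(emb[i], emb[j])) for j in others)
    return -sum(math.log(math.exp(dot(emb[i], emb[j])) / denom) for j in pos) / len(pos)

def spread_term(emb, labels, i):
    """Penalize same-class points for being *too* similar (anti-collapse)."""
    pos = [j for j in range(len(emb)) if j != i and labels[j] == labels[i]]
    return sum(dot(emb[i], emb[j]) for j in pos) / len(pos)

emb = [(1.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # first two points fully collapsed
labels = [0, 0, 1]

# Collapse minimizes the contrastive pull but maximizes the spread penalty,
# so the combined objective prefers same-class points that remain distinct.
loss = supcon_term(emb, labels, 0) + 0.5 * spread_term(emb, labels, 0)
print(round(loss, 3))
```

Under such an objective, points within a class stay spread out, which is what lets the embedding space preserve unlabeled "strata" like breeds or poses.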
Full article
(This article belongs to the Proceedings of AAAI Workshop on Artificial Intelligence with Biased or Scarce Data (AIBSD))
Open Access Proceeding Paper
Measuring Gender Bias in Contextualized Embeddings
Comput. Sci. Math. Forum 2022, 3(1), 3; https://doi.org/10.3390/cmsf2022003003 - 11 Apr 2022
Abstract
Transformer models are now increasingly being used in real-world applications. Indiscriminately using these models as automated tools may propagate biases in ways we do not realize. To responsibly direct actions that will combat this problem, it is of crucial importance that we detect and quantify these biases. Robust methods have been developed to measure bias in non-contextualized embeddings. Nevertheless, these methods fail to apply to contextualized embeddings due to their mutable nature. Our study focuses on the detection and measurement of stereotypical biases associated with gender in the embeddings of T5 and mT5. We quantify bias by measuring the gender polarity of T5’s word embeddings for various professions. To measure gender polarity, we use a stable gender direction that we detect in the model’s embedding space. We also measure gender bias with respect to a specific downstream task and compare Swedish with English, as well as various sizes of the T5 model and its multilingual variant. The insights from our exploration indicate that the use of a stable gender direction, even in a Transformer’s mutable embedding space, can be a robust method to measure bias. We show that higher-status professions are associated more with the male gender than the female gender. In addition, our method suggests that the Swedish language carries less gender bias than English, and that gender bias manifests more strongly in larger language models.
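The gender-polarity measurement described in the abstract can be sketched in a few lines: project word embeddings for professions onto a gender direction and read off the sign and magnitude of the cosine. The toy 3-d vectors below are fabricated for illustration; the paper works with T5/mT5 embeddings and a direction detected in that space.

```python
def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return sum(a * a for a in u) ** 0.5

emb = {  # hypothetical toy embeddings
    "he":     [0.9, 0.1, 0.3],
    "she":    [0.1, 0.9, 0.3],
    "doctor": [0.7, 0.3, 0.5],
    "nurse":  [0.2, 0.8, 0.4],
}

# Gender direction: male-pole minus female-pole
g = sub(emb["he"], emb["she"])

def polarity(word):
    """Cosine of the word vector with the gender direction (positive = male-leaning)."""
    v = emb[word]
    return dot(v, g) / (norm(v) * norm(g))

print(polarity("doctor") > 0, polarity("nurse") > 0)  # male- vs female-leaning
```

In practice the direction is estimated from many gendered word pairs rather than a single pair, which is what makes it "stable" across contexts.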
Full article
(This article belongs to the Proceedings of AAAI Workshop on Artificial Intelligence with Biased or Scarce Data (AIBSD))
Open Access Proceeding Paper
Measuring Embedded Human-Like Biases in Face Recognition Models
Comput. Sci. Math. Forum 2022, 3(1), 2; https://doi.org/10.3390/cmsf2022003002 - 11 Apr 2022
Cited by 2
Abstract
Recent works in machine learning have focused on understanding and mitigating bias in data and algorithms. Because pre-trained models are trained on large amounts of real-world data, they are known to learn the implicit biases that humans have unconsciously constructed over a long time. However, there has been little discussion of social biases in pre-trained face recognition models. Thus, this study investigates the robustness of such models against racial, gender, age, and intersectional biases. We also examine racial bias for an ethnicity other than White and Black: Asian. In detail, we introduce the Face Embedding Association Test (FEAT) to measure the social biases in image vectors of faces of different races, genders, and ages. It measures social bias in face recognition models under the hypothesis that a specific group is more likely to be associated with a particular attribute in a biased manner. The presence of these biases within DeepFace, DeepID, VGGFace, FaceNet, OpenFace, and ArcFace critically undermines fairness in our society.
Full article
(This article belongs to the Proceedings of AAAI Workshop on Artificial Intelligence with Biased or Scarce Data (AIBSD))