Matrix Factorization for Signal Processing and Machine Learning

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Computational and Applied Mathematics".

Deadline for manuscript submissions: 31 December 2024

Special Issue Editors

Guest Editor
Department of Electrical and Computer Engineering, Concordia University, Montreal, QC H3G 1M8, Canada
Interests: signal processing; machine learning; neural networks; wireless communications

Guest Editor
Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong SAR, China
Interests: communication theory; signal detection; sequence design; lattices; coding theory

Guest Editor
Department of Electrical and Computer Engineering, Concordia University, Montreal, QC H3G 1M8, Canada
Interests: signal processing; image processing; video signal processing; circuit theory; active filters

Special Issue Information

Dear Colleagues,

Compressed sensing is a sampling paradigm proposed by Candès and Donoho in 2006. It is an alternative to Shannon/Nyquist sampling for the acquisition of sparse or compressible signals. This paradigm immediately sparked a wide range of research activity in the signal processing and machine learning communities, and many new research topics have emerged. These topics are generally addressed by solving a matrix factorization problem subject to some form of norm constraint, such as an ℓ0- or ℓ1-norm constraint.
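
As a rough illustration of the constrained formulation above, the following minimal Python sketch recovers a sparse signal from compressed measurements via the ℓ1-regularized least-squares (LASSO) problem, solved with iterative soft thresholding; all names and parameter values are illustrative and not tied to any particular paper.

    import numpy as np

    def ista(A, b, lam=0.01, n_iters=500):
        """Iterative soft thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iters):
            z = x - A.T @ (A @ x - b) / L              # gradient step on the smooth term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x

    # Recover a 10-sparse, length-256 signal from 80 random measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 256)) / np.sqrt(80)
    x_true = np.zeros(256)
    x_true[rng.choice(256, 10, replace=False)] = rng.standard_normal(10)
    x_hat = ista(A, A @ x_true)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))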

This Special Issue solicits recent theoretical and applied studies of matrix factorization, with emphasis on signal processing, image processing, machine learning, data mining, and knowledge discovery. Topics include, but are not limited to, the following:

  1. Singular value decomposition;
  2. Factor analysis;
  3. Principal component analysis;
  4. Independent component analysis;
  5. Blind source separation;
  6. Clustering based on matrix operation;
  7. Compressed sensing;
  8. Sparse recovery;
  9. Sparse coding and dictionary learning;
  10. Matrix completion;
  11. Matrix decomposition;
  12. Low-rank representation;
  13. Matrix approximation;
  14. Nonnegative matrix factorization (see the sketch following this list);
  15. Concept factorization;
  16. CX decomposition;
  17. CUR decomposition;
  18. Latent semantic indexing;
  19. Theoretical analysis of related methods;
  20. Application of related methods.
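
To make item 14 concrete, here is a minimal, illustrative Python sketch of nonnegative matrix factorization using the classical Lee–Seung multiplicative updates for the Frobenius-norm objective; the rank, iteration count, and smoothing constant are arbitrary choices for demonstration only.

    import numpy as np

    def nmf(V, r, n_iters=200, eps=1e-9):
        """Lee-Seung multiplicative updates for V ~= W @ H with W, H >= 0."""
        rng = np.random.default_rng(0)
        W = rng.random((V.shape[0], r))
        H = rng.random((r, V.shape[1]))
        for _ in range(n_iters):
            H *= (W.T @ V) / (W.T @ W @ H + eps)       # update H; stays nonnegative
            W *= (V @ H.T) / (W @ H @ H.T + eps)       # update W; stays nonnegative
        return W, H

    V = np.abs(np.random.default_rng(1).standard_normal((40, 30)))
    W, H = nmf(V, r=5)
    print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))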

Dr. Ke-Lin Du
Prof. Dr. Wai Ho Mow
Prof. Dr. M. N. S. Swamy
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • matrix factorization
  • matrix decomposition
  • matrix approximation
  • sparse approximation
  • compressed sensing
  • dictionary learning
  • Nyström method
  • clustering

Published Papers (2 papers)


Research

29 pages, 444 KiB  
Article
Generalized Matrix Spectral Factorization with Symmetry and Construction of Quasi-Tight Framelets over Algebraic Number Fields
by Ran Lu
Mathematics 2024, 12(6), 919; https://doi.org/10.3390/math12060919 - 20 Mar 2024
Abstract
The rational field Q is highly desirable in many applications. Algorithms over the rational number field Q or over algebraic number fields use only integer arithmetic and are easy to implement. Therefore, studying and designing systems and expansions with coefficients in Q or in algebraic number fields is particularly interesting. This paper discusses the construction of quasi-tight framelets with symmetry over an algebraic number field. Compared to tight framelets, quasi-tight framelets have very similar structures but much more flexibility in construction. Several recent papers have explored the structure of quasi-tight framelets. The construction of symmetric quasi-tight framelets directly applies the generalized spectral factorization of 2×2 matrices of Laurent polynomials with specific symmetry structures. We adequately formulate the latter problem and establish the necessary and sufficient conditions for such a factorization over a general subfield F of C, including algebraic number fields as particular cases. Our proofs of the main results are constructive and thus serve as a guideline for construction. We provide several examples to demonstrate our main results.
(This article belongs to the Special Issue Matrix Factorization for Signal Processing and Machine Learning)
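
For readers unfamiliar with the problem, the factorization in question can be stated schematically in LaTeX as follows; the notation here is a hedged paraphrase of the abstract, not the paper's precise formulation, which imposes additional symmetry conditions on the factors.

    \[
        \mathsf{A}(z) \;=\; \mathsf{U}(z)
        \begin{bmatrix} \epsilon_1 & 0 \\ 0 & \epsilon_2 \end{bmatrix}
        \mathsf{U}^{\star}(z),
        \qquad \epsilon_1, \epsilon_2 \in \{\pm 1\},
    \]
    % A(z): a given 2x2 Hermitian matrix of Laurent polynomials with
    %       coefficients in a subfield F of C (e.g., an algebraic number field);
    % U(z): the sought factor, also with entries in F;
    % U*(z): the Hermitian conjugate \overline{\mathsf{U}(1/\bar{z})}^{\mathsf{T}}.
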
16 pages, 980 KiB  
Article
A Maximally Split and Adaptive Relaxed Alternating Direction Method of Multipliers for Regularized Extreme Learning Machines
by Zhangquan Wang, Shanshan Huo, Xinlong Xiong, Ke Wang and Banteng Liu
Mathematics 2023, 11(14), 3198; https://doi.org/10.3390/math11143198 - 21 Jul 2023
Cited by 2
Abstract
One of the significant features of extreme learning machines (ELMs) is their fast convergence. However, in big-data environments, ELMs based on the Moore–Penrose matrix inverse still suffer from excessive computational load. Leveraging the decomposability of the alternating direction method of multipliers (ADMM), a convex model-fitting problem can be split into a set of sub-problems that can be solved in parallel. Using a maximal-splitting technique and a relaxation technique, these sub-problems can be reduced to multiple univariate sub-problems. On this basis, we propose an adaptive parameter-selection method that automatically tunes the key algorithm parameters during training. Experiments on eight classification datasets verify the effectiveness of the algorithm in terms of the number of iterations, computation time, and acceleration ratio. The results show that the proposed method greatly improves the speed of data processing while increasing parallelism.
(This article belongs to the Special Issue Matrix Factorization for Signal Processing and Machine Learning)
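
As background for the ADMM splitting described in the abstract, the Python sketch below applies plain (unsplit, fixed-parameter) ADMM to the ridge-regularized least-squares problem that underlies ELM output-weight training; it is a simplified stand-in, not the authors' maximally split, adaptively relaxed algorithm, and all names and values are illustrative.

    import numpy as np

    def admm_ridge(H, t, lam=1.0, rho=1.0, n_iters=100):
        """ADMM for min 0.5*||H b - t||^2 + 0.5*lam*||z||^2  s.t.  b = z."""
        n = H.shape[1]
        L = np.linalg.cholesky(H.T @ H + rho * np.eye(n))  # factor once, reuse each iteration
        Ht_t = H.T @ t
        z = np.zeros(n)
        u = np.zeros(n)                                    # scaled dual variable
        for _ in range(n_iters):
            rhs = Ht_t + rho * (z - u)
            b = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # b-update (linear solve)
            z = rho * (b + u) / (lam + rho)                    # z-update (ridge prox)
            u += b - z                                         # dual update
        return b

    rng = np.random.default_rng(0)
    X, t = rng.standard_normal((200, 10)), rng.standard_normal(200)
    Hid = np.tanh(X @ rng.standard_normal((10, 50)))       # random ELM hidden layer
    beta = admm_ridge(Hid, t, lam=0.5)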
