Trends and Prospects in Computer Vision and Pattern Recognition Technology

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 July 2024

Special Issue Editors


Prof. Dr. Vladimir A. Golovko
Guest Editor
1. Department of Informatics, John Paul II University, 21-500 Biala Podlaska, Poland
2. Intelligent Information Technologies Department, Brest State Technical University, 224017 Brest, Belarus
Interests: artificial intelligence; neural networks; deep learning; artificial immune systems; image processing and recognition; intelligent signal processing

Prof. Dr. Kurosh Madani
Guest Editor
UPEC - Laboratoire Images, Signaux et Systèmes Intelligents (LISSI) - EA 3956, University of Paris-Est Créteil Val de Marne, 94000 Créteil, France
Interests: data-driven analytics; Industry 4.0; deep learning; AI

Prof. Dr. Anatoliy A. Sachenko
Guest Editor
Department of Informatics, Kazimierz Pulaski University of Technology and Humanities in Radom, 26600 Radom, Poland
Interests: mobile computing; human-machine interaction

Special Issue Information

Dear Colleagues,

Computer vision and pattern recognition technology are of great importance for the evolution of artificial intelligence, as well as for the development of applications in a wide range of areas, including medical diagnosis, robotics, autonomous driving, 3D reconstruction, sentiment and emotion analysis, decision-making systems, and many other domains.

This Special Issue aims to present and discuss recent advancements, trends, and applications in the broad field of computer vision and pattern recognition, and to review current perspectives on the field. It will cover both fundamental and applied aspects of this topic.

We invite researchers from a variety of fields to contribute to this Special Issue, with the aim of inspiring new approaches to, and applications of, computer vision and pattern recognition.

In this Special Issue, we will publish high-quality papers in the overlapping fields of:

  • Medical image processing;
  • Machine learning theory;
  • 3D reconstruction;
  • Object detection and image segmentation;
  • Video processing;
  • Robotics;
  • Decision-making systems;
  • Semantic analysis.

Prof. Dr. Vladimir A. Golovko
Prof. Dr. Kurosh Madani
Prof. Dr. Anatoliy A. Sachenko
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision
  • pattern recognition
  • image processing
  • machine learning
  • neural networks
  • artificial intelligence
  • neural-symbolic analysis

Published Papers (1 paper)


Research

14 pages, 2910 KiB  
Article
Multi-Scale Indoor Scene Geometry Modeling Algorithm Based on Segmentation Results
by Changfa Wang, Tuo Yao and Qinghua Yang
Appl. Sci. 2023, 13(21), 11779; https://doi.org/10.3390/app132111779 - 27 Oct 2023
Abstract
Due to the numerous objects with regular structures in indoor environments, identifying and modeling the regular objects in a scene helps indoor robots sense unknown environments. Typically, point cloud preprocessing can produce highly complete object segmentation results, which can then serve as the objects for geometric analysis and modeling, ensuring modeling accuracy and speed. However, in the absence of complete object models, segmented objects cannot be recognized and modeled through matching methods. To achieve a greater understanding of scene point clouds, this paper proposes a direct geometric modeling algorithm based on segmentation results, which focuses on extracting regular geometries in the scene rather than objects with geometric details or combinations of multiple primitives, and suggests using simpler geometric models to describe the corresponding point cloud data. By fully utilizing the surface structure information of segmented objects, the paper analyzes the types of faces and their relationships to classify regular geometric objects into two categories: planar and curved. Different types of geometric objects are fitted using random sample consensus (RANSAC) algorithms, with the type classification results serving as prior knowledge, and the segmented results are modeled by combining the size information of oriented bounding boxes. For indoor scenes with occlusion and stacking, this higher-level semantic expression effectively simplifies the scene, completes scene abstraction and structural modeling, and aids indoor robots' understanding and further operation in unknown environments.
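The abstract describes a pipeline of RANSAC-based primitive fitting guided by a planar/curved classification, with oriented bounding boxes supplying size information. The sketch below is not the authors' implementation; it only illustrates that kind of step using Open3D on a single pre-segmented object cloud. The thresholds, the helper name classify_and_model, and the input file name are assumptions for illustration.

```python
# Minimal sketch (assumed, not the paper's code): decide whether a pre-segmented
# object is dominated by planar structure via a RANSAC plane fit, then summarize
# it with an oriented bounding box that provides position, orientation and size.
import numpy as np
import open3d as o3d


def classify_and_model(obj_cloud: o3d.geometry.PointCloud,
                       dist_thresh: float = 0.01,
                       planar_ratio: float = 0.6):
    """Classify a segmented object as planar or curved and return a simple model."""
    # RANSAC plane fit over the object's points.
    plane, inliers = obj_cloud.segment_plane(distance_threshold=dist_thresh,
                                             ransac_n=3,
                                             num_iterations=1000)
    ratio = len(inliers) / len(obj_cloud.points)

    if ratio >= planar_ratio:
        # Mostly planar: describe the object by the oriented bounding box
        # of the plane inliers.
        obb = obj_cloud.select_by_index(inliers).get_oriented_bounding_box()
        return {"type": "planar", "plane": plane,
                "size": np.asarray(obb.extent), "obb": obb}

    # Otherwise treat the object as curved; a sphere/cylinder RANSAC fit
    # would go here, with the bounding box still supplying size information.
    obb = obj_cloud.get_oriented_bounding_box()
    return {"type": "curved", "size": np.asarray(obb.extent), "obb": obb}


if __name__ == "__main__":
    # Hypothetical input file for illustration only.
    cloud = o3d.io.read_point_cloud("segmented_object.ply")
    print(classify_and_model(cloud))
```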