Artificial Intelligence in Digital Humanities

A special issue of Big Data and Cognitive Computing (ISSN 2504-2289).

Deadline for manuscript submissions: 30 June 2024

Special Issue Editor


Dr. George Pavlidis
Guest Editor
Athena Research Center, University Campus at Kimmeria, GR-67100 Xanthi, Greece
Interests: digital image and multimedia technologies; content analysis and retrieval applications; machine learning and artificial intelligence; human–machine interaction; intelligent interactive environments; multi-sensory environments; ubiquitous and ambient intelligence; 3D digitization; extended reality

Special Issue Information

Dear Colleagues,

It is our pleasure to announce a new Special Issue, “Artificial Intelligence in Digital Humanities”, of the journal Big Data and Cognitive Computing.

Recent advances in specialized equipment and computational methods have had a significant impact on the digital humanities, particularly on cultural heritage and archaeology research. Nowadays, digital technology applications contribute on a daily basis to recording, preservation, research and dissemination in the digital humanities. Digitization, of both tangible and intangible heritage, is the defining practice that bridges science and technology with the humanities. The resulting digital replicas enable a wide range of studies, opening new horizons in humanities research. Advances in artificial intelligence and its successful application in core technical domains bring new possibilities to support humanities research in particularly demanding and challenging tasks.

AI applications in humanities research have a significant impact on multi-modal and multi-dimensional information sharing and the representation of knowledge, enabling reflection on historical trends, culture and identity. AI has already been used in a diverse set of applications, ranging from effective asset organization and knowledge representation to virtual and cyber archaeology, advanced and extended visualization, asset and context interpretation, intelligent tools, personalized access, gamification and public dissemination.

This Special Issue focuses on the near future of artificial intelligence applications in digital humanities, covering recent developments ranging from deep and reinforcement learning approaches to recommendation technologies in the extended reality domain.

AI is currently reshaping humanities research, and the following research areas are broadly indicative of this evolution: digitization and preventive preservation with AI; interpretation and restoration with AI; predictive modelling with AI; heritage analytics with AI; dissemination with AI; personalization and inclusive design with AI.

In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  • Advanced, multi-scale, multi-modal and automated digitization;
  • Big data approaches in digital humanities;
  • From digital to cyber humanities;
  • Preventive preservation;
  • Climate change and heritage protection;
  • Decoding of ancient epigraph marks;
  • Deciphering of ancient languages, texts, epigraphs;
  • Automatic restoration of lost texts and images;
  • Predictive modeling in humanities research;
  • Digital resources with open linked data and semantic web capacity;
  • Advanced analysis and annotation of artifacts;
  • AI approaches in heritage science and physicochemical analysis;
  • Authentication, traceability and prevention of illicit trafficking;
  • Citizen science, and citizen involvement;
  • Extended (virtual, augmented, etc.) reality applications;
  • Advanced personalization and recommender technologies;
  • Inclusive design.

Dr. George Pavlidis
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Big Data and Cognitive Computing is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • digital humanities
  • computational archaeology
  • computational approaches
  • heritage science

Published Papers (3 papers)


Research

31 pages, 16445 KiB  
Article
Ensemble-Based Short Text Similarity: An Easy Approach for Multilingual Datasets Using Transformers and WordNet in Real-World Scenarios
by Isabella Gagliardi and Maria Teresa Artese
Big Data Cogn. Comput. 2023, 7(4), 158; https://doi.org/10.3390/bdcc7040158 - 25 Sep 2023
Viewed by 1516
Abstract
When integrating data from different sources, there are problems of synonymy, different languages, and concepts of different granularity. This paper proposes a simple yet effective approach to evaluate the semantic similarity of short texts, especially keywords. The method is capable of matching keywords from different sources and languages by exploiting transformers and WordNet-based methods. Key features of the approach include its unsupervised pipeline, mitigation of the lack of context in keywords, scalability for large archives, support for multiple languages, and adaptability to real-world scenarios. The work aims to provide a versatile tool for different cultural heritage archives without requiring complex customization. The paper explores different approaches to identifying similarities in 1- or n-gram tags, evaluates and compares different pre-trained language models, and defines integrated methods to overcome their limitations. The proposed pipeline has been validated using the QueryLab portal, a search engine for cultural heritage archives.
(This article belongs to the Special Issue Artificial Intelligence in Digital Humanities)
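
As an illustration of the kind of ensemble the abstract describes, the following is a minimal sketch that combines a multilingual sentence-transformer score with a WordNet path-similarity score; the model name, the equal weighting, and the function names are illustrative assumptions, not the authors' actual configuration.

```python
# Sketch: ensemble keyword similarity from transformer embeddings + WordNet.
# Requires: pip install sentence-transformers nltk
# and nltk.download("wordnet") (plus "omw-1.4" for non-English synsets).
from nltk.corpus import wordnet as wn
from sentence_transformers import SentenceTransformer, util

# Multilingual model chosen for illustration; the paper compares several.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def transformer_similarity(a: str, b: str) -> float:
    emb = model.encode([a, b], convert_to_tensor=True)
    return float(util.cos_sim(emb[0], emb[1]))

def wordnet_similarity(a: str, b: str) -> float:
    # Best path similarity over all synset pairs; 0.0 if nothing matches.
    scores = [s1.path_similarity(s2)
              for s1 in wn.synsets(a) for s2 in wn.synsets(b)]
    return max((s for s in scores if s is not None), default=0.0)

def ensemble_similarity(a: str, b: str, w: float = 0.5) -> float:
    # Equal weighting is an assumption; the paper evaluates several
    # ways of integrating the two signals.
    return w * transformer_similarity(a, b) + (1 - w) * wordnet_similarity(a, b)

print(ensemble_similarity("pottery", "ceramics"))
```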

15 pages, 2331 KiB  
Article
Crafting a Museum Guide Using ChatGPT4
by Georgios Trichopoulos, Markos Konstantakis, George Caridakis, Akrivi Katifori and Myrto Koukouli
Big Data Cogn. Comput. 2023, 7(3), 148; https://doi.org/10.3390/bdcc7030148 - 4 Sep 2023
Cited by 4 | Viewed by 2621
Abstract
This paper introduces a groundbreaking approach to enriching the museum experience using ChatGPT4, a state-of-the-art language model by OpenAI. By developing a museum guide powered by ChatGPT4, we aimed to address the challenges visitors face in navigating vast collections of artifacts and interpreting their significance. Leveraging the model’s natural language understanding and generation capabilities, our guide offers personalized, informative, and engaging experiences. However, caution must be exercised, as the generated information may lack scientific integrity and accuracy. To mitigate this, we propose incorporating human oversight and validation mechanisms. The subsequent sections present our own case study, detailing the design, architecture, and experimental evaluation of the museum guide system, highlighting its practical implementation and offering insights into the benefits and limitations of employing ChatGPT4 in the cultural heritage context.
(This article belongs to the Special Issue Artificial Intelligence in Digital Humanities)
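
As a rough sketch of how such a guide could be wired up, the snippet below calls the OpenAI chat API with a museum-guide system prompt and leaves a placeholder for the human-oversight step the abstract recommends; the prompt wording and model identifier are assumptions, not the system described in the paper.

```python
# Sketch: a ChatGPT4-backed museum guide with a human-review placeholder.
# Requires: pip install openai, with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (  # illustrative prompt, not the one used in the paper
    "You are a knowledgeable museum guide. Answer questions about the "
    "exhibits concisely, and state explicitly when you are unsure of a fact."
)

def ask_guide(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    answer = response.choices[0].message.content
    # Human oversight: in deployment, curators would review or flag answers
    # before display, since the model can generate inaccurate content.
    return answer

print(ask_guide("Why is the Antikythera mechanism significant?"))
```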

19 pages, 1214 KiB  
Article
Analyzing Online Fake News Using Latent Semantic Analysis: Case of USA Election Campaign
by Richard G. Mayopu, Yi-Yun Wang and Long-Sheng Chen
Big Data Cogn. Comput. 2023, 7(2), 81; https://doi.org/10.3390/bdcc7020081 - 20 Apr 2023
Cited by 3 | Viewed by 4548
Abstract
Recent studies have indicated that fake news is always produced to manipulate readers, and that it spreads very fast and brings great damage to human society through social media. In the available literature, most studies have focused on fake news detection, identification, and sentiment analysis using machine learning or deep learning techniques. However, relatively few researchers have paid attention to fake news analysis, especially of fake political news. Unlike other published works, which built fake news detection models from computer scientists’ viewpoints, this study aims to develop an effective method that combines natural language processing (NLP) and latent semantic analysis (LSA) using singular value decomposition (SVD) techniques to help social scientists analyze fake news and uncover its characteristic elements. In addition, the authors analyze the characteristics of true news and fake news. A real case from the 2016 USA election campaign is employed to demonstrate the effectiveness of the methods. The experimental results could give useful suggestions to future researchers on distinguishing fake news. The study finds that the five concepts extracted by LSA are representative of political fake news during the election.
(This article belongs to the Special Issue Artificial Intelligence in Digital Humanities)
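
To make the LSA step concrete, here is a minimal sketch of extracting latent concepts from a news corpus with TF-IDF and truncated SVD; the toy documents and parameter choices are illustrative stand-ins for the authors' 2016 election dataset and settings.

```python
# Sketch: latent semantic analysis (LSA) via SVD over a TF-IDF matrix.
# Requires: pip install scikit-learn
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [  # toy corpus standing in for the election news dataset
    "candidate promises tax reform in swing states",
    "leaked emails reveal campaign scandal before the vote",
    "fabricated story claims candidate endorsed by celebrity",
    "fact checkers debunk viral election rumor on social media",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# The paper extracts five concepts; this toy corpus supports fewer.
n_concepts = min(5, len(docs) - 1)
lsa = TruncatedSVD(n_components=n_concepts, random_state=0)
lsa.fit(X)

terms = vectorizer.get_feature_names_out()
for i, component in enumerate(lsa.components_):
    top = component.argsort()[-5:][::-1]  # highest-loading terms per concept
    print(f"concept {i}:", ", ".join(terms[j] for j in top))
```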
