Cite this as
Pavlidis G (2022) AI trends in digital humanities research. Trends Comput Sci Inf Technol 7(2): 026-034. DOI: 10.17352/tcsit.000048

Copyright License

© 2022 Pavlidis G. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Recent advances in specialised equipment and computational methods have had a significant impact on the Humanities and, particularly, on cultural heritage and archaeology research. Nowadays, digital technology applications contribute on a daily basis to the recording, preservation, research and dissemination of cultural heritage. Digitisation is the defining practice that bridges science and technology with the Humanities, in both their tangible and intangible forms. Digital replicas support a wide range of studies and open new horizons in Humanities research. Furthermore, advances in artificial intelligence methods and their successful application in core technical domains have opened up new possibilities to support Humanities research in particularly demanding and challenging tasks. This paper focuses on the forthcoming future of intelligent applications in archaeology and cultural heritage by reviewing recent developments, ranging from deep and reinforcement learning approaches to recommendation technologies in the extended reality domain.
The beginning of the 21st century marks an era of ubiquitous digital technology, big data, and artificial intelligence (AI). Even traditionally technology-hostile domains are now strongly assisted by innovations of advanced digital technology. Archaeology and cultural heritage belong to a particular domain that, although once technology-phobic, has rapidly become digital and assisted by advanced technology. There has been significant financing and support of research and development activities towards this transition in recent decades, a fact made apparent by the large volume of published work and data, and by successful project stories. A digital day-to-day practice has already been established in recording, research, and dissemination in a multi-dimensional manner. Uninterrupted access, deep and augmented study, advanced multi-dimensional visualisation, digital restoration, digital and physical reconstruction, annotation, context creation and semantics, automated information extraction, and geo-localisation are just some encoded key-phrases for the benefits of this practice. The Humanities are steadily moving towards the digital twin paradigm of industry.
Textbook definitions of AI present it as the intelligence exhibited by artificial, human-made systems. As a domain of research, it may be seen as the bridging of engineering, computer science, neuroscience and social science. AI has been presented in the past as the approach to mimic biological intelligence in problem solving, adaptation and evolution. This research domain is highly active and constantly redefines itself in purpose, means, and even in its very essence, and a large number of diverging definitions may be found in the relevant literature [1-6]. In principle, AI seeks to model physical processes to develop a generalised domain understanding and make accurate predictions of trends and future events.
A transition to the digital twin era begins with digitisation, which is technically the process of recording quantised values of discrete-time and discrete-space measurements of some physical quantity. Digital replicas, the results of digitisation, are simply sets of rounded values of systematically sampled physical quantities, represented in binary form (digits 0 and 1). A significant part of digitisation applied in archaeology and cultural heritage concerns physical (tangible) objects, and refers to the recording of the geometric and spectral structure of the digitised objects, typically termed 3D digitisation [7-10]. 3D digitisation may focus on the surface of physical objects (more of a 2.5D approach), or even on the inner structure of the materials (real 3D). The common usage of the term 3D digitisation refers to the 2.5D case (further discussed in the following), whereas tomography is the suggested term for the real 3D case. The creation of digital replicas of real-world objects is a challenging task, escalating with increasing geometric complexity [7,10-12]. Various methods and approaches have been proposed over the recent decades, mostly based on the modulation and detection of electromagnetic waves, usually visible or infrared light, giving them the general characterisation of optical documentation methods. In any case, the goal of a successful digitisation is faithful digital replication, which technically translates to a high-accuracy, high-resolution digital object produced by a method that guarantees high precision. The classic approach in 3D digitisation includes the point-wise measurement of an object's surface geometric structure and spectral response. Measuring those quantities results in at least six-dimensional data for each point on the object's surface, including the three spatial coordinates in an arbitrary Cartesian coordinate system and the three spectral coordinates that characterise the visible spectrum and thus the perceived surface colours. If other spectra are also considered, the dimensionality can be even higher. Overall, the set of all those point-wise multi-dimensional data forms what is known as a point cloud, which may or may not be at a 1:1 scale. Evidently, any point cloud represents an approximation of a measured object's surface, limited by the measurement accuracy and density of the applied method.
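To make this representation concrete, the following minimal sketch builds such a point cloud as an N × 6 array of spatial and spectral coordinates and computes two crude quality indicators, spatial extent and mean nearest-neighbour spacing. The synthetic spherical surface is only a stand-in for a digitised artefact.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points = 2_000

# Spatial part: points sampled on a unit sphere, in an arbitrary Cartesian frame.
xyz = rng.normal(size=(n_points, 3))
xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)

# Spectral part: an RGB response per point (random values standing in for colour).
rgb = rng.uniform(0.0, 1.0, size=(n_points, 3))

point_cloud = np.hstack([xyz, rgb])                 # shape: (n_points, 6)

# Crude quality indicators: spatial extent and mean nearest-neighbour spacing,
# the latter being a rough proxy for the sampling density mentioned in the text.
extent = point_cloud[:, :3].max(axis=0) - point_cloud[:, :3].min(axis=0)
d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
mean_spacing = d.min(axis=1).mean()
print(extent, mean_spacing)
```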
Tangible heritage consists of a wide variety of objects made of various materials, which may be small and movable or rather large and immovable. Movable object examples are pottery and vases, utensils, statuettes, paintings, jewellery and folk art, which may be digitised in a specialised laboratory [7], whereas immovable object examples include large statues, buildings, architectural ensembles, urban areas, sites and excavations, which can only be digitised on-site [7,13-17]. Different methods, systems and workflows are ideally applied to each case to attain the best digitisation result, which may nevertheless be directly affected by the nature of the object itself. As a classic example, optical documentation of marble, a material with translucence and uneven surface roughness, results in point clouds with increased measurement noise [18].
A digital replica, the product of digitisation, is a valuable asset for research and dissemination. Its value becomes even more pronounced as high-performance computing infrastructures become common and accessible. Big data approaches and intelligent methods, tools, and applications naturally emerge, with the capacity to tackle highly challenging and demanding research and dissemination tasks. In this new digital ecosystem, AI finds a natural environment to flourish, and this is why the present is markedly characterised by increased AI penetration in diverse industries and technological innovations, such as complex data analysers, personal assistants and chatting systems, recommenders, intelligent robots and self-driving vehicles, and more. In our group, at the Athena Research Center, Greece, we have been working on AI applications in the Digital Humanities sector for around two decades. This paper focuses on briefly presenting our innovations in this domain, accompanied by important works from other research groups, in an attempt to highlight the wide range of applications and the forthcoming impact of AI on archaeology and cultural heritage research and dissemination.
AI applications in Humanities research have a significant impact on multi-modal and multi-dimensional information sharing and knowledge representation, enabling a reflection on historical trends, culture and identity. AI has already appeared in a diverse set of applications, ranging from effective asset organisation and knowledge representation, to virtual and cyber archaeology, to advanced and extended visualisation, to asset and context interpretation, to intelligent tools, to personalised access, to gamification and public dissemination [19]. This section reviews recent innovations achieved by AI applications in the Humanities.
As stated previously, digitisation is a rather complex, time-consuming, computationally demanding, high-cost process. Automation of the process is a major goal in the relevant research and has been pursued for many years. In 2012 the full automation of digitisation was proposed based on a robotic approach [20], further detailed a year later [21], using a turntable and a 3D scanning device operated by a robotic arm. The result was a decrease in digitisation time by a factor of 2 to 5 compared to manual scanning of the same object. Later, in 2017, the ORION prototype system was presented [22] as a low-cost automated image-based 3D digitisation method, using a microcontroller and a turntable, with the ability to integrate multiple cameras and projected patterns. In 2020, another turntable and robotic arm-based method was presented [23] as a versatile desktop photogrammetry solution for small objects, which estimates the required camera poses at each reconstruction step. The approach is based on optimisation, which balances digitisation speed, quality, and safety. In 2021, a low-cost robotic automated laser scanning or photogrammetry system was presented [24], using the notion of a turntable and a hemispherical topology for scanning locations. Available commercial or open-source solutions use multiple sensors, like the various photogrammetry rigs, with the disadvantage of high cost. Furthermore, robotic solutions have also been made commercially available, based on improved implementations of the previous publications.
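A minimal sketch of the hemispherical view-planning idea mentioned above is given below: camera stations are placed on rings of a hemisphere around the turntable, all looking at the object centre. The radius, ring count and elevation limits are illustrative assumptions, not the planners used in the cited systems.

```python
import numpy as np

def hemisphere_viewpoints(radius=0.5, n_rings=3, views_per_ring=12):
    """Return (position, look-at direction) pairs on a hemisphere around the origin."""
    viewpoints = []
    # Elevation angles between ~15 and ~75 degrees, avoiding top-down and grazing views.
    for elev in np.linspace(np.radians(15), np.radians(75), n_rings):
        for az in np.linspace(0.0, 2 * np.pi, views_per_ring, endpoint=False):
            pos = radius * np.array([
                np.cos(elev) * np.cos(az),
                np.cos(elev) * np.sin(az),
                np.sin(elev),
            ])
            look_dir = -pos / np.linalg.norm(pos)   # point the camera at the object centre
            viewpoints.append((pos, look_dir))
    return viewpoints

stations = hemisphere_viewpoints()
print(len(stations), "camera stations planned")      # 36 with the defaults
```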
Apart from automation, current trends in digitisation move towards an advanced or extended multi-dimensional and multi-modal future. As early as 2002, a series of publications [25-27] proposed a novel framework for extended digitisation, integrating optical documentation, GIS and archaeometry on the Web. The researchers in those papers proposed the creation of a multi-modal GIS on digitised objects, an early version of a cultural digital twin, long before this concept was conceived. This same idea was applied in 2014 for a complete cultural digital twin framework that used an 18-dimensional feature space, capable of advanced visualisations and comparative research [28]. Advancing preservation has also been a major challenge in the cultural heritage community, which escalated due to the impact of climate change on tangible heritage. Around 2008, major organisations undertook the commitment to address this impact [29] by identifying a set of key topics.
The natural next step was the emergence of the concept, ideology and methods of preventive preservation. About ten years after that identification, a large-scale study on the adaptation of cultural heritage to climate change risks, including the stakeholders, the research community, the governments and authorities, recognised the major implementation driving factors and best practices, along with the practical requirements, but also the foreseen barriers, and highlighted that even more research and practical solutions are needed [30]. This was also highlighted in the 2019 report of the Climate Change and Heritage Working Group of ICOMOS [31], in which the required adaptation was categorised in relation to the Paris Agreement. Following this intuition and these directives, current AI research and applications focus on the preventive aspects and on improving the resilience of heritage. On this front, the EU project WARMEST proposed an AI method to monitor, inspect and assess potential deterioration on 3D digitised monument surfaces [32]. The deterioration may be the result of ageing, weathering or erosion, and the proposed approach used deep learning to extract saliency maps and analyse surface structures to highlight potential regions of interest. Furthermore, in the project ESTIA, image-based recognition and segmentation of 3D digitised urban areas were presented for early warning and disaster aversion applications. The advanced digitisation method proposed in this work incorporated eight-band multi-spectral airborne imagery for multi-dimensional reconstruction and a deep learning method to identify distinct types of building materials. Another interesting approach in the direction of extended and advanced digitisation was proposed by cyber-archaeology, a concept that formally appeared in 2012 [33]. This concept encompasses the overall digital life-cycle of archaeological findings, practically laying the foundations for a complete ecosystem of data, tools and services for the digital management, study and dissemination of cultural heritage.
Support for new and/or deeper interpretation is among the major contributions of AI in diverse application domains. With digitised heritage becoming massive, the amount of digital data becoming available is invaluable. These data are a stable basis on which to build interpretation and restoration applications based on AI technology. In particular, image-based methods have already been applied successfully to the deciphering of ancient languages and to the decoding of epigraphic marks. There are also success stories on the front of the restoration of missing parts of texts. This section serves to briefly introduce those success stories.
At the beginning of the 21st century, Terras and Robertson created a rather complex AI methodology to assist in the interpretation of the Vindolanda writing tablets [34]. The researchers used the reinforcement learning framework, the idea of using the minimum description length as the model selection principle, and a fusion of a language and an image model. This research proved how fusing stroke data with constraining linguistic knowledge can generate plausible readings of the tablets. In addition, the methodology can trade off interpretation accuracy against time, when time-sensitive applications are required.
More recently, a group of researchers at the University of Chicago started working on a project focusing on deciphering cuneiform tablets using AI and computer vision techniques. Adopting the paradigm of handwritten text recognition, preliminary results published on the project's website suggest a success rate of 83%.
Hamdany, et al. proposed in 2021 an image-based AI approach for the identification of Sumerian cuneiform symbols and, furthermore, their transliteration to English letters [35]. Their approach was based on a relatively simple neural network architecture. The researchers applied data augmentation strategies to increase the number of training samples and help the AI method to generalise, and reported successful results.
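A rough sketch of this recipe is given below: a relatively small convolutional classifier over glyph images combined with simple geometric augmentation. The image size, augmentation choices and number of output classes are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn
from torchvision import transforms

N_CLASSES = 26          # assumed: one class per target Latin letter

# Simple augmentation pipeline for PIL glyph images (small rotations and shifts).
augment = transforms.Compose([
    transforms.RandomRotation(10),
    transforms.RandomAffine(0, translate=(0.05, 0.05)),
    transforms.ToTensor(),
])

# A small convolutional classifier over 64x64 greyscale glyph images.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
    nn.Linear(128, N_CLASSES),
)

x = torch.randn(8, 1, 64, 64)               # a dummy batch of glyph images
logits = model(x)
print(logits.shape)                          # torch.Size([8, 26])
```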
Assael, et al. in 2019, presented PYTHIA, a system capable of recovering missing parts of texts using deep learning, and particularly a bidirectional LSTM approach [36]. The PHI Greek Inscriptions corpus was used for the model training, along with synthesised ground-truth data, and a success rate of the order of 75% was reported.
PYTHIA generates and suggests the top 20 predictions, sorted by confidence, to better support the recovery task. As PYTHIA works at the character level, where the word-level context is difficult to model, the researchers designed PYTHIA's encoder to take an additional input stream of word embeddings.
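The following minimal sketch illustrates the character-level, bidirectional-LSTM idea behind such restoration systems; it is not the published PYTHIA model, and the toy character inventory, example phrase and network sizes are assumptions.

```python
import torch
import torch.nn as nn

VOCAB = list("αβγδεζηθικλμνξοπρστυφχψω -")      # toy character inventory (assumption)
char2idx = {c: i for i, c in enumerate(VOCAB)}

class CharRestorer(nn.Module):
    def __init__(self, vocab_size, emb=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, vocab_size)

    def forward(self, context_ids, gap_position):
        h, _ = self.lstm(self.emb(context_ids))          # (B, T, 2*hidden)
        return self.out(h[:, gap_position, :])           # logits for the missing character

model = CharRestorer(len(VOCAB))
# '-' marks the gap; left and right context inform the prediction at that position.
ctx = torch.tensor([[char2idx[c] for c in "μηνιν αειδε θε-"]])
logits = model(ctx, gap_position=14)
topk = torch.topk(logits, k=5, dim=-1)                   # top-k restoration hypotheses
print([VOCAB[int(i)] for i in topk.indices[0]])
```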
Fetaya, et al. proposed in 2020 a method to restore Babylonian texts with recurrent neural networks [37]. They particularly focused on the restoration of digitised texts in the Late Babylonian dialect of Akkadian. The method fits a probabilistic model to token sequences, with tokens being the units of the symbol series composing the language. To train their system, they gathered a corpus of 1,400 Late Babylonian transliterated texts from Achaemenid-period Babylonia. The researchers designed a tokenisation method for Akkadian transliterations and trained an LSTM recurrent network and an n-gram baseline model on this corpus. They reported that the new LSTM system significantly outperformed the n-gram baseline approach.
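For context, an n-gram baseline of the kind compared against can be as simple as counting token transitions. The sketch below, with made-up toy token sequences rather than the authors' corpus or code, ranks candidates for the token following a given one.

```python
from collections import Counter, defaultdict

corpus = [
    ["ina", "libbi", "ali"],       # toy transliterated token sequences (assumption)
    ["ina", "libbi", "biti"],
    ["ina", "muhhi", "ali"],
]

# Count how often each token follows another (a bigram model).
bigrams = defaultdict(Counter)
for seq in corpus:
    for prev, nxt in zip(seq, seq[1:]):
        bigrams[prev][nxt] += 1

def predict_next(prev_token, k=3):
    """Return the k most likely next tokens after prev_token, with probabilities."""
    counts = bigrams[prev_token]
    total = sum(counts.values()) or 1
    return [(tok, c / total) for tok, c in counts.most_common(k)]

print(predict_next("ina"))    # e.g. [('libbi', 0.66...), ('muhhi', 0.33...)]
```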
In a series of publications since 2012, Balla, et al. reshaped archaeological predictive modelling by introducing a novel framework and practical workflow for the creation of models that predict the presence of burial sites, modular and generalisable enough to be applicable to other cases as well [38-40].
The proposed model used multi-dimensional archaeological and geospatial data, gathered through a large-scale literature review and subsequently analysed by classic feature selection strategies, and was able to adjust the weights of the various criteria to accommodate variations in the research question. The outcome of each prediction was a colour-coded map of the study region, supporting either ambitious or conservative predictions of the locations most likely to contain burial sites.
The model was extensively tested using variations in the selection criteria to identify different application scenarios, and proved to be rather dependable for two cases, (a) the proposal for new excavations and (b) the preservation against damage due to urban development. This is among the first complete cases of predictive modelling in archaeology targeting excavation archaeology and cultural management, laying foundations for reshaping this field of research.
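A minimal sketch of the weighted-criteria mechanism behind such predictive maps is given below; the criterion layers, weights and threshold are illustrative assumptions, not the published model.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 50, 50                                      # raster of the study region

criteria = {                                       # assumed normalised criterion layers in [0, 1]
    "slope_suitability":  rng.random((H, W)),
    "distance_to_water":  rng.random((H, W)),
    "proximity_to_roads": rng.random((H, W)),
}

def burial_site_score(weights):
    """Weighted sum of criteria; the weights express the research question."""
    score = sum(w * criteria[name] for name, w in weights.items())
    return score / sum(weights.values())           # keep the result in [0, 1]

ambitious    = burial_site_score({"slope_suitability": 0.3, "distance_to_water": 0.3, "proximity_to_roads": 0.4})
conservative = burial_site_score({"slope_suitability": 0.5, "distance_to_water": 0.4, "proximity_to_roads": 0.1})

# Thresholding the score yields the colour-coded map described above.
print((ambitious > 0.6).sum(), "cells flagged (ambitious),",
      (conservative > 0.6).sum(), "cells flagged (conservative)")
```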
Advancing digitisation into a multi-dimensional and multi-modal process, as discussed in a previous section, brings new benefits towards a deeper analysis of heritage objects, which may take the form of geometric, spectral, physical and chemical analysis. This section focuses on innovations in this direction that reshape the field of heritage object analysis.
More than a decade ago, Koutsoudis, et al. presented novel 3D shape analysis concepts, approaches and results, directly applicable to digitised artefacts [41-44]. In addition, they incorporated those ideas into content-based search and retrieval engines for 3D artefact databases. The presented research took advantage of symmetries and unique features in objects and created compact mathematical descriptions of their shape, resulting in enhanced automated shape understanding, applicable to database search tasks and comparative studies. Important innovations brought by this research were (a) the enabling of query-by-sketch capabilities, by which a simple sketch of an object's surface is sufficient as a query for similar digitised artefacts, and (b) the enabling of content-based navigation capabilities in virtual environments supporting virtual exhibitions and museums, by which navigation in exhibitions can be significantly supported by shape similarity preferences.
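To illustrate how compact shape descriptions enable content-based retrieval, the sketch below uses the classic D2 shape distribution (a histogram of pairwise point distances) purely as a stand-in for the cited descriptors, and ranks database entries by descriptor distance.

```python
import numpy as np

def d2_descriptor(points, n_pairs=5000, bins=32, rng=None):
    """Histogram of distances between random point pairs, normalised for scale."""
    rng = rng or np.random.default_rng(0)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d / (d.max() + 1e-9), bins=bins, range=(0, 1), density=True)
    return hist / (hist.sum() + 1e-9)

def retrieve(query_points, database, k=3):
    """Return the k database entries whose descriptors are closest to the query."""
    q = d2_descriptor(query_points)
    dists = [(name, np.linalg.norm(q - d2_descriptor(pts))) for name, pts in database.items()]
    return sorted(dists, key=lambda t: t[1])[:k]

rng = np.random.default_rng(2)
database = {f"artefact_{i}": rng.random((2000, 3)) for i in range(10)}   # stand-in point clouds
print(retrieve(rng.random((2000, 3)), database))
```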
Bogacz and Mara, in 2018, presented a method for OCR-like text extraction from cuneiform tablets [45]. The method analyses the structure of 3D digitised cuneiform tablets and is able to identify wedges (cuneiform signs) and words and to extract the corresponding text. To accomplish this, the method exploits a novel 12-dimensional descriptor for the wedges, encoding the endpoints and the intersection areas. The researchers tested their system against ground-truth data produced manually by professional Assyriologists and against other methods, and showed that their new method outperformed previous methods by around 10%.
Sevetlidis and Pavlidis, in 2018, proposed tree-based methods for effective Raman spectra identification and material characterisation in archaeometry [46,47]. Archaeometry is increasingly being supported by Raman spectroscopy. This method is based on the interactions of monochromatic light with the molecular vibrations of materials, and is able to provide data about vibrational, rotational and other low-frequency modes. The approach followed in this research used an extremely randomised trees classifier and was tested on the standard RRUFF dataset. Although the method was simple and straightforward, due to the nature of the involved data it was able to come close to, or even outperform, more complex previous approaches.
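A minimal sketch of this kind of pipeline is shown below: spectra treated as fixed-length feature vectors and fed to scikit-learn's extremely randomised trees classifier. The synthetic spectra only stand in for the RRUFF data, and the parameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_minerals, spectra_per_mineral, n_channels = 5, 40, 500

# Build synthetic spectra: each mineral has a characteristic peak plus noise.
X, y = [], []
for mineral in range(n_minerals):
    peak = rng.integers(50, 450)
    for _ in range(spectra_per_mineral):
        spectrum = rng.normal(0, 0.05, n_channels)
        spectrum[peak - 5:peak + 5] += 1.0
        X.append(spectrum)
        y.append(mineral)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```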
In 2018 Ioannakis, et al. proposed a novel descriptor applicable to the classification of digitised artefacts, based on 3D mesh extrema and curvatures [48]. Specifically, the proposed method encodes the 3D data of the geometry of a digitised artefact into a two-dimensional (2D) image, which the researchers named CurvMap, that is based on mesh extrema and the principal curvature. This new descriptor has been tested for object classification tasks using typical deep learning approaches. The researchers reported a classification accuracy in the order of 90% for CurvMaps and convolutional neural network architectures.
Recently, Davoudi, et al. proposed an architecture based on autoencoder technology that uses sparse latent variables to solve problems relating to ancient handwritten document layout analysis [49]. This is an unsupervised method, so no training with large amounts of ground-truth data is required. In the evaluation experiments, the system showed around 97% classification accuracy and significantly high layout extraction performance. The researchers reported that the new method outperforms other unsupervised learning methods and is comparable to state-of-the-art supervised learning approaches.
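The general mechanism, an autoencoder whose latent code is pushed towards sparsity, can be sketched as below; this is an assumed toy model trained on random stand-in patches, not the published architecture.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, n_inputs=32 * 32, n_latent=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_inputs, 256), nn.ReLU(),
                                     nn.Linear(256, n_latent), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(n_latent, 256), nn.ReLU(),
                                     nn.Linear(256, n_inputs))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.rand(128, 32 * 32)                 # stand-in document-image patches

for _ in range(5):                                 # a few illustrative training steps
    recon, z = model(patches)
    # Reconstruction loss plus an L1 penalty that keeps the latent code sparse.
    loss = nn.functional.mse_loss(recon, patches) + 1e-3 * z.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```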
Authentication of archaeological objects and works of art is typically done by domain experts, still with limitations. AI approaches in artefact authentication appeared recently as solutions that target authentication performance beyond the capabilities of human experts. In 2018, Elgammal, et al. presented a method for stroke analysis in line drawings [50]. The motivation for this research was the solution of the attribution problem for drawings by unknown artists. The proposed method was based on quantifying the characteristics of individual strokes and comparing these characteristics to a large number of strokes by different artists using statistical inference and machine learning. The researchers collected nearly 300 drawings and commissioned artists to make similar drawings to serve as the fakes in the experiments. They devised a new stroke segmentation algorithm and a complex feature selection method consisting of manual and deep learning steps. A support vector machine (SVM) was used to combine the manually selected with the learned features. At the output of the system, any given drawing is classified by aggregating the outcomes of the classification of its strokes. The researchers reported a 70%-90% accuracy in classifying individual strokes and above 80% in drawings, with a perfect 100% in detecting fakes.

Recently, Mai, et al. presented a method for learning art styles, focusing on their psychological effects and the conceptualised differences between Eastern and Western art [51]. The researchers analysed the concepts of art and beauty and presented examples of how new AI approaches imitate or create art using the deep learning architecture of generative adversarial networks (GANs), and particularly Cycle GANs. Based on those approaches, they devised a new framework to couple the notion of art and perceived beauty, and were able to create art representations based on a learned style and according to the purpose of any predetermined relevant psychological experiment. Subsequently, they conducted psychological experiments using human subjects and presented interesting results regarding the perceived aesthetics of artworks and the differences (or not) between Western and Eastern art.
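Returning to the stroke-level attribution idea, the minimal sketch below combines hand-crafted and learned stroke features by simple concatenation, classifies each stroke with an SVM, and attributes a drawing by majority vote. The feature dimensions and random data are assumptions, not the published pipeline.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_strokes, n_manual, n_learned, n_artists = 600, 12, 32, 3

manual_feats  = rng.normal(size=(n_strokes, n_manual))     # e.g. curvature and length statistics
learned_feats = rng.normal(size=(n_strokes, n_learned))    # e.g. deep features per stroke
artist = rng.integers(0, n_artists, n_strokes)
X = np.hstack([manual_feats, learned_feats])               # combine the two feature sets

clf = SVC(probability=True).fit(X, artist)                 # per-stroke classifier

def attribute_drawing(stroke_features):
    """Aggregate per-stroke votes into a drawing-level attribution."""
    votes = clf.predict(stroke_features)
    return np.bincount(votes, minlength=n_artists).argmax()

print("attributed to artist", attribute_drawing(X[:40]))
```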
The massive digital data in the Humanities are the driving force for AI applications targeting their study, analysis and interpretation. To enable this capacity, data annotation on a small or large scale is required to supply ground-truth data and power the learning process. In particular, a challenging task that still attracts research attention is the creation of 3D data annotation tools to assist in the annotation of massive 3D data of digitised heritage. On this front, Arampatzakis, et al. recently proposed a novel user-friendly heritage object annotation tool [52]. The system, named Art3mis, uses contemporary computer graphics approaches and adopts international interoperability standards for the metadata representations. The motivation for this system was to tackle some of the major challenges in 3D annotation systems. It is able to apply direct-on-surface annotation, based on ray-polygon intersection from the computer graphics sector. The system uses the WYSIWYG interaction model and supports multiple annotations per 3D object.
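The geometric core of direct-on-surface annotation is a ray-polygon intersection test: the pick ray under the cursor is intersected with the mesh, and the hit point anchors the annotation. The sketch below implements the standard Möller-Trumbore ray-triangle test on a stand-in triangle; it illustrates the general technique, not the tool's actual implementation.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the hit point of a ray with a triangle, or None if they miss."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                       # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    t_vec = origin - v0
    u = np.dot(t_vec, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(t_vec, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return origin + t * direction if t > eps else None

# A pick ray shot straight down onto a single triangle of a (stand-in) mesh.
hit = ray_triangle_intersect(np.array([0.2, 0.2, 1.0]), np.array([0.0, 0.0, -1.0]),
                             np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                             np.array([0.0, 1.0, 0.0]))
print(hit)    # expected: [0.2, 0.2, 0.0]
```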
The reshaping of dissemination practices in the Humanities is expected to stem primarily from AI applications within the framework of virtual, augmented and extended realities and gamification. This is an interesting domain which, although it has amassed a massive bibliography in various application fields, still has a limited contribution to cultural heritage. Extended realities and gamification have been applied for dissemination to the public but are lacking in contributions to heritage experts.
Although virtual exhibitions and museums have a long history, advanced virtual museum research has a significantly shorter one. In the early years, Koutsoudis, et al. in 2012, proposed an integration of formerly unconnected technologies: VR, gaming and content-based 3D object retrieval [53]. The proposed system, built on game engine technology, used 3D shape analysis of exhibits and enabled the integration of content-based navigation into the framework of a virtual museum experience, supporting, at the same time, state-of-the-art visualisations and real-time interactions. Content-based navigation enabled an object similarity-based guide.
Later, in 2014, Knabb, et al. established how useful immersive VR can be in archaeological research by using a CAVE system for the study of an excavation, including the complete body of data provided by a digital excavation database [54]. The researchers presented diverse VR solutions for the advanced visualisation of archaeological and excavation sites. This research group recently showcased another interesting system for the VR display of archaeological data to the public, the CAVEkiosk technology [55].
In 2016, Kiourt, et al. presented Dynamus, a general-purpose, fully dynamic virtual exhibition framework [56]. Dynamus applies the WYSIWYG virtualisation and interaction model and enables the creation of exhibitions on the Web. It was built on state-of-the-art gaming technology and provides linked open data functionality (connection with Europeana and Google). The researchers tested Dynamus in cultural and educational settings and released it as a free-to-use platform on the Web.
Kiourt, et al. in 2017, focused on realism in virtual environments for cultural heritage applications and provided a systematic and mathematical analysis of the relevant concepts [57]. The researchers detailed the foundational computer graphics technologies and highlighted their effects. In addition, they analysed how AI, and particularly intelligent virtual agents, can be applied to VR museums. By connecting digitisation, gaming technology and concepts of play, they provided a theoretical framework for the formation of the domain of serious games.
Building on the serious games paradigm, Kiourt, et al. focused on virtual environments supported by multiple intelligent virtual agents in competitive and cooperative modes [58]. This was an attempt to probe into the potential of multi-agent systems as social organisations towards the development of dynamic virtual environments. The researchers redefined the cultural VR experience design as a three-dimensional process, consisting of content generation, knowledge modelling and game-play.
In the big data and networking era, focusing on relevant content is becoming extremely challenging. The research domain that has undertaken the task of automatic content selection is closely connected to what has long been called personalisation. Personalisation has been pursued since the advent of the Web, but it is now becoming a pressing demand due to the online availability of massive data. In essence, personalisation represents the attempt to model a person's preferences and needs and to limit exposure only to the most relevant information.
The field of AI associated with personalisation is that of recommenders, which are sophisticated algorithms and intelligent software engines providing personalisation in a diverse range of domains, from music and movies to social networking, tourism, search engines, and more. Recommenders are not new to cultural applications, as they have already been proposed primarily for cultural tourism applications. Pavlidis, in 2019, presented the history of recommenders in cultural heritage, along with an in-depth presentation of the alternative approaches of this technology and their mathematical foundation, arguing about the benefits and highlighting the limitations [59]. This review predicted future trends and proposed future developments.
Following the serious games paradigm defined in [58], a set of rules, a theory and best-practice directives were presented by Kiourt, et al. in 2018, for the development of personalised dynamic virtual experiences in cultural heritage applications [60]. This work enhanced the content-knowledge-play framework previously set by the same researchers by including user modelling towards the definition of the personalised experience. The researchers presented illustrative case studies and provided insightful results.
Pavlidis, in a set of publications during 2018-2019, used the analysis and recommendations in [59] and proposed a completely new framework on which to develop recommenders for cultural tourism [60-64]. In this work, the problem of personalisation was approached as a visitor satisfaction and visit optimisation problem. A new satisfaction model was proposed and a novel recommender was built upon this model to provide meaningful recommendations during a cultural visit at various scales, ranging from a museum to a historical urban area. Pavlidis designed large-scale simulations to assess the efficacy of this approach, by creating large numbers of simulated visitors with characteristics drawn from global data resources. Evaluation of this technology showed improvements in comparison to the typical, naive, popularity-based recommendation approach, and proved that it is able to provide insight to cultural visit designers.
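The sketch below illustrates the visit-optimisation framing of personalisation in its simplest form: exhibits are scored against a visitor profile through an assumed linear satisfaction model and greedily scheduled under a time budget. The model, data and greedy heuristic are illustrative assumptions, not the published recommender.

```python
import numpy as np

rng = np.random.default_rng(5)
n_exhibits, n_topics = 30, 6

exhibit_topics  = rng.random((n_exhibits, n_topics))        # topic profile per exhibit
visit_minutes   = rng.uniform(2, 10, n_exhibits)            # estimated time per exhibit
visitor_profile = rng.random(n_topics)                      # the visitor's interests

satisfaction = exhibit_topics @ visitor_profile              # assumed linear satisfaction model

def recommend_route(budget_minutes=45):
    """Greedy selection: best satisfaction-per-minute while time remains."""
    order = np.argsort(-(satisfaction / visit_minutes))
    route, spent = [], 0.0
    for i in order:
        if spent + visit_minutes[i] <= budget_minutes:
            route.append(int(i))
            spent += visit_minutes[i]
    return route, spent

route, spent = recommend_route()
print(len(route), "exhibits recommended in", round(spent, 1), "minutes")
```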
Sidiropoulos, et al. in 2021, presented a simulated environment in which intelligent agents are programmed to perform actions based on historical context and evolve in a competitive-cooperative learning style [65]. In this reinforcement learning framework, the research question related to the shaping of the agents' behaviours towards achieving particular goals. The experiments in a realistic virtual environment, where ancient warriors represented the intelligent agents, demonstrated how the agents can learn new behaviours based on predetermined or evolving rules. Behaviour shaping is at the forefront of reinforcement learning research and is expected to aid in the development of highly adaptive personal assistant technologies for both experts and laypersons.
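In its most elementary form, behaviour shaping amounts to changing the reward signal an agent learns from. The sketch below runs tabular Q-learning in a toy grid world, where the goal reward and step cost are the shaping knobs; it is an illustrative assumption, not the cited multi-agent system.

```python
import numpy as np

rng = np.random.default_rng(6)
SIZE, GOAL = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # up, down, left, right

def step(state, action, goal_reward=1.0, step_cost=-0.01):
    """Move in the grid; the reward terms are the behaviour-shaping knobs."""
    r, c = state
    dr, dc = ACTIONS[action]
    nxt = (min(max(r + dr, 0), SIZE - 1), min(max(c + dc, 0), SIZE - 1))
    reward = goal_reward if nxt == GOAL else step_cost
    return nxt, reward, nxt == GOAL

Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
for episode in range(2000):
    s, done = (0, 0), False
    while not done:
        # epsilon-greedy action selection
        a = rng.integers(4) if rng.random() < 0.1 else int(Q[s].argmax())
        s2, r, done = step(s, a)
        Q[s][a] += 0.1 * (r + 0.9 * Q[s2].max() - Q[s][a])   # Q-learning update
        s = s2

print("greedy action at start:", ACTIONS[int(Q[(0, 0)].argmax())])
```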
Pistofidis, et al. in 2021, proposed a novel approach towards a more inclusive technology for cultural heritage applications [66]. The approach bridges various independent technological domains, such as 3D digitisation, 3D printing, the Internet of Things and AI, and enables intelligent tactile interaction with artefacts for the visually impaired. The design of this research was based on the living labs approach and included design iterations with persons of various levels of visual impairment. This research resulted in the concept of smart exhibits suitable for haptic experiences, consisting of printed replicas of artefacts packed with electronics and AI to provide intuitive interaction.
The rapid technological advancement of recent decades has begun to transform all scientific sectors. The Humanities have already been significantly affected by computational approaches, and a new domain has emerged, called the Digital Humanities. Although the traditional research questions remain the same, artificial intelligence is providing new answers and even leading to new research questions. As in nearly all cases in human history, on one side there is a rush to embrace the new possibilities, whereas on the other there is reluctance to change. The advent of AI brings an impressive potential for positive change in Humanities research, and this has already been highlighted in a wide range of cultural applications, contributing to recording, preservation, study and dissemination.
This paper attempted to highlight how AI is reshaping Humanities research using selected cases of recent research, ranging from advanced digitisation and preservation, to interpretation and restoration, to heritage analysis and predictive modelling, to extended reality and gamification, to personalisation and inclusive heritage. This research domain is highly active and is expected to bring even more interesting results in the near future. This review is far from a complete presentation of the field and serves as a brief presentation of the AI trends in the Digital Humanities, as viewed through around twenty years of research at the Athena Research Center and by other groups around the world.
Reshaping research in a particular domain using AI is a big step forward, and it is useful to pause and reflect on the unfolding phenomena. In 2018, Pavlidis et al. attempted to codify the current challenges for the Digital Humanities and to identify the future research agenda [66]. Two years later, Markantonatou et al. updated this list of challenges to include socially relevant topics [67]. Reversing, in a way, the direction from which the challenges are viewed, that work outlined the areas in which the Digital Humanities can have a significant social impact.
So, what is the way forward? The easy response would be to simply say that anything is possible. AI has opened new horizons in many research domains that needed tools and assistance to analyse vast amounts of data, to uncover hidden patterns and to restore missing links. All topics briefly reviewed in this paper, as well as topics that have not yet emerged, can certainly expect to be further supported by AI. In this largely cross-disciplinary endeavour, knowledge discovery is advanced and bridges among seemingly unrelated domains are being built, towards a new era of philosophy, as it was originally regarded.
This research has been co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE (project code: T2EDK-01018).